<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Parsing Arabic using deep learning technology</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rahma Maalej</string-name>
          <email>rahmamaalej1234@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nabil Khoufi</string-name>
          <email>nabil.khoufi@fsegs.rnu.tn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chafik Aloulou</string-name>
          <email>chafik.aloulou@fsegs.tnu.tn</email>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ANLP Research Group, MIRACL Lab</institution>
          ,
          <addr-line>Sfax</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Syntactic parsing is a fundamental step in the automatic analysis of language, since it is the crucial task of determining the syntactic structure of sentences. In this paper, we propose to syntactically analyze sentences of the Arabic language using deep learning techniques. We present our methodology and report evaluation results for several deep learning architectures.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
<p>implementing the deduced grammar (the result of the first phase) to perform the syntactic analysis. They tested this analyzer with a set of 1650 sentences taken from the ATB Treebank. The authors report an accuracy of 83.59%, a recall of 82.98% and an F-measure of 83.23%.</p>
<p>[2] proposed to use an Arabic property grammar to evaluate and enrich the Stanford Parser. This parser is based on verifying the satisfaction of syntactic constraints, also called properties, on the analysis results of the corpus. Moreover, they enriched the simple representation of the analysis result with syntactic properties, which makes it possible to make explicit several pieces of implicit information representing the relations between syntactic units. [2] obtained an F-score of 77.62%, with a recall of 70.2% and a precision of 86.81%.</p>
      <p>
        Recently, [
        <xref ref-type="bibr" rid="ref6">8</xref>
        ] proposed to construct a formal grammar for use in automatic processing applications of the Arabic language. Their thesis work aimed to set up a grammar describing the syntax and semantics of Modern Standard Arabic. They presented a complete meta-grammatical description that achieves a syntactically and semantically rich representation of this language. For the evaluation of the syntactic analysis, they manually constructed a test corpus by extracting 1000 syntactically correct sentences from the Tunisian school book (8th grade level). They obtained an accuracy of 82.33%, a recall of 88.10% and an F1-score of 85.11%.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref2">4</xref>
        ] proposed two approaches: the first consists in detecting and correcting syntactic errors based on the automatic generation of correct sentences [
        <xref ref-type="bibr" rid="ref16">18</xref>
        ]; the second aims at the automatic correction of case-ending errors based on parsing. They built a corpus of 360 sentences. The system achieved an accuracy of 92.01%, a recall of 84.83% and an F-score of 88.27%.
      </p>
      <p>
        The statistical approach is based on statistical models. We have not found recent work using deep learning for syntactic parsing of MSA; on the other hand, we found work dealing with French and English. We can cite the work of [
        <xref ref-type="bibr" rid="ref4">6</xref>
        ] on lexicalized constituency parsing. The main objective of [
        <xref ref-type="bibr" rid="ref4">6</xref>
        ] was to adapt an underlying statistical model to these new representations. They proposed a study of three neural architectures of increasing complexity and showed that using a non-linear hidden layer makes it possible to take advantage of the information given by the embeddings. They report an F-measure of 80.7% for the given tags and 78.3% for the predicted tags.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref3">5</xref>
        ] analyzed the contribution of a pre-trained language model such as BERT (Bidirectional Encoder Representations from Transformers) to discontinuous constituency parsing in English (PTB, Penn Treebank). They report an F-score of 95%.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref7">9</xref>
        ] implemented a hybrid Standard Arabic parser that combines two parsing approaches: statistical parsing and linguistic parsing. The objective of this analyzer is to reduce the analysis time, which grows exponentially with the size of the sentence.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref17">19</xref>
        ] presented the results of an evaluation of the integration of data from a syntactic lexicon, the Lexicon-Grammar, into a parser. They carried out the evaluation by cross-validation on 10% of the French Treebank (FTB) and obtained an F-score of 85.32%.
      </p>
      <p>[Comparison table residue; recoverable figures by corpus: precision = 92%, recall = 84%, F-score = 88.27% (own corpus); F-score = 80.7% for the given tags and 78.3% for the predicted tags (French corpus SPMRL); F-score = 95% (English corpus PTB); F-score = 83.24% (ATB); F-score = 85.32% (French Treebank).]</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Approach</title>
      <p>This section presents the general architecture of our proposed approach.</p>
      <p>Our statistical method for parsing Arabic based on deep learning techniques is divided into three steps: the preprocessing step, the training step, and the validation and testing step. The first step preprocesses the data by extracting the syntactic levels for each sentence. The second trains our models, and the third performs validation and testing. The steps of our proposed method are illustrated in the following figure:</p>
    </sec>
    <sec id="sec-4">
      <title>3.1. The preprocessing step</title>
    </sec>
    <sec id="sec-5">
      <title>3.1.1. Corpus ATB</title>
      <p>The preprocessing step consists of preparing data for the training step.</p>
      <p>
        In this work, we used the annotated Penn Arabic Treebank (ATB) corpus [
        <xref ref-type="bibr" rid="ref11">13</xref>
        ].
      </p>
      <p>
        The ATB corpus comprises data extracted from linguistic sources written in Modern Standard Arabic. It contains 599 texts taken from the Lebanese newspaper ≪ An Nahar ≫. The texts are unvowelized or partially vowelized, and they are segmented. Each word is annotated with several pieces of information such as its morphological features, its part of speech and its English translation. The corpus also contains the syntax tree of each sentence [
        <xref ref-type="bibr" rid="ref12">14</xref>
        ].
      </p>
      <p>In addition, the ATB corpus is very rich in information such as gender, number and rationality, and it provides large and detailed annotations. On the one hand, the corpus encompasses a set of 498 annotations describing all the morphological features and 22 annotations describing grammatical classes; 20 further annotations describe the semantic relationships between words. On the other hand, stop words, which are very frequent in Standard Arabic, are marked with specific annotations. In our work, we use version 3.2, which contains 402,291 words. This version is distributed in several formats to suit different uses: SGM, POS, XML, penntree and Integrated.</p>
      <p>For the preprocessing of our ATB corpus, we mainly used the two formats ≪ XML ≫ and ≪ Tree ≫ because they offer a hierarchical and readable representation and, in addition, they contain the information essential for our objective and our experiments.</p>
      <p>
        We have presented the eight syntactic levels with a set of morpho-syntactic labels for each sentence
extracted from the corpus ATB [
        <xref ref-type="bibr" rid="ref1">3</xref>
        ].
      </p>
      <p>Features. These features indicate the information used from the annotated corpus during the training step, namely the morphological and syntactic annotations. In fact, we extracted the morpho-syntactic labels from the ATB, and we reduced the number of labels in order to avoid complexity and to simplify learning and prediction.</p>
      <p>For example, the composite label NP+ADJP is reduced to NP.</p>
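      <p>As a minimal illustration of this label reduction (a sketch assuming, from the example above, that composite ATB labels are joined with "+" and reduced to their head constituent; the actual mapping used may differ):</p>
      <preformat>
```python
def reduce_label(tag):
    # Reduce a composite morpho-syntactic label to its head constituent,
    # e.g. "NP+ADJP" -> "NP" (assumption: "+" joins composite labels).
    return tag.split("+")[0]

print(reduce_label("NP+ADJP"))  # NP
print(reduce_label("VP"))       # VP
```
      </preformat>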
    </sec>
    <sec id="sec-6">
      <title>3.2. The training step</title>
      <p>Many machine learning algorithms require a vector representation as input. In this work, we used word embeddings as the input of our neural systems and represented our data with word2vec.</p>
    </sec>
    <sec id="sec-7">
      <title>3.2.1. Continuous Representations of Words</title>
      <p>
        Word2vec is a popular method for constructing word embeddings. It was introduced by [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ]. Two variants of word2vec have been proposed for learning word embeddings: skip-gram and CBOW (Continuous Bag of Words). CBOW is an architecture that predicts the current target word (the centre word) from the source context words (the surrounding words) [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ].
      </p>
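      <p>The CBOW objective can be made concrete by the way training pairs are formed: the surrounding words are the input and the centre word is the prediction target. A small sketch (the window size and tokens are illustrative):</p>
      <preformat>
```python
def cbow_pairs(tokens, window=2):
    # Pair each centre word (prediction target) with its surrounding
    # context words (model input), as in the CBOW architecture.
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = ["the", "parser", "reads", "a", "sentence"]
context, target = cbow_pairs(sentence)[2]
print(context, target)  # ['the', 'parser', 'a', 'sentence'] reads
```
      </preformat>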
      <p>
        The skip-gram aims to predict the words of the context given an input word [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ].
      </p>
      <p>In this work, we applied the CBOW representation because it is among the most popular methods for NLP applications.</p>
      <p>After obtaining the word embeddings with the word2vec method, we start the training by applying the deep learning techniques LSTM, GRU and BILSTM. These models produce results after training, and we compare them to choose the model with the best results to perform the parsing.</p>
      <p>Today, neural networks are among the systems most used in different machine learning tasks. They are widely used, for example, in computer vision (image classification, object detection, segmentation, etc.) and natural language processing (machine translation, speech recognition, language models, etc.).</p>
      <p>In this work, we create a model for each syntactic level. The main objective of this model is to determine the different constituents of a sentence and the relations between them.</p>
      <p>• Long Short-Term Memory (LSTM) is a building unit for the layers of a recurrent neural network (RNN). An RNN made up of LSTM units is often referred to as an LSTM network. A common LSTM unit comprises a cell, an input gate, an output gate and a forget gate. The cell is responsible for "storing" values. Each of the three gates can be considered a "conventional" artificial neuron, as in a multilayer (feedforward) neural network: that is, it computes an activation (using an activation function) of a weighted sum. The gates can thus be considered regulators of the flow of values, and there are connections between these gates and the cell.</p>
      <p>• Gated Recurrent Unit (GRU) is a gated recurrent network similar to the LSTM network, but it has fewer parameters.</p>
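      <p>The parameter difference can be made concrete by counting the weight blocks: an LSTM cell has four (input, forget and output gates plus the cell candidate), while a GRU has only three (update and reset gates plus the candidate state). A sketch for input dimension d and hidden size h (biases included; exact counts vary by implementation):</p>
      <preformat>
```python
def lstm_params(d, h):
    # 4 blocks, each with input weights (h*d), recurrent weights (h*h) and a bias (h)
    return 4 * (h * d + h * h + h)

def gru_params(d, h):
    # 3 blocks: update gate, reset gate and candidate state
    return 3 * (h * d + h * h + h)

print(lstm_params(100, 64))  # 42240
print(gru_params(100, 64))   # 31680
```
      </preformat>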
      <p>• BILSTM (Bidirectional LSTM) is a sequence processing model that includes two LSTMs: one receives the input in the forward direction and the other in the backward direction. BILSTMs increase the amount of information available to the network and improve the context available to the algorithm.</p>
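      <p>The bidirectional pass can be sketched with a simple tanh recurrent step standing in for the LSTM cell (a numpy illustration of the idea, not the actual implementation): the sequence is read once forward and once backward, and the two hidden states are concatenated at each position.</p>
      <preformat>
```python
import numpy as np

def rnn_step(x, h, W, U, b):
    # Simple tanh recurrent cell standing in for an LSTM unit.
    return np.tanh(W @ x + U @ h + b)

def bidirectional_pass(seq, W, U, b):
    hidden = b.shape[0]
    fwd, bwd = [], [None] * len(seq)
    h = np.zeros(hidden)
    for x in seq:                          # forward direction
        h = rnn_step(x, h, W, U, b)
        fwd.append(h)
    h = np.zeros(hidden)
    for t in range(len(seq) - 1, -1, -1):  # backward direction
        h = rnn_step(seq[t], h, W, U, b)
        bwd[t] = h
    # Concatenate both directions at each time step.
    return [np.concatenate([f, g]) for f, g in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d, hid = 4, 3
seq = [rng.normal(size=d) for _ in range(5)]
W = rng.normal(size=(hid, d))
U = rng.normal(size=(hid, hid))
b = np.zeros(hid)
out = bidirectional_pass(seq, W, U, b)
print(len(out), out[0].shape)  # 5 (6,)
```
      </preformat>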
    </sec>
    <sec id="sec-8">
      <title>4. Results</title>
      <p>The evaluation of our method is carried out in a validation and test step. To achieve this, we divided the ATB corpus into two parts: one for training (70%) and one for evaluation (30%), the latter split between validation (15%) and testing (15%).</p>
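      <p>The split described above can be sketched as follows (illustrative; the exact splitting procedure, e.g. whether sentences are shuffled first, is an assumption):</p>
      <preformat>
```python
def split_corpus(sentences, train_ratio=0.70, val_ratio=0.15):
    # 70% training, 15% validation, 15% testing.
    n = len(sentences)
    a = int(n * train_ratio)
    b = int(n * (train_ratio + val_ratio))
    return sentences[:a], sentences[a:b], sentences[b:]

train, val, test = split_corpus(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```
      </preformat>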
      <p>We used deep learning techniques to train and test our models for our eight levels, applying the RNN models LSTM, GRU and BILSTM.</p>
      <p>The results are illustrated in Table 2, which shows the analysis performance by level.</p>
      <table-wrap id="tbl2">
        <label>Table 2</label>
        <caption>
          <p>Accuracy per syntactic level for the LSTM, GRU and BILSTM models.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Levels</th><th>LSTM-model</th><th>GRU-model</th><th>BILSTM-model</th></tr>
          </thead>
          <tbody>
            <tr><td>Level0</td><td>99.00%</td><td>99.08%</td><td>99.60%</td></tr>
            <tr><td>Level1</td><td>99.18%</td><td>99.31%</td><td>99.53%</td></tr>
            <tr><td>Level2</td><td>98.84%</td><td>98.91%</td><td>99.17%</td></tr>
            <tr><td>Level3</td><td>99.09%</td><td>99.15%</td><td>99.36%</td></tr>
            <tr><td>Level4</td><td>99.36%</td><td>99.39%</td><td>99.60%</td></tr>
            <tr><td>Level5</td><td>99.21%</td><td>99.28%</td><td>99.49%</td></tr>
            <tr><td>Level6</td><td>99.25%</td><td>99.32%</td><td>99.57%</td></tr>
            <tr><td>Level7</td><td>99.26%</td><td>99.35%</td><td>99.60%</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>We obtained good results, which are encouraging for syntactic parsing of Standard Arabic with a deep learning model. Comparing the results of the LSTM, GRU and BILSTM models, we deduce that the BILSTM models give the best results.</p>
    </sec>
    <sec id="sec-9">
      <title>5. Conclusion and Perspectives</title>
      <p>In this paper, we presented our statistical method for parsing Standard Arabic using deep learning techniques. We built a model and obtained encouraging results. As a perspective, we think we can improve the learning stage by adding other features besides the POS features; we believe that enriching the features can give better results. We also plan to evaluate our method on another, external Arabic corpus.</p>
    </sec>
    <sec id="sec-10">
      <title>6. References</title>
      <p>[1] C. Aloulou. Une approche multi-agent pour l'analyse de l'arabe : modélisation de la syntaxe. PhD thesis, Thèse de doctorat en informatique, Ecole Nationale des Sciences de l ..., 2005.</p>
      <p>[2] R. B. Bahloul, N. Kadri, K. Haddar, and P. Blache. Evaluation and enrichment of Stanford parser using an Arabic property grammar. In A. F. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing - 18th International Conference, CICLing 2017, Budapest, Hungary, April 17-23, 2017, Revised Selected Papers, Part I, volume 10761 of Lecture Notes in Computer Science, pages 170-182. Springer, 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H. B.</given-names>
            <surname>Barhoumi</surname>
          </string-name>
          , Aloulou and
          <string-name>
            <surname>Z.</surname>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Analyse syntaxique statistique de la langue arabe</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Chouaib</surname>
          </string-name>
          .
          <article-title>Contributions à la correction automatique des erreurs syntaxiques dans la langueArabe</article-title>
          .
          <source>PhD thesis</source>
          ,
          <source>Faculté des Sciences Ben M'sik Université Hassan II Casablanca</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coavoux</surname>
          </string-name>
          .
          <article-title>Qu'apporte bert à l'analyse syntaxique en constituants discontinus? une suite de tests pour évaluer les prédictions de structures syntaxiques discontinues en anglais (what does bert contribute to discontinuous constituency parsing? a test suite to evaluate discontinuous constituency structure predictions in english)</article-title>
          .
          <article-title>In Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition)</article-title>
          .
          <source>Volume 2: Traitement Automatique des Langues Naturelles</source>
          , pages
          <fpage>189</fpage>
          -
          <lpage>196</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coavoux</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Crabbé</surname>
          </string-name>
          .
          <article-title>Comparaison d'architectures neuronales pour l'analyse syntaxique en constituants</article-title>
          .
          <source>In Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles longs</source>
          , pages
          <fpage>291</fpage>
          -
          <lpage>302</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coavoux</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Crabbé</surname>
          </string-name>
          .
          <article-title>Prédiction structurée pour l'analyse syntaxique en constituantspar transitions: modèles denses et modèles creux</article-title>
          .
          <source>Traitement Automatique des Langues</source>
          ,
          <volume>57</volume>
          (
          <issue>1</issue>
          ),
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Khelil</surname>
          </string-name>
          .
          <article-title>Construction semi-automatique d'une grammaire d'arbres adjoints pour l'analyse syntaxico-sémantique de l'arabe</article-title>
          .
          <source>PhD thesis</source>
          , Université d'Orléans; Université de la Manouba (Tunisie),
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khoufi</surname>
          </string-name>
          .
          <article-title>Une approche hybride pour l'analyse syntaxique de la langue arabe</article-title>
          .
          <source>PhD thesis</source>
          , Université de Sfax (Tunisie),
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khoufi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Aloulou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Belguith</surname>
          </string-name>
          .
          <article-title>Arabic probabilistic context free grammar induction from a treebank</article-title>
          .
          <source>Res. Comput. Sci.</source>
          ,
          <volume>90</volume>
          :
          <fpage>77</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khoufi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Aloulou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Belguith</surname>
          </string-name>
          .
          <article-title>A framework for language resource construction and syntactic analysis: Case of arabic</article-title>
          .
          <source>In International Conference on Intelligent Text Processing and Computational Linguistics</source>
          , pages
          <fpage>356</fpage>
          -
          <lpage>365</lpage>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khoufi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Aloulou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Belguith</surname>
          </string-name>
          .
          <article-title>Toward hybrid method for parsing modernstandard arabic</article-title>
          .
          <source>In 2016 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)</source>
          , pages
          <fpage>451</fpage>
          -
          <lpage>456</lpage>
          . IEEE,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Maamouri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Buckwalter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Mekki</surname>
          </string-name>
          .
          <article-title>The penn arabic treebank: Building a large-scale annotated arabic corpus</article-title>
          .
          <source>In NEMLAR conference on Arabic language resources and tools</source>
          , volume
          <volume>27</volume>
          , pages
          <fpage>466</fpage>
          -
          <lpage>467</lpage>
          . Cairo,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Maamouri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bies</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kulick</surname>
          </string-name>
          .
          <article-title>Enhancing the arabic treebank: a collaborative effort toward new annotation guideline</article-title>
          .
          <source>In LREC</source>
          , pages
          <fpage>3</fpage>
          -
          <lpage>192</lpage>
          . Citeseer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          , G. Corrado, and
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          .
          <article-title>Efficient estimation of word representations in vector space</article-title>
          .
          <source>arXiv preprint arXiv:1301.3781</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [16]
          <string-name>
            <surname>M. A. B. Mohamed</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Mallat</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          <string-name>
            <surname>Nahdi</surname>
            , and
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Zrigui</surname>
          </string-name>
          .
          <article-title>Exploring the potential of schemes in building nlp tools for arabic language</article-title>
          .
          <source>International Arab Journal of Information Technology (IAJIT)</source>
          ,
          <volume>12</volume>
          (
          <issue>6</issue>
          ),
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [17]
          <string-name>
            <surname>M. A. B. Mohamed</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Zrigui</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Zouaghi</surname>
            , and
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Zrigui</surname>
          </string-name>
          .
          <article-title>N-scheme model: An approach towards reducing arabic language sparseness</article-title>
          .
          <source>In 2015 5th International Conference on Information &amp; Communication Technology and Accessibility (ICTA)</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . IEEE,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Moukrim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tragha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Almalki</surname>
          </string-name>
          , et al.
          <article-title>An innovative approach to autocorrecting grammatical errors in arabic texts</article-title>
          .
          <source>Journal of King Saud University - Computer and Information Sciences</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sigogne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Constant</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Laporte</surname>
          </string-name>
          .
          <article-title>Intégration des données d'un lexique syntaxique dans un analyseur syntaxique probabiliste</article-title>
          .
          <source>arXiv preprint arXiv:1404.1872</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Wehrli</surname>
          </string-name>
          and
          <string-name>
            <surname>L. Nerima.</surname>
          </string-name>
          <article-title>The fips multilingual parser</article-title>
          .
          <source>In Language Production, Cognition, and the Lexicon</source>
          , pages
          <fpage>473</fpage>
          -
          <lpage>490</lpage>
          . Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>