<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Question answering system for the French language</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Perret</surname>
            <given-names>Laura</given-names>
          </name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institut interfacultaire d'informatique, University of Neuchâtel</institution>
          ,
          <addr-line>Pierre-à-Mazel 7, 2000 Neuchâtel</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes our first participation in the QA@CLEF monolingual and bilingual tasks, where our objective was to propose a question answering system designed to respond to French queries by searching French documents. We combined a classic information retrieval model (based on the Okapi probabilistic model) with a linguistic approach based mainly on syntactic analysis. In order to use our monolingual system in the bilingual task, we automatically translated into French queries written in seven other source languages, namely Dutch, German, Italian, Portuguese, Spanish, English and Bulgarian.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>For the first time, QA@CLEF-2004 proposed a question-answering track that allows various European languages to be used either as source or target language. Our aim in this study was to develop a question answering system for the French language and to evaluate its performance. In Section 1, we describe how we developed our question answering system to carry out the monolingual French task. As a first step, we applied a classical information retrieval model (based on the Okapi probabilistic model) to extract a small number of candidate paragraphs for each query. We then analyzed the queries and the sentences included in the retrieved paragraphs using a syntactic analyzer (FIPS) developed at the Laboratoire d'Analyse et de Technologie du Langage (LATL) at the University of Geneva. Finally, we applied a matching strategy that extracts responses from the best-ranked sentences. In Section 2, we describe the methods used to overcome the language barrier: we accessed various translation resources to translate queries into French and then, with French as the target language, used our question answering system to carry out the bilingual task. In Section 3, we discuss the results obtained with this technique, and in the last section we draw conclusions on the improvements we might envisage for our system.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Monolingual Question Answering</title>
      <p>The monolingual task was designed for six different languages, namely Dutch, French, German, Italian,
Portuguese, and Spanish. Given that our question answering system is language-dependent, we only addressed
the French monolingual task.</p>
    </sec>
    <sec id="sec-3">
      <title>1.1 Overview of the Test-Collection</title>
      <p>Given that we did not have previous experience in building a QA system, we developed a test set consisting of
57 hand-crafted factual queries based on a corpus consisting of the newspapers Le Monde (1994, 157 MB) and SDA
French (1994, 86 MB). Table 1 shows some examples of these queries.</p>
      <p>Table 1. Example queries with their answer strings:
Où se trouve le siège de l'OCDE ? (Where are the OECD headquarters?) → Paris
Qui est le premier ministre canadien ? (Who is the Canadian prime minister?) → Jean Chrétien
Combien de collaborateurs emploie ABB ? (How many employees does ABB have?) → 206 000</p>
    </sec>
    <sec id="sec-4">
      <title>1.2 Information Retrieval Scheme</title>
      <p>Firstly, we split the test collection into paragraphs using the &lt;TEXT&gt; tag as delimiter for Le Monde documents
and the &lt;TX&gt; tag as delimiter for the SDA French documents.</p>
      <p>For each paragraph, we then removed the most frequent words, using the French stopword list available at
www.unine.ch/info/clef/. From this stopword list we removed numeral adjectives such as « premier » (first),
« dix-huit » (eighteen), « soixante » (sixty), assuming that answers to factoid questions may contain numerical
data. The final stopword list contained 421 entries.</p>
      <p>
        After removing high-frequency words, we applied a light stemming algorithm during indexing (also
available at www.unine.ch/info/clef/ [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]). We assumed that looking for exact answers requires a lighter stemmer,
one that would not affect the part-of-speech categorization of terms. Our stemmer thus only removes
inflectional suffixes, so that singular and plural forms, as well as feminine and masculine forms, conflate to the
same root. Table 2 describes our stemming algorithm:
if word length is greater than 5
    if word ends with « aux » then replace « aux » with « al »
    else
        if word ends with 's' then remove 's'
        if word ends with 'r' then remove 'r'
        if word ends with 'e' then remove 'e'
        if word ends with 'é' then remove 'é'
        if word ends with a double letter then remove the last letter
For our indexing and search system, we used the classical SMART information retrieval system [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to retrieve the
ten best paragraphs for each query from the underlying collection. In our experiment, we chose the Okapi
probabilistic model (BM25), setting the constants to the following values: b=0.8, k1=2 and avdl=400.
      </p>
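      <p>As an illustration, the stemming rules of Table 2 can be sketched in Python (a sketch based on our reading of the table; the function name is ours):</p>

```python
def light_french_stem(word: str) -> str:
    """Light French stemmer: remove only inflectional suffixes (Table 2)."""
    if len(word) <= 5:            # only words longer than 5 letters are stemmed
        return word
    if word.endswith("aux"):      # plural « -aux » becomes singular « -al »
        return word[:-3] + "al"
    # otherwise strip the inflectional endings, checked in the order of Table 2
    for suffix in ("s", "r", "e", "é"):
        if word.endswith(suffix):
            word = word[:-1]
    if len(word) > 1 and word[-1] == word[-2]:  # collapse a trailing double letter
        word = word[:-1]
    return word
```

      <p>With these rules, « chevaux » conflates with « cheval » and « baronnes » with « baron », while short words such as « table » are left untouched.</p>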
    </sec>
    <sec id="sec-5">
      <title>1.3 French Syntactic Analysis</title>
      <p>
        In a second step, we used the French Interactive Parsing System (FIPS), a robust French syntactic analyzer
developed at the LATL in Geneva [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This tool is based on Chomsky’s Theory of Principles and
Parameters [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and the Government and Binding model [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. It takes a text as input, splits it into sentences,
and then for each sentence computes a syntactic structure.
      </p>
      <p>We took advantage of this tool to analyze the queries as well as the paragraphs retrieved by our classical IR
system. Table 3 shows the analysis obtained for Query #1 « Quel est le directeur général de FIAT ? » (Who
is the managing director of FIAT?)</p>
      <p>Term | POS | Concept number | Named entities | Lexeme number | Lemma
quel | PRO-INT-SIN-MAS | 211049516, 211000095 | | 0 | quel
est | VER-IND-PRE-SIN | 211048855, 211021507, 211049530 | | 4 | être
le | DET-SIN-MAS | 211045001 | | 8 | le
directeur | NOM-SIN-MAS | 211014688 | {0, 13, 24} | 11 | directeur
général | ADJ-SIN-MAS | 211014010 | | 21 | général
de | PRE | 211047305 | | 29 | de
FIAT | NOM-SIN-ING | 0 | {16} | 32 | FIAT
? | PONC-interrogation | 0 | | 37 | ?
[CP[DP quel ]i[C [TP[DP ei ][T est [VP [DP le [NP directeur [AP[DP ej ][A général [PP de [DP FIAT ]]]]]j]]]] ?]]
The last row in Table 3 shows the syntactic analysis of the complete sentence, while the other rows show
information about each word in the sentence. For each word, the first column contains the original term, the second
column the part-of-speech and the third the concept number. The fourth column lists the named entities, the fifth
the lexeme number, while the last column shows the lemma used as the dictionary entry.</p>
      <p>The original tool was adapted in order to provide two sorts of named entity recognition: numeral named entities
(Table 4) and noun named entities (Table 5).</p>
      <p>Table 4. Numeral named entities with examples:
numeral → premier (first)
percent → 23%
ordinal → 1er
special number → 751.04.09
cardinal → 1291
digit → 12, douze (twelve)</p>
    </sec>
    <sec id="sec-6">
      <title>1.4 Matching Strategy</title>
      <p>Once we had the queries and the best responding paragraphs analyzed by FIPS, we developed a matching
scheme, one that allowed our system to find the best answer snippet.</p>
      <sec id="sec-6-1">
        <title>Query Analysis</title>
        <p>We analyzed the queries in order to determine their relevant terms, targets and expected answer types. To
facilitate the retrieval of a response, we selected the relevant terms from a query. A term was considered relevant
if its idf was greater than 3.5 (idf = ln (n / df), where n denotes the number of documents in the collection and df
the number of documents that contain the term). This threshold was chosen empirically according to our
collection size (730,098 paragraphs) and corresponds to a df of about 20,000.</p>
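        <p>The relevant-term selection described above can be sketched as follows (a Python sketch; the function name and the df dictionary are ours, and df maps each term to its document frequency):</p>

```python
import math

N = 730_098          # number of paragraphs in the collection
THRESHOLD = 3.5      # empirical idf cutoff, corresponding to a df of about 20,000

def relevant_terms(query_terms, df):
    """Keep the query terms whose idf = ln(N / df) exceeds the threshold."""
    return [t for t in query_terms if math.log(N / df[t]) > THRESHOLD]
```

        <p>For instance, a rare term such as « directeur » (df around 5,000) passes the threshold, whereas a very frequent term such as « de » does not.</p>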
        <p>We then looked within the query for an interrogative word. As our syntactic analyzer was able to supply the
lemma for any known term (last column of Table 3), our interrogative words set was reduced to the following
list {quel, qui, que, quoi, où, quand, combien, pourquoi, comment}. Most queries contain an interrogative word
from this list except queries such as « Donnez le nom d'un liquide inodore et insipide. » (Name an odourless and
tasteless liquid.).
We defined the query target by choosing the first term after the interrogative word, whose part-of-speech tag was
labelled by FIPS as NOM-* (noun). If the query did not contain an interrogative word, the target was searched
from the beginning of the query. Some particular words were however excluded from the allowed targets since
they did not represent relevant information. The list of excluded targets was:
nombre, quantité, grandeur, dimension, date, jour, mois, année, an, époque, période, nom, surnom, titre, lieu
As illustrated in Table 6, using the query interrogative word and target, we categorized queries under six classes.</p>
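        <p>The target-identification rule described above can be sketched as follows (a Python sketch; we assume the parser output is available as (lemma, part-of-speech) pairs, as in Table 3):</p>

```python
INTERROGATIVES = {"quel", "qui", "que", "quoi", "où", "quand",
                  "combien", "pourquoi", "comment"}
EXCLUDED = {"nombre", "quantité", "grandeur", "dimension", "date", "jour",
            "mois", "année", "an", "époque", "période", "nom", "surnom",
            "titre", "lieu"}

def find_target(tagged_query):
    """Return the first noun (NOM-*) after the interrogative word,
    skipping excluded targets; search from the start if no
    interrogative word is present."""
    start = 0
    for i, (lemma, _pos) in enumerate(tagged_query):
        if lemma in INTERROGATIVES:
            start = i + 1
            break
    for lemma, pos in tagged_query[start:]:
        if pos.startswith("NOM") and lemma not in EXCLUDED:
            return lemma
    return None
```

        <p>On Query #1, the first noun after « quel » is « directeur », which becomes the target.</p>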
      </sec>
      <sec id="sec-6-2">
        <title>Query Classes</title>
        <p>Table 6. Specific targets used, together with the interrogative words, to classify queries:
numeral target: pourcentage, nombre, quantité, distance, poids, longueur, hauteur, largeur, âge, grandeur, dimension, superficie;
time target: date, jour, mois, année, an, époque, période;
function target: président, directeur, ministre, juge, sénateur, acteur, chanteur, artiste, présentateur, réalisateur.</p>
        <p>Once we classified the queries into their corresponding classes, we identified the expected answer type for each
class. Their order has no influence on the system. Table 7 shows the details of these classes.</p>
        <p>Table 7. Expected answer types for the six query classes:
(1) all noun named entities;
(2) location, country, town, river, mountain, proper name;
(3) quantity, weight, length and all numeral named entities;
(4) time, day, month, numeral, ordinal, special number, cardinal, digit;
(5) human, animate, collective, people, corporation, title, function, proper name;
(6) all noun named entities.</p>
      </sec>
      <sec id="sec-6-3">
        <title>Sentences Ranking</title>
        <p>Given that the analyzer split the paragraphs into sentences, we ranked the sentences according to the score
computed by Formula 1, where sentenceRelevant is the number of relevant query terms in the sentence,
sentenceLen is the number of terms in the sentence and queryRelevant is the number of relevant terms in the
query (without stopwords):
score = sentenceRelevant * sentenceLen / (sentenceLen – queryRelevant)
(1)
We then chose the ten sentences having the highest score. Table 8 shows the four best selected sentences for
Query #19 « Où se trouve la mosquée Al Aqsa ? » (Where is the Al Aqsa Mosque?).</p>
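        <p>Formula 1 can be sketched as follows (a Python sketch; the guard against a non-positive denominator is our assumption, since the formula is undefined when a sentence is no longer than the relevant part of the query):</p>

```python
def sentence_score(sentence_terms, relevant_query_terms):
    """Formula 1: sentenceRelevant * sentenceLen / (sentenceLen - queryRelevant)."""
    sentence_len = len(sentence_terms)
    query_relevant = len(relevant_query_terms)
    sentence_relevant = sum(1 for t in sentence_terms if t in relevant_query_terms)
    denominator = sentence_len - query_relevant
    if denominator <= 0:   # formula undefined for very short sentences
        return 0.0
    return sentence_relevant * sentence_len / denominator
```
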
      </sec>
      <sec id="sec-6-4">
        <title>Document and sentence</title>
        <p>[ATS.950417.0033] : la police interdit aux juifs de prier sur l' esplanade où se trouve la
mosquée al-Aqsa , troisième lieu saint de l' islam après la Mecque et Médine .
[ATS.940304.0093] : la police a expliqué qu' elle bouclait le site le plus sacré du judaïsme
jusqu' à la fin de la prière du vendredi à la mosquée Al -- Aqsa , laquelle se trouve sur l'
Esplanade du Temple qui domine le Mur des Lamentations .
[ATS.940405.0112] : la mosquée al Aqsa rouverte aux touristes .
[ATS.940606.0081] : cette phrase laisse ouverte la possibilité pour M. Arafat d' aller prier
à la mosquée al-Aqsa à Jérusalem .</p>
      </sec>
      <sec id="sec-6-5">
        <title>Snippets Extraction</title>
        <p>For each selected sentence, we searched the identified query target. If the target was never found, we selected the
first sentence for the rest of the process. We then listed the terms of the expected answer types in a window
containing the 4 terms before and after the target term. Confidence in this sentence was computed according to
Formula 2 where score was the initial score of the sentence and maxScore the score of the best-ranked sentence
for the current query. If the maxScore was equal to zero, the sentence score was also set to zero.
confidence = score / maxScore
(2)
For each expected type term found, we extracted the closest DP (determiner-phrase) or NP (noun-phrase) group
node from the sentence analysis tree. Thus, each sentence may produce one or more nodes (as shown in Table 9,
2nd and 3rd row). From the list obtained in the previous step, we then eliminated all nodes contained in other
nodes whose difference level was less than 7. The level represents the node depth in the syntactic analysis tree.
We then pruned the remaining nodes by extracting the part of the node that did not contain query terms. Finally,
following the pruning process, we eliminated any snippets that did not contain expected answer terms. For Query
#19 where the correct answer is “Jérusalem”, Table 9 lists the remaining nodes.</p>
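        <p>The window extraction and Formula 2 can be sketched as follows (a Python sketch; the function names and the type_of dictionary, mapping a term to its named-entity type, are ours):</p>

```python
def candidate_terms(sentence_terms, target, expected_types, type_of, width=4):
    """Collect terms of the expected answer type within `width` terms
    before and after the target term."""
    try:
        i = sentence_terms.index(target)
    except ValueError:
        return []
    window = sentence_terms[max(0, i - width): i + width + 1]
    return [t for t in window if t != target and type_of.get(t) in expected_types]

def confidence(score, max_score):
    """Formula 2: confidence = score / maxScore (zero when maxScore is zero)."""
    return 0.0 if max_score == 0 else score / max_score
```
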
      </sec>
      <sec id="sec-6-6">
        <title>Document</title>
        <p>ATS.940304.0093
ATS.940606.0081
ATS.940606.0081
LEMONDE94-001632-19940514
ATS.941107.0105
ATS.940304.0093
ATS.940304.0093
LEMONDE94-001740-19940820
ATS.940405.0112
ATS.951223.0020
ATS.951223.0020</p>
      </sec>
      <sec id="sec-6-7">
        <title>Voting Procedure Confidence</title>
        <p>We supposed that an answer having a lower confidence than the best candidate could nevertheless be a good
answer if it was supported by more documents. Therefore, the last step of the process was a voting procedure that
chooses which remaining snippet should be returned as the response.
First we split each snippet into words, and then we counted the occurrences of each non-stopword in the other
snippets. Finally, we ranked the snippets according to their scores computed using Formula 3, where len equals
1 for definition queries and the snippet word count for factoid queries. Indeed, as definition responses
may be longer than factoid responses, we did not want to penalize long definition responses.
score = occurrencesCount / len
(3)
If occurrencesCount was equal to zero, we chose the first snippet but decreased its confidence. Otherwise, we
chose the snippet with the highest score as the answer. Table 10 shows the snippet chosen for Query #19.</p>
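        <p>The voting procedure of Formula 3 can be sketched as follows (a Python sketch; whitespace tokenization and the function name are our assumptions):</p>

```python
def vote_scores(snippets, stopwords, definition=False):
    """Formula 3: for each snippet, count the occurrences of its
    non-stopwords in the other snippets, then divide by the snippet
    word count for factoid queries (len = 1 for definition queries)."""
    tokenized = [s.split() for s in snippets]
    scores = []
    for i, words in enumerate(tokenized):
        others = [w for j, other in enumerate(tokenized) if j != i for w in other]
        count = sum(others.count(w) for w in words if w not in stopwords)
        length = 1 if definition else len(words)
        scores.append(count / length)
    return scores
```

        <p>The snippet with the highest score is returned; a short snippet whose words recur in several other candidate snippets thus outranks a longer snippet with little support.</p>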
      </sec>
      <sec id="sec-6-8">
        <title>Document</title>
        <p>ATS.940606.0081</p>
      </sec>
      <sec id="sec-6-9">
        <title>Confidence</title>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>2. Bilingual Question Answering</title>
      <p>Given that our question answering system was developed for the French language, we only addressed bilingual
tasks in which French was the target language. We therefore submitted results for Dutch, German, Italian,
Portuguese, Spanish, English and Bulgarian as source languages, with French as the target language.</p>
    </sec>
    <sec id="sec-8">
      <title>2.1 Automatic Query Translation</title>
      <p>
        Since our QA system was designed to respond to French queries concerning French documents, we needed to
translate the original queries formulated in other languages into French. In order to overcome language barriers,
we based our approach on free and readily available translation resources that would automatically translate
queries into the desired target language, namely French [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. These resources were:
1. Reverso (www.reverso.fr)
2. TranslationExperts.com (intertran.tranexp.com)
3. Free2Professional Translation (www.freetranslation.com)
4. AltaVista (babelfish.altavista.com)
5. Systran™ (www.systranlinks.com)
6. Google™ (www.google.com/language_tools)
7. WorldLingo™ (www.worldlingo.com)
      </p>
    </sec>
    <sec id="sec-9">
      <title>2.2 Translation Examples</title>
      <p>Table 12 shows, for Query #1, the original query in each source language together with its automatic translation into French:
Кой е управителният директор на ФИАТ? (Bulgarian) → Qui å upravitelniiat direktor na FIAT?
Wer ist der Geschäftsführer von FIAT? (German) → Qui est le directeur général de DéCRET ?
Who is the managing director of FIAT? (English) → Qui est le directeur de FIAT ?
¿Quién es el director gerente de FIAT? (Spanish) → Qui est-ce qui est le directeur gérant de CONSENTEMENT ?
Chi è l'amministratore delegato della Fiat? (Italian) → Qui est le directeur exécutif général de Fiat ?
Wie is de bestuursvoorzitter van Fiat? (Dutch) → Qui est-il le président d'administration de fiat ?
Quem é o administrador-delegado da Fiat? (Portuguese) → Qui est l'agent d'administrateur-commission de Fiat ?</p>
    </sec>
    <sec id="sec-10">
      <title>3. Results</title>
      <sec id="sec-10-1">
        <title>Assessment</title>
        <p>Each answer was assessed and marked as correct, inexact, unsupported or wrong, as illustrated in the following
examples. An answer was judged correct by a human assessor when the answer string consisted exactly of the
correct expected answer and this answer was supported by the returned document. For example, the pair
["Cesare Romiti", ATS.940531.0063] was judged correct for the Query #1 « Quel est le directeur général de
FIAT ? » (Who is the managing director of FIAT?), since the supporting document contained the string
« directeur général de Fiat Cesare Romiti ». Secondly, an answer was judged inexact when the answer string
contained more or less than just the correct answer and the answer was supported by the returned document. For
example, the pair ["premier ministre irlandais", ATS.940918.0057] was judged inexact for the Query #177
« Quelle est la fonction d'Albert Reynolds en Irlande ? » (What office does Albert Reynolds hold in Ireland?),
since the adjective « irlandais » was redundant. Thirdly, an answer was judged unsupported when the returned
document did not support the answer string. Since our system only searched within the collection documents
provided, none of our answers was judged unsupported. Finally, an answer was judged wrong when the answer
string was not a correct answer. For example, the pair ["Underground", ATS.950528.0053] was judged wrong
for the Query #118 « Qui a remporté la palme d'or à Cannes en 1995 ? » (Who won the Cannes Film Festival in
1995?), since « Underground » is the movie title whereas « Emir Kusturica » is the movie director and was the
expected answer. Table 13 shows the results obtained for each source language. Given that the target language
was French, logically the best score was obtained in the monolingual task where no translation was needed.
We can see that the translation process resulted in a substantial performance decrease compared to the
monolingual French experiment (up to 73.5% for Bulgarian). It was surprising to note that the English
translation had the next-to-worst performance, just ahead of Bulgarian, the only source language written in the
Cyrillic alphabet. However, a deeper analysis showed that in 7.5% (15/200) of cases, a majority of the source-language
translations (&gt; 4) provided a correct answer, whereas in 2.5% (5/200) of cases they agreed on inexact
answers. This suggests that for about 10% of the queries, the translation did not have much effect on the
system's ability to find a correct or inexact answer.</p>
        <p>Looking at the answers marked as wrong in more detail, we detected some possible causes in addition to the
translation problem. First, for some queries we could not retrieve any corresponding document from the
collection. Second, we sometimes chose the wrong target and/or expected answer type. Third, we were not able to
account for the time reference, as in Query #22 « Combien a coûté la construction du Tunnel sous la Manche ? »
(How much did the Channel Tunnel cost?) for which we provided the answer ["28,4 milliards de francs",
LEMONDE94-002679-19940621] supported by the sentence "à l'origine, la construction du tunnel devait coûter
28,4 milliards de francs". In this case, our answer gave the initial estimate but not the final cost.</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>Conclusion</title>
      <p>For our first participation in the QA@CLEF track, we proposed a question answering system designed to search
French documents in response to French queries. To do so we used a French syntactic analyzer and a named
entities recognition technique in order to assist in identifying the expected answers. We then proposed a
matching strategy based on the node extraction from the analysis tree, followed by a ranking process.
In our bilingual task we used automatic translation resources to translate the original queries from Dutch,
German, Italian, Portuguese, Spanish, English and Bulgarian into French. The remainder of this process was the
same as that used in the monolingual task.</p>
      <p>The results showed performance levels of 24.5% for the monolingual task and up to 17% (German) for the
bilingual task. There are several reasons for these results, among them being the selection process for the target
and expected answer types. In the bilingual task, we verified that, as expected, the translation step was a
significant factor in performance level losses, given that for German the performance level had decreased by
about 30%.</p>
      <p>Our system could be improved by using more in-depth syntactic analyses for both queries and paragraphs. Also,
the target identification and queries taxonomy could be extended in order to obtain a more precise expected
answer type.</p>
    </sec>
    <sec id="sec-12">
      <title>Acknowledgments</title>
      <p>The author would like to thank Eric Wehrli, Luka Nerima and Violeta Seretan from LATL (University of
Geneva) for supplying the FIPS French syntactic analyzer as well as the task CLEF-2004 organizers for their
efforts in developing various European languages test-collections. The author would also like to thank C.
Buckley from SabIR for giving us the opportunity to use the SMART system. Furthermore, the author would
like to thank J. Savoy for his advice on the preliminary version of this article as well as Pierre-Yves Berger for
his contributions in the area of automatic translation. This research was supported in part by the SNSF (Swiss
National Science Foundation) under grant 21-66 742.01.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Savoy</surname>
            <given-names>J.,</given-names>
          </string-name>
          <article-title>A stemming procedure and stopword list for general French corpora</article-title>
          .,
          <source>Journal of the American Society for Information Science</source>
          ,
          <year>1999</year>
          ,
          <volume>50</volume>
          (
          <issue>10</issue>
          ), p.
          <fpage>944</fpage>
          -
          <lpage>952</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Savoy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <source>Report on CLEF-2003 multilingual tracks</source>
          ,
          <source>In: Proceedings of CLEF</source>
          <year>2003</year>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>7</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Savoy</surname>
            <given-names>J.,</given-names>
          </string-name>
          <article-title>Combining multiple strategies for effective cross-language retrieval</article-title>
          .
          <source>In: Information Retrieval</source>
          ,
          <year>2004</year>
          ,
          <volume>7</volume>
          (
          <issue>1-2</issue>
          ), p.
          <fpage>121</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Salton</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <article-title>The Smart Retrieval System Experiments in automatic document processing</article-title>
          , Prentice-Hall, Englewood Cliffs,
          <year>1971</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Laenzlinger</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wehrli</surname>
            <given-names>E.</given-names>
          </string-name>
          , FIPS :
          <article-title>Un analyseur interactif pour le français</article-title>
          ,
          <source>In: TA Informations</source>
          ,
          <year>1991</year>
          ,
          <volume>32</volume>
          (
          <issue>2</issue>
          ), p.
          <fpage>35</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Wehrli</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <article-title>Un modèle multilingue d'analyse syntaxique</article-title>
          , In: A.
          <string-name>
            <surname>Auchlin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Burer</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Filliettaz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Grobet</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Moeschler</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Perrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Rossari</surname>
          </string-name>
          et L. de Saussure, Structures et discours --
          <source>Mélanges offerts à Eddy Roulet</source>
          ,
          <year>2004</year>
          , Québec, Editions, Nota bene, p.
          <fpage>311</fpage>
          -
          <lpage>329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Wehrli</surname>
            <given-names>E.,</given-names>
          </string-name>
          <article-title>L'analyse syntaxique des langues naturelles : Problèmes et</article-title>
          méthodes,
          <year>1997</year>
          , Paris, Masson.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Chomsky</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lasnik</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <article-title>The theory of principles and parameters</article-title>
          , In: Chomsky N., (
          <year>1995</year>
          )
          <article-title>The Minimalist Program</article-title>
          . Cambridge, MIT Press, pp.
          <fpage>13</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Chomsky</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <source>The Minimalist Program</source>
          ,
          <year>1995</year>
          , Mass., MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Haegeman</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <article-title>Introduction to government and binding theory</article-title>
          ,
          <year>1994</year>
          , Oxford, Basil Blackwell.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Magnini</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romagnoli</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vallin</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herrera</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peñas</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peinado</surname>
            <given-names>V.</given-names>
          </string-name>
          , Verdejo F.,
          <string-name>
            <surname>de Rijke</surname>
            <given-names>M.,</given-names>
          </string-name>
          <article-title>The multiple language question answering track at CLEF 2003</article-title>
          ,
          <source>In: Proceedings of CLEF</source>
          <year>2003</year>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>299</fpage>
          -
          <lpage>310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Negri</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tanev</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Magnini</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <article-title>Bridging languages for question answering: DIOGENE at CLEF 2003</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>321</fpage>
          -
          <lpage>329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Echihabi</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oard</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hermjakob</surname>
            <given-names>U.</given-names>
          </string-name>
          ,
          <article-title>Cross-Language question answering at the USC Information Sciences Institute</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>331</fpage>
          -
          <lpage>337</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Jijkoun</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mishne</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Rijke</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <article-title>The University of Amsterdam at QA@CLEF2003</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>339</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Plamondon</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foster</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <article-title>Quantum, a French/English cross-language question answering system</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>355</fpage>
          -
          <lpage>362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Neumann</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sacaleanu</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <article-title>A Cross-language question/answering-system for German and English</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>363</fpage>
          -
          <lpage>372</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Sutcliffe</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabbay</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Gorman</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Cross-language French-English question answering using the DLT System at CLEF 2003</article-title>
          ,
          <source>In: Proceedings of CLEF 2003</source>
          , Trondheim,
          <year>2003</year>
          , p.
          <fpage>373</fpage>
          -
          <lpage>378</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Voorhees</surname>
            <given-names>E. M.</given-names>
          </string-name>
          ,
          <article-title>Overview of the TREC 2003 question answering track</article-title>
          ,
          <source>In: Notebook of the Twelfth Text REtrieval Conference (TREC 2003)</source>
          , Gaithersburg,
          18-21 November
          <year>2003</year>
          , p.
          <fpage>14</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Harabagiu</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moldovan</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clark</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowden</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bensley</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Answer mining by combining extraction techniques with abductive reasoning</article-title>
          ,
          <source>In: Notebook of the Twelfth Text REtrieval Conference (TREC 2003)</source>
          , Gaithersburg,
          18-21 November
          <year>2003</year>
          , p.
          <fpage>46</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Voorhees</surname>
            <given-names>E. M.</given-names>
          </string-name>
          ,
          <article-title>Overview of the TREC 2002 question answering track</article-title>
          , In:
          <string-name>
            <surname>Voorhees</surname>
            <given-names>E.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buckland</surname>
            <given-names>L.P.</given-names>
          </string-name>
          (Eds.):
          <source>Proceedings of the Eleventh Text REtrieval Conference (TREC 2002)</source>
          , Gaithersburg,
          19-22 November
          <year>2002</year>
          , p.
          <fpage>115</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Soubbotin</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soubbotin</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <article-title>Use of patterns for detection of likely answer strings: A systematic approach</article-title>
          , In:
          <string-name>
            <surname>Voorhees</surname>
            <given-names>E.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buckland</surname>
            <given-names>L.P.</given-names>
          </string-name>
          (Eds.):
          <source>Proceedings of the Eleventh Text REtrieval Conference (TREC 2002)</source>
          , Gaithersburg,
          19-22 November
          <year>2002</year>
          , p.
          <fpage>325</fpage>
          -
          <lpage>331</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>