         Report of MIRACLE team for the Ad-Hoc track in CLEF 2006
                         José Miguel Goñi-Menoyo1, José Carlos González-Cristóbal1, 3
                                           Julio Villena-Román2, 3
                                      1 Universidad Politécnica de Madrid
                                      2 Universidad Carlos III de Madrid
                                      3 DAEDALUS - Data, Decisions and Language, S.A.

                  josemiguel.goni@upm.es, josecarlos.gonzalez@upm.es,
                                 julio.villena@uc3m.es

                                                        Abstract
This paper presents the 2006 approach of the MIRACLE team to the Ad-Hoc Information Retrieval track. The
experiments for this campaign continue to test our IR approach. First, a baseline set of runs is obtained, including
standard components: stemming, transforming, filtering, entity detection and extraction, and others. Then, an
extended set of runs is obtained using several types of combinations of these baseline runs.
The improvements introduced for this campaign are few: we have integrated an entity recognition and
indexing prototype tool into our tokenization scheme, and we have run more combining experiments for the robust
multilingual case than in previous campaigns. However, no significant improvements have been achieved.

For this campaign, runs were submitted for the following languages and tracks:
  - Monolingual: Bulgarian, French, Hungarian, and Portuguese.
  - Bilingual: English to Bulgarian, French, Hungarian, and Portuguese; Spanish to French and Portuguese; and
     French to Portuguese.
  - Robust monolingual: German, English, Spanish, French, Italian, and Dutch.
  - Robust bilingual: English to German, Italian to Spanish, and French to Dutch.
  - Robust multilingual: English to robust monolingual languages.

We still need to work harder to improve some aspects of our processing scheme; the most important one, in our
view, is entity recognition and normalization.

Categories and Subject Descriptors
H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.2 Information Storage; H.3.3
Information Search and Retrieval; H.3.4 Systems and Software. E.1 [Data Structures]; E.2 [Data Storage
Representations]. H.2 [Database Management]

Keywords
Linguistic Engineering, Information Retrieval, Trie Indexing

1   Introduction
The MIRACLE team is made up of three university research groups located in Madrid (UPM, UC3M and UAM)
along with DAEDALUS, a company founded in 1998 as a spin-off of two of these groups. DAEDALUS is a
leading company in linguistic technologies in Spain and is the coordinator of the MIRACLE team. This is our
fourth participation in CLEF, after years 2003, 2004, and 2005. As well as bilingual, monolingual and robust
multilingual tasks, the team has participated in the ImageCLEF, Q&A, and GeoCLEF tracks.

The starting point was a set of basic components: stemming, transformation (transliteration, elimination of
diacritics and conversion to lowercase), filtering (elimination of stop and frequent words), proper noun
detection and extraction, and paragraph extraction, among others. Some of these basic components are used in
different combinations and orders of application for document indexing and for query processing. Combinations
of results were also tested, mainly by averaging or by selective combination of the documents retrieved by
different approaches for a particular query. When there is evidence that one system has better precision at one
end of the recall range (i.e. 1), complemented by better precision of another system at the other end
(i.e. 0), both are combined to benefit from their complementary results.
Our group has used its own indexing and retrieval engine, which is based on the trie data structure [1]. Tries
have been successfully used by the MIRACLE team for years for efficient storage and retrieval of huge
lexical resources, combined with a continuation-based approach to morphological treatment [15]. However, the
adaptation of these structures to manage document indexing and retrieval efficiently for IR applications has been
a hard task, mainly regarding the performance of index construction.

For this campaign, runs were submitted for the following languages and tracks:

             -   Monolingual: Bulgarian, French, Hungarian, and Portuguese.
             -   Bilingual: English to Bulgarian, French, Hungarian, and Portuguese; Spanish to French and
                 Portuguese; and French to Portuguese.
             -   Robust monolingual: German, English, Spanish, French, Italian, and Dutch.
             -   Robust bilingual: English to German, Italian to Spanish, and French to Dutch.
             -   Robust multilingual: English to robust monolingual languages.

2   Description of the MIRACLE Toolbox
The MIRACLE toolbox has already been described in the papers of previous campaigns [11], [12], [16]. Document
collections and topics were pre-processed before feeding the indexing and retrieval engine, using
different combinations of elementary processes. We repeat here some relevant facts about these:

    -   Extraction: When the narrative field of a topic is used, the extraction treatment applies a special filter:
        some patterns that were obtained from the topics of past campaigns are eliminated, since they are
        recurrent and mislead the retrieval process. For English, for example, we can mention patterns such as
        “… are not relevant.” or “… are to be excluded”. All sentences that contain such patterns are filtered out.

    -   Paragraph extraction: We have not used paragraph indexing this year, since the results obtained in this
        and past campaigns have been disappointing.

    -   Tokenization: This process extracts basic text components, detecting and isolating punctuation
        symbols. Some basic entities are also treated, such as numbers, initials, abbreviations, years, and some
        proper nouns (see next item). The outcomes of this process are only single words, years that appear as
        numbers in the text (e.g. 1995, 2004, etc.), or entities.

    -   Entities: We consider that entity detection and normalization play a central role in Information
        Retrieval, but they are a difficult task. For this year we have integrated into the tokenization process a
        special module that detects and marks some entities that had previously been collected from several
        sources into a lexical database of entities. These entities, which can be people names, place names,
        initials, abbreviations, etc., can consist of one or more words and special symbols, and their correct
        treatment is integrated into the tokenizer. For now, no entity normalization is done, so the same entity
        can appear in different forms, which are treated as different entities.

    -   Filtering: Stopword lists in the target languages were initially obtained from [38], but were extended
        using several other sources and our own knowledge and resources. We have also compiled other lists of
        words to exclude from the indexing and querying processes, which were obtained from the topics of
        past CLEF editions and from our own background. We consider that such words have no semantics in
        the type of queries used in CLEF. As examples, we can mention some words from the English list: find,
        appear, relevant, document, report, etc.

    -   Transformation: The items resulting from tokenization were normalized by converting all uppercase
        letters to lowercase and eliminating accents. This was not done for Bulgarian.

    -   Stemming: We used standard stemmers from Porter [28] for most languages, except for Hungarian and
        Bulgarian, where we used stemmers from Neuchatel [38].

    -   Indexing: When all the documents processed through a combination of the former steps are ready for
        indexing, they are fed into our indexing trie engine to build the document collection index.
     -    Retrieval: When all the documents processed by a combination of the former steps are topic queries,
          they are fed to an ad-hoc front-end of the retrieval trie engine to search the previously built document
          collection index. In the 2006 experiments, only OR combinations of the search terms were used. The
          retrieval model used is the well-known Robertson's Okapi BM25 formula [32] for the probabilistic
          retrieval model, without relevance feedback.

     -    Combination: After retrieval, some other special combination processes were used to define additional
          experiments: the results of several basic experiments can be combined in different ways. The
          underlying hypothesis is that, to some extent, documents with a good score in almost all
          experiments are more likely to be relevant than documents that have a good score in one
          experiment but a bad one in others. Two strategies were followed for combining experiments
          (see the sketch after this list):

               ►    Average: The relevance figures obtained in all the experiments to be combined for a particular
                    document in a given query are added. This approach combines the relevance figures of the
                    experiments without highlighting any particular experiment.

               ►    Asymmetric WDX combination: In this type of combination, two experiments are combined in
                    the following way: the relevance of the first D documents of each query in the first experiment
                    is preserved in the resulting combined relevance, whereas the relevances of the remaining
                    documents in both experiments are combined using weights W and X. For example, for the
                    combination labelled “011”, only the most relevant document of the first basic experiment is
                    kept as such, and then all the remaining documents retrieved come from the second basic
                    experiment. Finally, all the obtained results are re-sorted using the combined relevance values.
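To make the two strategies concrete, the following is a minimal sketch of how they could be implemented,
assuming that a run is represented as a dictionary mapping each query identifier to a dictionary of document
identifiers and relevance scores; this representation and the function names are illustrative assumptions of ours,
not part of the MIRACLE toolbox.

    # Sketch of the two combination strategies. A "run" is assumed to be
    # {query_id: {doc_id: relevance}}.

    def average_combination(runs):
        """Add the relevance figures of all runs for each (query, document) pair."""
        combined = {}
        for run in runs:
            for query_id, docs in run.items():
                scores = combined.setdefault(query_id, {})
                for doc_id, rel in docs.items():
                    scores[doc_id] = scores.get(doc_id, 0.0) + rel
        return combined

    def wdx_combination(run1, run2, w, d, x):
        """Asymmetric WDX combination of two runs: keep the first D documents of
        run1 unchanged, and weight the remaining documents by W and X."""
        combined = {}
        for query_id in set(run1) | set(run2):
            ranked1 = sorted(run1.get(query_id, {}).items(),
                             key=lambda item: item[1], reverse=True)
            top = dict(ranked1[:d])                        # preserved unchanged
            rest = {doc_id: w * rel for doc_id, rel in ranked1[d:]}
            for doc_id, rel in run2.get(query_id, {}).items():
                if doc_id not in top:
                    rest[doc_id] = rest.get(doc_id, 0.0) + x * rel
            merged = {**rest, **top}
            # Re-sort all documents by the combined relevance values.
            combined[query_id] = dict(sorted(merged.items(),
                                             key=lambda item: item[1], reverse=True))
        return combined

For instance, wdx_combination(run1, run2, w=0, d=1, x=1) reproduces the “011” case described above: only the
top document of the first run is preserved, and the remaining documents come from the second run.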

3    Description of the experiments
The experiment names reflect the processes applied to the document collections and the topic sets. The naming
scheme we have used this year for basic experiments is as follows:

                              <doc-lang><doc-process><topic-lang><topic-process>

<doc-lang> and <topic-lang> are the standard two-letter abbreviations for the document or topic
languages 1 (i.e. bg, de, en, es, fr, hu, it, nl, and pt), or ml for the document language in multilingual robust runs.
Except for multilingual runs, both should be identical.

<doc-process> reflects the processes applied to the document collection for the experiment. The first letter is
always 2 F (for indexing the full texts). The second letter is S or W. Letter S is used for the standard or baseline
treatment: tokenization, filtering, stemming, and transformation; whereas W is used for a non-stemming
treatment: tokenization, filtering, and transformation.

<topic-process> reflects the processes applied to the topic set for the experiment. For monolingual runs, it
also consists of two characters, the second being S or W, with the same meaning as in document text
processing. The first character is one of the digits 2, 3, 4, 5 or 6, reflecting how many times the title (T),
description (D) or narrative (N) of the topic has been taken into account, according to the following scheme: 2
(TD), 3 (TDN), 4 (TTTDN), 5 (TTTTDN), and 6 (TTTTDDN). For bilingual runs, some information is added
before these two characters: the translation engine used and the standard two-letter code of the source topic
language. For robust runs, the letter R is present in the first position of this field. The translation engine codes
used are the following: L (WordLingo [46]), W (Webtrance [34]), S (Systran [35]), V (Reverso [31]), A
(Atrans [2]), B (Bultra [29]), and M (Mobicat [27]).
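
As an illustration of this field-weighting scheme, the following sketch builds the query text of a topic by
repeating the title, description and narrative according to the digit codes above; the topic representation and the
function name are illustrative assumptions of ours.

    # Sketch: expand a topic into a weighted query text according to the digit codes.
    # A topic is assumed to be a dict with "title", "description" and "narrative" fields.

    FIELD_SCHEMES = {2: "TD", 3: "TDN", 4: "TTTDN", 5: "TTTTDN", 6: "TTTTDDN"}

    def expand_topic(topic, scheme):
        """Return the query text, repeating fields as indicated by the scheme digit."""
        fields = {"T": topic["title"], "D": topic["description"], "N": topic["narrative"]}
        return " ".join(fields[code] for code in FIELD_SCHEMES[scheme])

For example, scheme 4 (TTTDN) repeats the title three times, so title terms weigh more heavily in the OR query
sent to the retrieval engine.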

For combining experiments, we depart from this scheme. In the <doc-process> position we indicate the type of
combination: xWDX, for the asymmetric WDX combination (see the meaning of the digits W, D, and X in the
previous section); or y, for the average combination of runs. In the <topic-process> position, the runs that are
combined are indicated in an ad-hoc, rather convoluted encoding. For example, run “nlx021nlRLfrFW4FS4” refers to
a combined (robust) experiment made on the Dutch collection, with W=0, D=2, X=1, using French topics
translated into Dutch with the WordLingo [46] engine. The experiments combined are referred to as FW4 and
FS4, respectively; that means the combined experiments are “nlFWnlRLfr4W” and “nlFSnlRLfr4S”. 3 Similarly, for
average experiments, run “fryfrFS3456” refers to an average-combining experiment run on the French text
collection, using French topics (monolingual), and averaging the results from runs FS3, FS4, FS5, and FS6; that
is, “frFSfr3S”, “frFSfr4S”, “frFSfr5S”, and “frFSfr6S”.

1 These refer to the topic language of monolingual runs or the target language in a translated topic for cross-lingual runs.
2 The letter H was reserved for paragraph indexing, a process which was not used this year, as mentioned.

For (robust) multilingual runs, a special naming convention is used. It will be described in a later section.

4   Monolingual and bilingual tasks
The following figures and tables summarize the performance of our best experiments in the monolingual and
bilingual tasks. The details of all the experiments run, their performance figures, and some graphic
representations can be found in the appendix.

[Figure: recall-precision curves of the best monolingual runs (bgx101bgFS4FS5, enx101enFS3FS4, frFSfr6S,
hux101huFS3FS4, ptx101ptFS3FS6) and of the best bilingual runs (bgFSbgWen3S, frFSfrVen3S, huFShuMen4S,
ptFSptSfr3S).]
                       Best average precision figures for each monolingual and bilingual language pair
              Monolingual                                         Bilingual
    lang    run                 avgp             src-dst    run                avgp
     bg     bgx101bgFS4FS5      0.3119            en-bg     bgFSbgWen3S        0.2120
     en     enx101enFS3FS4      0.3965            en-hu     huFShuMen4S        0.2420
     fr     frFSfr6S            0.4026            en-fr     frFSfrVen3S        0.3868
     hu     hux101huFS3FS4      0.3089            de-fr     frFSfrVde3S        0.3805
     pt     ptx101ptFS3FS6      0.4045            es-fr     frFSfrSes3S        0.3511
                                                  pt-fr     frFSfrSpt3S        0.3470
                                                  fr-pt     ptFSptSfr3S        0.3750
                                                  en-pt     ptFSptSen4S        0.2926
                                                  es-pt     ptFSptLes6S        0.2838

Both in the monolingual and the bilingual cases, results obtained for “near” languages, such as French and
Portuguese, are better than those obtained for Bulgarian and Hungarian. In the bilingual case, the French
experiments have better average precision. Combined runs appear in several rows of the monolingual table, but
in some cases these runs were not submitted. We did not run combined bilingual experiments, so they do not
appear in the bilingual table. Note that in all cases the experiments that take the topic narrative into account
achieve the best results.

5   Robust tasks
The following figures and tables summarize the performance of our best experiments in the monolingual and
bilingual robust tasks. The details of all the experiments run, their performance figures, and some graphic
representations can also be found in the appendix. We have not used a different system or different types of runs
for the robust case, so we just present the results obtained. Please note that in these tables geometric mean
average precision figures are given instead of average precision figures.




3 We have tested only some WDX sets: 011, 021, 091, 101, and 153. Regarding combined experiments, we tested these
combinations: FS3FS4, FS3FS6, FS4FS5, FW3FS3, FW4FS4, FW4FS5, and FW4FS6.
[Figure: recall-precision curves of the best robust monolingual runs (dex011deRFW3FS3, enyenRFS3456,
esFSesR3S, frFSfrR3S, itFSitR6S, nlFSnlR4S) and of the best robust bilingual runs (esx011esRLitFW3FS3,
deFSdeRSen3S, nlFSnlRLfr6S).]
         Best geometric mean average precision figures for each monolingual and bilingual language pair
    lang    run              gmavgp            src-dst    run                      gmavgp
     de     deFSdeR6S        0.1198             en-de     dex021deRSenFW3FS3       0.0662
     en     enFSenR3S        0.1016             fr-nl     nlFSnlRLfr3S             0.1253
     es     esFSesR3S        0.2650             it-es     esFSesRLit3S             0.0833
     fr     frFSfrR3S        0.1369
     it     itFSitR6S        0.1153
     nl     nlFSnlR4S        0.2073

In the monolingual case, results for Spanish are much better than those obtained for the rest of the languages. In
all cases, the baseline runs obtained better results than the combined ones. Curiously, the runs with Dutch as
target language obtain better results than the runs in other languages. Note that in all cases the experiments
taking the topic narrative into account obtain the best results, as happened in the non-robust case.

We used the traditional approach to multilingual information retrieval, translating the topic queries into the
language of each document collection. The probabilistic BM25 [32] approach used for monolingual retrieval gives
relevance measures that depend heavily on parameters of the particular monolingual collection, so it is not
well suited to this type of multilingual merging, since relevance measures are not comparable across collections.
In spite of this, we made merging experiments using the relevance figures obtained from each monolingual
retrieval process, considering three cases 4 (a small sketch of the two normalizations follows the list):

    -    Using the original relevance measures for each document, as obtained from the monolingual retrieval
         process. The results are composed of the documents with the greatest relevance measures.

    -    Normalizing relevance measures with respect to the maximum relevance measure obtained for each
         topic query i (normal normalization):

                                  rel_i^{norm} = rel_i / rel_i^{max}

         The results are composed of the documents with the greatest normalized relevance measures.

    -    Normalizing relevance measures with respect to the maximum and minimum relevance measures
         obtained for each topic query i (alternate normalization):

                                  rel_i^{alt} = (rel_i - rel_i^{min}) / (rel_i^{max} - rel_i^{min})

         The results are composed of the documents with the greatest alternate normalized relevance measures.
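
A minimal sketch of these two normalizations, applied to the relevance scores of a single topic query
(represented, as an assumption of ours, by a dictionary from document identifiers to relevance values):

    # Sketch of the normal and alternate normalizations for one topic query.

    def normal_normalization(scores):
        """Divide each relevance by the maximum relevance obtained for the query."""
        rel_max = max(scores.values())
        return {doc: rel / rel_max for doc, rel in scores.items()}

    def alternate_normalization(scores):
        """Rescale relevances to [0, 1] using the query minimum and maximum."""
        rel_max = max(scores.values())
        rel_min = min(scores.values())
        span = rel_max - rel_min
        if span == 0.0:  # degenerate case: all documents have the same relevance
            return {doc: 1.0 for doc in scores}
        return {doc: (rel - rel_min) / span for doc, rel in scores.items()}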

We denote whether normalization has been done in the run identifier using the last character: n means normal
normalization, whereas l denotes alternate normalization. When neither l nor n is present, no normalization has
been made for that run. For this “standard multilingual approach”, the run naming convention follows this pattern:

                              mlRSFS(de|en|es|fr|it|nl)([23456])S([ln]?)



4 Round-robin merging for results of each monolingual collection has not been used.
where the usual regular expression patterns are used, enclosed in “()”. The meanings of the letters used should be
evident from the naming conventions described for the monolingual runs that are combined. Note that S is used both
for “stemmed” and “Systran”.

In addition to all this, we tried a different approach to merging. Considering that the most relevant documents
for each topic are usually the first ones in the results list, we select from each monolingual results file
a variable number of documents, proportional to the average relevance of the first N documents. Thus, if
we need 1,000 documents for a given topic query, we take more documents from the languages where the
average relevance of the first N documents is greater. We did this in two cases (a sketch of this proportional
merging follows the list):

    1.   Using non-normalized runs (we call it case X) to calculate the appropriate number of documents to
         aggregate. After having obtained such document sets, they are optionally normalized before
         merging (we tried not normalizing, and normalizing with both formulae).

    2.   Using normalized runs (we call it case Y) to calculate the appropriate number of documents to aggregate.
         After obtaining such document sets, merging is done. We also used both types of normalization.
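
A minimal sketch of this proportional selection, under our reading of the description above (the quota
computation, the names and the final re-sorting are illustrative assumptions of ours; rounding is handled naively):

    # Sketch: for one topic query, take from each language a number of documents
    # proportional to the average relevance of its first N retrieved documents.
    # `results_by_lang` maps a language code to a ranked list of (doc_id, relevance).

    def merge_proportionally(results_by_lang, n=75, total=1000):
        avg_rel = {
            lang: sum(rel for _, rel in ranked[:n]) / max(len(ranked[:n]), 1)
            for lang, ranked in results_by_lang.items()
        }
        rel_sum = sum(avg_rel.values()) or 1.0
        merged = []
        for lang, ranked in results_by_lang.items():
            # Languages whose first N documents score higher contribute more documents.
            quota = int(round(total * avg_rel[lang] / rel_sum))
            merged.extend(ranked[:quota])
        # Keep the `total` highest-scoring documents of the merged list.
        merged.sort(key=lambda pair: pair[1], reverse=True)
        return merged[:total]

Here N=75 is just one of the values we tested (see the table below), and the relevance values can be the original
or the normalized ones, corresponding to cases X and Y.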

The several cases tested are encoded in the run identifier. The first two characters are “ml”, followed by two
characters that indicate one of the two cases described above, and the parameters used:

                                         N      Normalized                   Not normalized
                                          1        1Y                              1X
                                         15        2Y                              2X
                                         75        3Y                              3X
                                        166        4Y                              4X
                                        300        5Y                              5X
                                        1000       6Y                              6X

followed by the rest of the run identifier. The full run identifier follows one of the patterns:

                               ml([123456]X)RSFSen([23456])S([nl]?)
                               ml([123456]Y)RSFSen([23456])S([nl])

where the usual regular expression patterns are used, enclosed in “()”. The same comments as for the previous
multilingual naming scheme apply.

The following figure summarizes the performance of our best experiments in the multilingual robust task. The details
of all the experiments run and their performance figures can be found in the appendix. We have not used a
different system or different types of runs for the robust case, so we just present the results obtained. Please note
that geometric mean average precision figures are given instead of average precision figures.

[Figure: recall-precision curves of the robust multilingual runs from English: ml5XRSFSen3S, ml2XRSFSen3S,
ml6XRSFSen4S, ml5XRSFSen4S, ml4XRSFSen4S, and mlRSFSen2S.]
6   Conclusions and future work
This year we have not changed our previous processing scheme much, although some improvements have been
incorporated regarding proper noun and entity detection and indexing. For this reason we think that the
results obtained are quite similar to previous ones. We need to work harder on some stages of the processing,
especially those that can improve performance substantially.

It is clear that the quality of the tokenization step is of paramount importance for precise document processing.
We still think that a high-quality entity recognition (proper nouns or acronyms for people, companies, countries,
locations, and so on) could improve the precision and recall figures of the overall retrieval, as well as a correct
recognition and normalization of dates, times, numbers, etc. Although we have introduced some improvements
in our processing scheme, a good multilingual entity recognition and normalization tool is still missing. This is
the step to which we are currently devoting most of our work.

We are also improving the architecture of our indexing and retrieval trie-based engine in order to get even better
performance in the indexing and retrieval phases, tuning some data structures and algorithms.

Acknowledgements
This work has been partially supported by the Spanish R+D National Plan, by means of the project RIMMEL
(Multilingual and Multimedia Information Retrieval, and its Evaluation), TIN2004-07588-C03-01; and by the
Madrid’s R+D Regional Plan, by means of the project MAVIR (Enhancing the Access and the Visibility of
Networked Multilingual Information for Madrid Community), S-0505/TIC/000267.

Special mention should be made of our colleagues of the MIRACLE team (in alphabetical order): Ana María
García-Serrano, José Carlos González-Cristóbal, Ana González-Ledesma, José Miguel Goñi-Menoyo, José Mª
Guirao-Miras, Sara Lana-Serrano, José Luis Martínez-Fernández, Paloma Martínez-Fernández, Antonio
Moreno-Sandoval and César de Pablo-Sánchez.

Appendix: Tables and figures
The results from our experiments follow. For each of the monolingual or bilingual tasks, we show a table with
the precision at the 0 and 1 recall points, the average precision, the percentage deviation (in average precision)
from the best one obtained, the run identifier, and the precedence of the run, when the run was submitted. The
results are sorted in ascending order of average precision, and an asterisk marks the best precision value in
each column (average precision, or precision at the 0 or 1 recall points).

In the case of the robust tasks, in addition to the columns indicated above, the tables include a column with the
geometric mean average precision, and the rows are sorted by this figure in ascending order. The percentage
deviation in this case still refers to the average precision, in order to facilitate the comparison with the ordering
based on that figure.

In all cases, a figure comparing the submitted runs with the best run (when the best run was not among the
submitted ones) is included for each task and language pair. The best run here refers to the one with the best
average precision, not the best geometric mean average precision.
                      Monolingual runs: Bulgarian
[Figure: recall-precision curves of runs bgx101bgFS4FS5, bgFSbg3S, bgx011bgFW4FS4, and bgFSbg2S.]
             at0       at1       avgp            %            run            x
           0.5333    0.0858     0.2080        -33.31%   bgFWbg2W
           0.5544    0.0824     0.2240        -28.18%   bgFWbg4W
           0.5588    0.0730     0.2361        -24.30%   bgFWbg3W
           0.6530    0.0534     0.2476        -20.62%   bgx153bgFW3FS3
           0.5727    0.0572     0.2493        -20.07%   bgx153bgFW4FS5
           0.6136    0.0603     0.2503        -19.75%   bgx153bgFW4FS6
           0.5868    0.0594     0.2512        -19.46%   bgx153bgFW4FS4
           0.5817    0.0919     0.2752        -11.77%   bgx091bgFW4FS6
           0.5837    0.0930     0.2768        -11.25%   bgx091bgFW4FS5
           0.5844    0.0937*    0.2786        -10.68%   bgx091bgFW4FS4
           0.6386    0.0885     0.2786        -10.68%   bgFSbg2S             3
           0.5854    0.0890     0.2807        -10.00%   bgx091bgFW3FS3
           0.6648    0.0855     0.2857         -8.40%   bgybgFS3FW3
           0.6033    0.0908     0.2885         -7.50%   bgx021bgFW4FS6
           0.6136    0.0908     0.2901         -6.99%   bgx011bgFW4FS6
           0.5980    0.0917     0.2911         -6.67%   bgx021bgFW4FS5
           0.6033    0.0917     0.2926         -6.19%   bgx011bgFW4FS5
           0.6145    0.0910     0.2932         -6.00%   bgx021bgFW3FS3
           0.6072    0.0922     0.2939         -5.77%   bgx021bgFW4FS4
           0.6139    0.0922     0.2957         -5.19%   bgx011bgFW4FS4       1
           0.6396    0.0911     0.3001         -3.78%   bgx011bgFW3FS3
           0.6709    0.0908     0.3028         -2.92%   bgFSbg6S
           0.6623    0.0917     0.3052         -2.15%   bgFSbg5S
           0.6915*   0.0936     0.3063         -1.80%   bgx101bgFS3FS6
           0.6843    0.0911     0.3080         -1.25%   bgFSbg3S             2
           0.6837    0.0917     0.3098         -0.67%   bgybgFS3456
           0.6844    0.0936     0.3110         -0.29%   bgx101bgFS3FS4
           0.6721    0.0922     0.3112         -0.22%   bgFSbg4S
           0.6810    0.0919     0.3119*        -0.00%   bgx101bgFS4FS5
                                 Monolingual runs: English
[Figure: recall-precision curves of runs enx101enFS3FS4, enFSen3S, enx101enFS3FS6, and enFSen2S.]
            at0       at1      avgp         %                run         x
          0.5895    0.0728    0.2656     -33.01%       enx153enFW3FS3
          0.5839    0.0881    0.2717     -31.48%       enx153enFW4FS5
          0.5973    0.0877    0.2721     -31.37%       enx153enFW4FS6
          0.5950    0.0879    0.2764     -30.29%       enx153enFW4FS4
          0.6641    0.1186    0.3345     -15.64%       enFWen2W
          0.7141    0.1230    0.3553     -10.39%       enFWen4W
          0.6570    0.1251    0.3575      -9.84%       enFSen2S          3
          0.7163    0.1316    0.3583      -9.63%       enFWen3W
          0.7270*   0.1375    0.3766      -5.02%       enyenFS3FW3
          0.7105    0.1316    0.3867      -2.47%       enx091enFW4FS6
          0.7114    0.1338    0.3869      -2.42%       enx091enFW4FS5
          0.6947    0.1316    0.3869      -2.42%       enFSen6S
          0.7096    0.1283    0.3870      -2.40%       enx021enFW4FS6
          0.7188    0.1400    0.3873      -2.32%       enx091enFW3FS3
          0.7124    0.1316    0.3876      -2.24%       enx011enFW4FS6
          0.7106    0.1305    0.3880      -2.14%       enx021enFW4FS5
          0.7117    0.1336    0.3882      -2.09%       enx091enFW4FS4
          0.6972    0.1338    0.3886      -1.99%       enFSen5S
          0.7139    0.1338    0.3893      -1.82%       enx011enFW4FS5
          0.7095    0.1371    0.3901      -1.61%       enx021enFW3FS3
          0.7052    0.1337    0.3907      -1.46%       enx101enFS4FS5
          0.6962    0.1354    0.3914      -1.29%       enyenFS3456
          0.7117    0.1336    0.3923      -1.06%       enx021enFW4FS4
          0.7144    0.1369    0.3930      -0.88%       enx011enFW4FS4
          0.7040    0.1369    0.3930      -0.88%       enFSen4S
          0.6985    0.1363    0.3934      -0.78%       enx101enFS3FS6    2
          0.7170    0.1404*   0.3940      -0.63%       enx011enFW3FS3
          0.7083    0.1404*   0.3945      -0.50%       enFSen3S          1
          0.7023    0.1376    0.3965*     -0.00%       enx101enFS3FS4
                                 Monolingual runs: French
[Figure: recall-precision curves of runs frFSfr6S, frx101frFS3FS6, frFSfr4S, and frFSfr2S.]
            at0        at1      avgp         %                 run          x
          0.6559     0.0766    0.2990     -25.73%       frx153frFW4FS5
          0.6562     0.0787    0.2998     -25.53%       frx153frFW3FS3
          0.6810     0.0776    0.3004     -25.38%       frx153frFW4FS4
          0.7025     0.0757    0.3040     -24.49%       frx153frFW4FS6
          0.6710     0.0701    0.3271     -18.75%       frFWfr2W
          0.7171     0.0928    0.3559     -11.60%       frFWfr3W
          0.7137     0.0955    0.3575     -11.20%       frFWfr4W
          0.7901     0.0756    0.3794      -5.76%       frFSfr2S            3
          0.7229     0.1002    0.3803      -5.54%       frx091frFW4FS5
          0.7264     0.1073    0.3808      -5.41%       frx091frFW3FS3
          0.7235     0.1020    0.3821      -5.09%       frx091frFW4FS4
          0.7252     0.0977    0.3838      -4.67%       frx091frFW4FS6
          0.7426     0.1080*   0.3845      -4.50%       frx011frFW3FS3
          0.7420     0.1080*   0.3850      -4.37%       frx021frFW3FS3
          0.7377     0.1019    0.3857      -4.20%       frx021frFW4FS5
          0.7490     0.0996    0.3876      -3.73%       fryfrFS3FW3
          0.7420     0.1034    0.3877      -3.70%       frx021frFW4FS4
          0.7532     0.1019    0.3893      -3.30%       frx011frFW4FS5
          0.7662     0.1080*   0.3913      -2.81%       frFSfr3S
          0.7510     0.0993    0.3917      -2.71%       frx021frFW4FS6
          0.7559     0.1034    0.3924      -2.53%       frx011frFW4FS4
          0.7861     0.1019    0.3954      -1.79%       frFSfr5S
          0.7693     0.0993    0.3967      -1.47%       frx011frFW4FS6
          0.7901     0.1030    0.3973      -1.32%       frx101frFS4FS5
          0.8008     0.1059    0.3984      -1.04%       frx101frFS3FS4
          0.7888     0.1034    0.3992      -0.84%       frFSfr4S            2
          0.7796     0.1044    0.3994      -0.79%       fryfrFS3456
          0.8034*    0.1039    0.4003      -0.57%       frx101frFS3FS6      1
          0.7867     0.0993    0.4026*     -0.00%       frFSfr6S
                                Monolingual runs: Hungarian
[Figure: recall-precision curves of runs hux101huFS3FS4, huFShu4S, and huFShu2S.]
            at0       at1      avgp         %               run          x
          0.5137    0.0449    0.2010     -34.93%      huFWhu3W
          0.5354    0.0450    0.2034     -34.15%      huFWhu2W
          0.5282    0.0464    0.2113     -31.60%      huFWhu4W
          0.6257    0.0358    0.2429     -21.37%      hux153huFW3FS3
          0.6579    0.0500    0.2533     -18.00%      hux153huFW4FS6
          0.6586    0.0527    0.2546     -17.58%      hux153huFW4FS5
          0.6610    0.0508    0.2572     -16.74%      hux153huFW4FS4
          0.5611    0.0640    0.2584     -16.35%      hux091huFW3FS3
          0.5690    0.0734    0.2641     -14.50%      hux091huFW4FS5
          0.5705    0.0730    0.2653     -14.11%      hux091huFW4FS6
          0.5699    0.0759    0.2663     -13.79%      hux091huFW4FS4
          0.6581    0.0644    0.2768     -10.39%      huyhuFS3FW3
          0.6164    0.0692    0.2796      -9.49%      hux021huFW3FS3
          0.7401    0.0693    0.2842      -8.00%      huFShu2S           3
          0.6274    0.0787    0.2853      -7.64%      hux021huFW4FS5
          0.6345    0.0692    0.2865      -7.25%      hux011huFW3FS3
          0.6334    0.0797    0.2871      -7.06%      hux021huFW4FS6
          0.6358    0.0812*   0.2886      -6.57%      hux021huFW4FS4
          0.6458    0.0787    0.2890      -6.44%      hux011huFW4FS5
          0.6588    0.0797    0.2911      -5.76%      hux011huFW4FS6
          0.6572    0.0812*   0.2927      -5.24%      hux011huFW4FS4
          0.7417    0.0692    0.3029      -1.94%      huFShu3S
          0.7227    0.0787    0.3043      -1.49%      huFShu5S
          0.7225    0.0804    0.3058      -1.00%      hux101huFS4FS5
          0.7498    0.0797    0.3074      -0.49%      huFShu6S
          0.7435    0.0737    0.3077      -0.39%      hux101huFS3FS6
          0.7430    0.0807    0.3077      -0.39%      huyhuFS3456
          0.7525*   0.0812*   0.3085      -0.13%      huFShu4S           2
          0.7445    0.0747    0.3089*     -0.00%      hux101huFS3FS4     1
                               Monolingual runs: Portuguese
[Figure: recall-precision curves of runs ptx101ptFS3FS6, ptFSpt3S, ptx021ptFW3FS3, and ptFSpt2S.]
            at0       at1      avgp         %                run          x
          0.6964    0.0750    0.3406     -15.80%       ptx153ptFW4FS5
          0.7098    0.0773    0.3427     -15.28%       ptx153ptFW4FS4
          0.7157    0.0768    0.3453     -14.64%       ptx153ptFW4FS6
          0.7466    0.0641    0.3531     -12.71%       ptx153ptFW3FS3
          0.7836    0.0563    0.3539     -12.51%       ptFWpt2W
          0.7911    0.0526    0.3633     -10.19%       ptFWpt3W
          0.7872    0.0652    0.3656      -9.62%       ptFWpt4W
          0.7597    0.0677    0.3902      -3.54%       ptFSpt2S           3
          0.7966    0.0621    0.3942      -2.55%       ptx091ptFW3FS3
          0.7974    0.0768    0.3943      -2.52%       ptx091ptFW4FS5
          0.8183    0.0644    0.3950      -2.35%       ptyptFS3FW3
          0.7800    0.0768    0.3964      -2.00%       ptFSpt5S
          0.7947    0.0768    0.3964      -2.00%       ptx021ptFW4FS5
          0.7904    0.0768    0.3965      -1.98%       ptx011ptFW4FS5
          0.7996    0.0776    0.3969      -1.88%       ptx091ptFW4FS4
          0.8148    0.0621    0.3977      -1.68%       ptx011ptFW3FS3
          0.8083    0.0621    0.3980      -1.61%       ptx021ptFW3FS3     1
          0.7860    0.0767    0.3990      -1.36%       ptx101ptFS4FS5
          0.7992    0.0775    0.3995      -1.24%       ptx091ptFW4FS6
          0.8014    0.0776    0.3998      -1.16%       ptx021ptFW4FS4
          0.7970    0.0776    0.3999      -1.14%       ptx011ptFW4FS4
          0.7896    0.0776    0.4000      -1.11%       ptFSpt4S
          0.7891    0.0777*   0.4013      -0.79%       ptyptFS3456
          0.8314*   0.0621    0.4017      -0.69%       ptFSpt3S           2
          0.8068    0.0769    0.4024      -0.52%       ptx101ptFS3FS4
          0.8031    0.0775    0.4028      -0.42%       ptx021ptFW4FS6
          0.7992    0.0775    0.4031      -0.35%       ptx011ptFW4FS6
          0.7846    0.0775    0.4036      -0.22%       ptFSpt6S
          0.8044    0.0764    0.4045*     -0.00%       ptx101ptFS3FS6
                           Bilingual to Bulgarian
                               Bilingual runs: English to Bulgarian
[Figure: recall-precision curves of runs bgFSbgWen3S and bgFSbgWen2S.]
            at0        at1      avgp             %                run          x
          0.3082     0.0343    0.1151         -45.71%        bgFWbgWen2W
          0.3139     0.0358    0.1316         -37.92%        bgFWbgWen4W
          0.3644     0.0292    0.1428         -32.64%        bgFWbgWen3W
          0.4821     0.0460    0.1739         -17.97%        bgFSbgWen2S       2
          0.4895     0.0510    0.1859         -12.31%        bgFSbgWen5S
          0.5192     0.0524*   0.1929          -9.01%        bgFSbgWen4S
          0.5468     0.0507    0.1966          -7.26%        bgFSbgWen6S
          0.5662*    0.0475    0.2120*         -0.00%        bgFSbgWen3S       1

                                   Bilingual runs: X to French
[Figure: recall-precision curves of runs frFSfrVen3S, frFSfrSen4S, frFSfrSes3S, frFSfrXes3S, frFSfrSes4S, and
frFSfrSen2S.]
  at0       at1      avgp        %           run      x
0.6453    0.0483    0.2649    -31.51%   frFWfrAes2W
0.5469    0.0630    0.2658    -31.28%   frFWfrSen2W
0.6595    0.0482    0.2725    -29.55%   frFWfrSes2W
0.5914    0.0555    0.2794    -27.77%   frFWfrSpt2W
0.6508    0.0532    0.2814    -27.25%   frFWfrAes4W
0.6608    0.0553    0.2885    -25.41%   frFWfrSes4W
0.5810    0.0677    0.2902    -24.97%   frFWfrVen2W
0.6717    0.0553    0.2922    -24.46%   frFWfrAes3W
0.6192    0.0716    0.2930    -24.25%   frFWfrSpt3W
0.6304    0.0676    0.2934    -24.15%   frFWfrSpt4W
0.5856    0.0891    0.2948    -23.78%   frFWfrSen4W
0.6747    0.0611    0.2983    -22.88%   frFWfrSes3W
0.6225    0.0744    0.3058    -20.94%   frFWfrVde2W
0.6783    0.0611    0.3115    -19.47%   frFSfrAes2S
0.6502    0.0883    0.3158    -18.36%   frFWfrSen3W
0.6897    0.0629    0.3179    -17.81%   frFSfrSes2S
0.6421    0.0963    0.3239    -16.26%   frFWfrVen4W
0.6860    0.0688    0.3239    -16.26%   frFSfrSpt2S
0.6877    0.0765    0.3312    -14.37%   frFSfrAes6S
0.6765    0.0723    0.3320    -14.17%   frFSfrSen2S   4
0.6605    0.0752    0.3335    -13.78%   frFWfrVde4W
0.6974    0.0822    0.3353    -13.31%   frFSfrSpt5S
0.7124    0.0795    0.3371    -12.85%   frFSfrAes5S
0.6730    0.0743    0.3380    -12.62%   frFSfrVde2S
0.6626    0.0842    0.3381    -12.59%   frFWfrVde3W
0.6910    0.0792    0.3382    -12.56%   frFSfrSes6S
0.6944    0.0806    0.3385    -12.49%   frFSfrSpt6S
0.6905    0.0828    0.3394    -12.25%   frFSfrVen2S
0.6821    0.0887    0.3409    -11.87%   frFSfrSen5S
0.7005    0.0878    0.3416    -11.69%   frFSfrSpt4S
0.7088    0.0834    0.3425    -11.45%   frFSfrAes4S
0.7095    0.1086    0.3438    -11.12%   frFWfrVen3W
0.6781    0.0906    0.3441    -11.04%   frFSfrAes3S
0.7157    0.0821    0.3443    -10.99%   frFSfrSes5S
0.7157    0.0923    0.3470    -10.29%   frFSfrSpt3S
0.7121    0.0864    0.3503     -9.44%   frFSfrSes4S   3
0.6721    0.0973    0.3505     -9.38%   frFSfrXes3S   2
0.6781    0.0979    0.3511     -9.23%   frFSfrSes3S
0.6906    0.0993    0.3533     -8.66%   frFSfrSen4S   1
0.6890    0.1002    0.3543     -8.40%   frFSfrSen6S
0.6972    0.0879    0.3590     -7.19%   frFSfrVde5S
0.7051    0.0838    0.3602     -6.88%   frFSfrVde6S
0.7629    0.1182    0.3677     -4.94%   frFSfrVen5S
0.7080    0.1067    0.3713     -4.01%   frFSfrSen3S
0.7619    0.1292    0.3750     -3.05%   frFSfrVen4S
0.7524    0.1303    0.3772     -2.48%   frFSfrVen6S
0.7274    0.1027    0.3776     -2.38%   frFSfrVde4S
0.7473    0.1089    0.3805     -1.63%   frFSfrVde3S
0.7797*   0.1360*   0.3868*    -0.00%   frFSfrVen3S
                               Bilingual runs: English to Hungarian
[Figure: recall-precision curves of runs huFShuMen4S and huFShuMen2S.]
            at0        at1       avgp            %                run         x
          0.4605     0.0222     0.1518        -37.27%        huFWhuMen2W
          0.5529     0.0191     0.1638        -32.31%        huFWhuMen3W
          0.5021     0.0205     0.1774        -26.69%        huFWhuMen4W
          0.5825     0.0445     0.2196         -9.26%        huFShuMen2S      2
          0.6275*    0.0472*    0.2277         -5.91%        huFShuMen3S
          0.5855     0.0465     0.2366         -2.23%        huFShuMen5S
          0.5962     0.0455     0.2417         -0.12%        huFShuMen6S
          0.6105     0.0463     0.2420*        -0.00%        huFShuMen4S      1
                                Bilingual runs: X to Portuguese
[Figure: recall-precision curves of runs ptFSptSfr3S, ptFSptSen4S, ptFSptSen3S, ptFSptLes6S, ptFSptLes3S, and
ptFSptSen2S.]
            at0         at1      avgp            %                 run       x
          0.5779      0.0418    0.2356        -37.17%         ptFWptSen2W
          0.6175      0.0280    0.2451        -34.64%         ptFWptLes2W
          0.6709      0.0257    0.2507        -33.15%         ptFWptLes3W
          0.6435      0.0283    0.2538        -32.32%         ptFWptLes4W
          0.5846      0.0442    0.2550        -32.00%         ptFWptSen4W
          0.6023      0.0437    0.2556        -31.84%         ptFWptSen3W
          0.6318      0.0449    0.2650        -29.33%         ptFSptSen2S    4
          0.6462      0.0325    0.2788        -25.65%         ptFSptLes2S
          0.7038      0.0307    0.2799        -25.36%         ptFSptLes3S    1
          0.6389      0.0343    0.2803        -25.25%         ptFSptLes5S
          0.6759      0.0341    0.2829        -24.56%         ptFSptLes4S
          0.6827      0.0367    0.2838        -24.32%         ptFSptLes6S
          0.6768      0.0501    0.2887        -23.01%         ptFSptSen5S
          0.6572      0.0522    0.2896        -22.77%         ptFSptSen6S
          0.6712      0.0517    0.2898        -22.72%         ptFSptSen3S    3
          0.6646      0.0501    0.2926        -21.97%         ptFSptSen4S
          0.7054      0.0567    0.3190        -14.93%         ptFWptSfr2W
          0.7145      0.0708    0.3352        -10.61%         ptFWptSfr4W
          0.7365      0.0654    0.3430         -8.53%         ptFWptSfr3W
          0.7514      0.0693    0.3501         -6.64%         ptFSptSfr2S
          0.7875      0.0771    0.3693         -1.52%         ptFSptSfr5S
          0.7695      0.0801    0.3712         -1.01%         ptFSptSfr6S
          0.7993      0.0798    0.3743         -0.19%         ptFSptSfr4S
          0.8171*     0.0847*   0.3750*        -0.00%         ptFSptSfr3S    2
                                Robust monolingual runs: German
[Figure: recall-precision curves of runs dex011deRFW3FS3, dex021deRFW3FS3, deFSdeR3S, and deFSdeR2S.]
    at0        at1       avgp        gmap                 %                  run         x
0.6190     0.0691    0.3005       0.0747                 -22.21%      deFWdeR2W
0.6276     0.0706    0.3266       0.0896                 -15.45%      deFWdeR4W
0.6427     0.0858    0.3479       0.0896                  -9.94%      deFWdeR3W
0.6674     0.0924    0.3406       0.1061                 -11.83%      deFSdeR2S          4
0.6525     0.0844    0.3600       0.1095                  -6.81%      dex021deRFW4FS5
0.6565     0.0844    0.3624       0.1112                  -6.19%      dex011deRFW4FS5
0.6719     0.0844    0.3647       0.1129                  -5.59%      deFSdeR5S
0.6559     0.0900    0.3655       0.1143                  -5.38%      dex021deRFW4FS4
0.6676     0.1067    0.3831       0.1149                  -0.83%      dex021deRFW3FS3    2
0.6602     0.0900    0.3678       0.1155                  -4.79%      dex011deRFW4FS4
0.6865*    0.1034    0.3803       0.1155                  -1.55%      deFSdeR3S          3
0.6709     0.1101*   0.3863*      0.1159                  -0.00%      dex011deRFW3FS3    1
0.6548     0.0934    0.3639       0.1163                  -5.80%      dex021deRFW4FS6
0.6611     0.0951    0.3672       0.1178                  -4.94%      dex011deRFW4FS6
0.6782     0.0950    0.3745       0.1178                  -3.05%      deydeRFS3456
0.6810     0.0900    0.3726       0.1178                  -3.55%      deFSdeR4S
0.6729     0.0984    0.3729       0.1198                  -3.47%      deFSdeR6S
                               Robust monolingual runs: English
[Figure: recall-precision curves of runs enyenRFS3456, enFSenR5S, enFSenR3S, and enFSenR2S.]
    at0       at1       avgp       gmap                   %               run         x
0.6284    0.1457    0.3657      0.0738                   -15.70%   enFWenR2W
0.6462    0.1587    0.3961      0.0833                    -8.69%   enFWenR4W
0.6570    0.1614    0.3969      0.0893                    -8.51%   enFSenR2S          4
0.6582    0.1623    0.4019      0.0901                    -7.35%   enFWenR3W
0.6604    0.1680    0.4158      0.0937                    -4.15%   enx021enRFW4FS5
0.6583    0.1712    0.4152      0.0951                    -4.29%   enx021enRFW4FS6
0.6664    0.1697    0.4214      0.0952                    -2.86%   enx011enRFW4FS5
0.6597    0.1698    0.4168      0.0954                    -3.92%   enx021enRFW4FS4
0.6684    0.1721    0.4200      0.0966                    -3.18%   enx011enRFW4FS6
0.6665    0.1715    0.4208      0.0967                    -3.00%   enx011enRFW4FS4
0.6798    0.1747    0.4302      0.0968                    -0.83%   enFSenR5S          2
0.6704    0.1765    0.4275      0.0974                    -1.45%   enFSenR4S
0.6791    0.1737    0.4248      0.0976                    -2.07%   enFSenR6S
0.6842*   0.1777*   0.4338*     0.0999                    -0.00%   enyenRFS3456       1
0.6652    0.1716    0.4236      0.1006                    -2.35%   enx011enRFW3FS3
0.6686    0.1711    0.4211      0.1007                    -2.93%   enx021enRFW3FS3
0.6723    0.1725    0.4289      0.1016                    -1.13%   enFSenR3S          3
                               Robust monolingual runs: Spanish
[Figure: recall-precision curves of runs esFSesR3S, esx011esRFW3FS3, esx021esRFW3FS3, and esFSesR2S.]
    at0       at1       avgp        gmap                  %                 run          x
0.6928    0.0769    0.3523       0.1486                  -23.15%     esFWesR2W
0.7068    0.1027    0.3865       0.1742                  -15.68%     esFWesR4W
0.7246    0.1140    0.3980       0.1874                  -13.18%     esFWesR3W
0.7332    0.0976    0.4040       0.1964                  -11.87%     esFSesR2S           4
0.7334    0.1139    0.4194       0.2113                   -8.51%     esx021esRFW4FS5
0.7386    0.1149    0.4210       0.2125                   -8.16%     esx011esRFW4FS5
0.7351    0.1145    0.4243       0.2148                   -7.44%     esx021esRFW4FS6
0.7423    0.1155    0.4261       0.2163                   -7.05%     esx011esRFW4FS6
0.7718    0.1161    0.4283       0.2168                   -6.57%     esFSesR5S
0.7315    0.1185    0.4235       0.2186                   -7.61%     esx021esRFW4FS4
0.7377    0.1195    0.4255       0.2200                   -7.18%     esx011esRFW4FS4
0.7791    0.1167    0.4338       0.2208                   -5.37%     esFSesR6S
0.7751    0.1207    0.4329       0.2245                   -5.56%     esFSesR4S
0.7730    0.1209    0.4384       0.2252                   -4.36%     esyesRFS3456
0.7825    0.1295    0.4350       0.2407                   -5.10%     esFSesRJ4S
0.7810    0.1286    0.4412       0.2491                   -3.75%     esFSesRJ5S
0.7988    0.1296*   0.4448       0.2548                   -2.97%     esFSesRJ6S
0.7539    0.1255    0.4446       0.2561                   -3.01%     esx021esRFW3FS3     3
0.8034    0.1296*   0.4469       0.2565                   -2.51%     esFSesRJ7S
0.8041    0.1293    0.4473       0.2567                   -2.42%     esFSesRJ9S
0.8069    0.1293    0.4474       0.2567                   -2.40%     esFSesRJ8S
0.8119    0.1262    0.4524       0.2582                   -1.31%     esFSesRJ2S
0.7650    0.1268    0.4509       0.2605                   -1.64%     esx011esRFW3FS3     2
0.8104    0.1274    0.4562       0.2606                   -0.48%     esFSesRJ33S
0.8186*   0.1274    0.4569       0.2638                   -0.33%     esFSesRJ1S
0.8180    0.1281    0.4583       0.2646                   -0.02%     esFSesRJ3S
0.8186*   0.1281    0.4584*      0.2650                   -0.00%     esFSesR3S           1
[Figure: Robust monolingual runs, French. Plotted runs: frFSfrR3S, fryfrRFS3456, frx011frRFW3FS3, frFSfrR2S.]

    at0       at1       avgp        gmap                  %                run            x
0.6191    0.1397    0.3184       0.0808                  -25.15%    frFWfrR2W
0.6376    0.1618    0.3519       0.0976                  -17.28%    frFWfrR4W
0.6633    0.1734    0.3740       0.1046                  -12.08%    frFWfrR3W
0.6772    0.1822    0.3849       0.1187                   -9.52%    frFSfrR2S             4
0.6656    0.1809    0.3876       0.1238                   -8.89%    frx021frRFW4FS5
0.6730    0.1842    0.3907       0.1251                   -8.16%    frx011frRFW4FS5
0.6653    0.1827    0.3926       0.1266                   -7.71%    frx021frRFW4FS6
0.6649    0.1847    0.3940       0.1271                   -7.38%    frx021frRFW4FS4
0.6742    0.1861    0.3973       0.1286                   -6.61%    frx011frRFW4FS6
0.7029    0.1942    0.4063       0.1288                   -4.49%    frFSfrR5S
0.6731    0.1880    0.3984       0.1290                   -6.35%    frx011frRFW4FS4
0.6930    0.1961    0.4120       0.1321                   -3.15%    frFSfrR6S
0.6956    0.1980    0.4133       0.1325                   -2.84%    frFSfrR4S
0.6954    0.1981    0.4154       0.1336                   -2.35%    fryfrRFS3456          2
0.6910    0.1869    0.4102       0.1337                   -3.57%    frx021frRFW3FS3
0.6969    0.1886    0.4141       0.1350                   -2.66%    frx011frRFW3FS3       3
0.7039*   0.1986*   0.4254*      0.1369                   -0.00%    frFSfrR3S             1
[Figure: Robust monolingual runs, Italian. Plotted runs: itFSitR6S, ityitRFS3456, itFSitR4S, itFSitR2S.]

    at0       at1       avgp       gmap                       %               run            x
0.6004    0.1061    0.3107      0.0773                       -17.85%   itFWitR2W
0.6486    0.1245    0.3406      0.0854                        -9.94%   itFWitR3W
0.6460    0.1245    0.3367      0.0870                       -10.97%   itFWitR4W
0.6566    0.1268    0.3511      0.1050                        -7.17%   itFSitR2S             4
0.6594    0.1422    0.3616      0.1082                        -4.39%   itx021itRFW4FS5
0.6680    0.1322    0.3626      0.1092                        -4.12%   itx021itRFW3FS3
0.6644    0.1439    0.3644      0.1095                        -3.65%   itx011itRFW4FS5
0.6603    0.1424    0.3657      0.1099                        -3.31%   itx021itRFW4FS4
0.6608    0.1413    0.3652      0.1108                        -3.44%   itx021itRFW4FS6
0.6779    0.1338    0.3689      0.1112                        -2.46%   itx011itRFW3FS3
0.6682    0.1385    0.3718      0.1113                        -1.69%   itFSitR3S
0.6701    0.1440    0.3684      0.1113                        -2.59%   itx011itRFW4FS4
0.6713    0.1489    0.3726      0.1118                        -1.48%   itFSitR5S
0.6696    0.1430    0.3681      0.1123                        -2.67%   itx011itRFW4FS6
0.6798    0.1490    0.3761      0.1135                        -0.56%   itFSitR4S             3
0.6813    0.1492*   0.3780      0.1146                        -0.05%   ityitRFS3456          2
0.6895*   0.1480    0.3782*     0.1153                        -0.00%   itFSitR6S             1
[Figure: Robust monolingual runs, Dutch. Plotted runs: nlFSnlR4S, nlynlRFS3456, nlx011nlRFW4FS4, nlFSnlR2S.]

    at0       at1       avgp       gmap                  %                run            x
0.7105    0.1420    0.3960      0.1413                  -12.04%    nlFWnlR2W
0.7181    0.1388    0.4113      0.1720                   -8.64%    nlFWnlR3W
0.7212    0.1467    0.4189      0.1722                   -6.95%    nlFWnlR4W
0.7368    0.1451    0.4237      0.1731                   -5.89%    nlFSnlR2S             4
0.7343    0.1476    0.4370      0.1954                   -2.93%    nlx021nlRFW3FS3
0.7372    0.1476    0.4380      0.1963                   -2.71%    nlx011nlRFW3FS3
0.7476    0.1476    0.4393      0.1969                   -2.42%    nlFSnlR3S
0.7413    0.1501    0.4437      0.2025                   -1.44%    nlx021nlRFW4FS5
0.7371    0.1552    0.4446      0.2035                   -1.24%    nlx021nlRFW4FS6
0.7459    0.1491    0.4463      0.2042                   -0.87%    nlx011nlRFW4FS5
0.7520*   0.1491    0.4474      0.2045                   -0.62%    nlFSnlR5S
0.7408    0.1562*   0.4472      0.2049                   -0.67%    nlx011nlRFW4FS6
0.7449    0.1562*   0.4478      0.2050                   -0.53%    nlFSnlR6S
0.7432    0.1504    0.4459      0.2051                   -0.96%    nlx021nlRFW4FS4
0.7454    0.1510    0.4488      0.2067                   -0.31%    nlx011nlRFW4FS4       3
0.7515    0.1544    0.4499      0.2068                   -0.07%    nlynlRFS3456          2
0.7504    0.1510    0.4502*     0.2073                   -0.00%    nlFSnlR4S             1
[Figure: Robust bilingual runs, English to German. Plotted runs: deFSdeRSen3S, deFSdeRSen4S, deydeRSenFS3456, deFSdeRSen2S.]

    at0       at1      avgp         gmap                 %                       run           x
0.5251    0.0409    0.2458       0.0303                 -24.21%          deFWdeRSen2W
0.5548    0.0484    0.2688       0.0378                 -17.11%          deFWdeRSen4W
0.5673    0.0496    0.2809       0.0448                 -13.38%          deFWdeRSen3W
0.5735    0.0667    0.2912       0.0507                 -10.21%          deFSdeRSen2S          4
0.5757    0.0585    0.2997       0.0530                  -7.59%          dex021deRSenFW4FS5
0.5902    0.0610    0.3043       0.0540                  -6.17%          dex011deRSenFW4FS5
0.6054    0.0669    0.3135       0.0549                  -3.33%          deFSdeRSen5S
0.5776    0.0598    0.3023       0.0555                  -6.78%          dex021deRSenFW4FS6
0.5923    0.0629    0.3075       0.0566                  -5.18%          dex011deRSenFW4FS6
0.5789    0.0618    0.3070       0.0575                  -5.33%          dex021deRSenFW4FS4
0.6085    0.0706    0.3185       0.0577                  -1.79%          deFSdeRSen6S
0.5935    0.0644    0.3115       0.0587                  -3.95%          dex011deRSenFW4FS4
0.6208    0.0707    0.3213       0.0598                  -0.93%          deydeRSenFS3456       3
0.6137    0.0709    0.3220       0.0600                  -0.71%          deFSdeRSen4S          2
0.6008    0.0687    0.3164       0.0647                  -2.44%          dex011deRSenFW3FS3
0.6271*   0.0745*   0.3243*      0.0659                  -0.00%          deFSdeRSen3S          1
0.5983    0.0663    0.3133       0.0662                  -3.39%          dex021deRSenFW3FS3
[Figure: Robust bilingual runs, French to Dutch. Plotted runs: nlFSnlRLfr6S, nlx011nlRLfrFW4FS6, nlx021nlRLfrFW4FS6, nlFSnlRLfr2S.]

    at0       at1       avgp        gmap                   %                       run            x
0.6158    0.1164    0.3309       0.0738                   -11.05%          nlFWnlRLfr2W
0.6338    0.1140    0.3507       0.0886                    -5.73%          nlFWnlRLfr4W
0.6523    0.1236    0.3537       0.0975                    -4.92%          nlFSnlRLfr2S           4
0.6632    0.1219    0.3664       0.1025                    -1.51%          nlFSnlRLfr5S
0.6503    0.1237    0.3653       0.1029                    -1.80%          nlx021nlRLfrFW4FS5
0.6506    0.1219    0.3644       0.1029                    -2.04%          nlx011nlRLfrFW4FS5
0.6340    0.1072    0.3536       0.1032                    -4.95%          nlFWnlRLfr3W
0.6528    0.1307*   0.3692       0.1075                    -0.75%          nlx021nlRLfrFW4FS6     3
0.6536    0.1256    0.3682       0.1078                    -1.02%          nlx021nlRLfrFW4FS4
0.6696*   0.1300    0.3720*      0.1078                    -0.00%          nlFSnlRLfr6S           1
0.6544    0.1245    0.3677       0.1080                    -1.16%          nlx011nlRLfrFW4FS4
0.6560    0.1300    0.3699       0.1080                    -0.56%          nlx011nlRLfrFW4FS6     2
0.6657    0.1245    0.3680       0.1085                    -1.08%          nlFSnlRLfr4S
0.6684    0.1246    0.3687       0.1092                    -0.89%          nlynlRLfrFS3456
0.6421    0.1207    0.3621       0.1234                    -2.66%          nlx021nlRLfrFW3FS3
0.6457    0.1207    0.3614       0.1238                    -2.85%          nlx011nlRLfrFW3FS3
0.6658    0.1207    0.3634       0.1253                    -2.31%          nlFSnlRLfr3S
[Figure: Robust bilingual runs, Italian to Spanish. Plotted runs: esx011esRLitFW3FS3, esx021esRLitFW3FS3, esFSesRLit3S, esFSesRLit2S.]

    at0       at1       avgp         gmap                    %                     run          x
0.4699    0.0384    0.2113        0.0361                    -31.13%        esFWesRLit2W
0.5072    0.0479    0.2511        0.0448                    -18.16%        esFWesRLit4W
0.5331    0.0548    0.2663        0.0554                    -13.20%        esFWesRLit3W
0.5671    0.0553    0.2689        0.0620                    -12.35%        esFSesRLit2S         4
0.5448    0.0663    0.2804        0.0643                     -8.60%        esx021esRLitFW4FS5
0.5524    0.0663    0.2818        0.0646                     -8.15%        esx011esRLitFW4FS5
0.5461    0.0686    0.2884        0.0668                     -6.00%        esFSesRLit5S
0.5933    0.0663    0.2884        0.0698                     -6.00%        esx021esRLitFW4FS4
0.5568    0.0686    0.2910        0.0709                     -5.15%        esx011esRLitFW4FS4
0.5463    0.0653    0.2881        0.0713                     -6.10%        esx021esRLitFW4FS6
0.5562    0.0653    0.2902        0.0723                     -5.41%        esx011esRLitFW4FS6
0.6019    0.0686    0.2963        0.0730                     -3.42%        esFSesRLit4S
0.6054    0.0691    0.2995        0.0740                     -2.38%        esyesRLitFS3456
0.5903    0.0653    0.2953        0.0743                     -3.75%        esFSesRLit6S
0.5719    0.0704*   0.3042        0.0812                     -0.85%        esx021esRLitFW3FS3   2
0.5836    0.0704*   0.3068*       0.0826                     -0.00%        esx011esRLitFW3FS3   1
0.6140*   0.0637    0.3037        0.0833                     -1.01%        esFSesRLit3S         3
[Figure: Robust multilingual runs from English. Plotted runs: ml5XRSFSen3S, ml2XRSFSen3S, ml6XRSFSen4S, ml5XRSFSen4S, ml4XRSFSen4S, mlRSFSen2S.]

    at0       at1       avgp          gmap                       %               run            x
0.7026    0.0000    0.2079         0.1047                       -21.13%     mlRSFSen2Sl
0.7246    0.0028    0.2245         0.1072                       -14.83%     mlRSFSen5S
0.7533    0.0031*   0.2267         0.1104                       -14.00%     mlRSFSen2S          4
0.7561    0.0000    0.2219         0.1117                       -15.82%     mlRSFSen5Sl
0.7027    0.0008    0.2113         0.1125                       -19.84%     ml6YRSFSen2Sl
0.7040    0.0005    0.2141         0.1142                       -18.78%     mlRSFSen2Sn
0.7232    0.0028    0.2278         0.1142                       -13.58%     mlRSFSen6S
0.7308    0.0029    0.2300         0.1156                       -12.75%     mlRSFSen4S
0.7027    0.0009    0.2140         0.1158                       -18.82%     ml5YRSFSen2Sl
0.7028    0.0010    0.2146         0.1165                       -18.59%     ml4YRSFSen2Sl
0.7559    0.0000    0.2266         0.1198                       -14.04%     mlRSFSen4Sl
0.7028    0.0011    0.2151         0.1201                       -18.40%     ml3YRSFSen2Sl
0.7040    0.0009    0.2155         0.1208                       -18.25%     ml1YRSFSen2Sn
0.7040    0.0009    0.2161         0.1210                       -18.02%     ml6YRSFSen2Sn
0.7028    0.0011    0.2156         0.1212                       -18.21%     ml1YRSFSen2Sl
0.7040    0.0009    0.2159         0.1213                       -18.10%     ml3YRSFSen2Sn
0.7040    0.0009    0.2160         0.1213                       -18.06%     ml3XRSFSen2Sn
0.7040    0.0009    0.2161         0.1213                       -18.02%     ml4XRSFSen2Sn
0.7040    0.0009    0.2157         0.1214                       -18.17%     ml2YRSFSen2Sn
0.7040    0.0010    0.2155         0.1214                       -18.25%     ml1XRSFSen2Sn
0.7040    0.0009    0.2163         0.1217                       -17.94%     ml5XRSFSen2Sn
0.7040    0.0009    0.2156         0.1219                       -18.21%     ml2XRSFSen2Sn
0.7040    0.0009    0.2161         0.1220                       -18.02%     ml5YRSFSen2Sn
0.7040    0.0009    0.2163         0.1220                       -17.94%     ml6XRSFSen2Sn
0.7040    0.0009    0.2160         0.1221                       -18.06%     ml4YRSFSen2Sn
0.7028    0.0011    0.2155         0.1222                       -18.25%     ml2YRSFSen2Sl
0.7430    0.0000    0.2255         0.1257                       -14.45%     mlRSFSen6Sl
0.7037    0.0019    0.2285         0.1293                       -13.32%     ml2XRSFSen2Sl
0.7041    0.0021    0.2287         0.1293                       -13.24%     ml1XRSFSen2Sl
0.7038    0.0019    0.2286         0.1294                       -13.28%     ml3XRSFSen2Sl
0.7561    0.0027    0.2303         0.1295                       -12.63%     ml1XRSFSen2S
0.7041    0.0018    0.2287         0.1296                       -13.24%     ml4XRSFSen2Sl
0.7561    0.0027    0.2306         0.1296                       -12.52%     ml2XRSFSen2S
0.7040   0.0019   0.2293   0.1297   -13.01%   ml5XRSFSen2Sl
0.7560   0.0027   0.2314   0.1299   -12.22%   ml5XRSFSen2S
0.7044   0.0019   0.2298   0.1300   -12.82%   ml6XRSFSen2Sl
0.7547   0.0017   0.2300   0.1300   -12.75%   mlRSFSen5Sn
0.7559   0.0027   0.2314   0.1300   -12.22%   ml6XRSFSen2S
0.7560   0.0027   0.2312   0.1300   -12.29%   ml4XRSFSen2S
0.7561   0.0027   0.2311   0.1300   -12.33%   ml3XRSFSen2S
0.7669   0.0020   0.2403   0.1320    -8.84%   mlRSFSen3S
0.7629   0.0010   0.2316   0.1329   -12.14%   ml6YRSFSen5Sl
0.7428   0.0013   0.2338   0.1365   -11.31%   mlRSFSen6Sn
0.7629   0.0012   0.2344   0.1372   -11.08%   ml5YRSFSen5Sl
0.7464   0.0000   0.2299   0.1380   -12.78%   mlRSFSen3Sl
0.7546   0.0018   0.2352   0.1380   -10.77%   mlRSFSen4Sn
0.7629   0.0013   0.2349   0.1380   -10.89%   ml4YRSFSen5Sl
0.7629   0.0015   0.2350   0.1383   -10.85%   ml3YRSFSen5Sl
0.7607   0.0015   0.2370   0.1384   -10.09%   ml6YRSFSen5Sn
0.7606   0.0017   0.2359   0.1388   -10.51%   ml1XRSFSen5Sn
0.7607   0.0015   0.2371   0.1388   -10.05%   ml5YRSFSen5Sn
0.7607   0.0016   0.2369   0.1388   -10.13%   ml4XRSFSen5Sn
0.7607   0.0017   0.2364   0.1388   -10.32%   ml3XRSFSen5Sn
0.7607   0.0015   0.2368   0.1389   -10.17%   ml3YRSFSen5Sn
0.7513   0.0009   0.2358   0.1391   -10.55%   ml6YRSFSen6Sl
0.7607   0.0015   0.2366   0.1391   -10.24%   ml2YRSFSen5Sn
0.7629   0.0015   0.2353   0.1391   -10.74%   ml2YRSFSen5Sl
0.7607   0.0015   0.2364   0.1392   -10.32%   ml1YRSFSen5Sn
0.7607   0.0015   0.2369   0.1392   -10.13%   ml4YRSFSen5Sn
0.7607   0.0017   0.2362   0.1394   -10.39%   ml2XRSFSen5Sn
0.7607   0.0016   0.2373   0.1396    -9.98%   ml6XRSFSen5Sn
0.7607   0.0016   0.2372   0.1398   -10.02%   ml5XRSFSen5Sn
0.7629   0.0016   0.2354   0.1398   -10.70%   ml1YRSFSen5Sl
0.7617   0.0010   0.2362   0.1402   -10.39%   ml6YRSFSen4Sl
0.7513   0.0012   0.2386   0.1430    -9.48%   ml5YRSFSen6Sl
0.7513   0.0013   0.2391   0.1438    -9.29%   ml4YRSFSen6Sl
0.7617   0.0012   0.2390   0.1438    -9.33%   ml5YRSFSen4Sl
0.7491   0.0013   0.2405   0.1439    -8.76%   ml3YRSFSen6Sn
0.7491   0.0013   0.2401   0.1440    -8.92%   ml2YRSFSen6Sn
0.7511   0.0000   0.2406   0.1440    -8.73%   ml6YRSFSen3Sl
0.7491   0.0012   0.2405   0.1441    -8.76%   ml6YRSFSen6Sn
0.7491   0.0012   0.2407   0.1441    -8.69%   ml5YRSFSen6Sn
0.7513   0.0015   0.2393   0.1442    -9.22%   ml3YRSFSen6Sl
0.7617   0.0015   0.2399   0.1443    -8.99%   ml3YRSFSen4Sl
0.7617   0.0014   0.2396   0.1445    -9.10%   ml4YRSFSen4Sl
0.7491   0.0014   0.2395   0.1446    -9.14%   ml1XRSFSen6Sn
0.7491   0.0013   0.2399   0.1450    -8.99%   ml1YRSFSen6Sn
0.7513   0.0015   0.2397   0.1450    -9.07%   ml2YRSFSen6Sl
0.7491   0.0013   0.2406   0.1451    -8.73%   ml5XRSFSen6Sn
0.7491   0.0014   0.2398   0.1451    -9.03%   ml2XRSFSen6Sn
0.7491   0.0014   0.2404   0.1452    -8.80%   ml3XRSFSen6Sn
0.7491   0.0014   0.2407   0.1452    -8.69%   ml4XRSFSen6Sn
0.7491   0.0013   0.2408   0.1453    -8.65%   ml6XRSFSen6Sn
0.7827   0.0024   0.2539   0.1453    -3.68%   ml6XRSFSen5S
0.7514   0.0016   0.2398   0.1456    -9.03%   ml1YRSFSen6Sl
0.7603   0.0016   0.2423   0.1458    -8.08%   ml6YRSFSen4Sn
0.7617   0.0016   0.2402   0.1459    -8.88%   ml2YRSFSen4Sl
0.7830   0.0024   0.2520   0.1459    -4.40%   ml1XRSFSen5S
0.7601   0.0023   0.2534   0.1463    -3.87%   ml4XRSFSen4Sl
0.7603   0.0016   0.2425   0.1463    -8.00%   ml5YRSFSen4Sn
0.7830   0.0024   0.2529   0.1463   -4.06%   ml3XRSFSen5S
0.7603   0.0016   0.2420   0.1464   -8.19%   ml2YRSFSen4Sn
0.7603   0.0016   0.2422   0.1464   -8.12%   ml3YRSFSen4Sn
0.7603   0.0016   0.2424   0.1464   -8.04%   ml4YRSFSen4Sn
0.7830   0.0024   0.2525   0.1464   -4.21%   ml2XRSFSen5S
0.7603   0.0016   0.2418   0.1467   -8.27%   ml1YRSFSen4Sn
0.7603   0.0017   0.2412   0.1468   -8.50%   ml1XRSFSen4Sn
0.7828   0.0024   0.2537   0.1468   -3.76%   ml5XRSFSen5S
0.7603   0.0016   0.2426   0.1470   -7.97%   ml6XRSFSen4Sn
0.7617   0.0016   0.2404   0.1470   -8.80%   ml1YRSFSen4Sl
0.7603   0.0016   0.2415   0.1471   -8.38%   ml2XRSFSen4Sn
0.7624   0.0023   0.2471   0.1471   -6.26%   ml1XRSFSen5Sl
0.7603   0.0017   0.2420   0.1472   -8.19%   ml3XRSFSen4Sn
0.7463   0.0010   0.2473   0.1473   -6.18%   ml5YRSFSen3Sn
0.7464   0.0010   0.2470   0.1473   -6.30%   ml4XRSFSen3Sn
0.7603   0.0016   0.2425   0.1473   -8.00%   ml4XRSFSen4Sn
0.7603   0.0016   0.2426   0.1473   -7.97%   ml5XRSFSen4Sn
0.7628   0.0023   0.2480   0.1475   -5.92%   ml4XRSFSen5Sl
0.7491   0.0013   0.2406   0.1477   -8.73%   ml4YRSFSen6Sn
0.7628   0.0023   0.2487   0.1484   -5.65%   ml5XRSFSen5Sl
0.7628   0.0023   0.2494   0.1487   -5.39%   ml6XRSFSen5Sl
0.7418   0.0005   0.2385   0.1498   -9.52%   mlRSFSen3Sn
0.7794   0.0024   0.2549   0.1507   -3.30%   ml1XRSFSen4S
0.7793   0.0024   0.2561   0.1510   -2.85%   ml3XRSFSen4S
0.7792   0.0024   0.2567   0.1512   -2.62%   ml5XRSFSen4S    2
0.7793   0.0024   0.2554   0.1512   -3.11%   ml2XRSFSen4S
0.7767   0.0027   0.2561   0.1514   -2.85%   ml1XRSFSen6S
0.7792   0.0024   0.2565   0.1516   -2.69%   ml4XRSFSen4S    3
0.7767   0.0027   0.2566   0.1517   -2.66%   ml2XRSFSen6S
0.7791   0.0024   0.2567   0.1517   -2.62%   ml6XRSFSen4S    1
0.7766   0.0027   0.2573   0.1521   -2.39%   ml3XRSFSen6S
0.7765   0.0027   0.2576   0.1522   -2.28%   ml4XRSFSen6S
0.7765   0.0027   0.2576   0.1523   -2.28%   ml5XRSFSen6S
0.7511   0.0009   0.2442   0.1524   -7.36%   ml5YRSFSen3Sl
0.7764   0.0027   0.2578   0.1528   -2.20%   ml6XRSFSen6S
0.7511   0.0010   0.2449   0.1532   -7.09%   ml4YRSFSen3Sl
0.7487   0.0024   0.2511   0.1533   -4.74%   ml1XRSFSen6Sl
0.7487   0.0025   0.2512   0.1539   -4.70%   ml2XRSFSen6Sl
0.7491   0.0025   0.2518   0.1542   -4.48%   ml3XRSFSen6Sl
0.7511   0.0011   0.2453   0.1544   -6.94%   ml3YRSFSen3Sl
0.7495   0.0025   0.2523   0.1546   -4.29%   ml4XRSFSen6Sl
0.7599   0.0023   0.2528   0.1550   -4.10%   ml3XRSFSen4Sl
0.7600   0.0023   0.2523   0.1550   -4.29%   ml2XRSFSen4Sl
0.7605   0.0022   0.2523   0.1551   -4.29%   ml1XRSFSen4Sl
0.7496   0.0025   0.2528   0.1552   -4.10%   ml5XRSFSen6Sl
0.7511   0.0011   0.2456   0.1553   -6.83%   ml2YRSFSen3Sl
0.7500   0.0025   0.2535   0.1556   -3.83%   ml6XRSFSen6Sl
0.7829   0.0024   0.2534   0.1556   -3.87%   ml4XRSFSen5S
0.7511   0.0011   0.2457   0.1560   -6.79%   ml1YRSFSen3Sl
0.7597   0.0023   0.2546   0.1560   -3.41%   ml6XRSFSen4Sl
0.7600   0.0023   0.2539   0.1560   -3.68%   ml5XRSFSen4Sl
0.7464   0.0010   0.2462   0.1561   -6.60%   ml1YRSFSen3Sn
0.7464   0.0010   0.2469   0.1565   -6.34%   ml6XRSFSen3Sn
0.7463   0.0010   0.2470   0.1567   -6.30%   ml3YRSFSen3Sn
0.7463   0.0010   0.2471   0.1567   -6.26%   ml4YRSFSen3Sn
0.7464   0.0010   0.2467   0.1567   -6.41%   ml2YRSFSen3Sn
0.7464   0.0010   0.2471   0.1567   -6.26%   ml5XRSFSen3Sn
0.7463    0.0010   0.2471    0.1568   -6.26%   ml6YRSFSen3Sn
0.7465    0.0010   0.2468    0.1568   -6.37%   ml3XRSFSen3Sn
0.7626    0.0024   0.2473    0.1568   -6.18%   ml3XRSFSen5Sl
0.7627    0.0024   0.2470    0.1568   -6.30%   ml2XRSFSen5Sl
0.7465    0.0010   0.2466    0.1570   -6.45%   ml2XRSFSen3Sn
0.7466    0.0011   0.2463    0.1571   -6.56%   ml1XRSFSen3Sn
0.7903    0.0013   0.2620    0.1636   -0.61%   ml1XRSFSen3S
0.7904*   0.0013   0.2626    0.1640   -0.38%   ml2XRSFSen3S
0.7903    0.0013   0.2631    0.1642   -0.19%   ml3XRSFSen3S
0.7901    0.0013   0.2636*   0.1644   -0.00%   ml6XRSFSen3S
0.7902    0.0013   0.2636*   0.1644   -0.00%   ml5XRSFSen3S
0.7903    0.0013   0.2635    0.1644   -0.04%   ml4XRSFSen3S
0.7504    0.0010   0.2611    0.1672   -0.95%   ml3XRSFSen3Sl
0.7505    0.0009   0.2608    0.1672   -1.06%   ml2XRSFSen3Sl
0.7506    0.0009   0.2608    0.1673   -1.06%   ml1XRSFSen3Sl
0.7503    0.0010   0.2614    0.1674   -0.83%   ml4XRSFSen3Sl
0.7497    0.0009   0.2618    0.1676   -0.68%   ml6XRSFSen3Sl
0.7498    0.0009   0.2617    0.1677   -0.72%   ml5XRSFSen3Sl
                                                 References
 [1] Aoe, Jun-Ichi; Morimoto, Katsushi; Sato, Takashi. An Efficient Implementation of Trie Structures.
     Software Practice and Experience 22(9): 695-721, 1992.
 [2] Automatic Trans SL, Spain. Automatic translation server. On line http://www.automatictrans.es [Visited
     18/07/2006].
 [3] BabelFish translation resources. On line http://babelfish.altavista.com [Visited 18/07/2006].
 [4] Babylon.com, Ltd, Israel. On line http://www.babylon.com [Visited 18/07/2006].
 [5] de Pablo, C.; González-Ledesma, A.; Martínez-Fernández, J. L.; Guirao, J.M.; Martínez, P.; and Moreno,
     A. MIRACLE’s Cross-Lingual Question Answering Experiments with Spanish as a Target Language.
     Accessing Multilingual Information Repositories: 6th Workshop of the Cross Language Evaluation Forum
     2005, CLEF 2005, Vienna, Austria, Revised Selected Papers (Peters, C. et al., Eds.). Lecture Notes in
     Computer Science, vol. 4022, Springer (to appear).
 [6] de Pablo, C.; González-Ledesma, A.; Martínez-Fernández, J. L.; Guirao, J.M.; Martínez, P.; and Moreno,
     A. MIRACLE’s 2005 Approach to Cross-Lingual Question Answering. Working Notes for the CLEF
     2005 Workshop. Vienna, Austria, 2005.
 [7] de Pablo, C.; Martínez-Fernández, J. L.; Martínez, P.; and Villena, J. miraQA: Experiments with Learning
     Answer Context Patterns from the Web. Multilingual Information Access for Text, Speech and Images:
     5th Workshop of the Cross-Language Evaluation Forum, CLEF 2004, Bath, UK, September 15-17, 2004,
     Revised Selected Papers (Carol Peters, Paul Clough, Julio Gonzalo, et al., Eds.). Lecture Notes in
     Computer Science, vol. 3491, pp. 494-501. Springer, 2005.
 [8] de Pablo, C.; Martínez-Fernández, J. L.; Martínez, P.; Villena, J.; García-Serrano, A. M.; Goñi, J. M.; and
     González, J. C. miraQA: Initial experiments in Question Answering. Working Notes for the CLEF 2004
     Workshop (Carol Peters and Francesca Borri, Eds.), pp. 371-376. Bath, United Kingdom, 2004.
 [9] Ergane multilingual translation dictionary. On line http://download.travlang.com [Visited 18/07/2006].
[10] Free2Translation. Free text translator. On line http://www.freetranslation.com [Visited 18/07/2006].
[11] Goñi-Menoyo, J.M.; González-Cristóbal, J.C.; and Villena-Román, J. MIRACLE at Ad-Hoc CLEF 2005:
     Merging and Combining without Using a Single Approach. Accessing Multilingual Information
     Repositories: 6th Workshop of the Cross Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria,
     Revised Selected Papers (Peters, C. et al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer
     (to appear).
[12] Goñi-Menoyo, J.M.; González, J.C.; and Villena-Román, J. Miracle’s 2005 Approach to Monolingual
     Information Retrieval. Working Notes for the CLEF 2005 Workshop. Vienna, Austria, 2005.
[13] Goñi-Menoyo, José M; González, José C.; Martínez-Fernández, José L.; and Villena, J. MIRACLE’s
     Hybrid Approach to Bilingual and Monolingual Information Retrieval. Multilingual Information Access
     for Text, Speech and Images: 5th Workshop of the Cross-Language Evaluation Forum, CLEF 2004, Bath,
     UK, September 15-17, 2004, Revised Selected Papers (Carol Peters, Paul Clough, Julio Gonzalo, et al.,
     Eds.). Lecture Notes in Computer Science, vol. 3491, pp. 188-199. Springer, 2005.
[14] Goñi-Menoyo, José M.; González, José C.; Martínez-Fernández, José L.; Villena-Román, Julio; García-
     Serrano, Ana; Martínez-Fernández, Paloma; de Pablo-Sánchez, César; and Alonso-Sánchez, Javier.
     MIRACLE’s hybrid approach to bilingual and monolingual Information Retrieval. Working Notes for the
     CLEF 2004 Workshop (Carol Peters and Francesca Borri, Eds.), pp. 141-150. Bath, United Kingdom,
     2004.
[15] Goñi-Menoyo, José Miguel; González-Cristóbal, José Carlos; and Fombella-Mourelle, Jorge. An
     optimised trie index for natural language processing lexicons. MIRACLE Technical Report. Universidad
     Politécnica de Madrid, 2004.
[16] González, J.C.; Goñi-Menoyo, J.M.; and Villena-Román, J. Miracle’s 2005 Approach to Cross-lingual
     Information Retrieval. Working Notes for the CLEF 2005 Workshop. Vienna, Austria, 2005.
[17] Google language tools. On line http://www.google.com/language_tools [Visited 18/07/2006].
[18] Lana-Serrano, S.; Goñi-Menoyo, J.M.; and González-Cristóbal, J.C. MIRACLE at GeoCLEF 2005: First
     Experiments in Geographical IR. Accessing Multilingual Information Repositories: 6th Workshop of the
     Cross Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria, Revised Selected Papers (Peters,
     C. et al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer (to appear).
[19] Lana-Serrano, S.; Goñi-Menoyo, J.M.; and González-Cristóbal, J.C. MIRACLE’s 2005 Approach to
     Geographical Information Retrieval. Working Notes for the CLEF 2005 Workshop. Vienna, Austria,
     2005.
[20] Martínez-Fernández, J.L.; Villena-Román, J.; García-Serrano, A.M.; and González-Cristóbal, J.C.
     Combining Textual and Visual Features for Image Retrieval. Accessing Multilingual Information
     Repositories: 6th Workshop of the Cross Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria,
     Revised Selected Papers (Peters, C. et al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer
     (to appear).
[21] Martínez-Fernández, José L.; García-Serrano, Ana; Villena, J.; and Méndez-Sáez, V. MIRACLE approach
     to ImageCLEF 2004: merging textual and content-based Image Retrieval. Multilingual Information
     Access for Text, Speech and Images: 5th Workshop of the Cross-Language Evaluation Forum, CLEF
     2004, Bath, UK, September 15-17, 2004, Revised Selected Papers (Carol Peters, Paul Clough, Julio
     Gonzalo, et al., Eds.). Lecture Notes in Computer Science, vol. 3491, pp. 699-708. Springer, 2005.
[22] Martínez-Fernández, J. L.; García-Serrano, A.; Villena, J.; Méndez-Sáez, V.D.; González-Tortosa, S.;
     Castagnone, M.; and Alonso, J. MIRACLE at ImageCLEF 2004. Working Notes for the CLEF 2004
     Workshop (Carol Peters and Francesca Borri, Eds.), pp. 545-553. Bath, United Kingdom, 2004.
[23] Martínez, José L.; Villena, Julio; Fombella, Jorge; G. Serrano, Ana; Martínez, Paloma; Goñi, José M.; and
     González, José C. MIRACLE Approaches to Multilingual Information Retrieval: A Baseline for Future
     Research. Comparative Evaluation of Multilingual Information Access Systems (Peters, C; Gonzalo, J.;
     Brascher, M.; and Kluck, M., Eds.). Lecture Notes in Computer Science, vol. 3237, pp. 210-219.
     Springer, 2004.
[24] Martínez, J.L.; Villena-Román, J.; Fombella, J.; García-Serrano, A.; Ruiz, A.; Martínez, P.; Goñi, J.M.;
     and González, J.C. Evaluation of MIRACLE approach results for CLEF 2003. Working Notes for the
     CLEF 2003 Workshop (Carol Peters, Ed.), pp. 115-124. Trondheim, Norway, 21-22 August 2003.
[25] Martínez-González, A.; Martínez-Fernández, J. L.; de Pablo-Sánchez, C.; Villena-Román, J.; Jiménez-
     Cuadrado, L.; Martínez, P.; and González-Cristóbal, J.C. MIRACLE at WebCLEF 2005: Combining Web
     Specific and Linguistic Information. Accessing Multilingual Information Repositories: 6th Workshop of
     the Cross Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria, Revised Selected Papers
     (Peters, C. et al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer (to appear).
[26] Martínez-González, A.; Martínez-Fernández, J. L.; de Pablo-Sánchez, C.; Villena-Román, J.; Jiménez-
     Cuadrado, L.; Martínez, P.; and González-Cristóbal, J.C. MIRACLE’s Approach to Multilingual Web
     Retrieval. Working Notes for the CLEF 2005 Workshop. Vienna, Austria, 2005.
[27] MorphoLogic, Hungary. MoBiCAT translation resources. On line http://www.morphologic.hu [Visited
     18/07/2006].
[28] Porter, Martin. Snowball stemmers and resources page. On line http://www.snowball.tartarus.org [Visited
     18/07/2006].
[29] Pro Langs Ltd., Bulgaria. BULTRA translation resources. On line http://www.bultra.com [Visited
     18/07/2006].
[30] Prompt-Online free automatic translation service. On line http://translation2.paralink.com [Visited
     18/07/2006].
[31] Reverso translation resources. On line http://www.reverso.net/text_translation.asp [Visited 18/07/2006].
[32] Robertson, S.E. et al. Okapi at TREC-3. In Overview of the Third Text REtrieval Conference (TREC-3).
     D.K. Harman (Ed.). Gaithersburg, MD: NIST, April 1995.
[33] Savoy, Jacques. Report on CLEF-2003 Multilingual Tracks. Comparative Evaluation of Multilingual
     Information Access Systems (Peters, C; Gonzalo, J.; Brascher, M.; and Kluck, M., Eds.). Lecture Notes in
     Computer Science, vol. 3237, pp. 64-73. Springer, 2004.
[34] Skycode Ltd., Bulgaria. Webtrance translation program. On line http://webtrance.skycode.com/
     ?current=&lang=en [Visited 18/07/2006].
[35] SYSTRAN Software Inc., USA. SYSTRAN 5.0 translation resources. On line http://www.systransoft.com
     [Visited 18/07/2006].
[36] Translation Experts Ltd. InterTrans translation resources. On line http://www.tranexp.com [Visited
     18/07/2006].
[37] Travlang translating dictionaries. On line http://www.dictionaries.travlang.com/otherdicts.html [Visited
     18/07/2006].
[38] University of Neuchatel. Page of resources for CLEF (Stopwords, transliteration, stemmers …). On line
     http://www.unine.ch/info/clef [Visited 18/07/2006].
[39] Villena-Román, J.; Goñi-Menoyo, J.M.; González-Cristóbal, J.C.; and Martínez-Fernández, J.L.
     MIRACLE Retrieval Experiments with East Asian Languages. Proceedings of the Fifth NTCIR
     Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question
     Answering and Cross-Lingual Information Access, pp. 138-144. Tokyo, Japan, 2005.
[40] Villena-Román, J.; Crespo-García, R.M.; and González-Cristóbal, J.C. Effect of Connective Functions in
     Interactive Image Retrieval. Accessing Multilingual Information Repositories: 6th Workshop of the Cross
     Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria, Revised Selected Papers (Peters, C. et
     al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer (to appear).
[41] Villena-Román, J.; González-Cristóbal, J.C.; Goñi-Menoyo, J.M.; Martínez Fernández, J.L.; and
     Fernández, J.J. MIRACLE’s Combination of Visual and Textual Queries for Medical Images
     Retrieval. Working Notes for the CLEF 2005 Workshop. Vienna, Austria, 2005.
[42] Villena-Román, J.; González-Cristóbal, J.C.; Goñi-Menoyo, J.M.; and Martínez-Fernández, J.L. An
     Information Retrieval Approach to Medical Image Annotation. Accessing Multilingual Information
     Repositories: 6th Workshop of the Cross Language Evaluation Forum 2005, CLEF 2005, Vienna, Austria,
     Revised Selected Papers (Peters, C. et al., Eds.). Lecture Notes in Computer Science, vol. 4022, Springer
     (to appear).
[43] Villena-Román, J.; González-Cristóbal, J.C.; Goñi-Menoyo, J.M.; and Martínez Fernández, J.L.
     MIRACLE’s Naive Approach to Medical Images Annotation. Working Notes for the CLEF 2005
     Workshop. Vienna, Austria, 2005.
[44] Villena, Julio; Martínez, José L.; Fombella, Jorge; G. Serrano, Ana; Ruiz, Alberto; Martínez, Paloma;
     Goñi, José M.; and González, José C. Image Retrieval: The MIRACLE Approach. Comparative
     Evaluation of Multilingual Information Access Systems (Peters, C; Gonzalo, J.; Brascher, M.; and Kluck,
     M., Eds.). Lecture Notes in Computer Science, vol. 3237, pp. 621-630. Springer, 2004.
[45] Villena-Román, J.; Martínez, J.L.; Fombella, J.; García-Serrano, A.; Ruiz, A.; Martínez, P.; Goñi, J.M.;
     and González, J.C. MIRACLE results for ImageCLEF 2003. Working Notes for the CLEF 2003
     Workshop (Carol Peters, Ed.), pp. 405-411. Trondheim, Norway, 21-22 August 2003.
[46] WorldLingo Translations LLC, USA. WorldLingo free online translator. On line http://www.world-
     lingo.com/en/products_services/worldlingo_translator.html [Visited 18/07/2006].