<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Task 3 Patient-Centred Information Retrieval: Team CUNI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shadi Saleh</string-name>
          <email>saleh@ufal.mff.cuni.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavel Pecina</string-name>
          <email>pecina@ufal.mff.cuni.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Charles University Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper describes the systems we submitted to the 2017 CLEF eHealth information retrieval (IR) task. We submitted runs to both the monolingual and the multilingual subtasks. In the monolingual subtask, we investigate the performance of two IR models: a probabilistic model and a language-model-based model. In addition, we experiment with query expansion based on blind relevance feedback. In the multilingual subtask, we submitted runs for all the languages. We employ a statistical machine translation (SMT) system to translate the given queries into English and obtain an n-best-list of translations. Our baseline system uses the 1-best translation to generate queries; we also apply an n-best-list reranker, developed in our previous work, to predict a 1-best translation that yields better IR performance. Finally, we present a query expansion approach based on a machine learning model that predicts which term from a translation pool should be added to the original query.</p>
      </abstract>
      <kwd-group>
        <kwd>Multilingual information retrieval</kwd>
        <kwd>Machine Translation</kwd>
        <kwd>Machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Internet searches for medical topics have been increasing in recent years and have
attracted the attention of information retrieval researchers. Fox [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] reported that about
80% of Internet users in the United States look for medical information online.
The main challenge in medical information retrieval is that people with
different levels of experience express their information needs in different ways [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
Laypeople express their medical information needs using non-medical terms, while
medical experts use specific medical terminology; information retrieval
systems therefore need to be robust to such query variations.
      </p>
      <p>
        The significant increase of non-English digital content on the World Wide
Web has been accompanied by an increase in users searching for this information.
Grefenstette and Nioche [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presented an estimation of language size
in 1996, late 1999, and early 2000 for documents captured from the Internet.
Their study showed that over that period English content grew by 800%, German by 1500%,
and Spanish by 1800%. Furthermore, users started to look for
information contained in documents that are not available in their
native languages. A system that searches for information in a language different
from that of the user's query is called a Cross-Lingual Information Retrieval
(CLIR) system. It enables users to write queries (information needs)
in one language (language A) and returns results from a document collection written
in a different language (language B).
      </p>
      <p>
        The usual baseline system in CLIR takes the 1-best translation
returned by a statistical machine translation (SMT) system and performs the
retrieval, as shown in previous CLEF eHealth Information Retrieval tasks [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
However, researchers have recently started to look inside the box of
the machine translation system rather than using it as a black box [
        <xref ref-type="bibr" rid="ref17 ref6">17, 6</xref>
        ] and
showed that involving the internal components of the SMT system in the retrieval
process significantly improves over the baseline system.
      </p>
      <p>
        Nikoulina et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] presented an approach to developing a cross-lingual
information retrieval (CLIR) system based on reranking the hypotheses returned
by the SMT system. Saleh and Pecina [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] took Nikoulina's work as a
starting point and extended it with a rich set of features for training. Their
approach covered translating queries from Czech, French, and German
into English and reranking the alternative translations to predict the hypothesis
that gives the best CLIR performance.
      </p>
      <p>
        In this paper, we describe our participation in the 2017 CLEF eHealth
Information Retrieval Task [
        <xref ref-type="bibr" rid="ref13 ref4">13, 4</xref>
        ]. In IRTask1, participants were provided with
English queries representing medical information needs and were asked to provide a
ranked list of documents from the ClueWeb collection sorted by relevance.
IRTask4 is a multilingual IR task: the original English queries were
translated into seven languages (Czech, French, Hungarian, German, Polish, Spanish,
and Swedish) by medical native speakers. Participants in this task were required
to provide a ranked list of relevant documents from the English collection. We
focus our participation on the multilingual IR task. We present our machine
learning model, which reranks the alternative translations given by the machine
translation system for better IR results. We also present a new approach to
expanding translated queries using a machine learning model.
      </p>
    </sec>
    <sec id="sec-2">
      <title>System description</title>
      <sec id="sec-2-1">
        <title>Retrieval model</title>
        <p>
          In our experiments we use the ClueWeb12 collection indexed and released by the
organisers of this task. The index was created using the Terrier open-source engine
[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. We mainly use BM25 as a retrieval model. Documents in this model are
ranked for a given query as shown in Equation 1, where k1 and k3 are tuning
parameters, which we leave at their default values in Terrier, and tfd is
the term frequency in document d, normalised by Equation 2, in which dl and
avgdl are the document length and the average document length in the collection,
respectively. The parameter b is a free parameter; we tuned it using the 2016 CLEF
eHealth IR monolingual queries and the provided assessment information, and
set it to 0.6.
        </p>
        <p>RSV(d, q) = Σ_{t ∈ d ∩ q} idf(t) · ((k1 + 1) · tfd) / (K + tfd) · ((k3 + 1) · tfq) / (k3 + tfq)   (1)</p>
        <p>tfd = tf / ((1 − b) + b · dl / avgdl)   (2)</p>
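        <p>As an illustration, Equations 1 and 2 can be sketched in a few lines of Python. This is a toy implementation over a made-up three-document collection, not the Terrier code; the smoothed idf formula is a common BM25 variant and an assumption on our part.</p>
        <preformat>
```python
import math
from collections import Counter

def bm25_score(query, doc, docs, k1=1.2, k3=8.0, b=0.6):
    """Score one document for a query with BM25 (Equations 1 and 2).
    K in Equation 1 reduces to k1 here, because the length
    normalisation is already folded into tfd by Equation 2."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    d_tf, q_tf = Counter(doc), Counter(query)
    score = 0.0
    for t in set(query) & set(doc):
        df = sum(1 for d in docs if t in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)   # smoothed idf
        tfd = d_tf[t] / ((1 - b) + b * len(doc) / avgdl)    # Equation 2
        score += idf * ((k1 + 1) * tfd) / (k1 + tfd) \
                     * ((k3 + 1) * q_tf[t]) / (k3 + q_tf[t])  # Equation 1
    return score

# Toy collection (illustrative only)
docs = [["heart", "attack", "symptoms"],
        ["skin", "rash", "treatment"],
        ["heart", "disease", "risk"]]
print(bm25_score(["heart", "symptoms"], docs[0], docs))
```
        </preformat>
        <p>With b = 0.6 as tuned above, a document matching both query terms scores higher than one matching only "heart", and a document sharing no terms scores zero.</p>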
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Translation System</title>
      <p>
        We employ the Khresmoi statistical machine translation (SMT) system [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for the
language pairs Czech–English, French–English, German–English, Hungarian–English,
Polish–English, Spanish–English, and Swedish–English to translate the queries
into English. The Khresmoi SMT system was trained to translate queries, where
most general SMT systems fail, and was tuned on parallel and monolingual data
taken from medical-domain resources such as Wikipedia, UMLS concept
descriptions, and the UMLS Metathesaurus. Such domain-specific data make Khresmoi
perform well when translating sentences in the medical domain, such as the queries
in our case. Generally, feature weights in SMT systems are tuned towards BLEU
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], a method for automatic evaluation of SMT systems that correlates with human
judgments. However, the quality of a general SMT system does not necessarily correlate
with CLIR performance [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]; therefore, the Khresmoi SMT
system was tuned using MERT [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] towards PER (position-independent word
error rate), because PER does not penalise word reordering, which is not important for
the performance of IR systems.
      </p>
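      <p>Why PER suits this setup can be seen in a small sketch: comparing sentences as bags of words means a reordered translation incurs no errors. This is one simplified formulation of PER, and the toy sentences are ours, not examples from the task data.</p>
      <preformat>
```python
from collections import Counter

def per(hypothesis, reference):
    """Position-independent word error rate (simplified): word order is
    ignored by comparing the two sentences as bags of words."""
    h, r = Counter(hypothesis), Counter(reference)
    matches = sum((h & r).values())                     # bag-wise overlap
    errors = max(len(hypothesis), len(reference)) - matches
    return errors / len(reference)

ref = "symptoms of heart attack".split()
print(per("symptoms of heart attack".split(), ref))  # identical translation
print(per("heart attack symptoms of".split(), ref))  # reordered: no penalty
print(per("signs of heart attack".split(), ref))     # one word substituted
```
      </preformat>
      <p>A word-order-sensitive metric such as WER would penalise the second hypothesis; PER does not, which matches the needs of a bag-of-words retrieval model.</p>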
    </sec>
    <sec id="sec-4">
      <title>Hypothesis reranking</title>
      <p>
        For each input sentence, the Khresmoi SMT system returns a list of alternative
translations in the target language; we refer to this list as an n-best-list. Saleh
and Pecina [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] presented an approach to rerank an n-best-list and predict the
translation that gives the best retrieval performance in terms of P@10. The
reranker is a generalized linear regression model that uses a set of features which
can be divided according to their sources into: 1) The SMT system: features
derived from the verbose output of the Khresmoi SMT
system (e.g. the phrase translation model, the target language model, the reordering
model, and the word penalty). 2) Document collection: the collection is employed
to derive features such as IDF scores and features based on the blind
relevance feedback approach. 3) External resources: resources such as Wikipedia
articles, the document collection, and the UMLS Metathesaurus are employed to create a
rich set of features for each query hypothesis. 4) Retrieval status value: this
feature involves the retrieval model in the reranking. It is based on
how the Dirichlet model scores the retrieved documents for a given query. This
approach is similar to the work of Nottelmann and Fuhr [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], who investigated
the correlation between the RSV and the probability of relevance.
      </p>
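      <p>The reranking step itself can be sketched as follows: score every hypothesis in the n-best-list with a linear model over its features and keep the highest-scoring one. The feature names and weights here are purely illustrative, not the trained values from Saleh and Pecina [16].</p>
      <preformat>
```python
def predict_p10(features, weights, bias=0.0):
    """Linear regression: predicted P@10 for one translation hypothesis."""
    return bias + sum(weights[name] * value for name, value in features.items())

def rerank(nbest, weights):
    """Return the hypothesis with the highest predicted P@10."""
    return max(nbest, key=lambda hyp: predict_p10(hyp["features"], weights))

# Illustrative weights and feature values (not the trained model)
weights = {"tm_score": 0.4, "lm_score": 0.3, "idf_sum": 0.2, "rsv_top1": 0.1}
nbest = [
    {"text": "heart attack symptoms",
     "features": {"tm_score": -1.2, "lm_score": -2.0, "idf_sum": 5.1, "rsv_top1": 8.3}},
    {"text": "cardiac arrest signs",
     "features": {"tm_score": -1.5, "lm_score": -2.4, "idf_sum": 6.0, "rsv_top1": 7.9}},
]
best = rerank(nbest, weights)
print(best["text"])
```
      </preformat>
      <p>The predicted 1-best translation then replaces the SMT system's own 1-best output before retrieval.</p>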
      <p>To train the model, we used queries and assessment information from the
2016 CLEF eHealth IR task.</p>
    </sec>
    <sec id="sec-5">
      <title>Query expansion</title>
      <sec id="sec-5-1">
        <title>Blind relevance feedback</title>
        <p>Query expansion is defined as the procedure of reformulating a user's query
for better retrieval effectiveness. Blind Relevance Feedback (BRF), also known as
Pseudo Relevance Feedback, is a process of automatically expanding a user's query.
It considers the top k documents retrieved for the original query as relevant, and then
expands the query with terms from these documents. However, the assumption
that these documents are relevant is risky, because they might not be,
causing the expanded query to drift away from its information
need. The top k documents are chosen from an initial retrieval performed using
the original query. From these documents we create a bag-of-words (BOW) and
then choose m terms from this BOW to be added to the original query. These
terms are chosen based on their inverse document frequency in the collection
and their frequencies in the BOW. Both k and m need to be tuned for
the collection in use, using test queries and assessment information. We use the
Terrier implementation of BRF and tune k and m using the 2016 CLEF eHealth
IR task queries and their assessment information; based on the results
we set k = 3 and m = 10.</p>
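        <p>The BRF procedure above can be sketched as follows. The tf·idf-style term score is a simplification of ours; Terrier's Bo1 model uses a Bose-Einstein weighting instead, and the toy collection is illustrative.</p>
        <preformat>
```python
import math
from collections import Counter

def brf_expand(query, ranked_docs, collection, k=3, m=10):
    """Blind relevance feedback: treat the top-k retrieved documents as
    relevant and add the m highest-scoring terms from their bag-of-words."""
    N = len(collection)
    bow = Counter(t for doc in ranked_docs[:k] for t in doc)
    def score(t):
        df = sum(1 for d in collection if t in d)
        return bow[t] * math.log(N / (df + 1) + 1)  # frequency in BOW x idf
    candidates = [t for t in bow if t not in query]
    expansion = sorted(candidates, key=score, reverse=True)[:m]
    return list(query) + expansion

collection = [["heart", "attack", "chest", "pain"],
              ["chest", "pain", "causes"],
              ["skin", "rash"],
              ["heart", "disease"]]
ranked = [collection[0], collection[1], collection[3]]  # pretend initial ranking
print(brf_expand(["heart", "attack"], ranked, collection, k=2, m=2))
```
        </preformat>
        <p>With k = 2 and m = 2 on this toy data, the query "heart attack" is expanded with terms frequent in the top-ranked documents, such as "chest" and "pain".</p>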
      </sec>
      <sec id="sec-5-2">
        <title>Term reranking</title>
        <p>
          In this experiment, we present our approach to query expansion in the
multilingual task. When we translate a query into English using the SMT system, we
get an n-best-list of translations. These translations contain different synonyms in the
target language for a given term in the source language. The motivation for this
experiment is that using more than one of these synonyms to expand the
original query could lead to improved retrieval. One of the features we use in
this model is based on the word2vec open-source tool developed by Mikolov et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. They
presented two models: the Continuous Bag-of-Words model (CBOW) and the
Continuous Skip-gram model. These models have shown a very powerful ability to measure the
similarity between words in a collection. For our experiments we used a word2vec
model trained on 25 million articles from PubMed, using their titles and
abstracts; the model is available online at
https://www.ncbi.nlm.nih.gov/CBBresearch/Wilbur/IRET/DATASET/. To investigate the hypothesis of
expanding queries from the translation pool, we use the queries provided in the
CLEF eHealth IR tasks 2013–2015, translating them into English and then: 1)
getting the 20-best-list of translations for each query; 2) creating a translation pool as a
bag-of-words from these translations; 3) using the 1-best translation as the
original query and expanding it with one term from the translation pool; 4)
running the retrieval in our baseline setting using the expanded queries. After
evaluating the results and collecting the expanded queries that give the maximum
P@10 among all the expanded queries, we find that the results from the
expanded queries significantly outperform the results obtained using only the original
queries. To expand the original query with a term from the translation pool, we
build a regression model that predicts the change in P@10 when a term is added
to the original query. To train the model, we use the following set of features for
each term:
– IDF: the inverse document frequency of the term in the indexed collection.
– RSV: first we conduct retrieval using the original query and take
the RSV of the top-ranked document under our baseline setting;
then we add the term to the original query, conduct the retrieval again,
and take the difference between these two RSVs as the feature value.
– Similarity: first we use word2vec to get word embeddings for each term in
the original query and sum these embeddings to get a vector that represents
the entire query. Then we take the embedding of the candidate term and
calculate the cosine similarity between the query vector and the term
vector.
        </p>
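        <p>The three features can be computed as in the sketch below. The toy 2-dimensional embeddings stand in for the PubMed word2vec vectors, and the two RSV arguments are assumed to come from the two retrieval runs described above.</p>
        <preformat>
```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def term_features(term, query, embeddings, idf, rsv_with, rsv_without):
    """Features for one candidate expansion term from the translation pool."""
    dim = len(next(iter(embeddings.values())))
    qvec = [0.0] * dim                      # query vector = sum of term embeddings
    for t in query:
        if t in embeddings:
            qvec = [a + b for a, b in zip(qvec, embeddings[t])]
    return {
        "idf": idf.get(term, 0.0),           # from the indexed collection
        "rsv_diff": rsv_with - rsv_without,  # change of the top-ranked RSV
        "similarity": cosine(qvec, embeddings.get(term, [0.0] * dim)),
    }

# Toy embeddings and idf values (illustrative only)
emb = {"heart": [1.0, 0.0], "cardiac": [0.9, 0.1], "rash": [0.0, 1.0]}
idf = {"cardiac": 3.2, "rash": 2.5}
print(term_features("cardiac", ["heart"], emb, idf, rsv_with=9.0, rsv_without=8.0))
```
        </preformat>
        <p>As expected, a near-synonym such as "cardiac" yields a much higher similarity to the query "heart" than an unrelated term such as "rash".</p>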
        <p>The model is built to predict the term that will give the highest P@10 when
it is added to the original query, and is trained on test queries taken from
the CLEF eHealth IR tasks 2013–2015.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Experiments</title>
      <p>This year we submitted runs to the Ad-Hoc task in both its monolingual and multilingual
subtasks.</p>
      <sec id="sec-6-1">
        <title>Monolingual Ad-Hoc search</title>
        <p>Run1 This run uses the Terrier implementation of the BM25 IR model, with the
normalisation parameter b tuned and set to 0.6.</p>
        <p>Run2 For comparison with the BM25 model (a probabilistic IR model), we submit
this run based on the Terrier implementation of the Dirichlet Bayesian smoothed model
(a language-model-based IR model).</p>
        <p>Run3 In this run, we use the Terrier implementation of blind relevance feedback
(Bo1), where k is set to 3 documents and m is set to 10 terms.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Multilingual task</title>
        <p>Run1 In this run, we translate each query variant into English using the Khresmoi
SMT system, take only the 1-best translation to generate the topics, and perform
the retrieval using the BM25 model.</p>
        <p>Run2 First we translate the query into English and take the 15-best-list of
translations; the reranker with all features then predicts the translation that gives the
highest P@10. The predicted translations are used to generate the topics
and perform the retrieval using the BM25 model.</p>
        <p>Run3 First we use the 1-best translation to generate queries, then we add to each query one
term from the translation pool, as described in Section 5.2.</p>
        <p>Run4 This run uses the 1-best English translations to generate queries; we then
conduct the retrieval after performing query expansion using the Terrier implementation
of the BRF approach.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Conclusion and future work</title>
      <p>In this paper we presented our participation in CLEF eHealth 2017 Task 3,
Patient-Centred Information Retrieval, as the team of Charles University. We
submitted runs to the Ad-Hoc task, including its monolingual and multilingual
subtasks. For the monolingual task, we investigated the performance of a
probabilistic IR model (BM25) and a language-model-based IR model, and we also
submitted a run based on the BRF approach. We tuned all the parameters of these
models using queries and assessment information from the 2016 CLEF eHealth
IR task. For the multilingual task, we employed an SMT system to translate
the queries into English and used the 1-best translations to generate queries for our baseline
system. We also used our reranker to predict a new 1-best translation for better IR
performance. Finally, we presented a new approach to expanding queries with a term from the
translation pool using a machine learning model.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This research was supported by the Czech Science Foundation (grant n. P103/12/G084)
and the EU H2020 project KConnect (contract n. 644753).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Dusek</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hajic</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hlavacova</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Novak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pecina</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>Machine translation of medical texts in the Khresmoi project</article-title>
          .
          <source>In: Proceedings of the Ninth Workshop on Statistical Machine Translation</source>
          . pp.
          <volume>221</volume>
          –
          <fpage>228</fpage>
          .
          <string-name>
            <surname>Baltimore</surname>
          </string-name>
          , USA (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Fox</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          : Health Topics:
          <volume>80</volume>
          %
          <article-title>of internet users look for health information online</article-title>
          .
          <source>Tech. rep.</source>
          , Pew Research Center (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelly</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suominen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanlen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nevaol</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grouin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palotti</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zuccon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Overview of the CLEF eHealth evaluation lab 2015</article-title>
          .
          <source>In: The 6th Conference and Labs of the Evaluation Forum</source>
          . Springer, Berlin, Germany (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelly</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suominen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nvol</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanoulas</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spijker</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palotti</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zuccon</surname>
          </string-name>
          , G.:
          <article-title>CLEF 2017 eHealth evaluation lab overview</article-title>
          .
          <source>In: CLEF 2017 - 8th Conference and Labs of the Evaluation Forum, Lecture Notes in Computer Science (LNCS)</source>
          . Springer (
          <year>September 2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Grefenstette</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nioche</surname>
          </string-name>
          , J.:
          <article-title>Estimation of English and non-English language use on the WWW</article-title>
          .
          <source>In: Content-Based Multimedia Information Access - Volume 1</source>
          . pp.
          <volume>237</volume>
          –
          <fpage>246</fpage>
          . RIAO '00,
          <string-name>
            <surname>LE CENTRE DE HAUTES ETUDES INTERNATIONALES D'INFORMATIQUE</surname>
            <given-names>DOCUMENTAIRE</given-names>
          </string-name>
          , Paris, France, France (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Magdy</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Should MT systems be used as black boxes in CLIR</article-title>
          ? In: Clough,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Foley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Gurrin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Kraaij</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Mudoch</surname>
          </string-name>
          , V. (eds.)
          <source>Advances in Information Retrieval</source>
          , vol.
          <volume>6611</volume>
          , pp.
          <volume>683</volume>
          –
          <fpage>686</fpage>
          . Springer, Berlin, Germany (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Mikolov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corrado</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>Efficient estimation of word representations in vector space</article-title>
          .
          <source>arXiv preprint arXiv:1301.3781</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Nikoulina</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovachev</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lagos</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Adaptation of statistical machine translation model for cross-lingual information retrieval in a service context</article-title>
          .
          <source>In: Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics</source>
          . pp.
          <volume>109</volume>
          –
          <fpage>119</fpage>
          .
          <string-name>
            <surname>Avignon</surname>
          </string-name>
          , France (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Nottelmann</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fuhr</surname>
          </string-name>
          , N.:
          <article-title>From retrieval status values to probabilities of relevance for advanced IR applications</article-title>
          .
          <source>Information retrieval 6</source>
          , 363–
          <fpage>388</fpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Och</surname>
            ,
            <given-names>F.J.:</given-names>
          </string-name>
          <article-title>Minimum error rate training in statistical machine translation</article-title>
          .
          <source>In: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1</source>
          . pp.
          <volume>160</volume>
          –
          <fpage>167</fpage>
          .
          <string-name>
            <surname>Sapporo</surname>
          </string-name>
          ,
          <string-name>
            <surname>Japan</surname>
          </string-name>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ounis</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amati</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plachouras</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macdonald</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lioma</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Terrier: A high performance and scalable information retrieval platform</article-title>
          .
          <source>In: Proceedings of Workshop on Open Source Information Retrieval</source>
          . Seattle, WA, USA (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Palotti</surname>
            ,
            <given-names>J.R.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanbury</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Jr.</surname>
            ,
            <given-names>C.E.K.</given-names>
          </string-name>
          :
          <article-title>How users search and what they search for in the medical domain - understanding laypeople and experts through query logs</article-title>
          .
          <source>Inf. Retr. Journal</source>
          <volume>19</volume>
          (
          <issue>1-2</issue>
          ),
          <volume>189</volume>
          –
          <fpage>224</fpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Palotti</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zuccon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jimmy</surname>
          </string-name>
          ,
          <string-name>
            <surname>Pecina</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lupu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelly</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanbury</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>CLEF 2017 task overview: The IR Task at the eHealth evaluation lab</article-title>
          .
          <source>In: Working Notes of Conference and Labs of the Evaluation Forum (CLEF)</source>
          . CEUR-WS, Dublin, Ireland (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Papineni</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roukos</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ward</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>W.J.</given-names>
          </string-name>
          :
          <article-title>BLEU: A method for automatic evaluation of machine translation</article-title>
          .
          <source>In: Proceedings of the 40th annual meeting on Association for Computational Linguistics</source>
          . pp.
          <fpage>311</fpage>
          -
          <lpage>318</lpage>
          . Philadelphia, USA (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Pecina</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dusek</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hajic</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hlavacova</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>G.J.</given-names>
          </string-name>
          , et al.:
          <article-title>Adaptation of machine translation for multilingual information retrieval in the medical domain</article-title>
          .
          <source>Artificial Intelligence in Medicine</source>
          <volume>61</volume>
          (
          <issue>3</issue>
          ),
          <fpage>165</fpage>
          -
          <lpage>185</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Saleh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pecina</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Reranking hypotheses of machine-translated queries for crosslingual information retrieval</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. The 7th International Conference of the CLEF Association</source>
          , CLEF
          <year>2016</year>
          . Springer, Evora, Portugal (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Ture</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oard</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          :
          <article-title>Looking inside the box: Context-sensitive translation for cross-language information retrieval</article-title>
          .
          <source>In: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          . pp.
          <fpage>1105</fpage>
          -
          <lpage>1106</lpage>
          .
          Portland, Oregon, USA (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>