<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Dublin City University at CLEF 2005: Multilingual Merging Experiments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name><given-names>Adenike M.</given-names> <surname>Lam-Adesina</surname></string-name>
          <xref ref-type="aff" rid="aff0" />
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Gareth J. F.</given-names> <surname>Jones</surname></string-name>
          <xref ref-type="aff" rid="aff0" />
        </contrib>
        <aff id="aff0">
          <institution>School of Computing, Dublin City University</institution>
          ,
          <addr-line>Dublin 9</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This year the Dublin City University group participated in the CLEF 2005 Multilingual Merging task. We tested a range of standard techniques for merging the provided ranked result lists, and show that the success of these techniques can depend on the retrieval system used.</p>
      </abstract>
      <kwd-group>
        <kwd>Multilingual information retrieval</kwd>
        <kwd>Retrieved list merging</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Multilingual information retrieval (MIR) refers to the process of retrieving relevant documents from collections in
different languages in response to a user request in a single language. Standard approaches to MIR involve
either translating the topics into the document languages or translating the document collections into the expected topic
language. In CLEF 2003 we showed that translating the document collections into the query language using a
standard machine translation system, and then merging them to form a single collection, can result in better
retrieval performance than translating the topics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, this method is not always practical, particularly if
the collection is very large or the translation resources are limited. In the second method, whereby the topics are
translated, the translated topics are used to retrieve ranked lists of potentially relevant documents from the separate
collections. These result lists must then be merged to form a single ranked list for the system output. The differing
statistics of the individual collections and the varied topic translations mean that the scores of documents in the
separate lists will generally be incompatible, so merging is a non-trivial problem. The CLEF 2005
Multilingual Merging task aims to encourage researchers to focus directly on the merging problem, since
merging strategies explored previously for multilingual retrieval tasks at CLEF and elsewhere have generally
produced disappointing results. Previous work on multilingual merging has been combined with the document
retrieval stage; the idea of the CLEF merging task is to explore the merging of provided precomputed ranked lists,
to enable direct comparison of the behaviour of merging strategies between different retrieval systems.
      </p>
      <p>Many different techniques for merging separate result lists to form a single list have been proposed and
tested in recent years. Common to these techniques is the recognition that the distribution of relevant
documents in the result sets retrieved from the individual collections cannot be assumed to be similar [2]. Hence,
straight merging of documents from the sources by raw score will result in a poor combination. However, none of
the proposed more complex merging techniques has been demonstrated to be consistently effective.</p>
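      <p>As a toy illustration of this score incompatibility (the scores below are invented for illustration, not
taken from any CLEF run): if one retrieval system scores documents on a 0-100 scale and another on a 0-1 scale,
straight merging by raw score lets the first collection occupy every top rank:</p>
      <preformat>
```python
# Toy illustration with invented scores: raw-score merging lets the
# collection whose system produces larger scores dominate the merged list.
run_a = [("a1", 87.0), ("a2", 55.0), ("a3", 41.0)]   # system scoring on a 0-100 scale
run_b = [("b1", 0.93), ("b2", 0.60)]                 # system scoring on a 0-1 scale

merged = sorted(run_a + run_b, key=lambda pair: pair[1], reverse=True)
top_ids = [doc_id for doc_id, _ in merged]
# every run_a document now outranks every run_b document,
# regardless of how relevant the run_b documents actually are
```
      </preformat>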
      <p>For our participation in the merging track at CLEF 2005 we applied a range of standard merging
strategies to the two provided sets of ranked lists. Our aim was to compare the behaviour of these methods on
the two sets of ranked documents in order to identify features of the merging strategies that are consistently
useful, or consistently poor, when combining ranked lists.</p>
      <p>This paper is organized as follows: Section 2 overviews the merging techniques explored in this paper,
Section 3 gives our experimental results, and Section 4 draws conclusions and considers strategies for further
experimentation.</p>
      <p>The aim of a merging strategy is typically to place as many relevant documents as possible at the highest
ranks of the merged list. This section overviews the merging strategies used in our experiments. The basic idea
is to modify the matching score of each document, taking account of the maximum and minimum matching
scores or of the distribution of scores in the lists, so that the scores from the separate lists become more
compatible and a more effective merged ranked list can be formed. The schemes used in our experiments were
as follows:</p>
      <p>p = doc_wgt (1)</p>
      <p>t = doc_wgt * rank (2)</p>
      <p>d = (doc_wgt − min_wt) / (max_wt − min_wt) (3)</p>
      <p>r = ((doc_wgt − min_wt) / (max_wt − min_wt)) * rank (4)</p>
      <p>q = ((doc_wgt − gmin_wt) / (gmax_wt − gmin_wt)) * rank (5)</p>
      <p>b = (doc_wgt − min_wt) / ((max_wt − min_wt) * rank) (6)</p>
      <p>m1 = ((doc_wgt − gmean_wt) / gstd_wt) + ((gmean_wt − gmin_wt) / gstd_wt) (7)</p>
      <p>m2 = m1 * rank (8)</p>
      <p>where gmean_wt = (∑ doc_wgt_i) / totdocs, summed over all totdocs documents retrieved from all
collections for a given query, and where p, t, d, r, q, b, m1 and m2 are the new document weights for all
documents in all collections; the corresponding results are labelled *, where * can be p, t, d, r, q, b, m1 or m2
depending on the merging scheme used;
doc_wgt = the initial document weight;
gmax_wt = the global maximum weight, i.e. the highest document weight from all collections for a given query;
gmin_wt = the global minimum weight, i.e. the lowest document weight from all collections for a given query;
gmean_wt = the global mean weight, i.e. the mean document weight from all collections for a given query;
gstd_wt = the global standard deviation of the document weights from all collections for a given query;
max_wt = the individual collection maximum weight for a given query;
min_wt = the individual collection minimum weight for a given query;
rank = a parameter to control for the effect of collection size - a collection with more documents gets a higher
rank (values range between 1 and 1.5).</p>
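      <p>As an informal sketch, the merging schemes p, t, d, r, q, m1 and m2 can be implemented as follows
(the function and variable names here are ours, not from the official runs, and each run is assumed to be a
ranked list of (document id, score) pairs; scheme b is omitted from the sketch):</p>
      <preformat>
```python
# Sketch of merging schemes (1)-(5), (7) and (8); names and structure are
# ours, not the authors' implementation.  Each run is a ranked result list
# of (doc_id, doc_wgt) pairs, one run per document collection.
from statistics import mean, pstdev

def merge(runs, scheme="d", rank_params=None):
    """Rescore every document with the chosen scheme, then sort globally."""
    all_wgts = [w for run in runs for _, w in run]
    gmax_wt, gmin_wt = max(all_wgts), min(all_wgts)   # global max / min
    gmean_wt, gstd_wt = mean(all_wgts), pstdev(all_wgts)
    # rank: collection-size parameter, one value per run (1.0 to 1.5)
    rank_params = rank_params or [1.0] * len(runs)

    merged = []
    for run, rank in zip(runs, rank_params):
        wgts = [w for _, w in run]
        max_wt, min_wt = max(wgts), min(wgts)         # per-collection max / min
        for doc_id, doc_wgt in run:
            if scheme == "p":        # (1) raw score, baseline
                s = doc_wgt
            elif scheme == "t":      # (2) raw score * rank
                s = doc_wgt * rank
            elif scheme == "d":      # (3) per-collection min-max normalization
                s = (doc_wgt - min_wt) / (max_wt - min_wt)
            elif scheme == "r":      # (4) per-collection min-max * rank
                s = (doc_wgt - min_wt) / (max_wt - min_wt) * rank
            elif scheme == "q":      # (5) global min-max * rank
                s = (doc_wgt - gmin_wt) / (gmax_wt - gmin_wt) * rank
            elif scheme in ("m1", "m2"):
                # (7) globally standardized score, shifted to be non-negative
                s = (doc_wgt - gmean_wt) / gstd_wt + (gmean_wt - gmin_wt) / gstd_wt
                if scheme == "m2":   # (8) m1 * rank
                    s = s * rank
            else:
                raise ValueError(scheme)
            merged.append((doc_id, s))
    return sorted(merged, key=lambda pair: pair[1], reverse=True)
```
      </preformat>
      <p>For example, merge(runs, scheme="d") rescales every list to the range 0 to 1 before interleaving, so the
top-ranked document of each collection receives the same score regardless of the original score scales.</p>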
      <p>Method p is used as a baseline, using the raw document scores from the retrieved lists without
modification. A useful merging scheme should be expected to improve on the performance of the p scheme. The
rank parameter was adjusted using the 20 training topics provided for the merging task.</p>
      <p>Results for our experiments using these merging schemes are shown in Tables 1 and 2. Our official
submissions to CLEF 2005 are marked *. [Tables 1 and 2: for each merging scheme, the percentage change in
retrieval performance (% chg.) and the absolute change (chg.) relative to the p baseline.]</p>
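      <p>For reference, a "% chg." entry of the kind reported in Tables 1 and 2 is conventionally computed as the
relative difference against the baseline scheme p (the values below are invented for illustration, not figures
from our runs):</p>
      <preformat>
```python
# Illustrative computation of a "% chg." entry relative to the baseline
# scheme p (invented values, not figures from the paper).
def pct_change(score, baseline_score):
    """Percentage change of a merged run's score against the baseline."""
    return 100.0 * (score - baseline_score) / baseline_score

# a merged run scoring 0.270 against a baseline of 0.300 is roughly a -10% change
change = pct_change(0.270, 0.300)
```
      </preformat>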
    </sec>
    <sec id="sec-2">
      <title>4 Conclusions</title>
      <p>The results of our merging experiments for CLEF 2005 indicate that the behaviour of merging schemes
varies between different sets of ranked lists. The reasons for this behaviour are not obvious, and further analysis
is planned to better understand it, as a basis both for extending these merging techniques and for proposing
new ones.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>A.M.</given-names>
            <surname>Lam-Adesina</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.J.F.</given-names>
            <surname>Jones</surname>
          </string-name>
          .
          <article-title>Exeter at CLEF 2003: Experiments with Machine Translation for Monolingual, Bilingual and Multilingual Retrieval</article-title>
          ,
          <source>Proceedings of the CLEF 2003 Workshop on Cross-Language Information Retrieval and Evaluation</source>
          , Trondheim, Norway,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Jacques</given-names>
            <surname>Savoy</surname>
          </string-name>
          ,
          <article-title>Report on CLEF-2003 Multilingual Tracks</article-title>
          ,
          <source>Proceedings of the CLEF 2003 Workshop on Cross-Language Information Retrieval and Evaluation</source>
          , Trondheim, Norway,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>