<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Carol Peters</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2001</year>
      </pub-date>
      <abstract>
        <p>The Cross-Language Evaluation Forum (CLEF) aims at promoting research and development in Cross-Language Information Retrieval (CLIR) by (i) providing an infrastructure for the testing and evaluation of information retrieval systems operating on European languages, and (ii) creating test suites of reusable data which can be employed by system developers for benchmarking purposes. These objectives are being achieved through the organisation of a series of annual system evaluation campaigns. The Working Notes report the preliminary results of CLEF 2001, the second campaign in the series. The results will be presented and discussed at the CLEF 2001 Workshop, 3-4 September, Darmstadt, Germany. The main features of this year's campaign are briefly outlined below.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Test Collection</title>
      <p>European or three Asian languages. A condition in this year’s CLEF was that, for each task attempted, a
mandatory run using the title and description fields had to be submitted. The objective was to facilitate
comparison between the results of different systems.</p>
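      <p>As background for readers less familiar with the TREC-style topic format used in CLEF, here is a minimal sketch, in Python, of how the mandatory title+description (TD) query might be built from a topic file. The flat top/num/title/desc tag layout and the function names are assumptions for illustration only; the exact tag names vary across CLEF topic sets.</p>
      <preformat><![CDATA[
import re

# Minimal sketch, not the official CLEF tooling. Assumes a TREC-style
# topic file in which each topic is wrapped in <top>...</top> and the
# fields are tagged <num>, <title>, <desc> and <narr>.
FIELDS = ("num", "title", "desc", "narr")

def parse_topics(text):
    """Return one dict of fields per topic found in the file."""
    topics = []
    for block in re.findall(r"<top>(.*?)</top>", text, re.DOTALL):
        fields = {}
        for tag in FIELDS:
            m = re.search(
                rf"<{tag}>(.*?)(?=<num>|<title>|<desc>|<narr>|$)",
                block, re.DOTALL)
            if m:
                fields[tag] = m.group(1).strip()
        topics.append(fields)
    return topics

def td_query(topic):
    # Concatenate only the title and description fields, as required
    # for the mandatory run; the narrative field is deliberately omitted.
    return " ".join(topic.get(f, "") for f in ("title", "desc")).strip()
]]></preformat>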
      <p>Relevance Judgments: Relevance assessment was distributed over six different sites and performed in all
cases by native speakers. The results were then analysed and run statistics produced and distributed.</p>
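      <p>To illustrate the kind of per-run statistics reported in the Appendix, the sketch below computes uninterpolated average precision for a single topic from a ranked result list and the assessors' relevance judgments. This is a simplified stand-in under the standard TREC-style definitions, not the actual CLEF evaluation code.</p>
      <preformat><![CDATA[
def average_precision(ranked_docs, relevant):
    """Uninterpolated average precision for one topic.

    ranked_docs: document ids in the order the system ranked them.
    relevant:    set of document ids judged relevant by the assessors.
    """
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    # Divide by the total number of relevant documents, so relevant
    # documents the run never retrieved still count against it.
    return precision_sum / len(relevant) if relevant else 0.0

# Relevant documents retrieved at ranks 1 and 3, two relevant in total:
# (1/1 + 2/3) / 2 = 0.833...
print(average_precision(["d3", "d7", "d1"], {"d3", "d1"}))
]]></preformat>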
    </sec>
    <sec id="sec-2">
      <title>Participants</title>
      <p>Participation in CLEF 2001 was up approximately 50% from last year, with more than 40 groups registering to
participate in one or more of the main tasks. Many were participants from last year but there were also a good
number of newcomers. In the end, 31 groups actually submitted results: 8 from North America, 19 from Europe,
and 4 from Asia – compared with 20 groups for CLEF 2000. A total of 193 runs were received; runs were
submitted for all tasks (multilingual, bilingual, monolingual and domain-specific) and for all topic languages.
Twenty-one groups tried a cross-language task, while ten preferred to remain with the monolingual track. Only
eight groups were brave enough to attempt the multilingual track (processing a document collection in five
languages is certainly a challenging task) and of these just two were CLEF newcomers. An additional three
groups tackled the experimental interactive task.</p>
    </sec>
    <sec id="sec-3">
      <title>Working Notes and Workshop</title>
      <p>The Working Notes provide a first description of the different experiments run by the participating groups. The
Appendix gives a summary of the characteristics of all runs together with overview graphs for the different tasks
and individual statistics for each run. Other papers in this volume include a report on the NTCIR Workshop
series for the evaluation of systems for Asian languages, and presentations on cross-language evaluation at TREC-9 and the
NIST perspective on the implications of information retrieval system evaluation. The final papers - revised and
extended as a result of the discussions at the Workshop - together with a comparative analysis of the results will
appear in the CLEF 2001 Proceedings. These will be published by Springer in their Lecture Notes in Computer
Science series.</p>
      <p>The aim of the Workshop is to give all the groups that have participated in the CLEF evaluation campaign
the opportunity to get together in order to compare approaches and to exchange ideas. It will also provide the
opportunity for an open discussion on the organisation and scheduling of future CLEF evaluation campaigns.
We very much hope that this event will prove an interesting, worthwhile and enjoyable experience to all those
who participate.</p>
      <p>Carol Peters, 1 September 2001</p>
    </sec>
    <sec id="sec-4">
      <title>The Workshop Steering Committee</title>
      <p>Martin Braschler, Eurospider, Switzerland
Julio Gonzalo Arroyo, UNED, Madrid, Spain
Donna Harman, National Institute of Standards and Technology, USA
Djoerd Hiemstra, University of Twente, The Netherlands
Noriko Kando, National Institute of Informatics, Japan
Michael Kluck, IZ Sozialwissenschaften, Bonn, Germany
Carol Peters, IEI-CNR, Pisa, Italy
Peter Schäuble, Eurospider, Switzerland
Ellen Voorhees, National Institute of Standards and Technology, USA
Christa Womser-Hacker, University of Hildesheim, Germany</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>We have many people and organisations to thank for their help in the running of CLEF 2001. First of all, we should like to express our gratitude to the ECDL 2001 Conference organisers for their assistance in the organisation of the CLEF Workshop.</p>
      <p>With the single exception of the Thai experiment, the topic sets were prepared by independent groups, i.e. by
groups not participating in the system evaluation tasks. The main topic sets (DE, EN, FR, IT, NL, SP) plus
Russian were prepared by the project partners. Here, we should like to thank the following organisations that
voluntarily engaged translators to provide topic sets in Chinese, Finnish, Japanese and Swedish, working on
the basis of the set of source topics:</p>
      <list list-type="bullet">
        <list-item><p>the Department of Information Studies (University of Tampere, Finland), which engaged the UTA Language Centre for the Finnish topics;</p></list-item>
        <list-item><p>the SICS Human Computer Interaction and Language Engineering Laboratory for the Swedish topics;</p></list-item>
        <list-item><p>the National Institute of Informatics (NII), Tokyo, for the Japanese topics;</p></list-item>
        <list-item><p>the Natural Language Processing Lab, Department of Computer Science and Information Engineering, National Taiwan University, for the Chinese topics.</p></list-item>
      </list>
    </sec>
  </body>
  <back>
    <fn-group>
      <fn id="fn-1">
        <p>CLEF is a continuation and extension of the cross-language information retrieval track included in TREC from 1997 to 1999.</p>
      </fn>
    </fn-group>
  </back>
</article>