<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nicola Ferro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Padua</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The CLEF 2019 conference is the twentieth edition of the popular CLEF campaign and workshop series, which has run since 2000 and contributes to the systematic evaluation of multilingual and multimodal information access systems, primarily through experimentation on shared tasks. In 2010 CLEF was launched in a new format, as a conference with research presentations, panels, poster and demo sessions, and laboratory evaluation workshops. These are proposed and operated by groups of organizers volunteering their time and effort to define, promote, administer and run an evaluation activity. To celebrate the 20th anniversary of CLEF, we prepared a book focusing on the lessons learnt in 20 years of CLEF and its impact over time. CLEF 2019 was organized and hosted by the University of Lugano, Switzerland, from 9 to 12 September 2019. Nine evaluation laboratories were selected and run during CLEF 2019. To identify the best proposals, besides the well-established criteria from previous editions of CLEF, such as topical relevance, novelty, potential impact on future world affairs, likely number of participants, and the quality of the organizing consortium, this year we further stressed the connection to real-life usage scenarios and tried as much as possible to avoid overlaps among labs, in order to promote synergies and integration. This year, for the first time, we set up a mentorship program to support the preparation of lab proposals by newcomers to CLEF. The CLEF newcomers mentoring program offered help, guidance, and feedback on the writing of draft lab proposals by assigning a mentor to proponents, who helped them prepare and mature the lab proposal for submission. If a lab proposal fell within the scope of an existing CLEF lab, the mentor helped the proponents get in touch with the organizers of that lab and join forces.</p>
        <p>Building on previous experience, the labs at CLEF 2019 demonstrate the maturity of the CLEF evaluation environment by creating new tasks, new and larger data sets, new ways of evaluation, and support for more languages. Details of the individual labs are described by the lab organizers in these proceedings; a short summary of each is given below.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Replicating and Reproducing Results – CENTRE@CLEF</title>
      <p>CENTRE@CLEF challenges participants to replicate and reproduce the results of selected systems from previous editions of CLEF, NTCIR and TREC, and to contribute back to the community the additional components and resources developed to reproduce the results, in order to improve existing open source systems.</p>
      <p>Identification and Verification of Political Claims – CheckThat! (https://sites.google.com/view/clef2019-checkthat/) aims to foster the development of technology capable of both spotting and verifying check-worthy claims in political debates in English and Arabic.</p>
      <p>CLEF eHealth (http://clef-ehealth.org/) aims to support the development of techniques to aid laypeople, clinicians and policy-makers in easily retrieving and making sense of medical content to support their decision making. The goals of the lab are to develop processing methods and resources in a multilingual setting to enrich difficult-to-understand eHealth texts, and to provide valuable documentation.</p>
      <p>Early Risk Prediction on the Internet – eRisk (http://erisk.irlab.org/) explores challenges of evaluation methodology, effectiveness metrics and other processes related to early risk detection. Early detection technologies can be employed in several areas, particularly those related to health and safety. For instance, early alerts can be sent when a predator starts interacting with a child for sexual purposes, or when a potential offender starts publishing antisocial threats on a blog, forum or social network. The main goal is to pioneer a new interdisciplinary research area that would be potentially applicable to a wide variety of situations and to many different personal profiles.</p>
      <p>Multimedia Retrieval – ImageCLEF (https://www.imageclef.org/2019) provides an evaluation forum for visual media analysis, indexing, classification/learning, and retrieval in medical, nature, security and lifelogging applications. A focus of the task has always been on multimodal data, i.e., the combination of image data with data from other sources.</p>
      <p>Biodiversity Identification and Prediction – LifeCLEF (http://www.lifeclef.org/) aims at boosting research on the identification and prediction of living organisms in order to solve the taxonomic gap and improve our knowledge of biodiversity. Through its biodiversity-informatics challenges, LifeCLEF is intended to push the boundaries of the state of the art in several research directions at the frontier of multimedia information retrieval, machine learning and knowledge engineering.</p>
      <p>Digital Text Forensics and Stylometry – PAN (http://pan.webis.de/) is a networking initiative for digital text forensics, where researchers and practitioners study technologies that analyze texts with regard to originality, authorship, and trustworthiness. PAN provides evaluation resources consisting of large-scale corpora, performance measures, and web services that allow for meaningful evaluations. The main goal is to provide sustainable and reproducible evaluations and to get a clear view of the capabilities of state-of-the-art algorithms.</p>
      <p>Personalised Information Retrieval – PIR-CLEF (http://www.ir.disco.unimib.it/pir-clef2019/) provides a framework for the evaluation of Personalised Information Retrieval (PIR). Current approaches to the evaluation of PIR are user-centric, mostly based on user studies, i.e., they rely on experiments that involve real users in a supervised environment. PIR-CLEF aims to develop and demonstrate a methodology for the evaluation of personalised search that enables repeatable experiments. The main aim is to enable research groups working on PIR to both experiment with and provide feedback on the proposed PIR evaluation methodology.</p>
      <p>Extracting Protests from News – ProtestNews (https://emw.ku.edu.tr/clef-protestnews-2019/) aims to test and improve state-of-the-art generalizable machine learning and natural language processing methods for text classification and information extraction on English news from multiple countries, such as India and China, in order to create comparative databases of contentious political events (riots, social movements), i.e., the repertoire of contention, which can enable large-scale comparative social and political science studies.</p>
      <p>CLEF has always been backed by European projects that complement the incredible amount of volunteer work performed by lab organizers and the CLEF community with the resources needed for its necessary central coordination, in a similar manner to the other major international evaluation initiatives such as TREC, NTCIR, FIRE and MediaEval. Since 2014, the organization of CLEF no longer has direct support from European projects and is working to transform itself into a self-sustainable activity. This is being made possible thanks to the establishment in late 2013 of the CLEF Association (http://www.clef-initiative.eu/association), a non-profit legal entity which, through the support of its members, ensures the resources needed to smoothly run and coordinate CLEF.</p>
    </sec>
    <sec id="sec-2">
      <title>Acknowledgments</title>
      <p>We would like to thank the mentors who helped in shepherding the preparation of lab proposals by newcomers:
Julio Gonzalo, National Distance Education University (UNED), Spain;
Paolo Rosso, Universitat Politècnica de València, Spain.</p>
      <p>We would like to thank the members of CLEF-LOC (the CLEF Lab Organization Committee) for their thoughtful and elaborate contributions to assessing the proposals during the selection process:
Martin Braschler, Zurich University of Applied Sciences, Switzerland;
Donna Harman, National Institute of Standards and Technology (NIST), USA;
Martin Potthast, Leipzig University, Germany;
Maarten de Rijke, University of Amsterdam, The Netherlands.</p>
      <p>Last but not least, without the important and tireless effort of the enthusiastic and creative proposal authors, the organizers of the selected labs and workshops, the colleagues and friends involved in running them, and the participants who contribute their time to making the labs and workshops a success, the CLEF labs would not be possible.</p>
      <p>Thank you all very much!
July 2019
CLEF 2019, Conference and Labs of the Evaluation Forum – Experimental IR meets Multilinguality, Multimodality, and Interaction, was hosted by the University of Lugano, Switzerland.</p>
    </sec>
    <sec id="sec-3">
      <title>General Chairs</title>
      <p>Fabio Crestani, Università della Svizzera italiana (USI), Switzerland
Martin Braschler, Zurich University of Applied Sciences (ZHAW), Switzerland</p>
    </sec>
    <sec id="sec-4">
      <title>Program Chairs</title>
      <p>Jacques Savoy, Université de Neuchâtel, Switzerland
Andreas Rauber, Vienna University of Technology (TU Wien), Austria</p>
    </sec>
    <sec id="sec-5">
      <title>Lab Chairs</title>
      <p>Henning Muller, University of Applied Sciences Western Switzerland (HES-SO),
Switzerland
David E. Losada, University of Santiago de Compostela, Spain</p>
    </sec>
    <sec id="sec-6">
      <title>Lab Mentorship Chair</title>
      <p>Lorraine Goeuriot, Université Grenoble Alpes, France</p>
    </sec>
    <sec id="sec-7">
      <title>Industry Chair</title>
      <p>Gundula Heinatz, Swiss Alliance for Data-Intensive Services, Switzerland</p>
    </sec>
    <sec id="sec-8">
      <title>Proceedings Chairs</title>
      <p>Linda Cappellato, University of Padua, Italy
Nicola Ferro, University of Padua, Italy</p>
    </sec>
    <sec id="sec-9">
      <title>Local Organization</title>
    </sec>
    <sec id="sec-10">
      <title>Steering Committee Chair</title>
      <p>Nicola Ferro, University of Padua, Italy</p>
    </sec>
    <sec id="sec-11">
      <title>Deputy Steering Committee Chair for the Conference</title>
      <p>Paolo Rosso, Universitat Politècnica de València, Spain</p>
    </sec>
    <sec id="sec-12">
      <title>Deputy Steering Committee Chair for the Evaluation Labs</title>
      <p>Martin Braschler, Zurich University of Applied Sciences, Switzerland</p>
    </sec>
    <sec id="sec-13">
      <title>Members</title>
      <p>Jaana Kekäläinen, University of Tampere, Finland
Seamus Lawless, Trinity College Dublin, Ireland
Carol Peters, ISTI, National Council of Research (CNR), Italy
(Steering Committee Chair 2000–2009)
Emanuele Pianta, Centre for the Evaluation of Language and Communication Technologies (CELCT), Italy</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>