                                    Preface


The CLEF 2019 conference is the twentieth edition of the popular CLEF campaign and workshop series, which has run since 2000, contributing to the systematic evaluation of multilingual and multimodal information access systems, primarily through experimentation on shared tasks. In 2010 CLEF was launched in a new format, as a conference with research presentations, panels, poster and demo sessions, and laboratory evaluation workshops. These labs are proposed and operated by groups of organizers who volunteer their time and effort to define, promote, administrate and run an evaluation activity. To celebrate the 20th anniversary of CLEF, we prepared a book1 focusing on the lessons learnt in 20 years of CLEF and its impact over time.
    CLEF 20192 was organized and hosted by the University of Lugano, Switzerland, from 9 to 12 September 2019.
    Nine evaluation laboratories were selected and run during CLEF 2019. To identify the best proposals, besides the well-established criteria from previous editions of CLEF, such as topical relevance, novelty, potential impact on future world affairs, likely number of participants, and the quality of the organizing consortium, this year we further stressed the connection to real-life usage scenarios, and we tried as much as possible to avoid overlaps among labs in order to promote synergies and integration.
    This year, for the first time, we set up a mentorship program to support newcomers to CLEF in preparing lab proposals. The CLEF newcomers mentoring program offered help, guidance, and feedback on the writing of draft lab proposals by assigning a mentor to proponents, who helped them prepare and mature the lab proposal for submission. If a lab proposal fell within the scope of an already existing CLEF lab, the mentor helped the proponents get in touch with the organizers of that lab and join forces.
    Building on previous experience, the Labs at CLEF 2019 demonstrate the maturity of the CLEF evaluation environment by creating new tasks, new and larger data sets, new evaluation methodologies, or coverage of more languages. Details of the individual Labs are described by the Lab organizers in these proceedings. Below is a short summary of them.
   CLEF/NTCIR/TREC Reproducibility – CENTRE@CLEF3 aims to run a joint CLEF/NTCIR/TREC task challenging participants: 1) to reproduce the best results of the best/most interesting systems in previous editions of CLEF/NTCIR/TREC by using standard open source IR systems; 2) to contribute back to the community the additional components and resources developed to reproduce the results, in order to improve existing open source systems.
1
  Ferro, N., Peters, C. (eds.): Information Retrieval Evaluation in a Changing World – Lessons Learned from 20 Years of CLEF, The Information Retrieval Series, vol. 41. Springer International Publishing, Germany (2019).
2
  http://clef2019.clef-initiative.eu/
3
  http://www.centre-eval.org/clef2019/
    Identification and Verification of Political Claims – CheckThat!4
aims to foster the development of technology capable of both spotting and veri-
fying check-worthy claims in political debates in English and Arabic.
    CLEF eHealth5 aims to support the development of techniques to aid
laypeople, clinicians and policy-makers in easily retrieving and making sense
of medical content to support their decision making. The goals of the lab are
to develop processing methods and resources in a multilingual setting to enrich
difficult-to-understand eHealth texts, and provide valuable documentation.
    Early Risk Prediction on the Internet – eRisk6 explores challenges
of evaluation methodology, effectiveness metrics and other processes related to
early risk detection. Early detection technologies can be employed in several
areas, particularly those related to health and safety. For instance, early alerts
can be sent when a predator starts interacting with a child for sexual purposes,
or when a potential offender starts publishing antisocial threats on a blog, forum
or social network. The main goal is to pioneer a new interdisciplinary research
area that would be potentially applicable to a wide variety of situations and to
many different personal profiles.
    Multimedia Retrieval – ImageCLEF7 provides an evaluation forum for visual media analysis, indexing, classification/learning, and retrieval in medical, nature, security and lifelogging applications. A focus of the task has always been on multimodal data, i.e., the combination of image data with data from other sources.
    Biodiversity Identification and Prediction – LifeCLEF8 aims at boost-
ing research on the identification and prediction of living organisms in order to
solve the taxonomic gap and improve our knowledge of biodiversity. Through
its biodiversity informatics related challenges, LifeCLEF is intended to push the
boundaries of the state-of-the-art in several research directions at the frontier of
multimedia information retrieval, machine learning and knowledge engineering.
    Digital Text Forensics and Stylometry – PAN9 is a networking initiative for digital text forensics, where researchers and practitioners study technologies that analyze texts with regard to originality, authorship, and trustworthiness. PAN provides evaluation resources consisting of large-scale corpora, performance measures, and web services that allow for meaningful evaluations. The main goal is to provide sustainable and reproducible evaluations and to obtain a clear view of the capabilities of state-of-the-art algorithms.

4
  https://sites.google.com/view/clef2019-checkthat/
5
  http://clef-ehealth.org/
6
  http://erisk.irlab.org/
7
  https://www.imageclef.org/2019
8
  http://www.lifeclef.org/
9
  http://pan.webis.de/
     Personalised Information Retrieval – PIR-CLEF10 provides a frame-
work for the evaluation of Personalised Information Retrieval (PIR). Current ap-
proaches to the evaluation of PIR are user-centric, mostly based on user studies,
i.e., they rely on experiments that involve real users in a supervised environment.
PIR-CLEF aims to develop and demonstrate a methodology for the evaluation
of personalised search that enables repeatable experiments. The main aim is to
enable research groups working on PIR to both experiment with and provide
feedback on the proposed PIR evaluation methodology.
    Extracting Protests from News – ProtestNews11 aims to test and improve state-of-the-art, generalizable machine learning and natural language processing methods for text classification and information extraction on English news from multiple countries, such as India and China, in order to create comparative databases of contentious political events (riots, social movements), i.e. the repertoire of contention, that can enable large-scale comparative social and political science studies.

    CLEF has always been backed by European projects that complement the incredible amount of volunteer work performed by Lab Organizers and the CLEF community with the resources needed for its necessary central coordination, in a similar manner to the other major international evaluation initiatives such as TREC, NTCIR, FIRE and MediaEval. Since 2014, however, the organisation of CLEF no longer has direct support from European projects and is working to transform itself into a self-sustainable activity. This is being made possible thanks to the establishment of the CLEF Association12, a non-profit legal entity founded in late 2013, which, through the support of its members, ensures the resources needed to smoothly run and coordinate CLEF.


Acknowledgments
We would like to thank the mentors who helped in shepherding the preparation
of lab proposals by newcomers:
Julio Gonzalo, National Distance Education University (UNED), Spain;
Paolo Rosso, Universitat Politècnica de València, Spain.
    We would like to thank the members of CLEF-LOC (the CLEF Lab Organi-
zation Committee) for their thoughtful and elaborate contributions to assessing
the proposals during the selection process:
Martin Braschler, Zurich University of Applied Sciences, Switzerland;
Donna Harman, National Institute of Standards and Technology (NIST), USA;
Martin Potthast, Leipzig University, Germany;
Maarten de Rijke, University of Amsterdam, The Netherlands.
10
   http://www.ir.disco.unimib.it/pir-clef2019/
11
   https://emw.ku.edu.tr/clef-protestnews-2019/
12
   http://www.clef-initiative.eu/association

    Last but not least, without the important and tireless effort of the enthusiastic and creative proposal authors, the organizers of the selected labs and workshops, the colleagues and friends involved in running them, and the participants who contribute their time to making the labs and workshops a success, the CLEF labs would not be possible.
    Thank you all very much!




July, 2019

                                                             Linda Cappellato
                                                                 Nicola Ferro
                                                             David E. Losada
                                                              Henning Müller
                             Organization


CLEF 2019, Conference and Labs of the Evaluation Forum – Experimental IR meets Multilinguality, Multimodality, and Interaction, was hosted by the University of Lugano, Switzerland.


General Chairs

Fabio Crestani, Università della Svizzera italiana (USI), Switzerland
Martin Braschler, Zurich University of Applied Sciences (ZHAW), Switzerland


Program Chairs

Jacques Savoy, Université de Neuchâtel, Switzerland
Andreas Rauber, Vienna University of Technology (TU Wien), Austria


Lab Chairs

Henning Müller, University of Applied Sciences Western Switzerland (HES-SO),
Switzerland
David E. Losada, University of Santiago de Compostela, Spain


Lab Mentorship Chair

Lorraine Goeuriot, Université Grenoble Alpes, France


Industry Chair

Gundula Heinatz, Swiss Alliance for Data-Intensive Services, Switzerland


Proceedings Chairs

Linda Cappellato, University of Padua, Italy
Nicola Ferro, University of Padua, Italy
Local Organization

Monica Landoni, USI, Lugano
Ali Bahreinian, USI, Lugano
Mohammad Aliannejadi, USI, Lugano
Maram Barifah, USI, Lugano
Manajit Chakraborty, USI, Lugano
Esteban Andrés Ríssola, USI, Lugano
                  CLEF Steering Committee


Steering Committee Chair
Nicola Ferro, University of Padua, Italy

Deputy Steering Committee Chair for the Conference
Paolo Rosso, Universitat Politècnica de València, Spain

Deputy Steering Committee Chair for the Evaluation Labs
Martin Braschler, Zurich University of Applied Sciences, Switzerland

Members
Khalid Choukri, Evaluations and Language Resources Distribution Agency (ELDA), France
Paul Clough, University of Sheffield, United Kingdom
Norbert Fuhr, University of Duisburg-Essen, Germany
Lorraine Goeuriot, Université Grenoble Alpes, France
Julio Gonzalo, National Distance Education University (UNED), Spain
Donna Harman, National Institute of Standards and Technology (NIST), USA
Djoerd Hiemstra, University of Twente, The Netherlands
Evangelos Kanoulas, University of Amsterdam, The Netherlands
Birger Larsen, University of Aalborg, Denmark
Mihai Lupu, Vienna University of Technology, Austria
Josiane Mothe, IRIT, Université de Toulouse, France
Henning Müller, University of Applied Sciences Western Switzerland (HES-SO),
Switzerland
Jian-Yun Nie, Université de Montréal, Canada
Maarten de Rijke, University of Amsterdam (UvA), The Netherlands
Eric SanJuan, University of Avignon, France
Giuseppe Santucci, Sapienza University of Rome, Italy
Jacques Savoy, University of Neuchâtel, Switzerland
Laure Soulier, Pierre and Marie Curie University (Paris 6), France
Christa Womser-Hacker, University of Hildesheim, Germany
Past Members

Jaana Kekäläinen, University of Tampere, Finland
Séamus Lawless, Trinity College Dublin, Ireland
Carol Peters, ISTI, National Council of Research (CNR), Italy
(Steering Committee Chair 2000–2009)
Emanuele Pianta, Centre for the Evaluation of Language and Communication
Technologies (CELCT), Italy
Alan Smeaton, Dublin City University, Ireland