<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preface: The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Joeran Beel</string-name>
          <email>beelj@tcd.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lars Kotthoff</string-name>
          <email>larsko@uwyo.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Trinity College Dublin - School of Computer Science &amp; Statistics - Artificial Intelligence Discipline - ADAPT Centre</institution>
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Wyoming - Department of Computer Science - Meta-Algorithmics, Learning and Large-scale Empirical Testing Lab</institution>
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Algorithm selection is a key challenge for most, if not all, computational problems. Typically, there are several potential algorithms that can solve a problem, but which algorithm would perform best (e.g. in terms of runtime or accuracy) is often unclear. In many domains, particularly artificial intelligence, the algorithm selection problem is well-studied, and various approaches and tools exist to tackle it in practice. Especially through meta-learning, impressive performance improvements have been achieved. The information retrieval (IR) community, however, has paid relatively little attention to the algorithm selection problem. The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR) brought together researchers from the IR community as well as from the machine learning (ML) and meta-learning communities. Our goal was to raise awareness in the IR community of the algorithm selection problem; identify the potential for automatic algorithm selection in information retrieval; and explore possible solutions for this context. AMIR was co-located with the 41st European Conference on Information Retrieval (ECIR) in Cologne, Germany, and held on the 14th of April 2019. Out of ten submissions, five (50%) were accepted at AMIR, and an estimated 25 researchers attended the workshop.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        There is a plethora of algorithms for information retrieval applications, such as search
engines and recommender systems. There are about 100 approaches to recommend
research papers alone [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The question that researchers and practitioners alike are faced
with is which one of these approaches to choose for their particular problem. This is a
difficult choice even for experts, compounded by ongoing research that develops ever
more approaches.
      </p>
      <p>
        The challenge of identifying the best algorithm for a given application is not new.
The so-called “algorithm selection problem” was first mentioned in the 1970s [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and
has attracted significant attention in various disciplines since then, especially in the last
decade. Particularly in artificial intelligence, impressive performance achievements
have been enabled by algorithm selection systems. A prominent example is the
award-winning SATzilla system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        More generally, algorithm selection is an example of meta-learning, where the
experience gained from solving problems informs how to solve future problems.
Meta-learning and the automation of modelling processes have gained significant traction in the
machine learning community, in particular with so-called AutoML approaches that aim to
automate the entire machine learning and data mining process from ingesting the data
to making predictions. An example of such a system is Auto-WEKA [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. There have
also been multiple competitions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and workshops, symposia and tutorials [
        <xref ref-type="bibr" rid="ref10 ref11 ref7 ref8 ref9">7–11</xref>
        ]
including a Dagstuhl seminar [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The OpenML platform was developed to facilitate
the exchange of data and machine learning models to enable research into meta-learning
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Despite the significance of the algorithm selection problem and notable advances in
solving it in many domains, the information retrieval community has paid relatively
little attention to it. There are a few papers that investigate the algorithm selection
problem in the context of information retrieval, for example in the field of recommender
systems [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref21">13–21</xref>
        ]. Also, the field of query performance prediction (QPP) has
investigated how to predict algorithm performance in information retrieval [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
However, the number of researchers interested in this topic is limited, and results so far have
not been as impressive as in other domains.
      </p>
      <p>There is potential for applying IR techniques in meta-learning as well. The algorithm
selection problem can be seen as a traditional information retrieval task, i.e. the task of
identifying the most relevant item (an algorithm) from a large corpus (thousands of
potential algorithms and parameters) for a given information need (e.g. classifying
photos or making recommendations). We see great potential for the information retrieval
community to contribute to solving the algorithm selection problem.</p>
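To make the framing above concrete, here is a minimal, hypothetical sketch (not from any workshop paper): algorithm selection cast as a retrieval task, where each candidate algorithm is "indexed" by a meta-feature profile of the problems it served best, and a new problem's meta-features act as the query. All algorithm names and feature values below are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two meta-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# The "corpus": each algorithm described by an averaged meta-feature profile,
# e.g. (log #users, log #items, rating sparsity) of past recommender datasets.
corpus = {
    "item-knn":      (4.0, 5.0, 0.95),
    "matrix-factor": (6.0, 6.5, 0.99),
    "content-based": (3.0, 4.0, 0.80),
}

def select(query, corpus, k=2):
    """Rank candidate algorithms by similarity to the query profile; return top-k."""
    ranked = sorted(corpus, key=lambda alg: cosine(query, corpus[alg]), reverse=True)
    return ranked[:k]

# "Query": meta-features of a new, large and sparse dataset.
print(select((5.8, 6.4, 0.98), corpus))
```

The point of the sketch is only the framing: ranking a corpus of algorithms against an information need, exactly as a search engine ranks documents against a query.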
    </sec>
    <sec id="sec-2">
      <title>The 1st AMIR</title>
    </sec>
    <sec id="sec-3">
      <title>Workshop</title>
      <p>
        The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in
Information Retrieval (AMIR, http://amir-workshop.org/) was accepted to be held at the 41st European Conference
on Information Retrieval (ECIR) in Cologne, Germany, on the 14th of April 2019 [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
AMIR aimed at achieving the following goals:
• Raise awareness in the information retrieval community of the algorithm selection problem.
• Identify the potential for automated algorithm selection and meta-learning in IR applications.
• Familiarize the IR community with algorithm selection and meta-learning tools and research published in related disciplines such as machine learning.
• Find solutions to address the algorithm selection problem in IR.
      </p>
      <sec id="sec-3-2">
        <title>Topics of Interest</title>
        <p>Topics of interest to AMIR included:
• Algorithm Configuration
• Algorithm Selection
• Algorithm Selection as User Modeling Task
• Auto* Tools in Practice (e.g. AutoWeka, AutoKeras, librec-auto, auto-sklearn, AutoTensorFlow, …)
• Automated A/B Tests (AutoA/B)
• Automated Evaluations (AutoEval)
• Automated Information Retrieval (AutoIR)
• Automated Machine Learning / Automatic Machine Learning / AutoML
• Automated Natural Language Processing (AutoNLP)
• Automated Recommender Systems (AutoRecSys)
• Automated User Modelling (AutoUM)
• Benchmarking
• CASH Problem (Combined Algorithm Selection and Hyper-Parameter Optimization)
• Evaluation Methods and Metrics
• Evolutionary Algorithms
• Hyper-Parameter Optimization and Tuning
• Learning to Learn
• Meta-Heuristics
• Meta-Learning
• Neural Network Architecture Search / Neural Architecture Search (NAS) / Neural Network Search
• Recommender Systems for Algorithms
• Search Engines for Algorithms
• Transfer Learning, Few-Shot Learning, One-Shot Learning, …</p>
        <p>Our vision is to establish a regular workshop at ECIR or related venues (e.g. SIGIR,
UMAP, RecSys) and eventually – in the long run – solve the algorithm selection
problem in information retrieval. We hope to stimulate collaborations between researchers
in IR and meta-learning through presentations and discussions at the workshop, which
will ultimately lead to joint publications and research proposals.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Accepted Papers</title>
      <p>
        We received a total of ten submissions, of which the following five (50%) were
accepted to be presented at the workshop [
        <xref ref-type="bibr" rid="ref25 ref26 ref27 ref28 ref29">25–29</xref>
        ]:
      </p>
      <sec id="sec-4-1">
        <title>Algorithm selection with librec-auto</title>
        <p>Masoud Mansoury and Robin Burke
Due to the complexity of recommendation algorithms, experimentation on
recommender systems has become a challenging task. Current recommendation algorithms,
while powerful, involve large numbers of hyperparameters. Tuning hyperparameters
for finding the best recommendation outcome often requires execution of large numbers
of algorithmic experiments, particularly when multiple evaluation metrics are
considered. Existing recommender systems platforms fail to provide a basis for systematic
experimentation of this type. In this paper, we describe librec-auto, a wrapper for the
well-known LibRec library, which provides an environment that supports automated
experimentation.
</p>
      </sec>
      <sec id="sec-4-2">
        <title>Investigating Ad-Hoc Retrieval Method Selection with Features Inspired by IR Axioms</title>
        <p>Siddhant Arora and Andrew Yates
We consider the algorithm selection problem in the context of ad-hoc information
retrieval. Given a query and a pair of retrieval methods, we propose a meta-learner that
predicts how to combine the methods’ relevance scores into an overall relevance score.
These predictions are based on features inspired by IR axioms that quantify properties
of the query and its top rank documents. We conduct an evaluation on TREC
benchmark data and find that the meta-learner often significantly improves over the
individual methods in terms of both nDCG@20 and P@30. Finally, we conduct a feature
weight analysis to investigate which features the meta-learner uses to make its
decisions.
</p>
      </sec>
      <sec id="sec-4-3">
        <title>Augmenting the DonorsChoose.org Corpus for Meta-Learning</title>
        <p>Gordian Edenhofer, Andrew Collins, Akiko Aizawa, and Joeran Beel
The DonorsChoose.org dataset of past donations provides a big and feature-rich corpus
of users and items. The dataset matches donors to projects in which they might be
interested and hence is intrinsically about recommendations. Due to the availability of
detailed item-, user- and transaction-features, this corpus represents a suitable candidate
for meta-learning approaches to be tested. This study aims at providing an augmented
corpus for further recommender systems studies to test and evaluate meta-learning
approaches. In the augmentation, metadata of collaborative and content-based filtering
techniques is amended to the corpus. It is further extended with aggregated statistics of
users and transactions and an exemplary meta-learning experiment. The performance
in the learning subsystem is measured via the recall of recommended items in a Top-N
test set. The augmented dataset and the source code are released into the public domain
at https://github.com/BeelGroup/Augmented-DonorsChoose.org-Dataset.
</p>
      </sec>
      <sec id="sec-4-4">
        <title>RARD II: The 94 Million Related-Article Recommendation Dataset</title>
        <p>Joeran Beel, Barry Smyth and Andrew Collins
The main contribution of this paper is to introduce and describe a new
recommender-systems dataset (RARD II). It is based on data from a recommender system in the
digital library and reference management software domain. As such, it complements
datasets from other domains such as books, movies, and music. The RARD II dataset
encompasses 94m recommendations, delivered in the two years from September 2016
to September 2018. The dataset covers an item-space of 24m unique items. RARD II
provides a range of rich recommendation data, beyond conventional ratings. For
example, in addition to the usual ratings matrices, RARD II includes the original
recommendation logs, which provide a unique insight into many aspects of the algorithms that
generated the recommendations. The recommendation logs enable researchers to
conduct various analyses about a real-world recommender system. This includes the
evaluation of meta-learning approaches for predicting algorithm performance. In this paper,
we summarise the key features of this dataset release, describe how it was generated
and discuss some of its unique features. Compared to its predecessor RARD, RARD II
contains 64% more recommendations, 187% more features (algorithms, parameters,
and statistics), 50% more clicks, 140% more documents, and one additional service
partner (JabRef).
</p>
      </sec>
      <sec id="sec-4-5">
        <title>An Extensive Checklist for Building AutoML Systems</title>
        <p>Thiloshon Nagarajah and Guhanathan Poravi
Automated Machine Learning is a research area which has gained a lot of focus in the
recent past. But the components required to build an AutoML system are neither properly
documented nor very clear, owing to the differences among, and the recency of, existing studies. If
the required steps are analyzed and brought together in a common survey, it will assist
ongoing research. This paper presents an analysis of the components and
technologies in the domains of AutoML, hyperparameter tuning and meta-learning, and presents
a checklist of steps to follow while building an AutoML system. This paper is part of
ongoing research, and the findings presented will assist in developing a novel
architecture for an AutoML system.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Keynote and Hands-on Sessions</title>
      <p>
        We were delighted to hear the keynote [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] from Marius Lindauer and to host two
hands-on sessions about automated algorithm selection tools [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
4.1
      </p>
      <sec id="sec-5-1">
        <title>Automated Algorithm Selection: Predict which algorithm to use!</title>
        <p>Marius Lindauer
To achieve state-of-the-art performance, it is often crucial to select a suitable algorithm
for a given problem instance. For example, what is the best search algorithm for a given
instance of a search problem; or what is the best machine learning algorithm for a given
dataset? By trying out many different algorithms on many problem instances,
developers learn an intuitive mapping from some characteristics of a given problem instance
(e.g., the number of features of a dataset) to a well-performing algorithm (e.g., random
forest). The goal of automated algorithm selection is to learn from data, how to
automatically select a well-performing algorithm given such characteristics. In this talk, I
will give an overview of the key ideas behind algorithm selection and different
approaches addressing this problem by using machine learning.
</p>
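The intuitive mapping described in the talk can be illustrated with a minimal sketch (our own illustration, not Lindauer's code): learn, from past observations, a mapping from problem-instance characteristics to the best-performing algorithm, here via a simple 1-nearest-neighbour rule. All feature values and algorithm names are invented.

```python
# Past observations: (n_features, log #samples) of a dataset -> best algorithm.
history = [
    ((5, 2.0), "decision-tree"),
    ((8, 3.0), "decision-tree"),
    ((90, 4.5), "random-forest"),
    ((120, 5.0), "random-forest"),
]

def predict_algorithm(x, history):
    """Select the algorithm that performed best on the most similar past instance."""
    def dist2(a, b):
        # Squared Euclidean distance between two characteristic vectors.
        return sum((p - q) ** 2 for p, q in zip(a, b))
    _, best = min(history, key=lambda record: dist2(record[0], x))
    return best

# A new instance with many features and many samples lands near the
# random-forest cases in the characteristic space.
print(predict_algorithm((100, 4.8), history))
```

In practice an algorithm selector would use a stronger learner and richer instance features, but the structure is the same: characteristics in, predicted algorithm out.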
      </sec>
      <sec id="sec-5-2">
        <title>Hands-on Session with ASlib</title>
        <p>Lars Kotthoff
ASlib is a standard format for representing algorithm selection scenarios and a benchmark
library with example problems from many different application domains. I will give an
overview of what it is, example analyses available on its website, and the algorithm
selection competitions 2015 and 2017 that were based on it. ASlib is available at
http://www.aslib.net/.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Hands-On Automated Machine Learning Tools: Auto-Sklearn and Auto</title>
      </sec>
      <sec id="sec-5-4">
        <title>PyTorch</title>
        <p>Marius Lindauer
To achieve state-of-the-art performance in machine learning (ML), it is very important
to choose the right algorithm and its hyperparameters for a given dataset. Since finding
the correct settings needs a lot of time and expert knowledge, we developed AutoML
tools that can be used out-of-the-box with minimal expertise in machine learning. In
this session, I will present two state-of-the-art tools in this field: (i) auto-sklearn
(www.automl.org/auto-sklearn/) for classical machine learning and (ii) AutoPyTorch
(www.automl.org/autopytorch/) for deep learning.
</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Organization</title>
      <sec id="sec-6-1">
        <title>Organizers</title>
        <p>Joeran Beel (https://www.scss.tcd.ie/joeran.beel/) is Assistant Professor in Intelligent Systems at the School of Computer
Science and Statistics at Trinity College Dublin. He is also affiliated with the ADAPT
Centre, an interdisciplinary research centre that closely cooperates with industry
partners including Google, Deutsche Bank, Huawei, and Novartis. Joeran is further a
Visiting Professor at the National Institute of Informatics (NII) in Tokyo. His research
focuses on information retrieval, recommender systems, algorithm selection, user
modelling and machine learning. He has developed novel algorithms in these fields and
conducted research on the question of how to evaluate information retrieval systems.
Joeran also has industry experience as a product manager and, as the founder of three
business start-ups, experienced the algorithm selection problem first-hand. Joeran is
serving as general co-chair of the 26th Irish Conference on Artificial Intelligence and
Cognitive Science and served on program committees for major information retrieval
venues including SIGIR, ECIR, UMAP, RecSys, and ACM TOIS.</p>
        <p>
          Lars Kotthoff (http://www.cs.uwyo.edu/~larsko/) is Assistant Professor at the University of Wyoming. He leads the
Meta-Algorithmics, Learning and Large-scale Empirical Testing (MALLET) lab and
has acquired more than $400K in external funding to date. Lars is also the PI for the
Artificially Intelligent Manufacturing center (AIM) at the University of Wyoming. He
co-organized multiple workshops on meta-learning and automatic machine learning
(e.g. [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]) and the Algorithm Selection Competition Series [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. He was workshop and
masterclass chair at the CPAIOR 2014 conference and organized the ACP summer
school on constraint programming in 2018. His research combines artificial intelligence
and machine learning to build robust systems with state-of-the-art performance. Lars’
more than 60 publications have garnered &gt;1111 citations and his research has been
supported by funding agencies and industry in various countries.
        </p>
        <p>Programme Committee:
• Akiko Aizawa, National Institute of Informatics, Tokyo
• Andreas Nürnberger, University of Magdeburg
• Andreas Weiler, ZHAW School of Engineering
• Corinna Breitinger, University of Konstanz
• Dietmar Jannach, University of Klagenfurt
• Douglas Leith, Trinity College Dublin
• Felix Beierle, Technical University of Berlin
• Felix Hamborg, University of Konstanz
• Heike Trautmann, University of Münster
• Johann Schaible, GESIS
• Katharina Eggensperger, University of Freiburg
• Marius Lindauer, University of Freiburg
• Mark Collier, University of Edinburgh
• Matthias Feurer, University of Freiburg
• Moritz Schubotz, University of Konstanz
• Nicola Ferro, University of Padua
• Owen Conlan, Trinity College Dublin
• Pascal Kerschke, University of Münster
• Pavel Brazdil, University of Porto
• Rob Brennan, Trinity College Dublin
• Roman Kern, Know-Center, Austria
• Tiago Cunha, University of Porto
• Vincent Wade, Trinity College Dublin
• Zeljko Carevic, GESIS</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This publication has emanated from research conducted with the financial support of
Science Foundation Ireland (SFI) under Grant Number 13/RC/2106 and funding from
the European Union and Enterprise Ireland under Grant Number CF 2017 0303-1. Lars
Kotthoff is supported by NSF award 1813537.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Langer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Breitinger</surname>
          </string-name>
          , “Research Paper Recommender Systems: A Literature Survey,”
          <source>International Journal on Digital Libraries, no. 4</source>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>338</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Rice</surname>
          </string-name>
          , “
          <article-title>The algorithm selection problem</article-title>
          ,”
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hutter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Hoos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Leyton-Brown</surname>
          </string-name>
          , “
          <article-title>SATzilla: portfolio-based algorithm selection for SAT,”</article-title>
          <source>Journal of artificial intelligence research</source>
          , vol.
          <volume>32</volume>
          , pp.
          <fpage>565</fpage>
          -
          <lpage>606</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kotthoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Thornton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Hoos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hutter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Leyton-Brown</surname>
          </string-name>
          , “
          <article-title>Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA,”</article-title>
          <source>The Journal of Machine Learning Research</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>826</fpage>
          -
          <lpage>830</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lindauer</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. N. van Rijn</surname>
          </string-name>
          , and L. Kotthoff, “
          <source>The Algorithm Selection Competition Series 2015-17</source>
          ,” arXiv preprint arXiv:1805.01214,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.-W.</given-names>
            <surname>Tu</surname>
          </string-name>
          , “
          <article-title>The 3rd AutoML Challenge: AutoML for Lifelong Machine Learning</article-title>
          ,” in NIPS 2018 Challenge,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Brazdil</surname>
          </string-name>
          , “Metalearning &amp; Algorithm Selection,
          <source>” 21st European Conference on Artificial Intelligence (ECAI)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Calandra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hutter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Levine</surname>
          </string-name>
          , “Workshop on Meta-Learning (
          <year>MetaLearn 2017</year>
          ) @NIPS,” in http://metalearning.ml,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Hoos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neumann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Trautmann</surname>
          </string-name>
          , “
          <source>Automated Algorithm Selection and Configuration,” Report from Dagstuhl Seminar</source>
          <volume>16412</volume>
          , vol.
          <volume>6</volume>
          , no.
          <issue>11</issue>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Miikkulainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stanley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Fernando</surname>
          </string-name>
          , “Metalearning Symposium @NIPS,” in http://metalearning-symposium.ml,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanschoren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Brazdil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Giraud-Carrier</surname>
          </string-name>
          , and L. Kotthoff, “
          <article-title>Meta-Learning and</article-title>
          Algorithm Selection Workshop at ECMLPKDD,”
          <source>in CEUR Workshop Proceedings</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanschoren</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. N. van Rijn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischl</surname>
          </string-name>
          , and L. Torgo, “
          <source>OpenML: Networked Science in Machine Learning,” SIGKDD Explorations</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ahsan</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Ngo-Ye</surname>
          </string-name>
          ,
          <article-title>“A Conceptual Model of Recommender System for Algorithm Selection,” AMCIS 2005 Proceedings</article-title>
          , p.
          <fpage>122</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          , “
          <article-title>A Macro/Micro Recommender System for Recommendation Algorithms</article-title>
          [Proposal],” ResearchGate https://www.researchgate.net/publication/322138236_A_MacroMicro_Recommende r_System_for_Recommendation_Algorithms_Proposal,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Collins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tkaczyk</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          , “
          <article-title>A Novel Approach to Recommendation Algorithm Selection using Meta-Learning</article-title>
          ,” in
          <source>Proceedings of the 26th Irish Conference on Artificial Intelligence and Cognitive Science (AICS)</source>
          ,
          <year>2018</year>
          , vol.
          <volume>2259</volume>
          , pp.
          <fpage>210</fpage>
          -
          <lpage>219</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Cunha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Soares</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>de Carvalho</surname>
          </string-name>
          , “
          <article-title>Metalearning and Recommender Systems: A literature review and empirical study on the algorithm selection problem for Collaborative Filtering</article-title>
          ,”
          <source>Information Sciences</source>
          , vol.
          <volume>423</volume>
          , pp.
          <fpage>128</fpage>
          -
          <lpage>144</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Cunha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Soares</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>de Carvalho</surname>
          </string-name>
          , “
          <article-title>CF4CF: recommending collaborative filtering algorithms using collaborative filtering</article-title>
          ,” in
          <source>Proceedings of the 12th ACM Conference on Recommender Systems (RecSys)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>361</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>T.</given-names>
            <surname>Cunha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Soares</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>de Carvalho</surname>
          </string-name>
          , “
          <article-title>Selecting Collaborative Filtering algorithms using Metalearning</article-title>
          ,” in
          <source>Joint European Conference on Machine Learning and Knowledge Discovery in Databases</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>393</fpage>
          -
          <lpage>409</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>P.</given-names>
            <surname>Matuszyk</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Spiliopoulou</surname>
          </string-name>
          , “
          <article-title>Predicting the performance of collaborative filtering algorithms</article-title>
          ,” in
          <source>Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14)</source>
          ,
          <year>2014</year>
          , p.
          <fpage>38</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mısır</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sebag</surname>
          </string-name>
          , “
          <article-title>ALORS: An algorithm recommender system</article-title>
          ,”
          <source>Artificial Intelligence</source>
          , vol.
          <volume>244</volume>
          , pp.
          <fpage>291</fpage>
          -
          <lpage>314</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vartak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Thiagarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Miranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bratman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          , “
          <article-title>A Meta-Learning Perspective on Cold-Start Recommendations for Items</article-title>
          ,” in
          <source>Advances in Neural Information Processing Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>6907</fpage>
          -
          <lpage>6917</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Ounis</surname>
          </string-name>
          , “
          <article-title>Inferring query performance using pre-retrieval predictors</article-title>
          ,” in
          <source>International symposium on string processing and information retrieval</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>C.</given-names>
            <surname>Macdonald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Ounis</surname>
          </string-name>
          , “
          <article-title>Predicting query performance in intranet search</article-title>
          ,” in
          <source>SIGIR 2005 Query Prediction Workshop</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Kotthoff</surname>
          </string-name>
          , “
          <article-title>Proposal for the 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</article-title>
          ,” in
          <source>Proceedings of the 41st European Conference on Information Retrieval (ECIR)</source>
          ,
          <year>2019</year>
          , vol.
          <volume>11438</volume>
          , pp.
          <fpage>383</fpage>
          -
          <lpage>388</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>S.</given-names>
            <surname>Arora</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Yates</surname>
          </string-name>
          , “
          <article-title>Investigating Ad-Hoc Retrieval Method Selection with Features Inspired by IR Axioms</article-title>
          ,” in
          <source>Proceedings of The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smyth</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Collins</surname>
          </string-name>
          , “
          <article-title>RARD II: The 94 Million Related-Article Recommendation Dataset</article-title>
          ,” in
          <source>Proceedings of the 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G.</given-names>
            <surname>Edenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Collins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aizawa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          , “
          <article-title>Augmenting the DonorsChoose.org Corpus for Meta-Learning</article-title>
          ,” in
          <source>Proceedings of The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mansoury</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          , “
          <article-title>Algorithm selection with librec-auto</article-title>
          ,” in
          <source>Proceedings of The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>T.</given-names>
            <surname>Nagarajah</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Poravi</surname>
          </string-name>
          , “
          <article-title>An Extensive Checklist for Building AutoML Systems</article-title>
          ,” in
          <source>Proceedings of The 1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lindauer</surname>
          </string-name>
          , “
          <article-title>Automated Algorithm Selection: Predict which algorithm to use! (Keynote)</article-title>
          ,” in
          <source>1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kotthoff</surname>
          </string-name>
          , “
          <article-title>Hands-on Session with ASlib</article-title>
          ,” in
          <source>1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lindauer</surname>
          </string-name>
          , “
          <article-title>Hands-On Automated Machine Learning Tools: Auto-Sklearn and Auto-PyTorch</article-title>
          ,” in
          <source>1st Interdisciplinary Workshop on Algorithm Selection and Meta-Learning in Information Retrieval (AMIR)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>