<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>CLEF 2019 Technology Assisted Reviews in Empirical Medicine Overview</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Evangelos Kanoulas</string-name>
          <email>E.Kanoulas@uva.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dan Li</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Leif Azzopardi</string-name>
          <email>leif.azzopardi@strath.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rene Spijker</string-name>
          <email>R.Spijker-2@umcutrecht.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cochrane Netherlands and UMC Utrecht, Julius Center for Health Sciences and Primary Care</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Computer and Information Sciences, University of Strathclyde</institution>
          ,
          <addr-line>Glasgow</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Informatics Institute, University of Amsterdam</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Systematic reviews are a widely used method to provide an overview of the current scientific consensus, by bringing together multiple studies in a systematic, reliable, and transparent way. The large and growing number of published studies, and their increasing rate of publication, makes the task of identifying all relevant studies in an unbiased way both complex and time consuming, to an extent that jeopardizes the validity of their findings and the ability to inform policy and practice in a timely manner. The CLEF 2019 e-Health TAR Lab accommodated two tasks. Task 1 focused on retrieving relevant studies from PubMed without the use of a Boolean query, while Task 2 focused on the efficient and effective ranking of studies during the abstract and title screening phase of conducting a systematic review. In the 2019 lab we also expanded upon the types of systematic reviews considered. Hence, beyond Diagnostic Test Accuracy reviews, we also included Intervention, Prognosis, and Qualitative systematic reviews. We constructed a benchmark collection of 31 reviews published by Cochrane, and the corresponding relevant and irrelevant articles found by the original Boolean query. Three teams participated in Task 2, submitting automatic and semi-automatic runs, using information retrieval and machine learning algorithms over a variety of text representations, in a batch and iterative manner. This paper reports both the methodology used to construct the benchmark collection, and the results of the evaluation.</p>
      </abstract>
      <kwd-group>
        <kwd>Evaluation</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>Systematic Reviews</kwd>
        <kwd>TAR</kwd>
        <kwd>Text Classification</kwd>
        <kwd>Active Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Evidence-based medicine has become an important pillar in current health care
and policy making. In order to practice evidence-based medicine, it is important
to have a clear overview of the current scientific consensus. These overviews
are preferably provided in systematic reviews, which appraise, summarize, and
synthesize all available evidence regarding a certain topic (e.g., a treatment or
a diagnostic test). To write a systematic review, researchers have to conduct a
search that will retrieve all studies that are relevant to a topic. The large and
growing number of published studies, and their increasing rate of publication,
makes the task of identifying relevant studies in an unbiased way both complex
and time consuming to an extent that jeopardizes the validity of their findings
and the ability to inform policy and practice in a timely manner. Hence, the need
for automation in this process becomes of the utmost importance. Finding all
relevant studies in a corpus is a difficult task, known in the Information Retrieval
(IR) domain as the “total recall” problem [?].</p>
      <p>To date, the retrieval of studies that contain the necessary evidence to
inform systematic reviews is conducted in multiple stages:
1. Identification: At the first stage a systematic review protocol, which describes
the rationale, hypothesis, and planned methods of the review, is prepared.
The protocol is used as a guide to carry out the review; by preparing it
prospectively one tries to minimize the risk of bias during the conduct of the systematic review.
Among other information, it provides the criteria that need to be met for
a study to be included in the review. Further, a Boolean query that
attempts to express these criteria is constructed by an information specialist.
The query is then submitted to a medical bibliographic database containing
titles, abstracts, and indexing terms of a controlled vocabulary of medical
studies. The result is a set, A, of potentially relevant studies.
2. Screening: At a second stage experts screen the titles and abstracts
of the returned set and decide which of those meet the inclusion criteria
for their systematic review, a set D. If screening an abstract has a cost Ca,
screening all |A| abstracts has a cost of Ca · |A|.
3. Eligibility: At a third stage experts download the full text of the
potentially relevant abstracts, D, identified in the previous phase and examine
the content to decide whether these studies are indeed relevant or not.
Examining a full document typically has a larger cost than examining
an abstract, Cd &gt; Ca. The result of the second screening is the set of studies
to be included in the systematic review.</p>
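      <p>To make the cost notation above concrete, a small illustrative calculation (with hypothetical unit costs and set sizes, not figures from the lab) could look as follows:
# Illustrative only: hypothetical unit costs and set sizes.
C_a, C_d = 1.0, 5.0         # cost of screening one abstract vs. one full text
A_size, D_size = 4000, 200  # |A| abstracts returned, |D| kept for full-text screening

abstract_stage_cost = C_a * A_size   # cost of the title/abstract screening stage
fulltext_stage_cost = C_d * D_size   # cost of the eligibility (full text) stage
print(abstract_stage_cost, fulltext_stage_cost)  # 4000.0 1000.0</p>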
      <p>Unfortunately, the precision of the Boolean query is typically low, hence
reviewers often need to manually examine many thousands of irrelevant titles
and abstracts in order to identify a small number of relevant ones. Further,
there is no guarantee that the Boolean query will retrieve all relevant studies,
jeopardizing the validity of the reviews. To overcome some of the limitations of
the Boolean search, researchers have been testing the effectiveness of machine
learning and information retrieval methods. O’Mara-Eves et al. [?] provide a
systematic review of the use of text mining techniques for study identification
in systematic reviews.</p>
      <p>The focus of the CLEF 2017 and 2018 e-Health Technology Assisted Reviews
in Empirical Medicine (TAR) labs [?,?] lay on Diagnostic Test Accuracy (DTA)
reviews. Identifying DTA studies poses additional difficulties over the more common
intervention studies, caused by poorer reporting and indexing of these studies
and a lot of heterogeneity in terminology; a breakthrough in this field would
likely be applicable to other areas as well [?]. During the past two years search
and classification algorithms were developed demonstrating good retrieval
performance over the DTA studies. In 2019 we extended our focus to Intervention,
Prognosis, and Qualitative systematic reviews.</p>
      <p>The goal of the lab, as part of the CLEF e-Health Lab [?], is to bring
together academic, commercial, and government researchers that will conduct
experiments and share results on automatic methods to retrieve relevant studies
with high precision and high recall, and release a reusable test collection that can
be used as a reference for comparing different retrieval and mining approaches
in the field of medical systematic reviews.</p>
      <p>This paper is organized as follows: Section 2 describes the constructed benchmark
collection, Section 3 describes the two subtasks of the lab in detail, and
Section 4 the evaluation measures used; in Section 5 we discuss the results of
the evaluation. Section 6 concludes the article.</p>
    </sec>
    <sec id="sec-2">
      <title>Benchmark Collection</title>
      <p>In what follows we describe the collection of articles used in the task, the topics
released to participants, and how they were developed, as well as the relevance
labels used in the evaluation.</p>
      <sec id="sec-2-1">
        <title>Articles</title>
        <p>The collection used in the lab is the PubMed Baseline Repository, last updated on
11/12/2018 and available on the NCBI FTP site under the
ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline directory. PubMed comprises more than 27
million citations for biomedical literature from MEDLINE, life science
journals, and online books. Citations may include links to full-text content from
PubMed Central and publisher web sites. NLM produces a baseline set of
MEDLINE/PubMed citation records in XML format for download on an annual basis.
The annual baseline is released in December of each year. The complete baseline
consists of files pubmed19n0001 through pubmed19n0972.</p>
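        <p>As a rough indication of how the baseline can be processed, the following is a minimal Python sketch that streams one baseline file and extracts PMIDs and titles; the .xml.gz file extension and the MEDLINE citation element names are assumptions about the distribution format, not something specified by the lab:
import gzip
import urllib.request
import xml.etree.ElementTree as ET

BASELINE_FILE = "ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed19n0001.xml.gz"

def iter_citations(url=BASELINE_FILE):
    """Yield (pmid, title) pairs from one gzipped baseline XML file."""
    with urllib.request.urlopen(url) as response:
        with gzip.open(response) as xml_file:
            for _, elem in ET.iterparse(xml_file, events=("end",)):
                if elem.tag == "PubmedArticle":
                    yield elem.findtext(".//PMID"), elem.findtext(".//ArticleTitle")
                    elem.clear()  # keep memory bounded while streaming</p>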
      </sec>
      <sec id="sec-2-2">
        <title>Topics</title>
        <p>To construct the benchmark collection, the organizers of the task used 8
Diagnostic Test Accuracy, 20 Intervention, 1 Prognosis, and 2 Qualitative
systematic reviews already conducted by Cochrane researchers. These reviews can
be found in the Cochrane Library (http://www.cochranelibrary.com/). The 72 DTA systematic reviews used in the
2017 and 2018 editions of the lab [?,?,?,?], as well as 20 different Intervention
reviews, were also collected and made available to the participants as a
development set. The 123 systematic reviews in the development and test sets can
be found in Tables 1, 2, 3, and 4. The tables provide the topic ID, which is a
substring of the DOI of the document (e.g. the DOI for the topic ID CD008122
is 10.1002/14651858.CD008122.pub2), and the title of the systematic review
that corresponds to the topic.</p>
        <p>Topic Description for Subtask 1: In subtask 1 each topic file was generated
through the following procedure: First, the topic ID was extracted from the DOI
of the systematic review. Then, the title of the systematic review was considered.
Last, for each systematic review, the corresponding protocol was identified, and
the objective of the review as described in the protocol was also considered.
These three elements, topic ID, title, and objective, constitute the topic provided
to participants. An example can be seen below:
Topic: CD008122
Title: Rapid diagnostic tests for diagnosing uncomplicated P. falciparum
malaria in endemic countries
Objectives: To assess the diagnostic accuracy of RDTs for detecting
clinical P. falciparum malaria (symptoms suggestive of
malaria plus P. falciparum parasitaemia detectable by
microscopy) in persons living in malaria endemic areas
who present to ambulatory healthcare facilities with
symptoms of malaria, and to identify which types and
brands of commercial test best detect clinical P. falciparum
malaria.</p>
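        <p>A minimal sketch of parsing this topic format into a dictionary (assuming one topic per file, split on the three field markers shown above) could be:
import re

def parse_topic(text):
    """Split a Subtask 1 topic file into its Topic, Title and Objectives fields."""
    fields = {}
    for key in ("Topic", "Title", "Objectives"):
        match = re.search(rf"{key}:\s*(.*?)(?=\n(?:Topic|Title|Objectives):|\Z)",
                          text, flags=re.S)
        if match:
            fields[key.lower()] = " ".join(match.group(1).split())
    return fields</p>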
        <p>Furthermore, participants were provided with other relevant parts of the
protocol, which vary per type of review. The protocol for DTA reviews includes the
type of study, the participants, the index tests, the target conditions, the
comparator tests, and the reference standards. The protocol for Intervention reviews
includes the types of studies, the type of participants, the types of interventions,
and the type of outcome measures. The protocol for Prognosis reviews includes
the types of studies, the types of participants, and the types of outcome
measures. The protocol for Qualitative reviews includes types of studies and types
of participants.</p>
        <p>Topic Description for Subtask 2: In subtask 2 each topic file was generated
through the following procedure: For each systematic review, we reviewed the
search strategy from the corresponding study in Cochrane Library. A search
strategy, among other things, consists of the exact Boolean query developed and
submitted to a medical bibliographic database, at the time the review was
conducted, and typically can be found in the Appendix of the study. Rene Spijker,
a co-author of this work and a Cochrane information specialist, examined the
grammatical correctness of the search query and specified the date range that
dictated the valid dates for the articles to be included in this systematic review.
The date range was necessary because a study published after the systematic
review should not be included even though it might be relevant, since that would
require manually examining its content to quantify its relevance. Although the
date ranges reflect the time of the review, a complete mirror image of the database
as it was at that time is impossible, as records get added and removed
retrospectively; using the date range therefore gives us the best approximation of the content at
the moment of the review.</p>
        <p>A number of medical databases, and search interfaces to these databases,
are available for searching, and for each one information specialists construct
a different variation of their query that better fits the data and meta-data
of the database. For this task, we only considered the Boolean query
constructed for the MEDLINE database, using the Wolters Kluwer Ovid
interface. We then submitted the constructed Boolean query to the OVID system
at http://demo.ovid.com/demo/ovidsptools/launcher.htm and collected all
the returned PubMed document identification numbers (PMID’s) that satisfied
the date range constraint. This step was automated by a Python script we put
together, run through an interface available to the University of Amsterdam.</p>
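        <p>The Ovid interface and the script mentioned above are not reproduced here; as a hedged illustration, one way to check the publication dates of collected PMIDs against a date range is the public NCBI E-utilities esummary endpoint:
import json
import urllib.request
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pub_dates(pmids):
    """Return {pmid: pubdate string} for a small batch of PMIDs."""
    params = urlencode({"db": "pubmed", "id": ",".join(map(str, pmids)), "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as response:
        summaries = json.load(response)["result"]
    return {pmid: summaries[pmid]["pubdate"] for pmid in map(str, pmids)}</p>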
        <p>The topic file is in a text format and contains four sections: Topic, Title,
Query, and PMID’s. The PMID’s are the PubMed document IDs returned by the
Boolean query. The PMIDs can be used to access the corresponding document
through the National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov/books/NBK25497/). An
example of a topic file can be viewed below.</p>
        <p>Topic: CD008122
Title: Rapid diagnostic tests for diagnosing uncomplicated P. falciparum malaria in endemic countries
Query:
1. Exp Malaria/
2. Exp Plasmodium/
3. Malaria.ti,ab
4. 1 or 2 or 3
5. Exp Reagent kits, diagnostic/
6. rapid diagnos* test*.ti,ab
7. RDT.ti,ab
8. Dipstick*.ti,ab
9. Rapid diagnos* device*.ti,ab
10. MRDD.ti,ab
11. OptiMal.ti,ab
12. Binax NOW.ti,ab
13. ParaSight.ti,ab
14. Immunochromatograph*.ti,ab
15. Antigen detection method*.ti,ab
16. Rapid malaria antigen test*.ti,ab
</p>
        <p>The original systematic reviews written by Cochrane researchers included a
reference section that listed Included, Excluded, and Additional references to studies.
Included are the studies that are relevant to the systematic review. Excluded are
the studies that in the abstract and title screening stage were considered
relevant, but at the full text screening phase were considered irrelevant to the study
and hence excluded from it. Additional are the studies that do not impact the
outcome of the review, and are hence irrelevant to it. The union of Included and
Excluded references are the studies that were screened at a Title and Abstract level
and were considered for further examination at a full content level. These
constituted the relevant documents at the abstract level, while the Included references
constituted the relevant documents at the full content level.</p>
        <p>The majority of the references included their corresponding PMID, but not
all of them. For those references missing the PMID, the title was extracted from
the reference, and it was used as a query to Google Search Engine over the
domain https://www.ncbi.nlm.nih.gov/pubmed/. The top-scored document
returned by Google was selected, and the title of the study contained in the landing
page was identified from the extracted metadata. The title was then compared with
the title of the study used as search query. If the Edit Distance between the
two titles was up to 3 (just to account for spaces, parentheses, etc.) then the
study reference was replaced by the PMID also extracted from the metadata of
the landing page. If (a) the title had an edit distance greater than 3 but less
than 20, or (b) the study was an included study, or (c) no title was contained
in the Google result metadata, or (d) no Google results were returned, then
the query was submitted at https://www.ncbi.nlm.nih.gov/pubmed/ and the
results were manually examined. All other studies were discarded under the
assumption that they are not contained in PubMed.</p>
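        <p>As an illustration of the title matching step just described, a minimal sketch of a plain Levenshtein edit distance with the threshold of 3 used above could be:
def edit_distance(a, b):
    """Levenshtein distance between two strings."""
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                # deletion
                               current[j - 1] + 1,             # insertion
                               previous[j - 1] + (ca != cb)))  # substitution
        previous = current
    return previous[-1]

def accept_match(reference_title, pubmed_title, max_distance=3):
    """Accept the PubMed record if the two titles are at most 3 edits apart."""
    return edit_distance(reference_title, pubmed_title) &lt;= max_distance</p>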
        <sec id="sec-2-2-1">
          <title>Topic Iteration</title>
        </sec>
        <sec id="sec-2-2-2">
          <title>Document</title>
          <p>Relevance
where Topic is the topic ID of the systematic review, Iteration in our case is a
dummy field that is always zero and not used, Document is the PMID, and Relevance
is a binary code of 0 for not relevant and 1 for relevant studies. The order
of documents in the qrel files is not indicative of relevance. Studies that were
returned by the Boolean query but were not relevant based on the above process,
were considered irrelevant. Those are studies that were excluded at the abstract
and title screening phase. All other documents in MEDLINE were also assumed
to be irrelevant, given that they were not judged by the human assessor.</p>
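          <p>A minimal sketch for reading qrels in this format into a nested dictionary (the file name below is a placeholder, not an official release name) could be:
from collections import defaultdict

def load_qrels(path):
    """Return {topic_id: {pmid: relevance}} from a TREC-format qrels file."""
    qrels = defaultdict(dict)
    with open(path) as qrels_file:
        for line in qrels_file:
            topic, _iteration, pmid, relevance = line.split()
            qrels[topic][pmid] = int(relevance)
    return qrels

abstract_qrels = load_qrels("abstract_level.qrels")  # placeholder file name</p>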
          <p>Note that, as mentioned earlier, the references of a systematic review were
produced after a number of Boolean queries were submitted to a number of
medical databases, and their titles and abstracts were screened. The PMID’s provided
however were only those that came out of the MEDLINE query. Therefore, there
were a number of abstract-level relevant studies (the gray area in the Venn
diagram below) that were not part of the result set of the Boolean query provided
to the participants. Studies that were cited in the systematic review but did not
appear in the results of the Boolean query were excluded from the label set for
both Subtask 1 and Subtask 2 (while in 2018 they were included for Subtask 1).</p>
          <p>The average percentage of relevant abstracts in the training set is 6.5% of the
total number of PMID’s released, and in the test set 8.9%, while at the content
level the average percentage is 2.6% in the training set, and 3.9% in the test
set. Table 5, Table 6, Table 7, and Table 8 show the distribution of the relevant
documents at abstract and document level for all the topics in the test set.
A breakdown of the average percentage of relevant abstracts/documents per review type is:
DTA 12.9%/5.3%, Intervention 7.6%/3.4%, Prognosis 15.7%/9.4%, Qualitative
2.6%/1.0%.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Task Description</title>
      <p>In this section we describe the two subtasks of the TAR lab, the input provided
to participants for each one of the subtasks, and the expected participants’ output
submitted to the lab for evaluation.</p>
      <sec id="sec-3-1">
        <title>Subtask 1: No Boolean Search</title>
        <p>Prior to constructing a Boolean query, researchers have to design and write
a systematic review protocol that defines in detail what constitutes a relevant
study for their review. In this experimental task of the TAR lab, participants are
provided with the relevant pieces of a protocol, in an attempt to complete the search
effectively and efficiently while by-passing the construction of the Boolean query.</p>
        <p>In particular, for each systematic review that needs to be conducted (also
referred to as topic in the IR terminology), participants are provided with the
following input data:
1. topic ID;
2. the title of the review written by Cochrane experts;
3. parts of the protocol;
4. the PubMed database, provided by the National Center for Biotechnology
Information (NCBI), part of the U.S. National Library of Medicine (NLM).</p>
        <p>For each one of these topics participants are asked to submit: (a) a ranked
list of PubMed articles, and (b) a threshold over this ranked list. Participants
can submit an unlimited number of submissions (“runs”). A run is the output
of the participants’ algorithm for all the topics, in the form of a text file, with
each line of the file following the format:</p>
        <p>TOPIC-ID THRESHOLD PMID RANK SCORE RUN-ID</p>
        <p>Each line represents a PubMed article in the ranked list for a given topic,
with RANK indicating the index of this article in the ranked list. TOPIC-ID is
the id of the topic for which the document has been retrieved, and THRESHOLD
is either 0 or 1, with 1 indicating that the given rank is the rank of the threshold.
PMID is the PubMed Document Identifier of the article ranked at that position,
SCORE is the score the algorithm gives to the article, and RUN-ID is an identifier
for the submitted run. Participants are allowed to submit a maximum of 5,000
ranked PMIDs per topic.</p>
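        <p>A minimal sketch of writing a run file in this format (the topic ID, PMIDs, scores, and run name below are placeholders) could be:
def write_run(scored_topics, threshold_rank, run_id, path):
    """scored_topics: {topic_id: [(pmid, score), ...]} sorted by descending score."""
    with open(path, "w") as run_file:
        for topic_id, ranking in scored_topics.items():
            for rank, (pmid, score) in enumerate(ranking, start=1):
                threshold = 1 if rank == threshold_rank else 0
                run_file.write(f"{topic_id} {threshold} {pmid} {rank} {score:.4f} {run_id}\n")

# Placeholder values for illustration only.
write_run({"CD008122": [(12345678, 12.3), (23456789, 9.8)]},
          threshold_rank=1, run_id="example-run", path="subtask1.run")</p>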
      </sec>
      <sec id="sec-3-2">
        <title>Subtask 2: Title and Abstract Screening</title>
        <p>Given the results of the Boolean search from the first stage of the systematic
review process as the starting point, participants are asked to rank the set of
abstracts. The task has two goals: (i) to produce an efficient ordering of
the documents, such that all of the relevant abstracts are retrieved as early as
possible, and (ii) to identify a subset which contains all or as many as possible of the relevant
abstracts for the least effort (i.e. the total number of abstracts to be assessed).</p>
        <p>In particular, for each systematic review that needs to be conducted (also
referred to as topic in the IR terminology), participants are provided with the
following input data:
1. topic ID;
2. the title of the review written by Cochrane experts;
3. the Boolean query manually constructed by Cochrane experts;
4. the set of PubMed Document Identifiers (PMID’s) returned by running the
query in MEDLINE.</p>
        <p>As in subtask 1, participants are asked to submit: (a) a ranked list of
the PubMed articles in the given set, and (b) a threshold over this ranked list.
Participants can submit an unlimited number of runs, and the format of each
submission follows the format of subtask 1 submissions.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>Evaluation within the context of using technology to assist in the reviewing
process is very much dependent on how the users interact with the system, and
on the goal of the technology assistance. For example, if the goal of the assistance
is to autonomously predict which studies should be assessed by the end-user at a
document level, then the problem can be viewed as a classification problem; the
system screens all abstracts and returns a subset of them as relevant. If the goal
of the assistance is to identify all the relevant documents as quickly as possible but
let the human decide when to stop screening, then the problem can be viewed as
a ranking problem. There are, of course, many other possible variations. For the
purposes of the 2019 lab, we consider the problem as a ranking problem - that
is, to rank the set of documents associated with the topic in decreasing order of
relevance.</p>
      <p>Furthermore, although the two subtasks are very similar in terms of evaluation,
i.e. in both subtasks participants’ runs are rankings of articles with a designated
threshold, they also differ: in subtask 2 the set of articles to be prioritized
contains all the relevant articles, while in subtask 1 the relevant articles need to be
found within the entire PubMed database, and hence there is no guarantee that
all relevant articles will appear in the top 5000.</p>
      <p>For the evaluation of the runs in the two subtasks we employ a number of standard IR
measures, along with measures that have been developed for the particular task
of technology assisted reviews [?,?]. A list of the measures used can be seen
below:
– Subtask 1
1. Average Precision
2. Number of Relevant Found
3. Precision @ last relevant found
4. Recall @ rank k, with k in [50, 100, 200, 500, 1000, 2000, 5000]
5. Recall @ threshold
– Subtask 2
1. Average Precision
2. Recall @ k % of top ranked abstracts, with k in [5, 10, 20, 30]
3. Work Saved over Sampling at recall r, WSS@r = (TN + FN)/N - (1 - r)
[?]
4. Reliability = loss_r + loss_e [?], with loss_r = (1 - r)^2, where r is the recall
at the threshold, and loss_e = (n/(R + 100) - 100/N)^2, where n is the
number of documents returned by the system up to the threshold, N is
the size of the collection, and R the number of relevant documents.
5. Recall @ threshold</p>
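      <p>A minimal sketch (not the official tar_eval implementation) of two of the Subtask 2 measures defined above, assuming a ranked list of PMIDs and a dictionary of binary abstract-level labels for one topic, could be:
def wss_at_r(ranking, labels, r=0.95):
    """Work Saved over Sampling at recall level r."""
    N = len(ranking)
    R = sum(labels.get(pmid, 0) for pmid in ranking)
    found = 0
    for k, pmid in enumerate(ranking, start=1):
        found += labels.get(pmid, 0)
        if found &gt;= r * R:
            # The N - k documents below the cutoff are the TN + FN.
            return (N - k) / N - (1 - r)
    return 0.0

def reliability(ranking, labels, threshold_rank):
    """loss_r + loss_e as defined in the task description."""
    N = len(ranking)
    R = sum(labels.get(pmid, 0) for pmid in ranking)
    n = threshold_rank  # number of documents returned up to the threshold
    recall = sum(labels.get(pmid, 0) for pmid in ranking[:n]) / R if R else 0.0
    loss_r = (1 - recall) ** 2
    loss_e = (n / (R + 100) - 100 / N) ** 2
    return loss_r + loss_e</p>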
      <p>The lab organizers developed evaluation software, similar to trec_eval, for
the easy evaluation of the submitted runs, which was also provided to participants. The code
of the tar_eval software is available at https://github.com/CLEF-TAR/tar.</p>
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>The 2019 task received submissions from 3 teams, all from Europe, including
one team from The Netherlands (UvA), one team from the UK (Sheffield), and
one team from Italy (UNIPD). For Subtask 1, we received no runs. For Subtask
2, we received 36 runs from the three teams. The three teams used a variety
of ranking methods including traditional BM25, interactive BM25, continuous
active learning, relevance feedback, as well as a variety of stopping criteria to
provide a threshold on the ranking. The results on a selected subset of metrics
on DTA, Intervention, Prognosis, and Qualitative studies, on abstract-level
relevance, are shown in Tables 9, 7, 11, 12, respectively. Figures 1, 2, 3, and 4 show
the box plots for Average Precision against the abstract level labels for each one
of the participants’ runs in Subtask 2, with the Mean Average Precision denoted
by a blue dashed line in the box plot. Figures 9, 10, 11, 12 present the recall
obtained by the participants’ runs at the point of the threshold as a function of
the number of abstracts presented to the user. As expected, the more abstracts
presented to the user (the lower the threshold), the higher the achieved recall.
Nevertheless, there are still algorithms that dominate others. The figures present
the Pareto frontier.</p>
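      <p>As an illustration of the continuous active learning style of method mentioned above (a hedged sketch, not any participant's actual system; it assumes scikit-learn and simulates the reviewer with the known labels), a minimal batch loop could look as follows:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_screening_order(abstracts, labels, seed_pmids, batch_size=25):
    """abstracts: {pmid: text}; labels: {pmid: 0/1}; returns a screening order."""
    pmids = list(abstracts)
    index = {pmid: i for i, pmid in enumerate(pmids)}
    X = TfidfVectorizer().fit_transform(abstracts[pmid] for pmid in pmids)
    screened = list(seed_pmids)
    while len(screened) &lt; len(pmids):
        remaining = [pmid for pmid in pmids if pmid not in set(screened)]
        y = [labels[pmid] for pmid in screened]
        if len(set(y)) &lt; 2:            # need both classes before training
            batch = remaining[:batch_size]
        else:
            model = LogisticRegression(max_iter=1000)
            model.fit(X[[index[pmid] for pmid in screened]], y)
            scores = model.predict_proba(X)[:, 1]
            remaining.sort(key=lambda pmid: scores[index[pmid]], reverse=True)
            batch = remaining[:batch_size]
        screened.extend(batch)         # "screen" the next batch and re-train
    return screened</p>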
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>The CLEF e-Health TAR lab has now constructed a benchmark collection of 80
Diagnostic Test Accuracy, 40 Intervention, 1 Prognosis, and 2 Qualitative
systematic reviews to study the effectiveness and efficiency of information retrieval
and machine learning algorithms in retrieving relevant studies from medical
databases, and prioritizing the studies to be screened at the abstract and title
screening stage, while providing a stopping criterion over the ranked list. The
results demonstrate that automatic methods can be trusted to find most, if
not all, relevant studies in a fraction of the time that manual screening would
require. Given that across different runs many parameters change simultaneously,
it is not easy to draw definite conclusions about the relative performance of
automatic methods.</p>
      <p>Regarding the benchmark collection itself, there are a number of limitations to
be considered: (a) Pivoting on the results of the OVID MEDLINE Boolean
query limits our ability to identify all relevant studies, i.e. relevant studies that
are returned by Boolean queries over different databases, and relevant studies
that are actually not found by these Boolean queries. The former can be overcome
by considering all the different queries submitted; for the latter extra manual
judgments would be required. (b) By pivoting on abstracts and titles only, we miss the
opportunity to study the effect of automatic methods when applied to the full
text of the studies, which would present an opportunity to completely overcome the
multi-stage process of systematic reviews. However, most of the full text articles
are protected under copyright laws that do not give all participants access to
them. (c) The evaluation setup of ranking does not allow us to consider the
cost of the process, since given a ranking a researcher would still have to go
over all ranked studies. A more realistic setup, e.g. a double-screening setup,
could be considered. (d) In the construction of relevance judgments we considered
the included and excluded references of the systematic reviews under study,
which prevented us from studying the noise and disagreement between reviewers.
(e) In our effort to allow iterative algorithms, e.g. active learning algorithms,
to be submitted, we handed the test sets’ relevance judgments directly to the
participants, which is rather unusual for this type of evaluation exercise.</p>
    </sec>
    <sec id="sec-7">
      <title>Appendix: Tables and Figures</title>
      <p>Fig. 1. Average precision using the abstract level relevance judgments for DTA reviews.</p>
      <p>Fig. 5. Recall at different ranks for DTA reviews.</p>
      <p>Fig. 6. Recall at different ranks for Intervention reviews.</p>
      <p>Fig. 7. Recall at different ranks for Prognosis reviews.</p>
      <p>Fig. 8. Recall at different ranks for Qualitative reviews.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>