<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ranking Abstracts to Identify Relevant Evidence for Systematic Reviews: The University of Sheffield's Approach to CLEF eHealth 2017 Task 2</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Sheffield</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
        <p>This paper describes Sheffield University's submission to CLEF 2017 eHealth Task 2: Technologically Assisted Reviews in Empirical Medicine. This task focusses on the identification of relevant evidence for systematic reviews in the medical domain. Participants are provided with systematic review topics (including title, Boolean query and set of PubMed abstracts returned) and asked to identify the abstracts that provide evidence relevant to the review topic. Sheffield University participated in the simple evaluation. Our approach was to rank the set of PubMed abstracts returned by the query by making use of information in the topic including title and Boolean query. Ranking was based on a simple TF.IDF weighted cosine similarity measure. This paper reports results obtained from six runs: four submitted to the official evaluation, an additional run and a baseline approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Systematic reviews attempt to identify, synthesise and summarise evidence available to
answer a research question. They form the backbone of evidence-based approaches to
medicine where they are used to answer complex questions such as “How effective are
statins for heart attack survivors?" [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The process of creating a systematic review is time-consuming with a single review
often requiring 6 to 12 months of effort from expert reviewers [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ]. Text mining
techniques have been shown to be a useful way to reduce this effort [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4,5,6,7</xref>
        ]. CLEF eHealth
Task 2 “Technologically Assisted Reviews in Empirical Medicine” focusses on the
application of text mining to the process of developing systematic reviews with the aim to
reduce the effort required.
      </p>
      <p>This paper is organised as follows: Section 2 introduces CLEF eHealth Task 2.
Section 3 describes our approach to this task. Section 4 discusses the results obtained
from applying this approach to both the development and test datasets. Finally, Section
5 presents the conclusions and potential future work.</p>
      <p>The process of producing a systematic review typically involves three stages:
1. Boolean Search: Experts construct a Boolean query designed to identify all
evidence relevant to the review question. This query is run against a medical database
such as PubMed and a set of titles and abstracts is returned.
2. Title and Abstract Screening: Experts screen the titles and abstracts retrieved to
identify those that are potentially relevant for inclusion in the review.
3. Document Screening: The full document content is then retrieved for any title and
abstract identified as relevant in the previous stage. These documents are
then examined in a second round of expert screening to form a final decision about
their relevance to the review.</p>
      <p>
        In CLEF eHealth 2017 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], Task 2 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] focuses on the second stage of the systematic
review process (Title and Abstract Screening). Participants are required to develop methods to
rank the list of PubMed abstracts returned by a Boolean query (stage 1) so that relevant
documents appear as early as possible.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Method</title>
      <sec id="sec-2-1">
        <title>Datasets</title>
        <p>Participants are provided with two datasets: a development set and a test set. The
development dataset contains 20 topics and the test dataset contains 30 topics. All reviews
focus on Diagnostic Test Accuracy (DTA). The queries were manually constructed by
expert reviewers from the Cochrane collaboration1. For each topic, participants are
provided with the topic ID, review title, Boolean query and the list of PubMed document
identifiers retrieved by the query. The collection contains a total of 266,967 abstracts.</p>
        <p>
          Figure 1 shows examples of two topics from the development dataset. Two different
formulations were used for the Boolean queries: OVID and PubMed. The queries are
generally complex and contain multiple operators. Table 1 shows operators commonly
used in both types of query [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>Participants are also provided with files that indicate which of the titles and abstracts
returned by the Boolean query were judged relevant after the Title and Abstract
Screening and Document Screening stages (see Section 2), referred to as the abstract
qrels and content qrels respectively.</p>
      </sec>
      <sec id="sec-2-2">
        <title>University of Sheffield’s Approach</title>
        <p>The University of Sheffield’s submission to Task 2 ranked the list of PubMed abstracts
retrieved for each topic with the intention of returning relevant ones as early as possible.
The approach is completely automatic since queries are processed algorithmically and
without manual intervention2. In addition, relevance feedback is not used.</p>
        <p>Our method makes use of three pieces of information from the topic: (1) the title,
(2) terms extracted from the Boolean query and (3) MeSH terms extracted from the
Boolean query. Information for (2) and (3) are extracted from the Boolean query
using a simple parser designed to interpret both OVID and PubMed style queries. Terms
and MeSH terms modified by certain operators (e.g. not and adj) are not extracted.</p>
        <sec id="sec-2-2-1">
          <title>1 http://www.cochrane.org/ 2 The approach was implemented using Python v3.6</title>
          <p>[Figure 1: Example topics from the development dataset. Topic CD009591, “Imaging modalities for the non-invasive diagnosis of endometriosis”, has an OVID-style query (e.g. exp magnetic resonance imaging/ or exp ultrasonography/ ... 8 not 9); topic CD008643, “Red flags to screen for vertebral fracture in patients presenting with low-back pain”, has a PubMed-style query (e.g. "Medical History Taking"[mesh] OR history[tw] OR "red flag"[tw] ... 1 AND 2 AND 3 NOT 4).]</p>
          <p>Figure 2 shows examples of terms extracted from the query for topic CD008643 (see
Figure 1). Some MeSH terms (e.g. Spine) are also standard English words that could
appear as terms in an abstract. To avoid false matches, all MeSH terms extracted
from a query are prefixed with the string MeSH. In addition, MeSH terms are
preprocessed to remove whitespace and punctuation (e.g. Lumbar vertebrae
becomes MeSHLumbarvertebrae). Example MeSH terms extracted from the same
query are shown in Figure 3.</p>
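The MeSH-term normalisation described above (prefixing with the string MeSH and stripping whitespace and punctuation) can be sketched as follows; the function name is illustrative rather than taken from the paper's implementation:

```python
import re

def normalise_mesh(term):
    # Prefix with "MeSH" to avoid false matches against ordinary English
    # words, then strip whitespace and punctuation, so e.g.
    # "Lumbar vertebrae" becomes "MeSHLumbarvertebrae".
    return "MeSH" + re.sub(r"[^A-Za-z0-9]", "", term)

print(normalise_mesh("Lumbar vertebrae"))        # MeSHLumbarvertebrae
print(normalise_mesh("Medical History Taking"))  # MeSHMedicalHistoryTaking
```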
          <p>’history’, ’red flag’, ’physical examination’, ’function test’,
’physical test’,’clinical’, ’clinically’,’diagnosis’
’MeSHMedicalHistoryTaking’, ’MeSHPhysicalexamination’,
’MeSHra’, ’MeSHri’, ’MeSHWoundsandInjuries’
The abstracts returned by the Boolean query for each topic defined as the list of
PMIDs (PubMed identifier) provided with the topic are downloaded from PubMed3.
The text of the title, abstract and MeSH terms are extracted and the MeSH terms
preprocessed using the same approach that was applied to the Boolean query.</p>
          <p>Pre-processing is applied to both the PubMed abstracts and information extracted
from the topics. The text is tokenised, converted to lower case, stop words/punctuation
are removed and the remaining tokens stemmed4.</p>
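A minimal sketch of this preprocessing pipeline is shown below. The paper used NLTK's tokeniser and LancasterStemmer together with scikit-learn's stop-word list; the tiny stop list and crude suffix-stripping stemmer here are self-contained stand-ins, not the actual components:

```python
import re

# Stand-in stop list; the paper used scikit-learn's stop-word list.
STOP_WORDS = {"the", "of", "in", "for", "to", "and", "a", "with", "or"}

def crude_stem(token):
    # Crude stand-in for NLTK's LancasterStemmer: strip a few suffixes.
    for suffix in ("ing", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    # Tokenise, lower-case, drop stop words/punctuation, stem the rest.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("Imaging modalities for the diagnosis of endometriosis"))
# ['imag', 'modaliti', 'diagnosi', 'endometriosi']
```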
        <p>The information extracted from the topic and each of the abstracts is converted
into tf.idf-weighted vectors. The similarity between the topic and each abstract
is then generated by computing the cosine metric for the pair of vectors5. Abstracts are
ranked based on this similarity score.</p>
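Using the scikit-learn components named in the footnote (TfidfVectorizer and linear_kernel), the ranking step can be sketched as follows; the topic and abstract strings are invented examples, not data from the task:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

topic = "imaging modalities for the diagnosis of endometriosis"
abstracts = [
    "mri and ultrasonography for diagnosis of endometriosis",
    "red flags for vertebral fracture in low back pain",
]

# Build tf.idf vectors over the topic plus abstracts. TfidfVectorizer
# L2-normalises its rows, so the linear kernel against the topic vector
# equals cosine similarity.
vectors = TfidfVectorizer().fit_transform([topic] + abstracts)
scores = linear_kernel(vectors[0:1], vectors[1:]).ravel()
ranking = scores.argsort()[::-1]  # abstract indices, most similar first
```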
        <p>Results are output in the TREC format shown in Table 2 where:
– TOPIC-ID: topic identifier provided by CLEF 2017;
– INTERACTION: this field is assigned the value NF in all our runs to indicate that
relevance feedback is not used;
– PID: PubMed document identifier;
– RANK: rank of the document according to the cosine similarity score;
– SCORE: cosine similarity score described above;
– RUN-ID: run identifier.</p>
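A minimal sketch of emitting one result line per abstract in the Table 2 field order (TOPIC-ID, INTERACTION, PID, RANK, SCORE, RUN-ID); the helper name and example values are invented for illustration:

```python
def trec_lines(topic_id, scored_pids, run_id):
    # scored_pids: (pid, score) pairs already sorted best-first.
    # INTERACTION is fixed to NF since relevance feedback is not used.
    return [
        f"{topic_id} NF {pid} {rank} {score:.6f} {run_id}"
        for rank, (pid, score) in enumerate(scored_pids, start=1)
    ]

lines = trec_lines("CD008643", [("12345678", 0.73), ("23456789", 0.41)],
                   "Sheffield-run-2")
print(lines[0])  # CD008643 NF 12345678 1 0.730000 Sheffield-run-2
```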
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>Runs</title>
        <p>Four runs were submitted to the official evaluation: Sheffield-run-1,
Sheffield-run-2, Sheffield-run-3, and Sheffield-run-4. In addition, a baseline run (Sheffield-baseline)
and an additional approach (Sheffield-run-5) were also implemented and evaluated. A
description of each run is presented below.</p>
        <sec id="sec-2-3-1">
          <title>3 The Entrez package from biopython.org was used.</title>
          <p>4 NLTK’s tokenize and LancasterStemmer packages are used for tokenisation and
stemming. The list of stop words provided by scikit-learn (scikit-learn.org/stable/) is
used for most runs.
5 Scikit-learn’s TfidfVectorizer and linear_kernel packages were used for these
steps
– Sheffield-baseline In this run the list of PubMed abstracts are randomly ordered.</p>
          <p>This is intended to represent the scenario in which the results of the Boolean query
are simply evaluated in the order in which they are retrieved without any attempt to
identify those most likely to be relevant. This situation simulates common practice
within many systematic review projects in which reviewers examine each of the
retrieved abstracts in turn. The score of each abstract is calculated using the following
equation:</p>
          <p>n r + 1
score = (1)
n
where n is the total number of abstracts returned by the Boolean query and r the
abstract’s rank in the random ordering.
– Sheffield-run-1 Abstracts returned by the Boolean query are ranked by comparing
them against only the topic title.
– Sheffield-run-2 Abstracts are compared with the topic title and terms extracted
from the Boolean query.
– Sheffield-run-3 Abstracts are compared with the topic title and both terms and</p>
          <p>
            MeSH terms extracted from the Boolean query.
– Sheffield-run-4 This run is the same as Sheffield-run-2 except that the PubMed
stop-words list [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] is used rather than the one from scikit-learn.
– Sheffield-run-5 Abstracts are compared against the topic title and MeSH terms
extracted from the Boolean query. (This run is the same as Sheffield-run-3 except
that terms extracted from the Boolean query are not included when computing the
similarity.)
          </p>
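The baseline's random ordering and scoring can be sketched as follows, with the abstract at 1-based rank r out of n scored as (n - r + 1)/n per equation (1); the helper name, fixed seed, and PMID strings are illustrative:

```python
import random

def baseline_scores(pids, seed=0):
    # Shuffle the retrieved PMIDs, then score each by its rank in the
    # random ordering: the first abstract scores 1.0, the last 1/n.
    pids = list(pids)
    random.Random(seed).shuffle(pids)
    n = len(pids)
    return {pid: (n - r + 1) / n for r, pid in enumerate(pids, start=1)}

scores = baseline_scores(["pmid1", "pmid2", "pmid3", "pmid4"])
```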
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Results and Discussion</title>
      <p>Task 2 consists of two formal evaluations: a simple evaluation and a cost-effective
evaluation. The University of Sheffield participated only in the simple evaluation and
did not attempt to optimise the approach for the cost-effective evaluation. Evaluation
was carried out using the script provided by the task organisers6.</p>
      <sec id="sec-3-1">
        <title>Development Dataset</title>
        <p>The development dataset contains 20 DTA topics (see Section 3.1). Tables 3 and 4
present the results for the approaches described in Section 3.3 applied to this dataset for
the abstract and content qrels respectively.</p>
        <sec id="sec-3-1-1">
          <title>6 https://github.com/leifos/tar</title>
          <p>
            As expected, all of the implemented methods outperform the simple baseline
approach. This demonstrates that even straightforward ranking techniques provide
potential benefit to systematic reviewers by ensuring that documents more likely to be
relevant are placed higher in the rankings. We have previously demonstrated similar
results for a single systematic review [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] and that finding is supported by these results
which represent a substantially larger dataset.
          </p>
          <p>The best result among the submitted runs for the abstract qrels (Table 3) was achieved
by Sheffield-run-4, which obtained an average precision (ap) score of 0.223, an
improvement of 0.173 over the baseline. It also achieved the best results for work saved
over sampling (wss) and area under the cumulative recall curve normalized by the
optimal area (norm_area) metrics. It is also close to the best result for the average of the
minimum number of abstracts returned to retrieve all relevant ones (last_rel) metric.</p>
          <p>For the content qrels (Table 4), both Sheffield-run-4 and Sheffield-run-5 perform strongly.
Sheffield-run-4 produced the best scores for last_rel and norm_area and was close to the
best result for wss. Sheffield-run-5 achieved the best scores for ap and wss_95.</p>
          <p>Results from the development dataset suggest that including terms extracted from
the Boolean query is beneficial (e.g. compare Sheffield-run-1 and Sheffield-run-2).
However, the usefulness of the extracted MeSH terms is less clear. Performance decreases
when these are added to the title and query terms (e.g. compare Sheffield-run-2 and
Sheffield-run-3). Results are mixed when they are used instead of query terms (e.g.
compare Sheffield-run-1 and Sheffield-run-5): there is no improvement for the abstract
evaluation but some benefit for the content evaluation.</p>
          <p>The test dataset contains 30 DTA topics (see Section 3.1). Tables 5 and 6
show the results for the abstract and content qrels respectively.</p>
          <p>The highest ap scores were achieved using Sheffield-run-2 and Sheffield-run-4 for
both the abstract and content qrels (Tables 5 and 6). The overall pattern of results
suggest that Sheffield-run-4 is the best performing run on the test data.</p>
          <p>Results from the development and test datasets demonstrate the strong relative
performance of Sheffield-run-4. This suggests that including terms extracted from the Boolean
query and using the PubMed stop-words list are beneficial for this task.</p>
          <p>There were some relevant documents in the test data set for which our approach
assigned a score of 0 and this caused NCG@100 scores to be less than 1. This was
observed at both the content and abstract level for the development and test datasets.
The scoring script treats these documents as not being included in the ranking. The
problem could be resolved by adding a small delta value to each score.</p>
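The delta fix mentioned above could be sketched as follows (the helper name and example scores are illustrative):

```python
def rescore_with_delta(scores, delta=1e-6):
    # Give every retrieved document a non-zero score so the evaluation
    # script keeps it in the ranking rather than treating it as omitted.
    return {pid: score + delta for pid, score in scores.items()}

fixed = rescore_with_delta({"pmid1": 0.0, "pmid2": 0.42})
```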
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion and Future Work</title>
      <p>This paper described the University of Sheffield’s approach to CLEF 2017 Task 2.
Information from the review title and Boolean query was used to rank the abstracts returned
by the query using standard similarity measures. The title and terms extracted from the
Boolean query were found to be the most useful information for this task. All of the
submitted runs outperform a baseline approach based on random ordering.</p>
      <p>In future we plan to refine the techniques for extracting terms and MeSH terms
from the Boolean query (Section 3.2) by taking account of the query structure and
MeSH hierarchy. We also plan to develop techniques to minimise the cost of identifying
relevant evidence and to make use of active learning to improve the ranking based on
feedback from reviewers.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>D.</given-names>
            <surname>Gough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oliver</surname>
          </string-name>
          , and J.
          <string-name>
            <surname>Thomas</surname>
          </string-name>
          , An Introduction to Systematic Reviews. Sage,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ambert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>McDonagh</surname>
          </string-name>
          , “
          <article-title>A Prospective Evaluation of an Automated Classification System to Support Evidence-based Medicine</article-title>
          and Systematic Review,
          <source>” AMIA Annual Symposium Proceedings</source>
          , vol.
          <year>2010</year>
          , pp.
          <fpage>121</fpage>
          -
          <lpage>125</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>S.</given-names>
            <surname>Karimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pohl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scholer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cavedon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Zobel</surname>
          </string-name>
          , “
          <article-title>Boolean Versus Ranked Querying for Biomedical Systematic Reviews,” BMC medical informatics and decision making</article-title>
          , vol.
          <volume>10</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>M.</given-names>
            <surname>Miwa</surname>
          </string-name>
          , J. Thomas,
          <string-name>
            <given-names>A. O</given-names>
            <surname>'Mara-Eves</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ananiadou</surname>
          </string-name>
          , “
          <article-title>Reducing Systematic Review Workload Through Certainty-based Screening,”</article-title>
          <source>Journal of Biomedical Informatics</source>
          , vol.
          <volume>51</volume>
          , pp.
          <fpage>242</fpage>
          -
          <lpage>253</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>S.</given-names>
            <surname>Paisley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sevra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Archer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Preston</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Chilcott</surname>
          </string-name>
          , “
          <article-title>Identifying Potential Early Biomarkers of Acute Myocaridal Infarction in the Biomedical Literature: A Comparison of Text Mining and Manual Sifting Techniques,”</article-title>
          <source>in Proceedings of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) 19th Annual European Congress</source>
          , (Vienna, Austria),
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>I.</given-names>
            <surname>Shemilt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Park</surname>
          </string-name>
          , and J. Thomas, “
          <article-title>Use of Cost-effectiveness Analysis to Compare the Efficiency of Study Identification Methods in Systematic Reviews</article-title>
          ,” Systematic reviews,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>A. O'Mara-Eves</surname>
            ,
            <given-names>J.</given-names>
            Thomas, J.
          </string-name>
          <string-name>
            <surname>McNaught</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Miwa</surname>
            , and
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ananiadou</surname>
          </string-name>
          , “
          <article-title>Using Text Mining for Study Identification in Systematic Reviews</article-title>
          ,” Systematic reviews,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Azzopardi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Spijker</surname>
          </string-name>
          , “
          <article-title>CLEF Technologically Assisted Reviews in Empirical Medicine Overview</article-title>
          ,” in Working Notes of CLEF 2017 -
          <article-title>Conference and Labs of the Evaluation forum</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          , (Dublin, Ireland),
          <source>CEUR-WS.org, September</source>
          <volume>11</volume>
          -14
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>L.</given-names>
            <surname>Goeuriot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Suominen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Névéol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Robert</surname>
          </string-name>
          , E. Kanoulas,
          <string-name>
            <given-names>R.</given-names>
            <surname>Spijker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Palotti</surname>
          </string-name>
          , and G. Zuccon, “
          <article-title>CLEF 2017 eHealth Evaluation Lab Overview</article-title>
          ,”
          <source>CLEF 2017 - 8th Conference and Labs of the Evaluation Forum, Lecture Notes in Computer Science (LNCS)</source>
          , Springer, September
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>V.</given-names>
            <surname>Nisenblat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Bossuyt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Farquhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Hull</surname>
          </string-name>
          , “
          <article-title>Imaging Modalities for the Non-invasive Diagnosis of Endometriosis,”</article-title>
          <source>Cochrane Database of Systematic Reviews</source>
          <year>2016</year>
          , vol.
          <volume>2</volume>
          , no.
          <source>CD009591</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>C. Williams</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Henschke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Maher</surname>
          </string-name>
          , M. van
          <string-name>
            <surname>Tulder</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Koes</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Macaskill</surname>
          </string-name>
          , and L. Irwig, “
          <article-title>Red Flags to Screen for Vertebral Fracture in Patients Presenting with Low-back Pain,”</article-title>
          <source>Cochrane Database of Systematic Reviews</source>
          <year>2013</year>
          , vol.
          <volume>1</volume>
          , no.
          <source>CD008643</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12. “[table, stopwords]
          <article-title>- pubmed help - ncbi bookshelf</article-title>
          .” [online] Available at: https://www.ncbi.nlm.nih.gov/books/NBK3827/table/pubmedhelp.T.stopwords/ [Accessed 7 May 2017].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>