<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>SINAI at CLEF eHealth 2017 Task 3</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Manuel Carlos Díaz-Galiano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. Teresa Martín-Valdivia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Salud María Jiménez-Zafra</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alberto Andreu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>L. Alfonso Ureña López</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Universidad de Jaén, Campus Las Lagunillas</institution>
          ,
          <addr-line>E-23071, Jaén</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we present the participation of the SINAI research group of the University of Jaén in Task 3: Patient-Centred Information Retrieval. Although only two runs could be submitted, we tried several strategies using different models and parameters in order to check the effectiveness of our system. Our three main approaches apply query feedback using MeSH expansion, the Google search engine, and a Word2Vec model trained on Wikipedia. Finally, we submitted two runs for the ad-hoc task: the first one uses Google and the second one applies Word2Vec using health-related pages extracted from Wikipedia.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The Internet is an important source of health information, not only for medical
professionals but also for patients and lay users. Every day, more and more
users search for medical information. However, the terminology and
understanding of professional and non-professional users differ greatly. In
this paper, we describe our participation in CLEF eHealth 2017 Task 3:
Patient-Centred Information Retrieval [6]. The CLEF eHealth lab aims to evaluate the
effectiveness of information retrieval systems when searching for health content
on the web. Since 2013, the eHealth shared task organized by CLEF [
        <xref ref-type="bibr" rid="ref3">7,5,3,8,4</xref>
        ]
has focused on studying medical information retrieval from the patient's
point of view, assuming that this kind of user has more difficulty
understanding documents in the health domain. In 2017, CLEF eHealth Task 3,
Patient-Centred Information Retrieval, continues to focus on evaluating the
effectiveness of information retrieval systems on the web [4]. The topics provided by the
organizers are the same as those of 2016, with the aim of improving the
relevance assessment pool and the reusability of the collection.
      </p>
      <p>The 2016 topics were developed by mining health web forums where users
were seeking advice about specific symptoms, diagnoses, conditions or
treatments. For each forum post, a set of 6 query variants was generated, representing
different ways to express the same information need.</p>
    </sec>
    <sec id="sec-2">
      <title>Method</title>
      <p>In this section, we present the different strategies that we followed in our
participation in CLEF eHealth 2017 Task 3, Patient-Centred Information
Retrieval: IRTask 1, ad-hoc search.</p>
      <sec id="sec-2-1">
        <title>System description</title>
        <p>
          Although our research group, SINAI, has extensive experience participating in
several tasks in previous editions of CLEF, mainly ImageCLEFmed [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], this is the
first time we have participated in CLEF eHealth. We tried three main approaches,
all of them focused on integrating external knowledge in order to enrich the query:
- Including terms extracted from MeSH.
- Including information retrieved from Google.
- Including terms extracted from Wikipedia.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Preprocessing and indexing</title>
        <p>We used the ClueWeb12 B13 corpus (http://lemurproject.org/clueweb12/) and the Lemur IR System (http://lemurproject.org/). Specifically,
we used the Indri search engine for indexing with several default parameters,
preprocessing the documents by removing stopwords and stemming words with the Krovetz algorithm.
In addition, we used the Dirichlet prior retrieval method with μ = 2500.</p>
        <p>For the queries, we also applied stopword removal and the Krovetz stemmer.</p>
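        <p>The indexing and retrieval settings described above can be captured in Indri parameter files. The fragment below is only an illustrative sketch: the index and corpus paths are placeholders, and the stopword list is abbreviated.</p>

```xml
<!-- Hypothetical IndriBuildIndex parameters: Krovetz stemming plus stopword removal -->
<parameters>
  <index>/path/to/clueweb12_b13_index</index>
  <corpus>
    <path>/path/to/ClueWeb12-B13</path>
    <class>warc</class>
  </corpus>
  <stemmer><name>krovetz</name></stemmer>
  <stopper>
    <word>a</word><word>an</word><word>the</word>
    <!-- ... remaining stopwords ... -->
  </stopper>
</parameters>

<!-- Hypothetical IndriRunQuery parameters: Dirichlet prior smoothing with mu = 2500 -->
<parameters>
  <index>/path/to/clueweb12_b13_index</index>
  <rule>method:dirichlet,mu:2500</rule>
  <count>1000</count>
</parameters>
```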
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Experiments</title>
      <sec id="sec-3-1">
        <title>MeSH approach</title>
        <p>
          Our first approach was to apply the query expansion strategy using MeSH that
we had used in other CLEF tasks in previous years [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The main goal is to
integrate medical knowledge in order to semantically enrich the query. However,
when we tested the results against the 2016 assessments, the results were very
poor, even worse than the baseline. We think the main reason for this is
that the collection and the queries are written by non-professionals in medicine.
Thus, we need to integrate other kinds of information with more informal writing
instead of using the technical terms extracted from MeSH.
        </p>
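        <p>As a sketch of this kind of expansion, a query term can be mapped to the entry terms of its MeSH descriptor. The toy dictionary below stands in for a real MeSH thesaurus lookup; the terms shown are illustrative, not taken from our actual runs.</p>

```python
# Hypothetical MeSH-based expansion: map each query term to the entry
# terms of its MeSH descriptor (toy dictionary in place of real MeSH).
MESH_ENTRY_TERMS = {
    "heartburn": ["pyrosis", "acid reflux"],
    "myopia": ["nearsightedness", "short sightedness"],
}

def mesh_expand(query):
    """Return the query terms plus any MeSH entry terms found for them."""
    terms = query.lower().split()
    expansion = []
    for t in terms:
        expansion.extend(MESH_ENTRY_TERMS.get(t, []))
    return terms + expansion

print(mesh_expand("heartburn remedies"))
# -> ['heartburn', 'remedies', 'pyrosis', 'acid reflux']
```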
      </sec>
      <sec id="sec-3-2">
        <title>Google approach</title>
        <p>Since the collection and queries are designed to simulate typical searches on
the web, we tried to integrate knowledge from the most popular web
search engine, i.e., Google. Thus, we first launch a query on Google and then
carry out experiments with different parameters:
- Replace the query by the titles of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}
- Replace the query by the snippets of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}
- Replace the query by the titles and snippets of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}
- Include in the query the titles of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}, plus the original query
- Include in the query the snippets of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}, plus the original query
- Include in the query the titles and snippets of the top X retrieved documents, with X = {1, 2, 3, 4, 5, 10}, plus the original query</p>
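        <p>The reformulation variants above can be sketched with a single hypothetical helper; the paper does not specify how the Google results were fetched, so the (title, snippet) pairs below are illustrative input.</p>

```python
# Sketch of the query-reformulation variants: build a new query from the
# top-X (title, snippet) pairs, optionally keeping the original query.
def expand_query(original, results, top_x, use_titles=True,
                 use_snippets=True, keep_original=True):
    parts = []
    for title, snippet in results[:top_x]:
        if use_titles:
            parts.append(title)
        if use_snippets:
            parts.append(snippet)
    if keep_original:
        parts.append(original)  # keeping the original query helped most
    return " ".join(parts)

# Illustrative results for the query "sore knee".
results = [("Knee pain causes", "Common causes of knee pain include..."),
           ("Treating knee pain", "Rest and ice can relieve...")]
print(expand_query("sore knee", results, top_x=1, use_snippets=False))
# -> Knee pain causes sore knee
```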
        <p>We evaluated the experiments using the 2016 relevance assessments. The
results are slightly better than those of the MeSH approach, although only the
experiments including the information of 5 and 10 documents beat the baseline. It
is worth mentioning that including the original query always improves the
results. Of course, run time grows as the number of documents increases,
the experiment with 10 documents being the slowest. In the end, we
selected the experiment including the titles and snippets of the top 10 retrieved
documents plus the original query, because it is the one with the highest precision.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Wikipedia approach</title>
        <p>Finally, we integrated information by including word vectors obtained from
Word2Vec. We applied two different approaches to create our model: one
using the whole of Wikipedia (Wikipedia-All) and another using only pages
related to health categories in Wikipedia (Wikipedia-Health). To create
Wikipedia-Health we obtained all the pages included in the Health category
and its subcategories, descending four levels of subcategories. In total, we
downloaded 80,765 pages from 13,279 categories.</p>
        <p>To expand the original query, we calculate a vector for each word in the
query. Next, we find the centroid of these vectors by computing their average.
Finally, we obtain the words whose vectors are nearest to the centroid, using
the proximity value as the weight of each expansion word.</p>
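        <p>The centroid step can be sketched as follows. A tiny hand-made vocabulary stands in for the Word2Vec model trained on Wikipedia; all words, vectors, and the cutoff k are illustrative.</p>

```python
import numpy as np

# Toy word->vector lookup standing in for the Word2Vec model.
vectors = {
    "knee": np.array([1.0, 0.0]),
    "pain": np.array([0.0, 1.0]),
    "ache": np.array([0.1, 0.9]),    # close to the query centroid
    "car":  np.array([-1.0, 0.0]),   # unrelated
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(query_words, k=1):
    # Centroid = average of the query word vectors.
    centroid = np.mean([vectors[w] for w in query_words], axis=0)
    # Rank the remaining vocabulary by proximity to the centroid; the
    # similarity score is used as the expansion term's weight.
    scored = [(w, cosine(vectors[w], centroid))
              for w in vectors if w not in query_words]
    return sorted(scored, key=lambda t: -t[1])[:k]

print(expand(["knee", "pain"]))  # "ache" is the nearest expansion term
```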
        <p>Although Word2Vec models usually work better the more documents are
included, in this case Wikipedia-Health seems to be more effective than
the whole of Wikipedia. Evaluating with the 2016 assessments, both
approaches slightly beat the baseline, although Wikipedia-Health works
a bit better. For these reasons, we selected this last experiment to be
submitted to CLEF eHealth 2017.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Results</title>
        <p>After running all the experiments described in the previous sections, we selected
only two of them to be presented at CLEF eHealth 2017: the best one from the
Google approach and the best one from the Wikipedia approach.</p>
        <p>- SINAI-Run1: experiment including the titles and snippets of the top 10
retrieved documents plus the original query.
- SINAI-Run2: experiment including one word obtained using the Word2Vec model
generated from Wikipedia-Health.</p>
        <p>Unfortunately, the assessments and official results will not be available
until shortly before the conference, so we cannot include the official
evaluation of our system.</p>
        <p>Table 1 shows results obtained with 2016 relevance judgments.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This work has been partially supported by a grant from the Ministerio de
Educación, Cultura y Deporte (MECD, scholarship FPU014/00983), the Fondo Europeo
de Desarrollo Regional (FEDER) and the REDES project (TIN2015-65136-C2-1-R)
of the Spanish Government.</p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>4. Goeuriot, L., Kelly, L., Suominen, H., Névéol, A., Robert, A., Kanoulas, E., Spijker, R., Palotti, J., Zuccon, G.: CLEF 2017 eHealth evaluation lab overview. In: CLEF 2017 - 8th Conference and Labs of the Evaluation Forum. Springer (2017)
5. Kelly, L., Goeuriot, L., Suominen, H., Schreck, T., Leroy, G., Mowery, D.L., Velupillai, S., Chapman, W.W., Martinez, D., Zuccon, G., et al.: Overview of the ShARe/CLEF eHealth evaluation lab 2014. In: International Conference of the Cross-Language Evaluation Forum for European Languages. pp. 172-191. Springer (2014)
6. Palotti, J., Zuccon, G., Jimmy, Pecina, P., Lupu, M., Goeuriot, L., Kelly, L., Hanbury, A.: CLEF 2017 task overview: The IR task at the eHealth evaluation lab. In: Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings (2017)
7. Suominen, H., Salanterä, S., Velupillai, S., Chapman, W.W., Savova, G., Elhadad, N., Pradhan, S., South, B.R., Mowery, D.L., Jones, G.J., et al.: Overview of the ShARe/CLEF eHealth evaluation lab 2013. In: International Conference of the Cross-Language Evaluation Forum for European Languages. pp. 212-231. Springer (2013)
8. Zuccon, G., Palotti, J., Goeuriot, L., Kelly, L., Lupu, M., Pecina, P., Mueller, H., Budaher, J., Deacon, A.: The IR task at the CLEF eHealth evaluation lab 2016: user-centred health information retrieval. In: CLEF 2016 - Conference and Labs of the Evaluation Forum. vol. 1609, pp. 15-27 (2016)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Díaz-Galiano, M.C., García-Cumbreras, M., Martín-Valdivia, M.T., Montejo-Ráez, A., Ureña-López, L.: Integrating MeSH ontology to improve medical information retrieval. In: Workshop of the Cross-Language Evaluation Forum for European Languages. pp. 601-606. Springer (2007)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Díaz-Galiano, M.C., García-Cumbreras, M., Martín-Valdivia, M., Ureña-López, L., Montejo-Ráez, A.: SINAI at ImageCLEFmed 2008. In: Peters, C., Ferro, N. (eds.) CLEF (Working Notes). CEUR Workshop Proceedings, vol. 1174. CEUR-WS.org (2008)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Goeuriot, L., Kelly, L., Suominen, H., Hanlen, L., Névéol, A., Grouin, C., Palotti, J., Zuccon, G.: Overview of the CLEF eHealth evaluation lab 2015. In: International Conference of the Cross-Language Evaluation Forum for European Languages. pp. 429-443. Springer (2015)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>