<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Medical Query Expansion using Semantic Sources DBpedia and Wikidata</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sarah Dahir</string-name>
          <email>sarah.dahir2012@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jalil ElHassouni</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abderrahim El Qadi</string-name>
          <email>abderrahim.elqadi@um5.ac.ma</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hamid Bennis</string-name>
          <email>hamid.bennis@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ENSAM, Mohammed V University in Rabat</institution>
          ,
          <country country="MA">Morocco</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IMAGE Laboratory, SCIAM Team, Graduate School of Technology, Moulay Ismail University of Meknes</institution>
          ,
          <country country="MA">Morocco</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LRIT-CNRST (URAC'29), Faculty of Sciences, Rabat IT Center, Mohammed V University in Rabat</institution>
          ,
          <country country="MA">Morocco</country>
        </aff>
      </contrib-group>
      <fpage>195</fpage>
      <lpage>201</lpage>
      <abstract>
        <p>In this work, we consider DBpedia entities found within PubMed abstracts, as candidates for expansion, along with their associated labels (“rdfs:label”) in the DBpedia base. We evaluate our suggested approach using the MEDLINE collection and the Indri search engine. Our expansion approach leads to significant improvements, especially in terms of precision and Mean Average Precision (MAP), compared to related approaches that use only one domain-dependent/independent source.</p>
      </abstract>
      <kwd-group>
        <kwd>DBpedia</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>PubMed</kwd>
        <kwd>Query Expansion</kwd>
        <kwd>Wikidata</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Information Retrieval Systems (IRS) match the user
query against a collection of documents. As a result, a
subset of documents is returned. This subset is
considered relevant because it contains the query
terms. But sometimes the words of the user query are
different from those contained in the relevant
document set. This issue has been shown in various
studies, including some from the medical field.</p>
      <p>
        Covid-19 symptoms (fever, sore throat, shortness of
breath, loss of taste, and loss of smell), testing for
coronavirus, and preventive measures (face masks,
hand sanitizer, social distancing, and hand washing)
have become some of the most trending queries, along
with other search trends related to the aftermath of the
pandemic in several other domains, such as the
economy (e.g. unemployment and the stock market)
and education (e.g. school closures) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For instance, queries on the loss of smell reached
8% on March 23rd, 2020, and queries on testing for
corona reached 97% on April 13th [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>© 2021 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org).</p>
      <p>The lockdown caused by the pandemic increased,
more than ever, our need for better IR for medical
queries in general, especially since this type of query
lacks the technical terms that domain experts use in
web pages. This problem is often referred to as
vocabulary mismatch.</p>
      <p>
        One way to overcome this problem is to use query
expansion. This process adds new terms to the user
query based on association rules between terms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However, adding too many terms to the query can
be more harmful than adding a few [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Linked Data (http://linkeddata.org/) takes advantage of the Web to
connect related data [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. For this purpose, Uniform Resource Identifiers
(URI) and the Resource Description Framework (RDF)
are used, among other technologies and Linked Data
standards. Some of these sources are open and others
require a license agreement:
• DBpedia: a knowledge base that contains
structured information from Wikipedia. This
knowledge base describes 6 million entities,
including 5,000 diseases [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Among other things, it allows the annotation
of a text through the Web interface DBpedia
Spotlight (https://www.dbpedia-spotlight.org/demo/),
which performs Named Entity Recognition. Yet,
we noticed throughout our multiple accesses to
DBpedia Spotlight that the annotation stops
functioning from time to time. To be more
precise, we were unable to annotate texts using
this Web application three times in four years,
and whenever it stops functioning, it stays that
way for three to four days in a row.
• Wikidata: one of the largest datasets. It is a
free knowledge database project hosted by
Wikimedia, with 90,478,674 data items [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] (including concepts). Unlike other knowledge
bases, Wikidata may be edited by users.
Furthermore, it usually gives links that allow
browsing a resource in other databases like
MeSH, PubMed, Freebase, etc.
      </p>
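      <p>The Spotlight annotation mentioned above can be sketched as follows. This is a minimal illustrative sketch, not our actual implementation: it assumes the public DBpedia Spotlight REST endpoint with a JSON response listing recognized entities under “Resources” (each with an “@URI” field), and the confidence value is an arbitrary choice.</p>

```python
import json
from urllib import parse, request

# Assumption: the public DBpedia Spotlight annotation endpoint.
SPOTLIGHT = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate_url(text, confidence=0.5):
    """Build the GET request URL for annotating `text`."""
    query = parse.urlencode({"text": text, "confidence": confidence})
    return SPOTLIGHT + "?" + query

def spotlight_entities(text):
    """Return the DBpedia entity URIs recognized in `text` (network call)."""
    req = request.Request(annotate_url(text),
                          headers={"Accept": "application/json"})
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return [r["@URI"] for r in data.get("Resources", [])]
```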
      <p>In this work, we suggest expanding queries using
two Linked Data sources (DBpedia, Wikidata) along
with a search engine (PubMed) that allows searching
the MEDLINE database, and the National Library of
Medicine (NLM) controlled vocabulary thesaurus,
Medical Subject Headings (MeSH), which is used to
index PubMed articles.</p>
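      <p>As a minimal sketch of how a query fragment can be looked up in Wikidata, the code below uses the public MediaWiki API action wbsearchentities. It is an assumption-laden illustration rather than our exact implementation; in particular, reading the PubMed identifier off the first result (property P698, “PubMed ID”) is only indicated in a comment.</p>

```python
import json
from urllib import parse, request

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_url(term):
    """Build a wbsearchentities request for a query n-gram."""
    query = parse.urlencode({
        "action": "wbsearchentities",
        "search": term,
        "language": "en",
        "format": "json",
    })
    return WIKIDATA_API + "?" + query

def first_result_id(term):
    """QID of the first Wikidata search result (network call).

    The PubMed identifier can then be read from the entity's claims
    (property P698, "PubMed ID"), e.g. via action=wbgetclaims.
    """
    with request.urlopen(search_url(term)) as resp:
        hits = json.load(resp).get("search", [])
    return hits[0]["id"] if hits else None
```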
      <p>This paper is organized as follows: Section 2
discusses related work, Section 3 gives the
methodological details of our suggested approach, and
Section 4 presents its evaluation results and gives an
outlook on future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>Query Expansion (QE) plays a crucial role in
improving Web searches. The user’s initial query is
reformulated by adding meaningful terms with
similar significance. There are many query expansion
techniques:</p>
      <p>
        Linguistic analysis [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] - [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: deals with each query keyword separately from the
others, using for example the lexical database WordNet [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] - [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] - [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which has a limited coverage of concepts [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and a very small number of relationships (synonyms,
hypernyms, and hyponyms). Consequently, this kind of
technique cannot solve ambiguity issues [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ];
      </p>
      <p>
        Query-log analysis: exploits the information in log
files of earlier queries, like the click activity of the user.
But this technique requires large logs [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ];
      </p>
      <p>
        Linked Data techniques [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]: take into consideration the context of keywords. In
[16], the authors explore just a small number of DBpedia
properties, which means that important properties may
not have been exploited. In [17] and [18], DBpedia is
used to expand queries by using indexed terms from
feedback documents that share similar DBpedia
features with query terms.
      </p>
      <p>In the medical domain, Linked Data allows matching
the terms used by patients to those used by domain
experts. In [19], the authors used the “Unified Medical
Language System” (UMLS) database to determine
synonyms for phrases within the user query.</p>
      <p>In [20], the authors expanded medical queries using
only the MeSH thesaurus. After that, they extracted
documents based on the similarity between those
expanded queries and clusters of medical documents.
In our previous work [21], we used attribute (feature)
values from Wikidata to expand medical queries. For
this purpose, we considered only values that contained
a query term. However, Wikidata is not domain
specific; thus it lacks emphasis on medical data. But,
since Wikidata has links to numerous ontologies and
databases from different domains, we decided to
exploit one of those links that is specific to the medical
domain: the PubMed link.</p>
    </sec>
    <sec id="sec-3">
      <title>Proposed method</title>
      <p>Be it domain dependent or independent, Linked Data
or not, every external source has its advantages, its
limits, and its specificities. As a result, we suggest in
this work a medical query expansion approach (Figure 1)
that combines various sources, including two
knowledge bases from Linked Data, a medical
database, and a medical thesaurus, as explained in the
following steps:
1. We first look for the longest n-gram that
covers most (if not all) DBpedia entities
within the query and returns results in the
Wikidata search engine. In case the n-gram
does not feature all of the entities within the
query, we use other n-grams, featuring those
entities, as well. Table 1 shows an example of
the queries used from the MEDLINE collection.
Most of those queries are long (more than 4
keywords) and consist of many sentences.
As a result, we must shorten them to avoid
getting no results at all, while making sure
that we keep the most valuable keywords.
2. Then, we search the n-gram(s) in Wikidata.
3. After that, we browse the PubMed identifier
(“PubMed ID”) of the first result in
Wikidata.
4. Next, we perform Named-Entity Recognition
on the PubMed abstract of the previously
browsed page, using DBpedia.
5. Then, we consider the DBpedia entities within
the PubMed abstract as candidates for
expansion, along with their associated labels
(“rdfs:label”) in DBpedia.
6. Finally, we expand the query using the
entities, as well as their associated labels
from the previous step, that are also
available in the MeSH terms of the PubMed
page.</p>
      <p>Example query from Table 1: “renal amyloidosis as
a complication of tuberculosis and the effects of
steroids on this condition. only the terms kidney
diseases and nephrotic syndrome were selected by the
requester. prednisone and prednisolone are the only
steroids of interest.”</p>
      <p>To evaluate our approach, we used the MEDLINE
collection (Table 2). It is a set of articles from a medical
journal, which we indexed with a stop-word list using
the Indri search engine.</p>
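      <p>Step 1 above can be sketched as follows. This is a simplified reading of the selection rule, under our own assumptions: entity coverage is preferred first, then n-gram length, and the “returns results in the Wikidata search engine” condition is abstracted into a caller-supplied check; the function name and tie-breaking are illustrative, not the exact implementation.</p>

```python
def select_ngram(query, entities, has_results):
    """Pick the n-gram of `query` that covers the most entity mentions,
    preferring longer n-grams, among those for which `has_results`
    (a stand-in for the Wikidata search check) is true."""
    tokens = query.lower().split()
    best, best_key = None, (-1, -1)
    for n in range(len(tokens), 0, -1):          # longest n-grams first
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            if not has_results(gram):
                continue
            key = (sum(e.lower() in gram for e in entities), n)
            if key > best_key:                   # strictly better only
                best_key, best = key, gram
    return best
```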
      <p>Table 2 reports the number of topics, the total
number of tokens, the total number of distinct (unique)
tokens, and the average number of tokens per text.</p>
      <p>For the implementation of our approach, we used the
Kullback-Leibler (KL) [22] IR model [23]. In KL (1), we
compare the document’s model with the query’s
model:</p>
      <p>D(P || Q) = ∑x P(x) log(P(x) / Q(x)) (1)</p>
      <p>Where P and Q are discrete probability distributions
defined on the same probability space.</p>
      <p>We use Dirichlet smoothing to avoid getting a null
result when a term is not present in the created
language model.</p>
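      <p>To make the scoring concrete, here is a small sketch of KL-based ranking with Dirichlet smoothing. The toy corpus, the μ value, and the whitespace tokenization are our illustrative assumptions, not the paper’s actual settings; function names are hypothetical.</p>

```python
import math
from collections import Counter

def dirichlet_lm(text, coll_counts, coll_len, mu=10):
    """Dirichlet-smoothed unigram model: p(t) = (tf + mu*p_coll) / (len + mu)."""
    tf = Counter(text.split())
    length = sum(tf.values())
    def p(term):
        p_coll = coll_counts.get(term, 0) / coll_len
        return (tf.get(term, 0) + mu * p_coll) / (length + mu)
    return p

def kl_score(query, p_q, p_d):
    """KL divergence of the query model from the document model,
    restricted to query terms (lower = better match)."""
    score = 0.0
    for t in set(query.split()):
        if p_q(t) > 0 and p_d(t) > 0:
            score += p_q(t) * math.log(p_q(t) / p_d(t))
    return score
```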
      <sec id="sec-3-1">
        <title>Evaluation metrics</title>
        <p>In this work we used the following evaluation
measures:</p>
        <p>Precision (2): a measure that indicates how efficient
a system is at retrieving only relevant documents [24]:</p>
        <p>Precision = Number of relevant retrieved documents / Number of retrieved documents (2)</p>
        <p>Precision at rank N is evaluated by considering
only the top N results returned by the system.</p>
        <p>Mean Average Precision (MAP) (3): the MAP for
a set of queries is the mean of the Average Precision
(AveP) scores over those queries [25]:</p>
        <p>MAP = (∑q=1..Q AveP(q)) / Q (3)</p>
        <p>Where Q is the number of queries, and:</p>
        <p>AveP = (∑k=1..n P(k)·rel(k)) / Number of relevant documents (4)</p>
        <p>Where rel(k) is equal to 1 if the element at rank
k is a relevant document, and zero otherwise [25].</p>
        <p>Normalized Discounted Cumulative Gain (nDCG)
(5): measures the quality of the ranking by dividing
the Discounted Cumulative Gain (DCG) by the Ideal
Discounted Cumulative Gain (IDCG) [26]:</p>
        <p>NDCGp = DCGp / IDCGp (5)</p>
        <p>DCGp = ∑i=1..p (2^reli − 1) / log2(i + 1) (6)</p>
        <p>With reli the relevance score of document i,
obtained after document retrieval using an IR
model. And:</p>
        <p>IDCGp = ∑i=1..|REL| (2^reli − 1) / log2(i + 1) (7)</p>
        <p>Where |REL| is the list of relevant documents
ranked by their relevancy in the corpus.</p>
        <p>Mean Reciprocal Rank (MRR) (8): the Reciprocal
Rank (RR) is the multiplicative inverse of the rank of
the first exact answer [27], and the MRR is the average
of the RR over multiple queries Q [27]:</p>
        <p>MRR = (1/|Q|) ∑i=1..|Q| 1/ranki (8)</p>
        <p>Where ranki is the rank position of the first
relevant document for the i-th query [27].</p>
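        <p>As a sanity check of equations (3), (4) and (8), the following sketch computes AveP, MAP and MRR from binary relevance lists. One simplification is our own assumption: AveP is normalized by the relevant documents present in the ranked list, which equals the collection count only when all relevant documents are retrieved.</p>

```python
def average_precision(rels):
    """AveP over a ranked list of 0/1 relevance flags (eq. 4).
    Simplification: normalizes by relevant documents in the list."""
    hits, total = 0, 0.0
    for k, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / k          # P(k) * rel(k)
    return total / hits if hits else 0.0

def mean_average_precision(runs):
    """MAP: mean of AveP over all queries (eq. 3)."""
    return sum(average_precision(r) for r in runs) / len(runs)

def mean_reciprocal_rank(runs):
    """MRR: average inverse rank of the first relevant result (eq. 8)."""
    total = 0.0
    for rels in runs:
        rank = next((k for k, r in enumerate(rels, start=1) if r), None)
        if rank:
            total += 1.0 / rank
    return total / len(runs)
```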
      </sec>
      <sec id="sec-3-2">
        <title>Results and discussion</title>
        <p>To evaluate our method (see Tables 3 and 4 and
Figure 2), we first compared it with the “Wikidata
expansion approach” [21], as we consider this work to
be quite comparable to [21]: both works use Wikidata
and are suitable for long queries. We also compared
our approach with a non-expansion approach (baseline)
and a DBpedia method that uses the DBpedia labels of
entities within the query for expansion. Second, we
compared our work with “Clusters’ Retrieval Derived
from Expanding Statistical Language Modeling
Similarity and Thesaurus-Query Expansion with
Thesaurus” (CRDESLM-QET) [20], because it uses
MeSH terms and is thus comparable to our work.</p>
        <p>We chose to compare the approaches at rank 30 for
most evaluation measures and at 10 or 20 for precision,
because users are more interested in the top results.</p>
        <p>Figure 2 shows the impact of using low and high
values of C on P@20; we varied the number of
expansion concepts over C=1, 2, 5, 10, 15, and 20.</p>
        <p>[Figure 2: P@20 as a function of the number of expansion concepts.]</p>
        <p>Based on the results in Table 3, our approach
outperformed the state-of-the-art work in [21]. The use
of KL to retrieve documents improved the P@20 of our
“Multi semantic sources expansion” approach by 5.4%
compared to the “Wikidata expansion approach” [21].
Also, our expansion approach in this work gave a 5.8%
improvement in terms of MAP, a 2.3% improvement in
terms of MRR, and a 4.4% increase in terms of NDCG
compared to the “Wikidata expansion approach” [21].
Similarly, our approach improved the results of both
the baseline and the DBpedia approach, which
performs better than Wikidata.</p>
        <p>From Table 4, our approach outperforms CRDESLM-QET
[20] in terms of P@10 by 10% and improves the
MAP of CRDESLM-QET [20] by 20.6%.</p>
        <p>From Figure 2, we noticed that using lower numbers
of concepts, especially C=5, leads to better results
compared to using higher numbers.</p>
        <p>We think that by increasing the number of expansion
terms, we increase the possibility of adding
non-relevant terms to the query.</p>
        <p>We believe that DBpedia improves Wikidata results
because in the DBpedia approach we use labels that
carry important information for the extraction of
documents that are relevant but use different terms to
refer to the user’s query. The Wikidata approach, by
contrast, uses terms that may lead to the extraction of
documents that are related to the query but do not
necessarily correspond to the user’s intent.</p>
        <p>Moreover, we reckon that our approach outperformed
the Wikidata expansion approach [21] because, unlike
the previous work [21] that uses only Wikidata to
expand queries, our multi semantic sources expansion
approach benefits from several semantic sources, some
of which are general or domain independent (DBpedia,
Wikidata) while others are related to the medical
domain (PubMed and MeSH). So, along with Wikidata,
we decided to use, in this work, some domain-specific
databases by taking advantage of identifier links (e.g.
PubMed ID) that are available on almost every
Wikidata page of a certain resource or concept. We had
promising results because PubMed is one of the most
valuable sources in the medical domain. Furthermore,
our approach can be applied to queries of any domain
by switching to other identifiers depending on the
domain of the query.</p>
        <p>As for CRDESLM-QET [20], it did not lead to high
results because it uses only MeSH terms. Although
MeSH terms are domain specific, they are very short
(formed of a few words) compared to PubMed
abstracts. Also, MeSH is only a thesaurus that follows
a tree structure. Consequently, it is not rich in terms of
vocabulary compared to Linked Data sources.</p>
        <p>In the future, we consider using other domain
specific linked data sources, such as UMLS, for
comparison purposes.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>CONCLUSION</title>
      <p>Throughout the lockdown that occurred in nearly all
countries, medical queries became some of the most
trending ones. As a matter of fact, the need for relevant
search results in this particular domain, at this moment,
pushed us to give more attention to this field and do
research in it.</p>
      <p>Our approach relies on various sources to determine
expansion concepts. Two of these sources are Linked
Open Data (LOD) sources; the others are a search
engine over a medical database (PubMed) and a
controlled vocabulary (MeSH).</p>
      <p>Since our suggested expansion approach, which uses
domain independent as well as domain dependent
semantic sources, outperforms our DBpedia approach
and the expansion approaches from earlier works [20]
and [21], we may say that multiplying semantic
sources in Automatic Query Expansion and exploiting
domain specific sources, like PubMed and MeSH,
helps improve retrieval results. Using low numbers of
expansion concepts also helps improve retrieval
results. Moreover, our new approach can be used for
any collection of documents, and not only for
collections in the medical domain, because Wikidata
varies the (identifier) links to a resource in other
databases depending on the domain of the query.</p>
      <p>In the future, we will try to further improve the
results using other specific databases.</p>
      <p>[16] Augenstein, I., Gentile, A.L., Norton, B., Zhang,
Z., and Ciravegna, F.: Mapping Keywords to Linked
Data Resources for Automatic Query Expansion. The
Semantic Web: ESWC 2013 Satellite Events. Lecture
Notes in Computer Science, vol. 7955. Springer,
Berlin, Heidelberg (2013)</p>
      <p>[17] Dahir, S., El Qadi, A., and Bennis, H.: Enriching
User Queries Using DBpedia Features and Relevance
Feedback. Procedia Computer Science, Vol. 127,
Issue C, pp. 499-504 (2018)</p>
      <p>[18] Dahir, S., El Qadi, A., and Bennis, H.: An
Association Based Query Expansion Approach Using
Linked Data. In 2018 9th International Symposium on
Signal, Image, Video and Communications (ISIVC),
pp. 340-344. IEEE (2018)</p>
      <p>[19] Le Maguer, S., Hamon, T., Grabar, N., and
Claveau, V.: Recherche d'information médicale pour
le patient : impact de ressources terminologiques.
Conférence en Recherche d'Information et
Applications, CORIA 2015, Mar 2015, Paris, France.
Actes de la conférence CORIA (2015)</p>
      <p>[20] Keyvanpour, M., and Serpush, F.: ESLMT: a
new clustering method for biomedical document
retrieval. Biomedical Engineering/Biomedizinische
Technik, 64(6), pp. 729-741 (2019)</p>
      <p>[21] Dahir, S., El Qadi, A., and Bennis, H.: Query
expansion using Wikidata attributes' values. In Third
International Conference on Computing and Wireless
Communication Systems, ICCWCS 2019. European
Alliance for Innovation (EAI) (2019)</p>
      <p>[22] Boughanem, M., Kraaij, W., and Nie, J.Y.:
Modèles de langue pour la recherche d'information.
In: Les systèmes de recherche d'informations, Madjid
Ihadjadene (Ed.), Hermès-Lavoisier, pp. 163-182
(2004)</p>
      <p>[23] Lemur Retrieval Applications.
http://www.lemurproject.org/lemur/retrieval.php</p>
      <p>[24] Common Evaluation Measures.
https://trec.nist.gov/pubs/trec10/appendices/measures.pdf</p>
      <p>[25] Wikipedia contributors: Evaluation measures
(information retrieval). Wikipedia, The Free
Encyclopedia, 23 Mar. 2019. Web. 17 Apr. 2019.</p>
      <p>[26] Goharian, N.: Information Retrieval Evaluation,
COSC 488.
https://www.coursehero.com/file/8847955/Evaluation/</p>
      <p>[27] Wikipedia contributors: Mean reciprocal rank.
In Wikipedia, The Free Encyclopedia (2018, December
6). Retrieved 12:41, April 28, 2020 from
https://en.wikipedia.org/w/index.php?title=Mean_reciprocal_rank&amp;oldid=872349108</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Coronavirus search trends. Accessed on 18/04/2020 from: https://trends.google.com/trends/story/</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bouziri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Latiri</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaussier</surname>
          </string-name>
          , É.:
          <article-title>Expansion de requêtes par apprentissage</article-title>
          .
          <source>Conférence en Recherche d'Informations et Applications</source>
          (
          <year>2016</year>
          )
          <fpage>200</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Keikha</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ensan</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bagheri</surname>
          </string-name>
          , E.:
          <article-title>Query expansion using pseudo relevance feedback on wikipedia</article-title>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Linked open data. Accessed on 18/04/2020 from: https://wiki.digitalclassicist.org/Linked_open_data</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] DBpedia version 2016-04 | DBpedia [Internet]. [cited 2020 Oct 31]. Available from: https://wiki.dbpedia.org/dbpedia-version-2016-04</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Wikidata [Internet]. [cited 2020 Oct 31]. Available from: https://www.wikidata.org/wiki/Wikidata:Main_Page</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Moreau</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Claveau</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>and Sébillot P.</surname>
          </string-name>
          :
          <article-title>Automatic morphological query expansion using analogy-based machine learning</article-title>
          .
          <source>ECIR'07 - 29th Eur. Conf. Inf. Retr.</source>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>233</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Bhogal</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macfarlane</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>P.:</given-names>
          </string-name>
          <article-title>A review of ontology based query expansion</article-title>
          .
          <source>Inf. Process. Manag.</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>866</fpage>
          -
          <lpage>886</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mittal</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tayal</surname>
            ,
            <given-names>D. K.</given-names>
          </string-name>
          :
          <article-title>Automatically incorporating context meaning for query expansion using graph connectivity measures</article-title>
          .
          <source>Progress in Artificial Intelligence</source>
          , Volume
          <volume>2</volume>
          ,
          Issue 2-
          <issue>3</issue>
          , pp.
          <fpage>129</fpage>
          -
          <lpage>139</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Azad</surname>
            ,
            <given-names>H.K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Deepak</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>A New Approach for Query Expansion using Wikipedia and WordNet</article-title>
          . arXiv preprint arXiv:
          <year>1901</year>
          .
          <volume>10197</volume>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Dahir</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khalifi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>El Qadi</surname>
            ,
            <given-names>A..</given-names>
          </string-name>
          <article-title>Query Expansion Using DBpedia and WordNet</article-title>
          .
          <source>In Proceedings of the ArabWIC 6th Annual International Conference Research Track</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          ) (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Sinha</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mihalcea</surname>
          </string-name>
          , R.:
          <article-title>Unsupervised graphbased word sense disambiguation using measures of word semantic similarity</article-title>
          .
          <source>In: Proceedings of ICSC</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Carpineto</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Romano</surname>
          </string-name>
          , G.:
          <article-title>A Survey of Automatic Query Expansion in Information Retrieval</article-title>
          .
          <source>ACM Comput. Surv.</source>
          , vol.
          <volume>44</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>50</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Guisado-Gámez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dominguez-Sal</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Larriba-Pey</surname>
          </string-name>
          , J.-L.:
          <article-title>Massive Query Expansion by Exploiting Graph Knowledge Bases for Image Retrieval</article-title>
          .
          <source>Proc. Int. Conf. Multimed. Retr., no. i</source>
          , pp.
          <fpage>33:33</fpage>
          -
          <lpage>33:40</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Abbes</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          et al.:
          <article-title>Apport du Web et du Web de Données pour la recherche d'attributs</article-title>
          .
          <source>Conférence en Recherche d'Information et Applications - CORIA</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>