<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Measuring Similarity between a Pair of Words Using WordNet</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Atul Gupta</string-name>
          <email>atulpsit3883@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kalpana Sharma</string-name>
          <email>kalpanasharma56@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Krishan Kumar Goel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Application, Raja Balwant Singh Management Technical Campus</institution>
          ,
          <addr-line>Agra</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Engineering, Bhagwant University</institution>
          ,
          <addr-line>Ajmer</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <fpage>23</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>In the current era, digital data has grown enormously in size and is available abundantly. Retrieving relevant and accurate information from the available data is still a big challenge. In this paper we find the similarity of noun-noun pairs and verb-verb pairs using WordNet as the corpus. The similarity between noun-noun pairs in a sentence is computed and analysed using different semantic algorithms. It has been observed that computing the similarity between verb pairs is not as easy as computing the similarity between noun pairs. Two challenges were observed during the experimentation for this work: first, no standard data set is available for verb pairs, and second, no exact hierarchy of verbs is available in WordNet.</p>
      </abstract>
      <kwd-group kwd-group-type="author">
        <kwd>WordNet</kwd>
        <kwd>Similarity Measure</kwd>
        <kwd>Semantic Similarity</kwd>
        <kwd>IS-A relationship</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <p>Internet searching has become an integral part of life. There are a lot of search engines available, but they
still face some unsolved challenges, such as not being able to provide accurate and exact search results. For example, if
someone searches for "Lincoln a Car", the engine will return the car brand but will also provide output
about Abraham Lincoln. The search is not about President Abraham Lincoln, but the search engine will
display these results as well.</p>
        <p>In the given example, "Car" and the name "Lincoln" are identified as a noun-noun pair, so
semantically the two nouns should be related together and the results retrieved according to the given context.
Computation of similarity between word pairs (noun-noun or verb-verb) and sentence pairs is still a huge
problem for researchers who work in the fields of search engines, gene prioritization and NLP.
Measuring similarity between words is possible only in a fixed domain, for example the medical domain,
the engineering domain, etc. Computing the similarity between noun pairs and verb pairs is done using a lexical
database.</p>
        <p>Words are connected in the form of a lexical chain in the lexical database, e.g., Lucknow → City and
Lucknow → Capital of Uttar Pradesh.</p>
      </sec>
      <sec id="sec-1-8">
        <p>
          Different semantic measure algorithms have been
developed to compute the semantic closeness of a pair of words using the lexically connected database,
WordNet, in which words are present in hierarchical form. Various approaches have been
implemented previously that use WordNet as the lexical database. George A. Miller [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], a psychology
professor at Princeton University, developed WordNet in 1985.
        </p>
      </sec>
      <sec id="sec-1-9">
        <p>2020 Copyright for this paper by its authors.</p>
        <p>Words in the WordNet corpus are arranged in synonymous groups called synsets. There are 207016
word-pairs present in compressed form. WordNet contains different types of semantic relationships,
such as synonyms, antonyms, hyponyms and meronyms, across nouns, verbs, adverbs and adjectives:</p>
        <p>hypernym (concept to superordinate): word1="breakfast" → word2="meal";
hyponym (concept to sub-type): word1="plant" → word2="tree";
meronym (part to whole): word1="course" → word2="meal";
holonym (whole to part): word1="table" → word2="leg";
antonym (opposite): word1="leader" → word2="follower".</p>
        <p>
          The co-occurrence approach [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] formalizes the concept in terms of information retrieval. In the
word co-occurrence methodology there is a word list, and for each word in the word list a meaningful
word is connected. The query is retrieved by creating a vector. The ordering and context of the particular
search query are entirely unmeasured, which is the major drawback of this methodology. In the lexical database
methodology [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], similarity computation is done over the pre-defined WordNet hierarchy, which is
arranged in a tree-like structure [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The web search engine methodology also computes similarity, but sometimes
a word with the opposite meaning occurs in the same webpage, which adversely influences the
calculated similarity value. This methodology was developed as the Google Similarity Distance [4].
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Overview of Different Similarity Measure Techniques</title>
      <p>
        The computation of similarity between word pairs, i.e. noun-noun pairs and verb-verb pairs, is done
using the WordNet ontology. WordNet was developed by professor G. A. Miller [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and is managed by the
Laboratory of Cognitive Science at Princeton University. The ontology contains three databases: noun,
verb and adverb-adjective. In this ontology, words are organized in the form of synsets. This paper
focuses on the IS-A relationship between noun pairs and verb pairs. There are several hierarchies present in
WordNet, and all the hierarchies are subsumed under a common root node. Similarity approaches are
classified into different forms:
(a) Similarity calculation by distance-based methodology
(b) Similarity calculation by Information Content based methodology
(c) Similarity calculation by Feature based methodology
      </p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Similarity calculation by distance-based methodology</title>
      <sec id="sec-3-1">
        <p>In this approach similarity is computed by measuring the distance between two words. This is also
called the edge-counting methodology. The path length between a pair of words is calculated to
measure the similarity among a group of words. The similarity score measured by this approach is in
discrete form, so normalization is applied. Various path-based algorithms have been developed,
including Leacock-Chodorow [7] and Wu and Palmer [9].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>2.1.1. Leacock-Chodorow Similarity Approach</title>
      <sec id="sec-4-1">
        <p>The LCH approach [7] is based on shortest path length: similarity is computed from the length of the
shortest path between the noun pair in the WordNet IS-A hierarchy, i.e. the path with the fewest
intermediate nodes between the two words. The shortest path value is scaled by the depth factor D,
where the depth is the longest path from the root to a leaf in the WordNet hierarchy. The similarity is
calculated by:</p>
        <p>Similaritylch(w1, w2) = −log(minimum(length(w1, w2)) / (2 × D))   (1)</p>
        <p>where w1 denotes the first word and w2 the second word, minimum(length(w1, w2)) denotes the
minimum path length between the word pair w1 and w2, and the depth factor D is the maximum depth
from root to leaf in WordNet.</p>
      </sec>
      <sec id="sec-4-2">
        <p>The LCH approach is simple: the similarity between a word pair is computed by counting the number of
links between the words.</p>
      </sec>
    </sec>
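    <p>The LCH computation above can be sketched in a few lines of Python. The toy IS-A hierarchy, the depth value D and the path-length convention below are illustrative assumptions for the sketch, not real WordNet data:</p>
    <preformat>
```python
import math

# A minimal sketch of the Leacock-Chodorow measure, eq. (1), on a toy
# IS-A taxonomy. The hierarchy and D below are hypothetical, not WordNet.
parent = {"cat": "feline", "feline": "carnivore", "dog": "canine",
          "canine": "carnivore", "carnivore": "animal", "animal": "entity"}

def ancestors(word):
    """Chain from the word up to the root, inclusive."""
    chain = [word]
    while word in parent:
        word = parent[word]
        chain.append(word)
    return chain

def shortest_path(w1, w2):
    """Number of edges on the path through the lowest common ancestor."""
    a1, a2 = ancestors(w1), ancestors(w2)
    lca = next(c for c in a1 if c in a2)  # first shared ancestor is the lowest
    return a1.index(lca) + a2.index(lca)

D = 4  # longest root-to-leaf path in this toy hierarchy (entity down to cat)

def lch_similarity(w1, w2):
    # shift the path length by 1 so identical words do not take log(0)
    return -math.log((shortest_path(w1, w2) + 1) / (2.0 * D))

print(round(lch_similarity("cat", "dog"), 3))  # prints 0.47
```
    </preformat>
    <p>Because the path length appears inside a negative logarithm, closer word pairs receive larger similarity scores, and identical words receive the maximum score for the chosen depth.</p>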
    <sec id="sec-5">
      <title>2.1.2. Wu and Palmer Similarity Approach</title>
      <sec id="sec-5-1">
        <p>The Wu and Palmer similarity approach [9] focuses on the path length between a word pair and is based
on the most specific common predecessor node, called the Lowest Common Subsumer (LCS) node. The
similarity between two words in an IS-A relationship in the WordNet ontology is computed in this
manner. This method performs well on the verb ontology and on other parts of speech where words are
arranged in a hierarchical structure. The similarity is calculated by:</p>
        <p>Similaritywu&amp;palmer(w1, w2) = 2 × C / (A + B + 2 × C)   (2)</p>
      </sec>
      <sec id="sec-5-2">
        <p>where A and B are the counts of IS-A links from the words w1 and w2 to their most specific common
ancestor node, and C is the depth of that node, computed from the node in WordNet to the root of
WordNet. Because the approach is based on the lowest common subsumer of the two words, the
similarity value never becomes 0. Fig 2.1 compares the similarity between words as determined by the
LCH and Wu &amp; Palmer measures.</p>
      </sec>
    </sec>
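    <p>Equation (2) can be sketched directly on the same kind of toy hierarchy. The taxonomy below and the depth convention (node count from the root) are hypothetical choices for illustration:</p>
    <preformat>
```python
# A minimal sketch of Wu and Palmer similarity, eq. (2), on a hypothetical
# toy IS-A hierarchy (not real WordNet data).
parent = {"cat": "feline", "feline": "carnivore", "dog": "canine",
          "canine": "carnivore", "carnivore": "animal", "animal": "entity"}

def chain(word):
    out = [word]
    while word in parent:
        word = parent[word]
        out.append(word)
    return out

def wu_palmer(w1, w2):
    a1, a2 = chain(w1), chain(w2)
    lcs = next(c for c in a1 if c in a2)  # lowest common subsumer
    A, B = a1.index(lcs), a2.index(lcs)   # IS-A links from each word to the LCS
    C = len(chain(lcs))                   # node depth of the LCS from the root
    return (2.0 * C) / (A + B + 2.0 * C)

print(round(wu_palmer("cat", "dog"), 2))  # prints 0.6
```
    </preformat>
    <p>Since the denominator always contains the numerator, the score stays in (0, 1], and a word compared with itself scores exactly 1, matching the observation that the value never becomes 0.</p>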
    <sec id="sec-6">
      <title>2.2. Similarity calculation by Information Content based methodology</title>
      <sec id="sec-6-1">
        <p>In this class of approaches the similarity is calculated on the basis of information content. The
information content of a word is computed from the frequency of the word in the WordNet ontology,
where the frequency characteristic of the word is captured by the probability of occurrence of the word.
The information content of a word is computed as:</p>
        <p>Information Content IC(w) = −log(p(w))   (3)</p>
      </sec>
    </sec>
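    <p>Equation (3) can be sketched over toy corpus counts; the frequencies below are hypothetical, not WordNet's:</p>
    <preformat>
```python
import math

# Sketch of eq. (3): IC(w) = -log(p(w)), over hypothetical corpus counts.
freq = {"meal": 40, "breakfast": 8, "entity": 1000}
total = sum(freq.values())

def information_content(word):
    # rarer words carry more information content
    return -math.log(freq[word] / total)

print(information_content("breakfast") > information_content("meal"))  # prints True
```
    </preformat>
    <p>The negative logarithm makes frequent (general) words like a root concept carry little information, while rare (specific) words carry a lot.</p>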
    <sec id="sec-7">
      <title>2.2.1. Resnik Similarity Approach</title>
      <sec id="sec-7-1">
        <p>The Resnik [10] similarity approach is based on the information content (IC) value of the words. In this
approach the similarity value is calculated from how much information is shared between the words w1
and w2: if the words share more information the similarity value is high, otherwise it is low.</p>
      </sec>
      <sec id="sec-7-2">
        <title>The calculation of similarity is done by:</title>
        <p>Similarityresnik(w1, w2) = IC(lcs(w1, w2))   (4)</p>
      </sec>
      <sec id="sec-7-3">
        <p>This approach to computing similarity works on verbs as well as nouns, the justification being that both
parts of speech are structured in a hierarchical manner [5,6]. The value computed by Resnik similarity is
always greater than 0, and the information content value gives better results than path-based approaches.</p>
      </sec>
    </sec>
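    <p>Resnik's measure combines the taxonomy with the IC function: the similarity of a pair is the IC of their lowest common subsumer. The taxonomy and corpus counts below are hypothetical, for illustration only:</p>
    <preformat>
```python
import math

# Sketch of Resnik similarity: sim(w1, w2) = IC(lcs(w1, w2)).
# Taxonomy and corpus counts are hypothetical, not real WordNet data.
parent = {"breakfast": "meal", "dinner": "meal", "meal": "food", "food": "entity"}
freq = {"breakfast": 4, "dinner": 6, "meal": 20, "food": 100, "entity": 1000}
total = sum(freq.values())

def ic(word):
    return -math.log(freq[word] / total)

def lcs(w1, w2):
    def chain(w):
        out = [w]
        while w in parent:
            w = parent[w]
            out.append(w)
        return out
    c1 = chain(w1)
    return next(c for c in c1 if c in chain(w2))

def resnik(w1, w2):
    # information shared by the pair = IC of their lowest common subsumer
    return ic(lcs(w1, w2))

print(lcs("breakfast", "dinner"))  # prints meal
```
    </preformat>
    <p>A specific shared ancestor ("meal") yields a high score, while a pair whose only common ancestor is the root scores near zero, matching the intuition described above.</p>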
    <sec id="sec-8">
      <title>2.2.2. Similarity Approach of Jiang &amp; Conrath</title>
      <p>The Jiang and Conrath [11] approach depends on the information content of the LCS of the word pair
and on the semantic distance between the words. The distance is calculated by:</p>
      <p>Distancejiang&amp;conrath(w1, w2) = IC(w1) + IC(w2) − 2 × IC(lcs(w1, w2))   (5)</p>
      <sec id="sec-8-1">
        <p>Distancejiang&amp;conrath measures the dissimilarity between words: low values indicate more similar
words and high values indicate less similar words. The similarity is then taken as the inverse of the
distance:</p>
        <p>Similarityjiang&amp;conrath(w1, w2) = 1 / Distancejiang&amp;conrath(w1, w2)   (6)</p>
        <p>The Jiang and Conrath approach is related to Resnik's approach: the similarity found by this approach
is based on the commonality between the pair of words w1 and w2 and on the IC values of the words.
A special case needs to be handled when the distance value is 0.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>2.2.3. Lin Similarity Approach</title>
      <p>The Lin similarity approach [12] is based on the information shared by the word pair relative to the sum
of the information content values of the two words. Lin's concept relates the commonality of the word
pair, i.e. the information shared by the common ancestor node of w1 and w2, to the total amount of
information that describes w1 and w2 completely. Lin's similarity lies between 0 and 1: 0 denotes low
similarity, meaning the two words are from different contexts, and 1 denotes high similarity, meaning the
two words are the same or a word is compared with itself. The similarity is calculated by:</p>
      <p>Similaritylin(w1, w2) = 2 × IC(lcs(w1, w2)) / (IC(w1) + IC(w2))   (7)</p>
      <p>[Figure: similarity scores (0-5 scale) for the Miller and Charles word pairs, e.g. automobile-car,
journey-voyage, food-fruit and noon-string.]</p>
    </sec>
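    <p>Both IC-based refinements can be sketched together. The taxonomy and counts below are the same kind of hypothetical toy data used above, and the inverse-distance form of eq. (6), with the zero-distance case handled explicitly, is one common convention:</p>
    <preformat>
```python
import math

# Sketch of eqs. (5)-(7) on a hypothetical taxonomy with toy corpus counts.
parent = {"breakfast": "meal", "dinner": "meal", "meal": "food", "food": "entity"}
freq = {"breakfast": 4, "dinner": 6, "meal": 20, "food": 100, "entity": 1000}
total = sum(freq.values())

def ic(word):
    return -math.log(freq[word] / total)

def lcs(w1, w2):
    def chain(w):
        out = [w]
        while w in parent:
            w = parent[w]
            out.append(w)
        return out
    c1 = chain(w1)
    return next(c for c in c1 if c in chain(w2))

def jc_distance(w1, w2):
    # eq. (5): low distance means more similar words
    return ic(w1) + ic(w2) - 2.0 * ic(lcs(w1, w2))

def jc_similarity(w1, w2):
    # eq. (6); the zero-distance case (identical words) is handled specially
    d = jc_distance(w1, w2)
    return 1.0 if d == 0 else 1.0 / d

def lin_similarity(w1, w2):
    # eq. (7): commonality over total description, always between 0 and 1
    return 2.0 * ic(lcs(w1, w2)) / (ic(w1) + ic(w2))

print(round(lin_similarity("breakfast", "dinner"), 3))
```
    </preformat>
    <p>Note that Lin's score for a word compared with itself is exactly 1, while the Jiang and Conrath distance becomes 0 there, which is the special case mentioned above.</p>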
    <sec id="sec-10">
      <title>2.3. Similarity Computation by Feature based methodology</title>
      <sec id="sec-10-1">
        <p>This similarity is based on features: the similarity value is high if the two words share more common
features, and low if the words have mostly unique features.</p>
      </sec>
      <sec id="sec-10-2">
        <title>2.3.1. Tversky's Similarity Approach</title>
        <p>Tversky's similarity approach [13] is feature based. If the two words w1 and w2 have many common
features the similarity is high, and if the word pair has mostly indigenous (unique) features the measure
of similarity will be low. The measure of similarity between two words is based on how many features
the two words share [8], on the unique features present in word w1 but not in word w2, and on the
unique features present in word w2 but not in word w1. The similarity is calculated by the contrast
model:</p>
        <p>Similaritytversky(w1, w2) = θ × F(f(w1) ∩ f(w2)) − α × F(f(w1) − f(w2)) − β × F(f(w2) − f(w1))   (8)</p>
        <p>where f(w1) − f(w2) denotes the unique features of w1 not present in w2, f(w2) − f(w1) denotes the
unique features of w2 not present in w1, and f(w1) ∩ f(w2) denotes the features shared by the two
words.</p>
      </sec>
    </sec>
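    <p>Equation (8) can be sketched with explicit feature sets. The features and the weights θ, α, β below are illustrative assumptions, not values from the source:</p>
    <preformat>
```python
# Sketch of Tversky's contrast model, eq. (8), over hypothetical feature sets.
theta, alpha, beta = 1.0, 0.5, 0.5  # illustrative weights

features = {
    "car":  {"wheels", "engine", "doors", "transport"},
    "bike": {"wheels", "pedals", "transport"},
}

def tversky(w1, w2):
    f1, f2 = features[w1], features[w2]
    common = len(f1.intersection(f2))  # features shared by both words
    only1 = len(f1.difference(f2))     # features of w1 absent from w2
    only2 = len(f2.difference(f1))     # features of w2 absent from w1
    return theta * common - alpha * only1 - beta * only2

print(tversky("car", "bike"))  # prints 0.5
```
    </preformat>
    <p>Shared features raise the score while each side's unique features lower it, so the weights control how strongly distinctive features penalize the pair.</p>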
    <sec id="sec-11">
      <title>2.3.2. The Similarity Approach of Piarro</title>
      <sec id="sec-11-1">
        <p>The Pirró approach [14] builds on the feature-based approach developed by Tversky, carrying the
feature-based approach into the information content (IC) domain: F(f(w1) ∩ f(w2)) is equivalent to
IC(lcs(w1, w2)), F(f(w1) − f(w2)) is equivalent to IC(w1) − IC(lcs(w1, w2)), and F(f(w2) − f(w1)) is
equivalent to IC(w2) − IC(lcs(w1, w2)). The similarity is calculated by:</p>
        <p>Similaritypirro(w1, w2) = 3 × IC(lcs(w1, w2)) − IC(w1) − IC(w2) if w1 ≠ w2, and 1 if w1 = w2.   (9)</p>
      </sec>
      <sec id="sec-11-2">
        <p>where IC(lcs(w1, w2)) denotes the information content value of the subsumer, IC(w1) denotes the
information content value of the word w1, and IC(w2) denotes the information content value of the
word w2.</p>
      </sec>
    </sec>
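    <p>Equation (9) can be sketched by reusing an IC function and an LCS lookup; the taxonomy and counts are hypothetical toy data, as before:</p>
    <preformat>
```python
import math

# Sketch of Pirro's measure, eq. (9): Tversky's features mapped into the
# information content domain. Taxonomy and counts are hypothetical.
parent = {"car": "vehicle", "bike": "vehicle", "vehicle": "entity"}
freq = {"car": 10, "bike": 5, "vehicle": 50, "entity": 500}
total = sum(freq.values())

def ic(word):
    return -math.log(freq[word] / total)

def lcs(w1, w2):
    def chain(w):
        out = [w]
        while w in parent:
            w = parent[w]
            out.append(w)
        return out
    c1 = chain(w1)
    return next(c for c in c1 if c in chain(w2))

def pirro(w1, w2):
    if w1 == w2:          # identical words are defined to score 1
        return 1.0
    return 3.0 * ic(lcs(w1, w2)) - ic(w1) - ic(w2)

print(pirro("car", "car"))  # prints 1.0
```
    </preformat>
    <p>The measure is symmetric in w1 and w2, and plugging the three IC equivalences into eq. (8) with unit weights yields exactly the 3 × IC(lcs) − IC(w1) − IC(w2) form.</p>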
    <sec id="sec-12">
      <title>3. Comparison of Various Similarity Techniques</title>
      <sec id="sec-12-1">
        <p>Two distance-based similarity approaches, Leacock and Chodorow and Wu and Palmer, are compared
with information content-based similarity approaches such as Resnik, Jiang and Conrath, and Lin. The
results show that the similarity score focuses on the synonymy of the words. Rubenstein and
Goodenough [15] collected human judgements from 51 subjects for a set of 65 word-pairs, with each
word-pair rated in the range from 0.0 to 4.0: a rating of 0 means the words are semantically unrelated
and a rating of 4.0 means the words are highly related. Miller and Charles [16] selected 30 word-pairs
from Rubenstein and Goodenough's [15] 65 word-pairs and divided the 30 word-pairs into three sets of
10 word-pairs each: the first set of 10 word-pairs, rated between 3.0 and 4.0, are high-similarity words;
the next set of 10 word-pairs, rated between 1.0 and 3.0, are intermediate-similarity words; and the last
set of 10 word-pairs, rated between 0.0 and 1.0, are low-similarity words. In this paper the Miller and
Charles 30 word-pairs are taken for the computation of similarity. Fig 5 shows the various
similarity-based algorithms.</p>
      </sec>
    </sec>
    <sec id="sec-13">
      <title>Comparison of Various Similarity based Algorithm</title>
      <p>[Figure 5: bar chart (0-5 scale) comparing Human Judgement, Leacock &amp; Chodorow (distance
based) and Wu and Palmer (distance based) similarity scores across the Miller and Charles word-pairs.]</p>
    </sec>
    <sec id="sec-14">
      <title>4. Results &amp; Discussion</title>
      <sec id="sec-14-1">
        <p>The results of implementing the various edge-counting methodologies, such as LCH and Wu &amp;
Palmer, and the information content methodologies, such as Resnik and Jiang, show that the
information content methodologies give better correlation than all the edge-counting methodologies.
A comparison of all the similarity approaches is shown in Fig 6 below.</p>
      </sec>
      <sec id="sec-14-2">
        <p>In Fig 6 it can be observed that the correlation value of Jiang and Conrath is better than that of the other
information content approaches such as Resnik and Lin. The similarity value of Jiang and Conrath is
based on the commonality between the words w1 and w2 and the IC values of the words that describe
them completely. The correlation value of Jiang &amp; Conrath is 0.892 when tested on Miller and
Charles' 30 word-pairs.</p>
      </sec>
    </sec>
    <sec id="sec-15">
      <title>5. Conclusion</title>
      <sec id="sec-15-1">
        <p>Similarity between word-pairs is one of the emerging concepts in the fields of artificial intelligence,
machine learning and gene prioritization. Calculation of similarity between word-pairs is done by
various approaches: distance based, information content based and feature based. All the approaches
use an ontology of a specific domain to find similarity. Jiang and Conrath provides better results than
the other approaches. It is based on information content, and it has been seen that the similarity
increases with increasing depth in the WordNet hierarchy, so the depth feature can be taken into
account to find the similarity between words. The future task is to develop an approach which
computes the similarity between a pair of sentences.</p>
      </sec>
    </sec>
    <sec id="sec-16">
      <title>6. References</title>
      <p>[4] Cilibrasi, Rudi L., and Paul M. B. Vitanyi. "The Google similarity distance." IEEE Transactions on
Knowledge and Data Engineering 19, no. 3 (2007): 370-383.</p>
      <p>[5] Gupta, Atul, and Krishan Kumar Goyal. "Classification of Semantic Similarity Technique between
Word Pairs using Word Net."</p>
      <p>[6] Goyal, Krishan Kumar. "Computation of Verb Similarity." Design Engineering (2021): 4127-4140.</p>
      <p>[7] Leacock, Claudia, and Martin Chodorow. "Combining local context and WordNet similarity for
word sense identification." WordNet: An Electronic Lexical Database 49, no. 2 (1998): 265-283.</p>
      <p>[8] Gupta, Atul, and Dharamveer Kr. Yadav. "Semantic similarity measure using information content
approach with depth for similarity calculation." (2014).</p>
      <p>[9] Wu, Zhibiao, and Martha Palmer. "Verb semantics and lexical selection." arXiv preprint
cmp-lg/9406033 (1994).</p>
      <p>[10] Resnik, Philip. "Using information content to evaluate semantic similarity in a taxonomy." arXiv
preprint cmp-lg/9511007 (1995).</p>
      <p>[11] Jiang, Jay J., and David W. Conrath. "Semantic similarity based on corpus statistics and lexical
taxonomy." arXiv preprint cmp-lg/9709008 (1997).</p>
      <p>[12] Li, Yuhua, Zuhair A. Bandar, and David McLean. "An approach for measuring semantic similarity
between words using multiple information sources." IEEE Transactions on Knowledge and Data
Engineering 15, no. 4 (2003): 871-882.</p>
      <p>[13] Tversky, Amos. "Features of similarity." Psychological Review 84, no. 4 (1977): 327.</p>
      <p>[14] Pirró, Giuseppe. "A semantic similarity metric combining features and intrinsic information
content." Data &amp; Knowledge Engineering 68, no. 11 (2009): 1289-1308.</p>
      <p>[15] Rubenstein, Herbert, and John B. Goodenough. "Contextual correlates of
synonymy." Communications of the ACM 8, no. 10 (1965): 627-633.</p>
      <p>[16] Miller, George A., and Walter G. Charles. "Contextual correlates of semantic similarity." Language
and Cognitive Processes 6, no. 1 (1991): 1-2</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>George A.</given-names>
          </string-name>
          , Richard Beckwith, Christiane Fellbaum, Derek Gross, and
          <string-name>
            <given-names>Katherine J.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>"Introduction to WordNet: An on-line lexical database."</article-title>
          <source>International journal of lexicography 3</source>
          , no.
          <issue>4</issue>
          (
          <year>1990</year>
          ):
          <fpage>235</fpage>
          -
          <lpage>244</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Boyce</surname>
            ,
            <given-names>Bert R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meadow</surname>
            ,
            <given-names>Charles T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kraft</surname>
            ,
            <given-names>Donald H.</given-names>
          </string-name>
          .
          <article-title>Text information retrieval systems</article-title>
          . Elsevier,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>George A</given-names>
          </string-name>
          .
          <article-title>"WordNet: a lexical database for English."</article-title>
          <source>Communications of the ACM</source>
          <volume>38</volume>
          , no.
          <issue>11</issue>
          (
          <year>1995</year>
          ):
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>