=Paper=
{{Paper
|id=Vol-1797/paper3
|storemode=property
|title=LooPings: a Look at Semantic Similarities
|pdfUrl=https://ceur-ws.org/Vol-1797/paper3.pdf
|volume=Vol-1797
|authors=Adama Sow,Jean-Rémi Bourguet
|dblpUrl=https://dblp.org/rec/conf/iwsw/SowB16
}}
==LooPings: a Look at Semantic Similarities==
Adama Sow¹ and Jean-Rémi Bourguet²,³
¹ Département Génie Informatique et Télécommunications, Ecole Polytechnique de Thiès (EPT), Senegal. adama.sow.4@ulaval.ca
² Dipartimento di Scienze Politiche ed Ingegneria dell'Informazione, Università degli Studi di Sassari (UNISS), Italy
³ Núcleo de Estudos em Modelagem Conceitual e Ontologias, Federal University of Espírito Santo (UFES), Brazil. jrbourguet@inf.ufes.br
Abstract. Semantic similarity is a cross-field research topic in Natural Language Processing and Ontologies, with possible fallouts in Artificial Intelligence. Formerly, similarities were computed following a syntactical treatment to support case-based reasoning. Textual similarities are now guided by semantic machineries, offering various ways to compute relatedness measures. In this paper, we present both a logical and a visual framework aiming to reason with them. For that purpose, we introduce FLH±, a fragment of description logic underpinning the well-known lexical database WordNet. We illustrate this framework with the path length relatedness, one of the historical similarity measures occurring in a taxonomy. The core of our framework orchestrates the computation of similarity scores supported by the ReVerb, Stanford CoreNLP and WordNet::Similarity APIs, and interfaces global similarities in a graphical way by positioning them on segments. We also depict some experimental results to confront our computational framework with some empirical data.
Keywords: Semantic Similarity, Ontology, Description Logic, Natural Language Processing, Interface, Empirical Data
1 Introduction
Reasoning with similarities is seen as one of the crucial steps in Artificial Intelligence. Turing, in his paper Computing Machinery and Intelligence [1], suggested that a machine having passed the so-called test should appear as a human 70% of the time after five minutes of conversation. From Joseph Weizenbaum and his seminal proposal Eliza [2] in 1966, until the last generation of programs, Jabberwacky and Cleverbot, developed by Rollo Carpenter [3] during the last decade, the treatment of similarities has attracted a lot of interest to tackle this issue. Formerly, one classical way was to deal with Natural Language Processing (NLP), focusing on syntactic similarities through a case-based reasoning machinery.
In recent years, the Semantic Web [4] has provided complementary resources in terms of knowledge bases, offering new standards to deal with lexical databases. The semantic dimension of similarities is now guided by well-established and moderated repositories of knowledge including ontological layers (see [5]), but few attempts have been made to combine the classical NLP-based technologies with semantic web infrastructures in order to analyze the limits of such interactions (see for example [6]). Moreover, although some methods have been proposed to compute global similarity scores between two phrases based on the local semantic similarities of their components (see [7,8]), we remark that the usual way to output a set of similarities between a phrase and a set of phrases is an ordered list of items (e.g. in popular search engines), and very few approaches have been proposed in the literature to display the similarity scores in different ways (see [9,10]).
In this paper, we present a logical and visual framework to represent and reason with textual relatedness measures guided by ontologies. In order to bridge the gap between natural language and ontologies, we introduce a fragment of Description Logic (DL) underpinning the well-known lexical database WordNet [11]. With this fragment, we can properly redefine some relatedness measures historically used in taxonomies. To save space, we only introduce here the path length relatedness measure, denoted plr. Nevertheless, other historical similarity measures considering the maximum depth in the taxonomy (see lch in [12]), the depth of the least common subsumer (see wup in [13]) or the supported information content (see jcn in [14] and lin in [15]) have also been integrated in this framework.
After some preliminaries introducing our logical framework in Section 2, we describe in Section 3 both our computation and an interface to display the similarities between a phrase and a set of phrases. Finally, in Section 4 we present an experimental validation to confront the theoretical similarities with some experimental ones.
2 Preliminaries
DL is a well-known family of formal knowledge representation models. Semantic languages used on the web to share knowledge (e.g. RDFS [16], OWL [17] and OWL2 [18]) all have direct underpinning logics that are fragments of DL. The core interpretation of a DL in first-order logic was given by Baader and Nutt in [19] as follows:
Definition 1. Let C be the set of all atomic concepts and R the set of all atomic roles; an interpretation I is an ordered pair (Δᴵ, ·ᴵ) such that:
- Δᴵ is the domain, i.e. a non-empty set of individuals,
- ·ᴵ is the interpretation function, which maps:
- each atomic concept A ∈ C to a set Aᴵ ⊆ Δᴵ,
- each atomic role r ∈ R to a binary relation rᴵ ⊆ Δᴵ × Δᴵ.
In this paper, we chose to introduce FLH±, defined in Definition 2, as an extension of AL [20] with transitivity and hierarchy for roles, but without negation, intersection, limited existential quantification and value restrictions.
Definition 2. Let I be an interpretation, {A, B} ⊆ C and {r, s} ⊆ R:
⊤ᴵ = Δᴵ (top concept)
⊥ᴵ = ∅ (bottom concept)
(A ⊑ B)ᴵ = Aᴵ ⊆ Bᴵ (inclusion axiom)
(r ⊑ s)ᴵ = rᴵ ⊆ sᴵ (role hierarchy)
(r⁺)ᴵ = (a, b) ∈ rᴵ ∧ (b, c) ∈ rᴵ → (a, c) ∈ rᴵ (transitivity)
(dom(r) ≡ A)ᴵ = ∀a, b ∈ Δᴵ. a ∈ Aᴵ ∨ (a, b) ∉ rᴵ (domain)
(ran(r) ≡ B)ᴵ = ∀a, b ∈ Δᴵ. b ∈ Bᴵ ∨ (a, b) ∉ rᴵ (range)
As redefined in [19], a Terminology Box (TBox) in DL is a finite set of axioms in which no symbolic name (the left-hand side of an equality with an atomic concept) is defined more than once. Princeton's WordNet is a lexical database for the English language [21]. A decade after the creation of this database, van Assem [22] proposed a conversion of WordNet into OWL. Example 1 presents a sample of the TBox underpinning WordNet in OWL, introducing in particular the concept SYNSET⁴ and the role h⁵.
Example 1 (Sample of WordNet TBox).
SYNSET ⊑ ⊤, dom(h) ≡ SYNSET, ran(h) ≡ SYNSET, h⁺.
In the terms of Baader and Nutt [19], an Assertional Box (ABox) is a specific state of affairs of an application domain in terms of concepts and roles. Example 2 depicts a graph representing a sample of the ABox underpinning WordNet in OWL, where ⊤ₙ represents the individual root of all the individual nouns present in WordNet, the nodes represent synsets (e.g. SYNSET(Abstract)) and the directed arcs represent hyponym relation assertions between two individuals (e.g. h(Attribute, Abstract)). Note that the WordNet ABox is partitioned into different sets, dealing with different parts of speech (nouns, verbs, adjectives and adverbs).
Example 2 (Sample of WordNet ABox).
[Figure: a small taxonomy rooted in ⊤ₙ, where Abstract and Physical are hyponyms of ⊤ₙ, and Relation and Attribute are hyponyms of Abstract.]
The idea of computing relatedness measures between concepts occurring in a taxonomy is an old issue (see [23]). A few approaches (see [24,25]) have described the common relatedness measures through a DL formalism. According to Rada [26], a path length is founded on a node-counting scheme along the shortest chain of a specified role between two individuals. Given an interpretation and two individuals, we redefine the shortest path length function, denoted L, as follows:
⁴ Short for "synonym set", denoting a non-empty set of synonyms.
⁵ Short for "hyponym", denoting the well-known hyponymy relation.
Definition 3. Let I be an interpretation, {uₖ, vₗ} ⊆ Δᴵ and h ∈ R:
L(uₖ, vₗ) = min(2n − (k + l)) s.t. ∀(i, j) ∈ [[k, n]] × [[l, n]]. h(uᵢ, uᵢ₊₁) ∧ h(vⱼ, vⱼ₊₁) ∧ uₙ = vₙ
Note that if a practitioner deals with the transitive closure of the WordNet ABox, a min-max operator may be used due to the transitivity of the relation h. Example 3 lists the shortest path lengths between the individual nouns introduced in Example 2. We denote by ⊤ᵥ the individual root of all the verbs. The ABox being partitioned, note that no verb is involved in a hyponym relation with a noun (and vice versa).
Example 3 (Shortest Path Length).
L(Relation, Abstract) = 1    L(Relation, Attribute) = 2
L(Physical, Physical) = 0    L(Physical, ⊤ᵥ) ↦ ∞
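As an illustration, the shortest path length L of Definition 3 can be sketched as a common-subsumer search over a toy hyponym hierarchy. This is a minimal sketch assuming a simple child-to-parent map standing in for the hyponym edges of Example 2 (with `Top_n` for ⊤ₙ), not the actual WordNet ABox:

```python
import math

# Toy hyponym hierarchy from Example 2: child -> parent (h edges).
PARENTS = {
    "Abstract": "Top_n", "Physical": "Top_n",
    "Relation": "Abstract", "Attribute": "Abstract",
}

def ancestors(node):
    """Chain of subsumers from node up to the root, with edge distances."""
    chain, d = {}, 0
    while node is not None:
        chain[node] = d
        node, d = PARENTS.get(node), d + 1
    return chain

def L(u, v):
    """Shortest path length (edge count) through a common subsumer,
    or infinity when no common subsumer exists (e.g. a noun vs. a verb)."""
    au, av = ancestors(u), ancestors(v)
    common = set(au) & set(av)
    return min((au[c] + av[c] for c in common), default=math.inf)
```

Running this on the individuals of Example 2 reproduces the values of Example 3, including the infinite length between `Physical` and an individual from another partition.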
As reintroduced by Pedersen [27], the path length based relatedness score is equal
to the inverse of the shortest path length between two concepts. Naturally, it is inversely
proportional to the number of nodes along the shortest path between the synsets. The
path length based relatedness denoted plr is defined as follows:
Definition 4. Let I be an interpretation, {u, v} ⊆ Δᴵ:
plr(u, v) = 1 / (1 + L(u, v))
If no path exists (e.g. between a verb and a noun) we consider that L tends to ∞, and
then the path length based relatedness score approaches 0. The highest possible score
occurs when the two synsets are the same, in which case the score is 1.
Example 4 (Path Length based Relatedness).
plr(Relation, Abstract) = 1/2    plr(Relation, Attribute) = 1/3
plr(Physical, Physical) = 1      plr(Physical, ⊤ᵥ) ↦ 0
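Definition 4 then follows directly from the path length. A minimal sketch, taking the already-computed value of L(u, v) as input so the example stays self-contained:

```python
import math

def plr(path_length):
    """Path length based relatedness (Definition 4): 1 / (1 + L(u, v)).
    An infinite path length (no common subsumer) yields a score of 0."""
    if math.isinf(path_length):
        return 0.0
    return 1.0 / (1.0 + path_length)
```

Feeding in the path lengths of Example 3 reproduces the scores of Example 4: 1/2 for (Relation, Abstract), 1/3 for (Relation, Attribute), 1 for identical synsets and 0 across partitions.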
In the next section, we present our method to compute similarity scores between two phrases based on the local semantic similarities of their components (e.g. plr); moreover, we depict a way to display them through a graphical user interface.
3 Computational Framework
The goal of our framework is to display similarity scores between one phrase and a set of phrases. We chose to investigate the approach of transforming a phrase into a set of triples before performing the computation of similarities. We selected the ReVerb API, an information extractor for massive corpora (working without a pre-specified vocabulary). In [28], ReVerb was judged to reach an extraction precision of 0.8 or higher at least 30% of the time, substantially outperforming other extractors like TextRunner (see [29]) and WOE (see [30]). Formally, and in the following, we can see a phrase p as a set of triples {t₁, ..., tₙ} where tₗ = ⟨t¹ₗ, t²ₗ, t³ₗ⟩ with tᵏₗ ∈ Δᴵ. Thereafter, we define the global score between two phrases as the maximum of the averages of similarities between the components of the triples (average of similarities between subjects, objects and complements). Moreover, we arbitrarily exclude two kinds of scores: scores of 0, which occur when at least one entry is not present in the WordNet lexical base (see Section 2), and scores of 1 (identical strings) when the individuals in comparison belong to a stop-word set denoted θ.
Definition 5. Let θ be a stop-word set and p, q two phrases; the Score of similarity is defined as follows:
S(p, q) = max over tᵢ, tⱼ of avg({s(tᵏᵢ, tᵏⱼ) | s(tᵏᵢ, tᵏⱼ) ≠ 0 ∧ (s(tᵏᵢ, tᵏⱼ) ≠ 1 ∨ tᵏᵢ, tᵏⱼ ∉ θ)})
with tᵢ ∈ p, tⱼ ∈ q, k ∈ [[1, 3]], s ∈ {plr, lch, wup, jcn, lin} and S the upper case of s.
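Definition 5 can be sketched as a double loop over triple pairs. This is a minimal sketch: the `STOP_WORDS` set is a hypothetical θ, and the local measure `s` is passed in as a parameter (it would be plr, lch, wup, jcn or lin in the actual framework):

```python
STOP_WORDS = {"i", "we", "find", "learn"}  # hypothetical stop-word set theta

def score(p, q, s):
    """Global similarity S(p, q) of Definition 5: maximum over triple pairs
    of the average of component similarities, skipping scores of 0 and
    scores of 1 that only reflect shared stop words."""
    best = 0.0
    for ti in p:
        for tj in q:
            kept = []
            for a, b in zip(ti, tj):  # k = 1..3: the three triple components
                v = s(a, b)
                if v == 0:
                    continue  # entry missing from the lexical base
                if v == 1 and a in STOP_WORDS and b in STOP_WORDS:
                    continue  # exact match on stop words only
                kept.append(v)
            if kept:
                best = max(best, sum(kept) / len(kept))
    return best
```

With a toy local measure, a pair of triples sharing only the stop words "we" and "learn" is scored by its remaining components alone, as the definition intends.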
To support this computation, we used the Stanford CoreNLP API [31] to perform the lemmatization of the components of the triples. The similarity computations on the lemmas were performed by WordNet::Similarity (developed by Pedersen et al. in [32] and redesigned in Java by Shima [33]).
Fig. 1: The system description of LooPings
All of this treatment is included in the framework LooPings⁶ (Lexical and ontological observations to Plot ingathering similarities). Looking at the system description of Fig. 1, LooPings is decomposed into several modules:
⁶ The framework LooPings is available at http://nemo.inf.ufes.br/?p=1185.
– INPUT takes a phrase and some parameters (e.g. local measure, stop-word list, etc.).
– LIBRARIES comprises ReVerb, Stanford CoreNLP and the WS4J API (bundled with WordNet).
– CORE manages the computation.
– OUTPUT displays and plots the similarity scores.
Fig. 2: An interface of LooPings
One of the most challenging steps is the connection between the NLP APIs ReVerb and WS4J. The main aspect that makes this compatibility somewhat immature is the fact that ReVerb can output expressions (several raw words) while WS4J accepts only a single element as input (a lemma, or an expression of lemmas linked into one string by underscores). We therefore had to process our raw phrases following some basic heuristics, as described through the example below:
. What can we learn from the neural networks of C.elegans to understand human brains?
The outputted triple is ⟨we, learn from, the neural networks of C.elegans⟩.
1. Lemmatization: ⟨we, learn from, the neural network of C.elegans⟩
2. Chunking: ⟨we, learn from, the neural network C.elegans⟩
3. Stop-word removal: ⟨we, learn, neural network C.elegans⟩
4. Cutting: ⟨we, learn, neural network⟩; ⟨we, learn, C.elegans⟩
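The four steps above can be sketched as a small normalization pipeline. This is only an illustrative sketch: the `LEMMAS` map stands in for Stanford CoreNLP lemmatization, and the `CHUNK_DROP`, `STOP_WORDS` sets and the cutting heuristic are hypothetical simplifications of the actual heuristics:

```python
LEMMAS = {"networks": "network"}   # stand-in for CoreNLP lemmatization
CHUNK_DROP = {"of"}                # words dropped when chunking (hypothetical)
STOP_WORDS = {"the", "from"}       # hypothetical stop-word list

def normalize(triple):
    """Apply lemmatization, chunking and stop-word removal to each component."""
    out = []
    for comp in triple:
        words = [LEMMAS.get(w, w) for w in comp.split()]   # 1. lemmatization
        words = [w for w in words if w not in CHUNK_DROP]  # 2. chunking
        kept = [w for w in words if w not in STOP_WORDS]   # 3. stop words
        out.append(" ".join(kept if kept else words))      # never leave empty
    return tuple(out)

def cut(triple):
    """4. Cutting: split a compound object into one triple per head."""
    subj, rel, obj = triple
    words = obj.split()
    if len(words) <= 1:
        return [triple]
    heads = [" ".join(words[:-1]), words[-1]]  # hypothetical split heuristic
    return [(subj, rel, h) for h in heads]
```

On the example query, this pipeline reproduces the four intermediate triples listed above.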
Note that during the removal step, stop words were never removed from a triple when their removal would leave one component empty. Fig. 2 presents the interface of LooPings, where practitioners can visualize and confront the semantic similarities. Note that this framework is oriented towards integrating queries and SPARQL [34] interpretations, allowing a possible integration of other semantic web technologies. The similarity scores are represented on the segment [0,1].
The last section is dedicated to confronting the theoretical scores with some experimental ones.
4 Experimental Validation
In order to deal with real queries, we used the Stack Exchange API, which allowed us to extract from a website some structured data (in JSON format) about different question-and-answer themes. The total number of available queries for the Cognitive Science section of the Stack Exchange website was 1245; we decided to extract the first 1200 queries (w.r.t. the extraction order), and from each of them we stored ids (from 1 to 4697) and queries. Formally, a query referenced by an id is denoted q_id; moreover, we call a series Sⱼ a set of queries: Sⱼ ⊆ {q_id | 1 ≤ id ≤ 4697}. A tricky aspect was the fact that ReVerb outputted nothing for 70.75 percent of the queries. So, we decided to divide the 1200 queries into 12 series of 100 queries, finally giving a median series of 28 usable queries.
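The division into series described above can be sketched as follows; the `has_triples` predicate is purely hypothetical and stands in for a successful ReVerb extraction:

```python
def make_series(query_ids, size=100):
    """Divide the extracted queries into consecutive series of `size` queries."""
    return [query_ids[i:i + size] for i in range(0, len(query_ids), size)]

def usable(series, has_triples):
    """Keep only the queries for which the extractor outputs at least one triple."""
    return [qid for qid in series if has_triples(qid)]
```

Applied to 1200 query ids, `make_series` yields the 12 series of 100 queries; filtering each with `usable` gives the reduced series actually submitted to volunteers.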
Due to space limitation, we will depict in this article only two series (A and B).
Once the extraction was performed, we created an on-line semantic-recall-based experiment. For each series, we designed a witness query. We attached a questionnaire to each series and sent them by e-mail to mailing lists of PhD students. The requirement for participating was to be proficient in English. Questionnaires were designed as follows:
1. Instruction to read carefully a witness query.
2. Instruction to read the series of queries and to check the 1, 2 or 3 queries among them (at least 1, at most 3) that are, for the user, the most similar to the witness query.
3. Instruction to partially preorder the selected query(ies).
We succeeded in obtaining 50 volunteers. We gave accumulated marks to the queries in each series w.r.t. the preorders given by the volunteers. For example, if a volunteer selected only one query qᵢ in the series, the accumulated mark for qᵢ was increased by 6; the remaining possible situations are listed below:
qᵢ ≻ qⱼ : qᵢ (+4); qⱼ (+2)        qᵢ ∼ qⱼ : qᵢ, qⱼ (+3)
qᵢ ≻ qⱼ ≻ qₖ : qᵢ (+3); qⱼ (+2); qₖ (+1)        qᵢ ∼ qⱼ ∼ qₖ : qᵢ, qⱼ, qₖ (+2)
qᵢ ≻ qⱼ ∼ qₖ : qᵢ (+3); qⱼ, qₖ (+1.5)        qᵢ ∼ qⱼ ≻ qₖ : qᵢ, qⱼ (+2.5); qₖ (+1)
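Each preorder always distributes 6 points across the volunteer's selection. One way to reproduce the table of marks is to give positional points per selection size and let tied queries share the average of the positions their class covers. This is a sketch of that reading, representing a preorder as equivalence classes from most to least preferred (e.g. `[["qi"], ["qj", "qk"]]` for qᵢ ≻ qⱼ ∼ qₖ):

```python
POINTS = {1: [6], 2: [4, 2], 3: [3, 2, 1]}  # positional points per selection size

def marks(preorder):
    """Accumulated marks for one volunteer's partial preorder. Tied queries
    share the average of the positional points their positions cover."""
    n = sum(len(cls) for cls in preorder)
    pts, result, pos = POINTS[n], {}, 0
    for cls in preorder:
        share = sum(pts[pos:pos + len(cls)]) / len(cls)
        for q in cls:
            result[q] = share
        pos += len(cls)
    return result
```

This scheme yields exactly the values listed above for every possible selection of one, two or three queries.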
The maximum mark is translated to an experimental score of 1, after which all the other marks were transposed into experimental scores by cross-multiplication. As depicted in Fig. 3, we took the PLR score as a reference in order to observe how the other scores behaved relative to it. Thereafter, we describe what is globally remarkable in the behavior of the theoretical similarities.
– The higher the PLR is, the lower the difference between scores founded on path lengths (LCH, WUP) and scores founded on information content (JCN, LIN).
– WUP and LIN react in the same way in the case of a sudden increase or drop.
– LIN and JCN scores have a behavior of exponential shape in all the series.
Nevertheless, there are some limitations to this framework. We report, for example, a case of saturation. Series B is remarkable in that 5 queries are outputted with the maximal score.
. Where could I find psychological experiments tools?
The outputted triples were ⟨I, find, psychological⟩; ⟨I, find, experiment⟩; ⟨I, find, tool⟩.
Here, the systematic presence of the two stop words "I" and "find" makes all the similarity scores equal to 1 as soon as one of the words "psychological", "experiment" or "tool" is present in one of the extracted triples.
Fig. 3: Some plots of LooPings
This experiment showed some natural limits of our approach. The main issue seems to be the automatic semantic annotation step (ReVerb) and the matching step with WordNet individuals (through WordNet::Similarity).
5 Conclusion
In this paper, we presented both a computational and a visual framework to support
semantic similarities guided by ontologies. The first contribution was to redefine some
taxonomy-based relatedness measures (e.g. path length relatedness) in a DL fragment
(FLH± ) we introduced.
The core of our framework orchestrates the computation of similarity scores supported by the ReVerb, Stanford CoreNLP and WordNet::Similarity APIs, and interfaces results in a graphical way through segments. Moreover, we plotted the behavior of our computations by designing a semantic-recall-based experiment to confront empirical similarities with the theoretical ones. Some research perspectives for this work would be to extend LooPings in two ways. The first concerns the core of our framework, by interpreting scores founded on other relations like, for instance, mereology in WordNet. The second is to integrate other repositories to support the similarities, or other frameworks to compute similarities directly on semantic web languages (e.g. using the technology of QAKiS [35]).
References
1. Alan Mathison Turing. Computing machinery and intelligence. Mind, 59(236):433–460,
1950.
2. Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Commun. ACM, 9(1):36–45, 1966.
3. Rollo Carpenter and Jonathan Freeman. Computing machinery and the individual: The Per-
sonal Turing Test. Technical report, Jabberwacky, 2005.
4. Tim Berners-Lee, James Hendler, Ora Lassila, et al. The semantic web. Scientific american,
284(5):28–37, 2001.
5. Andreas Hotho, Steffen Staab, and Gerd Stumme. Ontologies improve text document clus-
tering. In Proceedings of the 2003 IEEE International Conference on Data Mining, pages
541–544. IEEE Computer Society, November 19-22, 2003.
6. Viviana Mascardi, Angela Locoro, and Fabrizio Larosa. Exploiting prolog and nlp tech-
niques for matching ontologies and for repairing correspondences. In Proceedings of the
24th Convegno Italiano di Logica Computazionale, pages 10–24, 2009.
7. Yuhua Li, David McLean, Zuhair Bandar, James O’Shea, and Keeley A. Crockett. Sentence
similarity based on semantic nets and corpus statistics. IEEE Trans. Knowl. Data Eng.,
18(8):1138–1150, 2006.
8. Palakorn Achananuparp, Xiaohua Hu, and Xiajiong Shen. The evaluation of sentence simi-
larity measures. In Proceedings of the 10th International Conference on Data Warehousing
and Knowledge Discovery, DaWaK ’08, pages 305–316. Springer-Verlag, 2008.
9. Oren Zamir and Oren Etzioni. Grouper: A dynamic clustering interface to web search results.
Comput. Netw., 31(11-16):1361–1374, May 1999.
10. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Dis-
tributed representations of words and phrases and their compositionality. In Advances in
Neural Information Processing Systems 26: 27th Annual Conference on Neural Information
Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe,
Nevada, United States., pages 3111–3119, 2013.
11. George A. Miller. WordNet: A lexical database for English. Communications of the ACM, 38:39–41, 1995.
12. Claudia Leacock and Martin Chodorow. Combining local context and wordnet similarity for
word sense identification. WordNet: An Electronic Lexical Database, pages 265–283, 1998.
13. Zhibiao Wu and Martha Stone Palmer. Verb semantics and lexical selection. In James
Pustejovsky, editor, 32nd Annual Meeting of the Association for Computational Linguistics,
27-30 June 1994, New Mexico State University, Las Cruces, New Mexico, USA, Proceedings,
pages 133–138. Morgan Kaufmann Publishers / ACL, 1994.
14. Jay Jiang and David Conrath. Semantic similarity based on corpus statistics and lexical
taxonomy. In Proc. of the Int’l. Conf. on Research in Computational Linguistics, pages 19–
33, 1997.
15. Dekang Lin. An information-theoretic definition of similarity. In Proceedings of the Fifteenth
International Conference on Machine Learning, ICML ’98, pages 296–304, San Francisco,
CA, USA, 1998. Morgan Kaufmann Publishers Inc.
16. RDFS Working Group. Resource Description Framework Schema (RDFS). www.w3.org/TR/rdf-schema/.
17. OWL Working Group. Web Ontology Language (OWL). www.w3.org/TR/owl-features/.
18. OWL2 Working Group. Web Ontology Language 2 (OWL2). www.w3.org/TR/owl2-overview/.
19. Franz Baader and Werner Nutt. Basic description logics. In Description Logic Handbook,
pages 43–95. Cambridge University Press, 2003.
20. Klaus Schild. A correspondence theory for terminological logics: Preliminary report. In Proc. of IJCAI-91, pages 466–471, 1991.
21. Christiane Fellbaum. WordNet: an electronic lexical database. MIT Press, 1998.
22. Mark van Assem, Aldo Gangemi, and Guus Schreiber. Rdf/owl representation of wordnet.
Technical report, W3C, 2008.
23. Amos Tversky. Features of similarity. Psychological Review, 84(4):327–352, 1977.
24. Peter C. Weinstein and William P. Birmingham. Comparing concepts in differentiated on-
tologies. In Proceedings of the Twelfth Workshop on Knowledge Acquisition, Modeling and
Management (KAW’99), 1999.
25. Alexander Borgida, Thomas J. Walsh, and Haym Hirsh. Towards measuring similarity in
description logics. In Proceedings of the 2005 International Workshop on Description Logics
(DL2005), Edinburgh, Scotland, UK, July 26-28, 2005, volume 147. CEUR-WS.org, 2005.
26. Roy Rada, Hafedh Mili, Ellen Bicknell, and Maria Blettner. Development and application of
a metric on semantic nets. IEEE Transactions on Systems, Man, and Cybernetics, 19(1):17–
30, 1989.
27. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. Wordnet::similarity: Measuring
the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, HLT-NAACL–
Demonstrations ’04, pages 38–41, Stroudsburg, PA, USA, 2004. Association for Computa-
tional Linguistics.
28. Anthony Fader, Stephen Soderland, and Oren Etzioni. Identifying relations for open in-
formation extraction. In Proceedings of the Conference of Empirical Methods in Natural
Language Processing (EMNLP ’11), Edinburgh, Scotland, UK, July 27-31 2011.
29. Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni.
Open information extraction from the web. In Proc. of IJCAI, pages 2670–2676, 2007.
30. Fei Wu and Daniel S. Weld. Open information extraction using wikipedia. In Proceedings of
the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages
118–127. Association for Computational Linguistics, 2010.
31. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and
David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association
for Computational Linguistics (ACL) System Demonstrations, pages 55–60, 2014.
32. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. Wordnet: Similarity - measuring
the relatedness of concepts. In Proceedings of the 19th National Conference on Artifical
Intelligence, AAAI’04, pages 1024–1025. AAAI Press, 2004.
33. Hideki Shima. WordNet Similarity for Java (WS4J). https://code.google.com/archive/p/ws4j/.
34. SPARQL Working Group. SPARQL 1.1 Overview. www.w3.org/TR/sparql11-overview/.
35. Elena Cabrio, Julien Cojan, Alessio Palmero Aprosio, Bernardo Magnini, Alberto Lavelli,
and Fabien Gandon. Qakis: an open domain QA system based on relational patterns. In Birte
Glimm and David Huynh, editors, Proceedings of the ISWC 2012 Posters & Demonstrations
Track, Boston, USA, November 11-15, 2012, volume 914 of CEUR Workshop Proceedings.
CEUR-WS.org, 2012.