<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How Stable are WordNet Synsets?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eric Kafe</string-name>
          <email>kafe@megadoc.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>MegaDoc</institution>, <addr-line>Charlottenlund</addr-line>, <country country="DK">Denmark</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The diachronic study of the permanent WordNet sense keys reveals that the WordNet synonym sets have stayed very stable through every version of the lexical database since 1.5 (1995), even though the synset identifiers continually changed. In particular, contrary to expectations, 94.5% of the WordNet 1.5 synsets still persisted in the latest 2012 version, compared to only 89.2% of the corresponding sense keys. Meanwhile, the splits and merges between synonym sets remained few and simple. We discuss implications of these results for WordNet mappings, and present tables that make it possible to estimate the lexicographic effort needed for updating WordNet-based resources to newer WordNet versions.</p>
      </abstract>
      <kwd-group>
        <kwd>WordNet</kwd>
        <kwd>Sense Keys</kwd>
        <kwd>Synsets</kwd>
        <kwd>Mappings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Introduction</title>
      <sec id="sec-2-1">
        <title>Sense Keys and Synset Offsets</title>
        <p>Wordnets cover an increasing number of languages, and interoperate by using
identifiers from the Princeton WordNet (PWN) [3] lexical database. PWN groups
words that share the same meaning in synonym sets (synsets). While the
identifier for each synonym set (the synset offset [14]) changes between each version
of the database, each individual word sense has a stable identifier (the sense
key), which does not change across different PWN versions. So, according to the
WordNet manual, “A sense key is the best way to represent a sense in semantic
tagging or other systems that refer to WordNet senses” [13].</p>
        <p>Since WordNet 1.5SC (1995), sense keys are unique: each word sense is a
member of one and only one synonym set, so each sense key maps to only one
synset offset in a given WordNet version. Additionally, each synonym set contains
at most one sense of any given word, i.e. each synset offset
corresponds to only one sense key per word.</p>
        <p><bold>Mappings and Updates</bold> However, foreign language wordnets have mostly been mapped to PWN through
ever-changing synset offsets, and are thus bound to one particular version of PWN,
which hinders interoperability between wordnets bound to different versions.</p>
        <p>Daudé et al. [1] produced a complete set of mappings between all PWN
versions that achieve almost perfect recall by relaxing precision, but did
not use the sense keys as a mapping criterion. Also, updating foreign language
wordnets to a newer version of PWN requires additional lexicographic efforts,
because the changes (splits, merges, deletions) in the PWN synsets do not always
correspond to the composition of the foreign language synonym sets.</p>
        <p>So, in order to improve the precision of the mappings when updating between
PWN versions, foreign language lexicographers need an accurate picture of the
changes that occurred between these versions. But previous analyses have been
limited to one PWN source and target pair: WN 1.5-1.6 [1], WN 1.6-3.0 [4], WN
3.0-3.1 [11].</p>
        <p><bold>The Stability of WordNet Identifiers</bold>
The present study aims to investigate the stability of the two essential entities of
the PWN databases (the word senses and the synonym sets), by tracking their
respective identifiers (the sense keys and the synset offsets) across all modern
versions, ranging from WordNet 1.5 to the latest WordNet 3.1.1 for SQL (version
name suggested by Randee Tengi from the PWN team).</p>
        <p>Since the sense keys are unique and persistent, they make it possible to observe their
groupings in synonym sets across PWN versions, and to trace how these synsets
evolve in the database over time. Even though synset offsets change between
versions, we can follow the sense keys of their members, obtain an exact
record of all the splits, merges, additions and deletions that occurred between
PWN versions, and thus estimate the lexicographic effort needed in order to
achieve linguistically satisfying mappings.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Methods</title>
      <sec id="sec-3-1">
        <title>The Sense Key Index</title>
        <p>The sole input to our analysis is the ski-pwn-flat.tab file from the Sense Key
Index (SKI) [7], built from the index.sense files included in every PWN version
since 1.5. In this form, the SKI is a complete table of tab-separated quadruples
(sense key, WordNet version, part of speech, synset offset), linking every sense
key to its synset offset in all PWN versions between 1.5 and 3.1.1.</p>
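        <p>As a minimal sketch, the SKI table can be loaded as follows; the function name is ours, and the exact column layout of ski-pwn-flat.tab is an assumption based on the quadruple order listed above:</p>
        <preformat>
```python
import csv
from collections import defaultdict

def load_ski(path="ski-pwn-flat.tab"):
    """Load the SKI table into {version: {sense_key: (pos, synset_offset)}}.

    Assumes one tab-separated quadruple per line, in the order
    (sense key, WordNet version, part of speech, synset offset)."""
    index = defaultdict(dict)
    with open(path, newline="", encoding="utf-8") as f:
        for sense_key, version, pos, offset in csv.reader(f, delimiter="\t"):
            index[version][sense_key] = (pos, offset)
    return index
```
        </preformat>
        <p>The per-version dictionaries returned here are the data shape assumed by the later sketches in this section.</p>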
        <p>The SKI supports a simple mapping inference rule, stating that whenever the
same sense key is present in both PWN versions v1 and v2, then a bidirectional
mapping link exists between the respective synsets of this key, s1 and s2:</p>
        <p>Rule 1: Sense Key Identity</p>
        <p>WN<sub>v1</sub>: Key<sub>k</sub> ∈ Synset<sub>s1</sub>, and WN<sub>v2</sub>: Key<sub>k</sub> ∈ Synset<sub>s2</sub> ⟹ Map: WN<sub>v1</sub>:s1 ↔ WN<sub>v2</sub>:s2 (1)</p>
        <p>This inference is always valid for identical sense keys, so mappings that only
use this rule produce no false positives, and thus have 100% precision.</p>
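        <p>Assuming per-version dictionaries from sense keys to synset ids (our own data shape, not part of the SKI release), Rule 1 reduces to a set intersection; a minimal sketch:</p>
        <preformat>
```python
def rule1_links(v1, v2):
    """Rule 1 (Sense Key Identity): every sense key present in both
    versions yields a bidirectional link between its two synsets.
    v1, v2: {sense_key: synset_id} for the source and target version."""
    return {(v1[k], v2[k]) for k in set(v1).intersection(v2)}
```
        </preformat>
        <p>Because every link is justified by an identical sense key, a mapping built this way has 100% precision by construction; its recall is the share of source keys that persist.</p>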
        <p><bold>Analysis</bold></p>
        <p><bold>The sense keys</bold> After collapsing the part of speech and synset offset fields
from the SKI database file into the 9-digit synset id format used in WNprolog
[12], we applied the built-in xtabs cross-tabulation function in the R statistical
environment [9], to obtain a table containing all the PWN versions as columns,
all the sense keys as rows, with the synset id corresponding to each sense key
and each PWN version in the cells, and 0 when the sense key was absent from
the corresponding PWN version.</p>
        <p>For each pair of consecutive PWN versions (see Table 1), we count the number
of sense keys present in either the source version (WNsource) or the target
version (WNtarget), or both. Most sense keys persist in both versions, and their
percentage expresses the recall of mappings that use only Rule 1. Sense keys that
only appear in the source have been removed in the target, and those that only
appear in the target have been added to the source.</p>
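        <p>The counts behind Table 1 can be sketched as follows, again assuming {sense_key: synset_id} dictionaries for the two versions:</p>
        <preformat>
```python
def sense_key_stats(source, target):
    """Persistence counts for two {sense_key: synset_id} dictionaries.
    Returns (persistent, removed, added, recall_pct), where recall_pct is
    the percentage of source keys that persist, i.e. the recall of
    mappings that use only Rule 1."""
    persistent = set(source).intersection(target)
    removed = set(source).difference(target)
    added = set(target).difference(source)
    recall_pct = 100.0 * len(persistent) / len(source)
    return len(persistent), len(removed), len(added), recall_pct
```
        </preformat>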
        <p>The persistent and removed sense keys add up to Total<sub>source</sub>, so we calculate
their ratios as percentages of Total<sub>source</sub>, which add up to 100. The persistent
and added sense keys add up to Total<sub>target</sub>, but their percentages do not add
up to 100, because they are ratios of different totals. Both totals are identical to
the Word-Sense Pairs reported by the WordNet team [15].</p>
        <p><bold>Persistent, Added and Removed Synonym Sets</bold> We analyse the evolution
of the synonym sets, by considering whether their corresponding sense keys are
present in either or both of the source and target PWN versions (see Table 2).</p>
        <p>The source synset offsets of persistent sense keys have at least one translation
in the target, and are counted as persistent synsets. Source synset offsets that
do not have a sense key present in the target correspond to removed synsets,
while target synsets that do not have a sense key that was present in the source,
have been added in the PWN update.</p>
        <p>These figures and their percentages are calculated as for Table 1: the
persistent and removed synsets add up to Total<sub>source</sub>, and their percentages add
up to 100. The synset totals are identical to those from each corresponding WN
Stats [15] manual page. But, because of splits and merges, the number of
persistent synsets in the source (i.e. the figure we use here) is not identical to the
number in the target, which, together with the number of added synsets, would
add up to Total<sub>target</sub>.</p>
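        <p>A sketch of the corresponding synset-level counts of Table 2, under the same assumed data shape; note that, as in the text, persistent synsets are counted on the source side:</p>
        <preformat>
```python
def synset_stats(source, target):
    """Synset persistence for two {sense_key: synset_id} dictionaries.
    A source synset persists when at least one of its sense keys is
    still present in the target version."""
    shared = set(source).intersection(target)
    persistent_src = {source[k] for k in shared}  # counted in the source
    removed = set(source.values()).difference(persistent_src)
    added = set(target.values()).difference({target[k] for k in shared})
    return len(persistent_src), len(removed), len(added)
```
        </preformat>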
        <p><bold>Split and Merged Synsets</bold> The synonym sets counted as persistent here
satisfy a minimal condition of stability, because they have at least one sense key
present in both PWN versions. Extending the previous Rule (1) to synonyms
increases recall, by mapping removed sense keys to the target synset of
their synonyms:</p>
        <p>Rule 2: Persistent Synonymy</p>
        <p>WN<sub>v1</sub>: Key<sub>k</sub> ∈ Synset<sub>s1</sub>, and Map: WN<sub>v1</sub>:s1 ↔ WN<sub>v2</sub>:s2 ⟹ WN<sub>v2</sub>: Key<sub>k</sub> ∈ Synset<sub>s2</sub> (2)</p>
        <p>Rule 2 applies a mapping link established by Rule 1 to a sense key k from s1,
to predict that k belongs to s2 in PWN version v2. But Rule 2 produces false positives
when s1 was split into different target synsets, where Rule 1 only holds for some
synonyms of k, but not for k itself.</p>
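        <p>A sketch of Rule 2 under the same assumed dictionary shape; the result can contain several candidate target synsets when the source synset was split, which is exactly where the false positives arise:</p>
        <preformat>
```python
def rule2_targets(key, source, target):
    """Rule 2 (Persistent Synonymy): map a sense key that is absent from
    the target through its source synonyms that persist.  Returns the
    set of candidate target synsets for `key`."""
    s1 = source[key]
    synonyms = (k for k, s in source.items() if s == s1 and k != key)
    return {target[k] for k in synonyms if k in target}
```
        </preformat>
        <p>With simplified keys standing in for the paper's froward/headstrong example, a removed key is recovered through its persistent synonym.</p>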
        <p>Studying the evolution of the sense keys allows us to detect all splits and merges,
and to assess their frequency and complexity, i.e. the maximal number of
synonym sets involved in one split or merge operation (see Table 3). This lets us
precisely identify and count the maximal number of false positives that Rule
2 can produce. By contrast, other heuristics like gloss similarity [1] are more
uncertain, and therefore not considered in this study.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <sec id="sec-4-1">
        <title>The Persistence of Word Senses</title>
        <p>Between consecutive PWN versions, the percentage of
persistent keys was generally above 99. But before version 1.6, the persistence
was a little lower, with approx. 3% removals between versions. For long-distance
updates, the lost sense keys accumulate: in total 18160 sense keys have been
removed since PWN 1.5, so the ratio of keys from PWN 1.5 that persist in the
latest PWN 3.1.1 drops to 89.2%. Most often, the number of additions has by
far exceeded the number of deletions, the only exception being the latest WN 3.1.1 update,
which mostly consisted of removals.</p>
        <p><bold>The Persistence of Synonym Sets</bold></p>
        <p>This result should actually be expected, considering that removed word
senses can still be mapped to target synonym sets through their synonyms. For
example, although the adjective sense key for “froward” disappeared between
WN 3.1 and 3.1.1 because the orthography of the lemma was corrected to
“forward”, it is still mapped through synonyms like “headstrong”. So mappings that
link synset offsets have a higher recall than those that link sense keys, because
they cover whole sets of words, and thus avoid some of the losses incurred from
the removal of individual sense keys. However, when synsets are split, mapping
each key to all its synonyms causes a loss of precision, which we can quantify
through a more precise analysis of the splits.</p>
        <p>In a mapping with unique pairs of (source, target) synset offsets, split synsets
are those appearing more than once in the source column, while merged synsets
are those appearing more than once in the target. The number of times that these
synsets appear is a measure of the complexity of the split or merge operation.
We indicate this size with a subscript, so that split<sub>2</sub> and split<sub>3</sub> are the numbers
of synsets that were split into respectively two or three different target synsets.
Similarly, merged<sub>2</sub> and merged<sub>3</sub> are the numbers of merges from two or three
different source synsets. Some synonym sets are both split and merged, and we
indicate their frequency as split&amp;merged.</p>
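        <p>The split/merge census described above can be sketched as follows, with pairs being the unique (source, target) synset pairs of the mapping:</p>
        <preformat>
```python
from collections import Counter

def split_merge_sizes(pairs):
    """Count n-way splits and merges in a set of unique (source, target)
    synset pairs: a source offset occurring n > 1 times is an n-way
    split, a target offset occurring n > 1 times an n-way merge.
    Returns two Counters of the form {size: number of operations}."""
    src = Counter(s for s, _ in pairs)
    tgt = Counter(t for _, t in pairs)
    splits = Counter(n for n in src.values() if n > 1)
    merges = Counter(n for n in tgt.values() if n > 1)
    return splits, merges
```
        </preformat>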
        <p>After PWN version 1.5SC, split<sub>2</sub> and split<sub>3</sub> add up to the total number of
splits. Similarly, merged<sub>2</sub> and merged<sub>3</sub> add up to the total number of merges.
Thus, between two consecutive WordNet versions after 1.5SC, no source synset
was split into more than three target synsets, and no target synset was merged
from more than three source synsets. Only in the mapping between WordNet
1.5 and 1.5SC does the total number of splits include a very small number of four-
and five-way splits.</p>
        <p>The number and size of the splits and merges was generally low, and there
were always more splits than merges. Almost all splits and merges involved only
two synsets, and operations involving three synsets were very rare. Between
non-consecutive versions, no merge involved more than three synsets. After
WordNet 1.5SC, the splits were also limited to two or three synsets.</p>
        <p>Synsets that were split and merged at the same time most often resulted from
the migration of a single sense key to another synset. The following example
from PWN 2.1 displays an addition (medusoid), a deletion (medusa#2), a split
(jellyfish), and a merge (medusan). The deletion of medusa#2 is implied by the
fact that there is already a sense of medusa in the target synset.</p>
        <p>The following example shows that the adverb observably migrated to its
antonym set, during the update from WordNet 2.0 to 2.1. In this case, applying
the mapping Rule 2 to its source synonyms imperceptibly and unnoticeably would
aggravate the confusion between synonyms and antonyms, instead of resolving
it. To avoid such errors, it is crucial to review all the splits manually.</p>
        <preformat>
Sense Key (split&amp;merged)   WN2.0      WN2.1
imperceptibly%4:02:00::      400369180  400367415
unnoticeably%4:02:00::       400369180  400367415
observably%4:02:00::
noticeably%4:02:00::
perceptibly%4:02:00::
        </preformat>
        <p>By simply following the sense keys between WordNet versions, we saw that the
synonym sets remained very stable throughout. There were never more than a
few hundred split or merged synonym sets between consecutive versions and,
after version 1.6, the complexity of these changes was often the lowest possible,
because each split or merge almost always involved only two synsets, and never
more than three.</p>
        <p>Lexicographers can use Tables 1, 2 and 3 to estimate the effort required to
update a resource between two PWN versions. For example, when updating to
PWN 3.0, a resource that uses PWN 1.6 sense keys and just applies Rule 1
would obtain 100% precision and 95.9% recall (Table 1), which can be improved
by a review of the 7068 removed sense keys, as well as the collapsed word senses
resulting from the 260 merged synsets (Table 3). The synset-based mappings
have higher recall (97% in Table 2), which can be improved by reviewing the
same 260 merges, and the part of the 7068 removed sense keys that belongs to the
2958 removed synsets. The rest of these 7068 removed sense keys could be
false positives produced by Rule 2, and needs to be reviewed in order to increase
precision, in addition to the 559 splits from Table 3, which do not affect sense
keys.</p>
        <p>So these results confirm that “sense keys are the best way to represent a
sense” [13], but only by a small margin. Contrary to expectations, synset
identifiers provide a reasonable alternative, since the splits between most versions
are relatively few and simple. As a consequence, stable synset identifiers like the
Inter-Lingual Index (ILI) [10, 11] appear viable.</p>
        <p><bold>Practical Application</bold> For older projects that were originally mapped to PWN
1.5, like [2, 8], upgrading to PWN 3.1.1 requires reviewing the intersection of the
source data with the 1202 PWN splits reported in Table 3.</p>
        <p>On the other hand, updating the wordnets from MCR30-2016 [4] to PWN
3.1 is much easier, since only 33 splits need to be checked. One of these is the
following example from PWN, where ”Pluto” was moved from the Greek to the
Roman ”gods of the underworld”.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Discussion</title>
        <preformat>
Sense Key            WN3.0      ILI     WN3.1
aides%1:18:00::      109570298  i86957  109593427
aidoneus%1:18:00::   109570298  i86957  109593427
hades%1:18:00::      109570298  i86957  109593427
pluto%1:18:00::
        </preformat>
        <p>The ILI 3.1 mapping [5] provides correct identifiers at the synset level, but
cannot help in mapping local translations of Pluto to their adequate PWN 3.1
synset, so any local splits have to be resolved by local lexicographers.
Thus, the Spanish lexicographers need to consider whether Plutón#n#1 should
be moved to the same synset as orco#n#2.</p>
        <p><bold>Limitations</bold> The present study is limited to only two primary mapping
inference rules, based on sense key identity (1) and persistent synonymy (2).
Additional mapping links can also be inferred automatically from gloss similarity
and other relations, as in [1]. However, since these additional heuristics are more
uncertain, they should be studied separately, and applied at a later stage. We
find further support for this viewpoint in an analysis of the lower bounds for the
performance of the many-to-many mappings that result from applying only the
two more reliable rules (1) and (2).</p>
        <p><bold>Performance Analysis</bold> The true performance of these mappings lies somewhere above a lower bound
that can be calculated by finding the theoretical minimum of the number of
correct mapping predictions, and the maximal number of possible fallacies.</p>
        <p>As a reference, we use the performance of a hypothetical ideal mapping
which would map everything accurately, achieving 100% precision and
100% recall. In this ideal situation, there are no true negatives (tn = 0), so the
sense keys pertaining to the removed synsets from Table 2, which our less ideal
mapping cannot map, are false negatives (fn).</p>
        <p>Only mappings resulting from Rule 1 produce no false positives (fp),
while all additional mappings resulting from Rule 2 are potentially false. Thus,
only the persistent sense keys from Table 1 are the true positives (tp), while all
the rest of the mapping could be false positives. In this study, we verified that
fp+fn is equal to the number of SenseKeys<sub>removed</sub>.</p>
        <p>Rule 2 produces two kinds of false positives. When synsets are split, a simple
one-to-many mapping from a source synset to all its target synsets asserts
a persistent synonymy relation, where all the words that were synonyms in the
source remain synonyms in the target. This may hold for some words, but is not
true for all, and can introduce dangerous fallacies, as we saw with the migration
of the adverb “observably” to its antonym synset. Hence, all the additional
mapping links resulting from split synsets may in theory be false positives (fp).
Likewise, we also consider as potentially false positives all the removed sense
keys that are mapped through their synonyms. However, since these do not
necessarily correspond to removals in foreign language wordnets, we may expect
the number of fp to be strictly lower, in practical use, than the value used here.</p>
        <p>So, in this set of values, those that represent correct mappings (tp and tn)
have been set to their theoretical minimum, while the values that concern
mapping errors (fp and fn) are set to their theoretical maximum. Thus, these values
allow us to use standard formulas to calculate lower bounds for the precision
and recall of the mappings.</p>
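        <p>With tp set to its theoretical minimum and fp, fn to their theoretical maxima, the standard formulas give the lower bounds; a minimal sketch:</p>
        <preformat>
```python
def lower_bounds(tp, fp, fn):
    """Lower bounds for mapping performance: tp = persistent sense keys
    (Rule 1 links only), fp = all extra Rule 2 links treated as
    potentially false, fn = sense keys of removed synsets."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```
        </preformat>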
        <p>These results show, as expected, that applying Rule 2 increases recall but
deteriorates precision. However, after version 1.6, both measures show excellent
performance.</p>
        <p>This analysis differs from human evaluations by considering the whole PWN
dataset, instead of smaller samples, so it provides exact metrics, while human
evaluations of limited samples add sample and evaluator biases that can yield
higher standard error, resulting in wider confidence intervals. Larger human
evaluations are needed, as well as deeper analyses. Both approaches have
complementary merits, and allow meaningful comparisons.
</p>
        <p><bold>Comparison with Other Mappings</bold> Daudé 2001 [1] produced a complete mapping from PWN 1.5 to 1.6, by
applying a relaxation labelling algorithm, with a set of constraints that involved
all semantic relations, and additional heuristics such as gloss similarity. They
evaluated the results manually, by applying different constraint sets on samples
drawn from the monosemous vs. ambiguous nouns, verbs, adjectives and adverbs
(4200 synsets in total), and found 98.8% precision and 98.9% recall for the nouns
overall, when using the complete constraint set. In all cases, recall was higher
than precision, which is consistent with our results concerning early WordNet
versions. However, our Table 5 shows higher precision than recall with the later
versions, which suggests that a combined approach could lead to improvements.</p>
        <p>HyperDic 2012 [6] used a mixed approach to produce a mapping from PWN
3.0 to 3.1, by combining an all-to-all sense key mapping with additional
heuristics meant to improve recall. The mapping is released under the CC-by 3.0
license, and we found that it strictly included all the results from the simple
all-to-all approach and, in particular, that the 33 split<sub>2</sub> synsets from Table 3 were
split in two. The additional heuristics added 80 synsets, so, if these additional
mappings are correct, the mixed approach could produce a modest improvement.</p>
        <p>CILI 2016 [11] used sense keys to find that 1796 synsets were modified between
WN 3.0 and 3.1. This number, as well as their other figures, differs slightly from
our findings, but displays similar variations. The authors mapped the changes
by hand to the ILI, using a one-to-one strategy, where each synset corresponds
to only one ILI identifier. But one-to-one mappings have difficulties with split
synsets, and particularly sense key migrations, as we saw previously with the
example of Pluto, so this approach needs to be complemented by a local review
of the split synsets.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>We followed the sense keys between WordNet versions, and obtained exact figures
for the number of added and removed word senses and synonym sets, as well as
the number and complexity of the split and merged synsets.</p>
      <p>We found that the splits and merges between versions were few and simple,
and that the synsets have remained very stable throughout. Even though their
identifiers are unstable, the synsets were always more persistent than the sense
keys, especially in the earlier versions. However, the sense keys have the
advantage of perfect precision, and have stayed almost as persistent as the synsets
after PWN 1.6. So both identifiers provide almost equivalent support for highly
accurate mappings between the later WordNet versions: sense keys are still
preferable, but synsets are close.</p>
      <p>Then, by relying on the solid baseline provided by the persistent sense keys
and synsets, the lexicographic work required to update synset-mapped resources
to newer versions of WordNet can essentially be reduced to a manual review of
relatively few splits and merges, and a moderate amount of removals.</p>
      <p>This study was only possible because PWN offers permanent sense keys, so
we may expect that other wordnets with permanent identifiers also enjoy more
accurate traceability, leading to enhanced interoperability.</p>
      <p>Acknowledgement. This paper benefited from the constructive remarks and
suggestions by the anonymous reviewers, and the lively discussion session at the
Challenges for Wordnets (CfWns) workshop at LDK 2017. Special thanks to the
sponsors, organisers and participants of CfWns 2017.</p>
      <p>3. Fellbaum, C.: WordNet, An Electronic Lexical Database. MIT Press, Cambridge (1998)</p>
      <p>4. Gonzalez-Agirre, A., Laparra, E., Rigau, G.: Multilingual Central Repository version 3.0: upgrading a very large lexical knowledge base. In: Proceedings of the Sixth International Global WordNet Conference (GWC2012). Matsue, Japan (2012)</p>
      <p>5. GWA: ili-map-pwn31.tab. In: Collaborative Inter-Lingual Index (CILI). GitHub, https://www.github.com/globalwordnet/ili, retrieved 2017/04/15 (2017)</p>
      <p>6. Kafe, E.: Wordnet mapping. In: HyperDic hyper-dictionary. MegaDoc, http://www.hyperdic.net/en/doc/mapping (2012)</p>
      <p>7. Kafe, E.: Sense Key Index (SKI). GitHub, https://www.github.com/ekaf/ski, retrieved 2017/04/25 (2017)</p>
      <p>8. Kahusk, N., Vider, K.: The revision history of Estonian Wordnet. In: McCrae, J.P., Bond, F., Buitelaar, P., Cimiano, P., Declerck, T., Gracia, J., Kernerman, I., Ponsoda, E.M., Ordan, N., Piasecki, M. (eds.) Proceedings of the LDK workshops: OntoLex, TIAD and Challenges for Wordnets (2017)</p>
      <p>9. R-team: R version 3.3.3. In: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/ (2017)</p>
      <p>10. Vossen, P.: EuroWordNet General Document. EWN (2002)</p>
      <p>11. Vossen, P., Bond, F., McCrae, J.P.: Toward a truly multilingual global wordnet grid. In: Proceedings of the Eighth International Global WordNet Conference (GWC2016). Bucharest, Romania (2016)</p>
      <p>12. WordNet-team: Prologdb(5wn) manual page. In: WordNet manual. Princeton University, http://wordnet.princeton.edu/man/prologdb.5WN.html (2010)</p>
      <p>13. WordNet-team: Senseidx(5wn) manual page. In: WordNet manual. Princeton University, http://wordnet.princeton.edu/wordnet/man/senseidx.5WN.html (2010)</p>
      <p>14. WordNet-team: Wndb(5wn) manual page. In: WordNet manual. Princeton University, http://wordnet.princeton.edu/wordnet/man/wndb.5WN.html (2010)</p>
      <p>15. WordNet-team: Wnstats(7wn) manual page. In: WordNet manual. Princeton University, http://wordnet.princeton.edu/wordnet/man/wnstats.7WN.html (2010)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Daudé, J., Padró, L., Rigau, G.: A complete wn1.5 to wn1.6 mapping. In: Proceedings of the NAACL Workshop 'WordNet and Other Lexical Resources: Applications, Extensions and Customizations' (NAACL'2001). Pittsburgh, PA, USA (2001)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Dziob, A., Piasecki, M., Maziarz, M., Wieczorek, J., Dobrowolska-Pigo, M.: Towards a revised system of verb wordnet relations for Polish. In: McCrae, J.P., Bond, F., Buitelaar, P., Cimiano, P., Declerck, T., Gracia, J., Kernerman, I., Ponsoda, E.M., Ordan, N., Piasecki, M. (eds.) Proceedings of the LDK workshops: OntoLex, TIAD and Challenges for Wordnets (2017)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>