<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>WordNet-based Semantic Similarity Measures for Process Model Matching</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Khurram Shahzad</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ifrah Pervaz</string-name>
          <email>ifrah@pucit.edu.pk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rao Muhammad Adeel Nawab</string-name>
          <email>adeelnawab@ciitlahore.edu.pk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, COMSATS Institute of Information Technology</institution>
          ,
<addr-line>Lahore</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Punjab University College of Information Technology, University of the Punjab</institution>
          ,
          <addr-line>Lahore</addr-line>
        </aff>
      </contrib-group>
      <fpage>33</fpage>
      <lpage>44</lpage>
      <abstract>
<p>Process Model Matching (PMM) refers to the automatic identification of corresponding activities between a pair of process models. Due to the wide applicability of PMM techniques, several semantic matching techniques have been proposed. However, these techniques focus on a few word-to-word (word-level) similarity measures, without giving due consideration to activity-level aggregation methods. This inadequate attention to the choice of activity-level methods limits the effectiveness of the matching techniques. Furthermore, there are several WordNet-based semantic similarity measures that have shown promising results for various text matching tasks, yet their effectiveness has never been evaluated in the context of PMM. To that end, in this paper we use five word-level semantic similarity measures and three sentence-level aggregation methods, and experimentally evaluate the effectiveness of their 15 combinations for PMM. The experiments are performed on the three widely used PMMC'15 datasets. From the results we conclude that a) Jiang similarity is more suitable than the widely used Lin similarity, and b) QAP is the most suitable sentence-level aggregation method.</p>
      </abstract>
      <kwd-group>
        <kwd>Business Process Models</kwd>
        <kwd>Process Model Matching</kwd>
        <kwd>Semantic Similarity</kwd>
        <kwd>WordNet-based similarity measures</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Business process models are the conceptual models that explicitly represent the
business operations of an enterprise. These models are widely accepted as a useful resource
for a variety of purposes ranging from representing requirements for software
development to configuring ERP systems. Process Model Matching (PMM) refers to
identifying the activities between two process models that represent similar or identical
functionality [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. A pair of activities that represent similar or identical functionality is called
a corresponding pair and the involved activities are called corresponding activities [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Figure 1 shows the example process models of two universities, University A and
University B, and correspondences between their activities. In the figure, each
correspondence between a pair of activities is marked by a shaded area.
      </p>
      <p>
An accurate identification of corresponding activities is of high significance for
the BPM community due to its widespread application areas, such as identifying clones
of process models, searching process models and harmonizing process models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. To
that end, a plethora of automatic techniques have been proposed [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Despite the
existence of several matching techniques, the need for enhancing the accuracy of matching
techniques has been widely pronounced during the recent years [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. For instance, a
comprehensive survey of the state of the art has made important revelations about
process model matching techniques [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. The two notable ones are: 1) 21 out of 35
techniques use the most basic syntactic measures, and 2) Lin similarity is the most
prominent semantic similarity measure.
      </p>
      <p>
        In this study we contend that there are several word-level semantic similarity
measures that have shown promising results for various text processing tasks [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
However, an empirical assessment of these competing measures has never been
conducted in the context of process model matching. Consequently, a well-grounded
recommendation about the choice of a semantic similarity measure is non-existent.
Furthermore, the presently used similarity measures merely focus on the word-to-word
semantic similarity, without paying adequate attention to the aggregation of word-level
similarity scores to an activity-level similarity score. This arbitrary selection of
sentence-level aggregation method, such as average score, may impede the effectiveness
of the matching techniques. To that end, in this study we evaluate the effectiveness of
five WordNet-based word-to-word semantic similarity measures and three
sentence-level methods, which extend word-level semantic similarity scores to an activity-level
similarity score. The effectiveness of all fifteen combinations of the five word-level
semantic similarity measures and three sentence-level methods is evaluated using the
three PMMC’15 datasets.
      </p>
<p>The rest of the paper is organized as follows: Section 2 provides an overview of the
word-level and sentence-level semantic similarity measures. Sections 3 and 4 present
the experimental setup and the results of the experiments, respectively. Section 5 provides
an overview of the related work. Finally, Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>WordNet-based Semantic Similarity Measures</title>
<p>The semantic similarity methods that we have applied for identifying corresponding
activities between process models are based on WordNet. WordNet is widely
acknowledged as a valuable source for finding the semantic similarity between two words, as it
organizes words based on lexical relations and then defines semantic relations between the
lexically related synsets. The lexical relationships are categorized into two subcategories,
synsets and antonyms, whereas the semantic relations are categorized into five
subcategories: hyponyms, meronyms, co-ordinate terms, entailment of a verb, and troponym
of a verb. Synsets are related with other synsets to form a hierarchical structure of
conceptual relations. In WordNet version 2.0, there are nine noun hierarchies that
include 80,000 concepts and 554 verb hierarchies that are made up of 13,500 concepts.
All the concepts are linked to a unique root, called entity.</p>
      <sec id="sec-2-1">
        <title>Word-level Semantic Similarity Measures</title>
        <p>
          We have selected five well established and widely used word-level semantic similarity
measures to compute the degree of semantic similarity between activity pairs. These
are: Resnik similarity [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], Jiang similarity [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], Leacock similarity [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], Lin similarity [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ],
and Wu similarity [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The methods have been previously used for lexical and textual
semantic relatedness [
          <xref ref-type="bibr" rid="ref11 ref13">11, 13</xref>
          ], word sense disambiguation [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], gene and sequence
matching [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], generating sentences from pictures [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], paraphrasing [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], sentiment
analysis [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] and topic modeling [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ]. A brief overview of these measures is as
follows:
Resnik Similarity. Resnik similarity relies on the is-a relationship in the WordNet
taxonomy, where each node represents a unique WordNet synset or concept. According to
this measure, two nodes are considered more similar the more information they share.
This shared information is specified by the Information Content (IC) of the node that
subsumes both nodes in the taxonomy. Formally, IC is calculated as follows:
IC(C) = −log P(C)
        </p>
<p>Let C1 and C2 be two concept nodes in the WordNet taxonomy, and let concept node C
be the lowest common subsumer of C1 and C2. Furthermore, let P(C) be the
probability of occurrence of the lowest common subsumer node C, where the probability of
node C is found by normalizing the occurrences of the concept with the total number of nouns
in the taxonomy.</p>
        <p>( ) =
( )

 ( ) = ∑ ∈ ( ) 
( )</p>
<p>Where W(C) is the set of words that occur in concept C, and each occurrence of
a word is considered an occurrence of all concepts containing that word. The Resnik
similarity is defined as the maximal IC over all concepts to which both words belong.
Formally, it is defined as follows:</p>
<p>sim_Resnik(C1, C2) = IC(LCS(C1, C2))</p>
<p>Where LCS is the lowest common subsumer of concept nodes C1 and C2, defined
as the common parent of the two with minimum node distance.
Jiang Similarity. This method uses corpus statistical information, i.e., Information
Content (IC), and node paths in the is-a taxonomy for computing similarity, where IC is
a measure of the occurrence of a concept in the corpus. Given a concept pair C1 and C2, this
measure computes the similarity between the words using the following equation:</p>
<p>sim_Jiang(C1, C2) = 1 / (IC(C1) + IC(C2) − 2 × IC(LCS(C1, C2)))</p>
<p>Where IC stands for information content, and LCS is the lowest common subsumer
of concepts C1 and C2, defined as the common parent of the two with minimum node
distance.</p>
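<p>To make the IC-based quantities above concrete, the following minimal Python sketch computes IC, Resnik similarity, and Jiang similarity over a small hypothetical is-a taxonomy. The taxonomy, the corpus counts, and all helper names are illustrative assumptions, not artifacts of the paper.</p>

```python
import math

# Hypothetical toy is-a taxonomy (child -> parent); "entity" is the unique root.
PARENT = {"object": "entity", "abstraction": "entity", "artifact": "object",
          "blueprint": "artifact", "concept": "object"}
# Hypothetical corpus counts per concept.
COUNTS = {"entity": 0, "object": 2, "abstraction": 6,
          "artifact": 3, "blueprint": 5, "concept": 4}

def ancestors(c):
    """Path [c, parent(c), ..., root]."""
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def freq(c):
    """Occurrences of c plus of every concept it subsumes."""
    return sum(n for d, n in COUNTS.items() if c in ancestors(d))

N = freq("entity")  # total occurrences under the root

def ic(c):
    """IC(C) = -log P(C), with P(C) = freq(C) / N."""
    return -math.log(freq(c) / N)

def lcs(c1, c2):
    """Lowest common subsumer: first shared node on the paths to the root."""
    anc2 = set(ancestors(c2))
    return next(a for a in ancestors(c1) if a in anc2)

def resnik(c1, c2):
    """Resnik similarity: IC of the lowest common subsumer."""
    return ic(lcs(c1, c2))

def jiang(c1, c2):
    """Jiang-Conrath: reciprocal of IC(C1) + IC(C2) - 2*IC(LCS(C1, C2))."""
    dist = ic(c1) + ic(c2) - 2 * ic(lcs(c1, c2))
    return 1 / dist if dist else float("inf")
```

<p>Identical concepts have zero Jiang-Conrath distance, so the similarity is conventionally treated as infinite (or capped) in that case.</p>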
<p>Leacock Similarity. This similarity measure is based on a node-based approach using
the is-a taxonomy in WordNet, where each node
represents a unique concept (or synset). The degree of
similarity between a word pair is computed by calculating the shortest path between the
two concepts (represented as nodes) and dividing it by twice the maximum depth of
the taxonomy. Formally, it is represented as follows:</p>
<p>sim_Leacock(C1, C2) = −log (shortest_path(C1, C2) / (2 × depth))</p>
<p>In the equation, C1 and C2 are the two concepts represented by nodes, shortest path
length is the minimum path length from node C1 to node C2 using node counting,
and depth is the maximum number of nodes from the root node to a leaf node.
Lin Similarity. According to this measure, the similarity between two concepts is
expressed as the similarity between the generic terms belonging to their concept classes,
rather than measuring the similarity between all terms. For instance, the word ‘design’ belongs
to the concept class ‘blueprint’, and the word ‘construct’ belongs to the concept class
named ‘concept’. According to Lin, the similarity between these words should be the
same as the similarity between the two synsets ‘blueprint’ and ‘concept’ to which these
words belong. Formally, if there is a word x1 ∈ C1 and a word x2 ∈ C2, the information
shared by the two words can be expressed by C, the most specific class that subsumes both.
The similarity is then computed as:</p>
<p>sim_Lin(x1, x2) = 2 × log P(C) / (log P(C1) + log P(C2))</p>
<p>Where C, the most specific class that subsumes concepts C1 and C2, is the common
parent of the two concepts with a minimum node distance, and log P(C), log P(C1), and
log P(C2) are the log likelihoods of the occurrence of concepts C, C1, and C2.
Wu Similarity. This similarity measure relies on the depths of both concept nodes and the
depth of their lowest common subsumer. The similarity between two concepts is
computed using the following equation.</p>
<p>sim_Wu(C1, C2) = 2 × depth(LCS) / (depth(C1) + depth(C2) + 2 × depth(LCS))</p>
        <p>Where LCS is the lowest common subsumer of concepts C1 and C2 defined as the
common parent of C1 and C2 with minimum node distance. Depth (C1) represents the
number of nodes from C1 to LCS node C, depth (C2) represents the number of nodes
from C2 to LCS node C and depth (LCS) represents the number of nodes from LCS
node C to root node.
</p>
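<p>The three remaining measures can be sketched in the same way. The toy taxonomy and the P(C) values below are assumed for illustration, and path lengths are counted in edges except where node counting is stated, which is a common implementation choice rather than something fixed by the paper.</p>

```python
import math

# Hypothetical toy is-a taxonomy (child -> parent); "entity" is the root.
PARENT = {"object": "entity", "abstraction": "entity", "artifact": "object",
          "blueprint": "artifact", "concept": "object"}
# Assumed occurrence probabilities P(C) for the Lin measure.
P = {"entity": 1.0, "object": 0.7, "abstraction": 0.3,
     "artifact": 0.4, "blueprint": 0.25, "concept": 0.2}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def lcs(c1, c2):
    anc2 = set(path_to_root(c2))
    return next(a for a in path_to_root(c1) if a in anc2)

MAX_DEPTH = max(len(path_to_root(c)) for c in P)  # taxonomy depth, in nodes

def leacock(c1, c2):
    """-log(shortest_path / (2 * depth)), path length by node counting."""
    a = lcs(c1, c2)
    nodes = path_to_root(c1).index(a) + path_to_root(c2).index(a) + 1
    return -math.log(nodes / (2 * MAX_DEPTH))

def lin(c1, c2):
    """2 * log P(LCS) / (log P(C1) + log P(C2))."""
    return 2 * math.log(P[lcs(c1, c2)]) / (math.log(P[c1]) + math.log(P[c2]))

def wu(c1, c2):
    """2*d(LCS) / (d(C1) + d(C2) + 2*d(LCS)), with d(C1), d(C2) measured
    up to the LCS and d(LCS) up to the root (counted in edges here)."""
    a = lcs(c1, c2)
    n1 = path_to_root(c1).index(a)
    n2 = path_to_root(c2).index(a)
    n3 = len(path_to_root(a)) - 1
    return 2 * n3 / (n1 + n2 + 2 * n3)
```

<p>On this toy taxonomy, Lin and Wu return scores in (0, 1] and reach 1 for identical concepts, while Leacock grows without a fixed upper bound as paths shorten.</p>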
      </sec>
      <sec id="sec-2-2">
        <title>Sentence-Level Similarity Methods</title>
        <p>
          The preceding section presented various WordNet-based semantic similarity measures
for computing word-level similarity. These measures compute similarity between a pair
of words, however, PMM refers to computing similarity between activity pairs.
Therefore, there is a need to combine the word-level measures with sentence-level methods,
where each label is considered as a sentence. For this purpose, we applied three methods
which extend word-level measures to sentence-level methods. These methods are
Greedy Pairing [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], Optimal Matching [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], and Quadratic Assignment Problem
(QAP) [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. A brief overview of each method is described below.
        </p>
<p>Greedy Pairing. Using this method, both sentences are first tokenized. After that, the
word-level semantic similarity measure is used to find, for each token in the first
sentence, the maximum semantic similarity with the tokens of the second sentence.
These maximum similarities are weighted using Inverse Document Frequency (IDF)
scores. The maximum similarity scores of all the tokens in the first sentence are
summed up, and the resulting score is normalized with the maximum sentence length. The same
steps are repeated to find the maximum mappings for each token of the second sentence
with the first sentence. The final similarity score for the sentence pair is obtained by
averaging the scores obtained in the two directions.</p>
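<p>A minimal sketch of this greedy, IDF-weighted aggregation follows; the word-level similarities and IDF values are hypothetical stand-ins for a WordNet-based measure and a real corpus. Normalizing by the summed IDF weights (rather than the raw sentence length) follows the usual weighted formulation; the plain length normalization is obtained by setting every IDF weight to 1.</p>

```python
# Hypothetical word-level similarities and IDF weights for illustration.
SIM = {("approve", "accept"): 0.9, ("application", "form"): 0.6,
       ("approve", "form"): 0.1, ("application", "accept"): 0.2}
IDF = {"approve": 2.0, "application": 1.5, "accept": 2.0, "form": 1.2}

def word_sim(a, b):
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def directional(s1, s2):
    """For each token of s1, take its best match in s2, weight by IDF,
    sum, and normalize by the total IDF mass of s1."""
    total = sum(IDF[t] * max(word_sim(t, u) for u in s2) for t in s1)
    return total / sum(IDF[t] for t in s1)

def greedy_pairing(label1, label2):
    """Average of the two directional scores."""
    s1, s2 = label1.split(), label2.split()
    return (directional(s1, s2) + directional(s2, s1)) / 2
```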
<p>Optimal Matching. This method is based on a combinatorial matching problem, where,
for a given weighted bipartite graph, the problem is to find a maximum matching of the graph.
A bipartite graph is a graph whose nodes can be divided into two disjoint sets. Using
this approach, the two sentences S1 and S2 are considered parts of a weighted bipartite
graph G = S1 ∪ S2, where the words in the sentences are represented as nodes of the graph and
the weight of the edge between two nodes corresponds to the similarity score between the
respective nodes. The task is to select node pairs in a matching M such that the overall
sum over all selected node pairs is maximum.</p>
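<p>The maximum-weight bipartite matching can be sketched by exhaustive search over assignments; real systems would use the Hungarian algorithm instead, and the similarity table plus the length-based normalization below are assumptions for illustration.</p>

```python
from itertools import permutations

# Hypothetical word-level similarities for illustration.
SIM = {("approve", "accept"): 0.9, ("application", "form"): 0.6,
       ("approve", "form"): 0.1, ("application", "accept"): 0.2}

def word_sim(a, b):
    return 1.0 if a == b else SIM.get((a, b), SIM.get((b, a), 0.0))

def optimal_matching(label1, label2):
    """Maximum-weight matching of the bipartite graph S1 u S2, found by
    brute force over word assignments; the score is normalized by the
    longer label length (an assumed normalization)."""
    s1, s2 = label1.split(), label2.split()
    if len(s1) > len(s2):
        s1, s2 = s2, s1
    best = max(sum(word_sim(a, b) for a, b in zip(s1, perm))
               for perm in permutations(s2, len(s1)))
    return best / len(s2)
```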
<p>Quadratic Assignment Problem (QAP). This approach finds an optimal assignment
of the words in the first sentence to the words in the second sentence using a word-level
similarity measure, and at the same time maximizes the similarity between the syntactic
dependencies of word pairs. The Koopmans-Beckmann formulation of the QAP is used. The goal is to
maximize the objective function QAP(F, D, B), where F and D capture the syntactic
dependencies between the words of the two sentences, respectively, and B captures the
word-to-word similarity across the two sentences.</p>
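<p>The Koopmans-Beckmann objective can be made concrete with a brute-force maximizer over assignments. F, D, and B are small hand-made matrices here; in practice they would come from dependency parses of the two labels and from a word-level similarity measure.</p>

```python
from itertools import permutations

def qap_max(F, D, B):
    """Maximize sum_ij F[i][j]*D[p(i)][p(j)] + sum_i B[i][p(i)] over
    assignments p of the words of sentence 1 to words of sentence 2.
    Exhaustive search, so only viable for very short labels."""
    n = len(F)
    best = float("-inf")
    for p in permutations(range(len(D)), n):
        linear = sum(B[i][p[i]] for i in range(n))
        quad = sum(F[i][j] * D[p[i]][p[j]]
                   for i in range(n) for j in range(n))
        best = max(best, quad + linear)
    return best

# Toy example: two 2-word labels.
F = [[0, 1], [1, 0]]          # dependencies within sentence 1
D = [[0, 1], [1, 0]]          # dependencies within sentence 2
B = [[0.9, 0.1], [0.2, 0.6]]  # word-to-word similarities across sentences
```

<p>In the toy example, the identity assignment preserves the single dependency in both sentences and picks the two strongest word pairs, so it maximizes the objective.</p>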
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Experimental Setup</title>
      <p>
        For the experiments we have used three well established datasets, developed by experts
and used in Process Model Matching Contest 2015 (PMMC’15). Since the competition,
the datasets are widely used for the evaluation of process model matching techniques
[
        <xref ref-type="bibr" rid="ref24">24</xref>
]. The datasets are named University Admissions (UA), Birth Registration (BR),
and Asset Management (AM). Below, we present a brief overview of the three
datasets. The UA dataset is composed of 9 process models about admission to nine
German universities, and 36 pairs of these process models. In addition, the dataset
includes gold standard correspondences between equivalent activities. The
specifications of the three datasets are given in Table 1.
      </p>
      <p>
        The BR dataset includes 9 process models, 36 pairs of these nine models and gold
standard correspondences. The models represent birth registration process of different
countries: Germany, Russia, South Africa, and the Netherlands. The collection includes
both 1:1 and 1: n correspondences. The AM dataset consists of 36 process model pairs
selected from 72 process models of the SAP reference model collection [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The
selected process models cover different aspects from the area of asset management.
The following presents the analysis of the results, which are obtained by applying the 15
combinations of five word-level semantic similarity measures and three sentence-level
methods.
      </p>
<p>The main goal of the experiments is to classify each activity pair as ‘equivalent’ or
‘non-equivalent’. Since the semantic similarity methods used in this study return a numeric
score between 0 and 1, we have converted these numeric scores into binary 0
(non-equivalent) and 1 (equivalent) at nine different thresholds from 0.1 to 0.9 with a step of
0.1. However, due to space limitations we report results at the cut-off threshold 0.7, because
multiple matching systems participating in the latest episode of the Process Model
Matching Contest 2015 achieved promising results at this threshold. This threshold
value means that each activity pair for which the similarity score is greater than or
equal to 0.7 is marked as equivalent (1), and non-equivalent (0) otherwise. The F1
scores at 0.7 threshold are presented in Table 2 and the remaining results are made
available for download. For each dataset, the word-level measure that obtained the
highest F1 score for a sentence-level method is highlighted in bold; therefore, each
sentence-level method has at least one bold value.</p>
<p>[Table 2. F1 scores at the 0.7 threshold for the five word-level measures (Resnik,
Jiang, Leacock, Lin, and Wu) under each sentence-level method (Greedy Pairing,
Optimal Pairing, and QAP); the numeric scores are not reproduced here.]</p>
<p>Furthermore, we have underlined the
word-level measure that obtained the highest F1 score for a dataset, independent of any
sentence-level aggregation method.</p>
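<p>The thresholding and evaluation procedure described above amounts to the following sketch; the pair identifiers, scores, and gold standard are made up for illustration.</p>

```python
def f1_at_threshold(scores, gold, threshold=0.7):
    """Binarize similarity scores at `threshold` and compute F1 against the
    gold-standard set of corresponding activity pairs."""
    predicted = {pair for pair, s in scores.items() if s >= threshold}
    tp = len(predicted & gold)
    if not tp:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical scores for three activity pairs and a toy gold standard.
scores = {("a1", "b1"): 0.80, ("a2", "b2"): 0.65, ("a3", "b3"): 0.90}
gold = {("a1", "b1"), ("a2", "b2")}
```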
      <p>A brief analysis of the results is as follows.</p>
      <p>Difficulty level of datasets. From the table it can be observed that there is a clear
difference between the performance of all techniques for the three datasets. That is, all
combinations of techniques obtained high F1 scores for UA dataset, moderate F1 scores for
BR dataset, and low F1 scores for AM dataset. These results indicate that the
corresponding activity pairs of the AM dataset are harder to detect than those of the UA and BR
datasets. Furthermore, the corresponding activities in the BR dataset are
harder to detect than those of the UA dataset.</p>
      <p>Performance variation across word-level measures. From Table 2 it can be observed
that in the case of Greedy pairing sentence-level method, Jiang similarity obtained the
highest F1 scores for all the three datasets (0.516, 0.534 and 0.464). Furthermore, for
Optimal pairing sentence-level method, Jiang similarity obtained the highest F1 scores
for two datasets, BR and AM datasets (0.534 and 0.464) whereas, Lin similarity
obtained the highest F1 score for one dataset, UA dataset. Similarly, for QAP pairing,
Jiang similarity obtained the highest F1 scores for two datasets, BR and AM datasets,
whereas Lin similarity obtained the highest F1 score for one dataset, UA dataset. Based
on these observations and the previous observation about the hardness of the three
datasets (AM &gt; BR &gt; UA) we conclude, Jiang similarity is the most suitable word-level
semantic similarity measure.</p>
      <p>Performance variation across sentence-level methods. From Table 2 it can be observed
that for the UA dataset, among the five word-level measures, Lin similarity obtained
the highest F1 score with Optimal and QAP pairing (i.e. 0.520 for Optimal and 0.525
for QAP pairing). However, in the case of Greedy pairing sentence-level method, both
Resnik and Jiang measures obtained a higher F1 score than Lin similarity. Similarly, for
the BR dataset, both Lin similarity and Jiang similarity obtained the highest F1 score
with QAP pairing. However, the F1 scores obtained by Jiang similarity with Optimal
and Greedy pairing are higher than those of the Lin similarity. These changes in the
best-performing similarity measure due to the change in sentence-level method highlight
the significance of sentence-level methods. Hence, we conclude that adequate attention
should be given to the choice of the sentence-level methods. Another key observation
regarding the sentence-level methods is that, for each dataset, the highest F1 score
obtained by a word-level measure involved QAP pairing. This indicates that QAP pairing
is a more suitable sentence-level aggregation method than the Optimal and Greedy
pairing methods.</p>
    </sec>
    <sec id="sec-4">
      <title>Related Work</title>
      <p>
        A plethora of process model matching techniques have been developed which can be
broadly divided into two types, syntactic and semantic [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Syntactic techniques merely
rely on the similarity or distance between the labels without taking into consideration
the meaning of the words. In contrast, semantic techniques rely on the semantics of
words for computing similarity.
      </p>
      <p>
        A recent survey of PMM has identified a set of semantic matching techniques that
are used in literature [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. A summary of these techniques is presented in Table 3. In the
table, Wu, Leacock, and Jiang represent the Wu &amp; Palmer, Leacock &amp; Chodorow, and Jiang
&amp; Conrath word-level semantic similarity measures. The ‘+’ sign in the table indicates
that the study uses the respective measure, whereas the ‘–’ sign indicates that the
measure is not used in the paper. There are occasions in which the use of synonyms is
implicit; these are marked as ‘+/’.
The techniques surveyed in Table 3 are those of Pittke et al. [
<xref ref-type="bibr" rid="ref31">31</xref>
], Klinkmuller et al. [
<xref ref-type="bibr" rid="ref32">32</xref>
], [
<xref ref-type="bibr" rid="ref33">33</xref>
], [
<xref ref-type="bibr" rid="ref34">34</xref>
], Jin et al. [
<xref ref-type="bibr" rid="ref35">35</xref>
], Cayoglu et al. [
<xref ref-type="bibr" rid="ref36">36</xref>
], Belhoul et al. [
<xref ref-type="bibr" rid="ref37">37</xref>
], Niemann et al. [
<xref ref-type="bibr" rid="ref38">38</xref>
], Leopold et al. [
<xref ref-type="bibr" rid="ref39">39</xref>
], Humm et al. [
<xref ref-type="bibr" rid="ref40">40</xref>
], Dijkman et al. [
<xref ref-type="bibr" rid="ref41">41</xref>
], Dumas et al. [
<xref ref-type="bibr" rid="ref42">42</xref>
], Dongen et al. [
<xref ref-type="bibr" rid="ref43">43</xref>
], Agnes et al. [
<xref ref-type="bibr" rid="ref44">44</xref>
], Ehrig et al. [45], Corrales et al. [46], and Schoknecht et al. [
<xref ref-type="bibr" rid="ref8">8</xref>
].
      </p>
      <p>From the table it can be observed that most of the studies propose to use synonyms
for semantic similarity. However, these studies do not explicitly present the measures
used for computing similarity. Also, it can be seen from the table that, Lesk, Wu and
Lin are the other similarity measures used in literature. Furthermore, it can be observed
that Leacock, Resnik and Jiang measures have never been used for identifying
corresponding activities between a pair of process models. Additionally, only word-level
semantic similarity measures are considered for computing similarities between activity
pairs and these word-level similarity measures have not been extended to compute
similarity at activity-level.
</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
<p>Several semantic Process Model Matching (PMM) techniques have been proposed;
however, these techniques merely focus on word-to-word semantic similarity,
without due consideration to the aggregation of word-level similarity into sentence-level (or
activity-level) similarity. Furthermore, the existing studies have only used three semantic
similarity measures and ignored the other semantic similarity techniques that have
shown promising results for various text processing tasks. To that end, in this paper we
have used five word-level semantic similarity measures and three sentence-level
aggregation techniques to experimentally evaluate the effectiveness of all 15
combinations in the context of PMM. For the experiments we have used the well-established
datasets from PMMC’15. The results reveal the following: a) the hardness of the three
datasets differs, with the AM dataset being the hardest, the BR dataset being
moderate, and the UA dataset being the easiest; b) Jiang similarity is the most suitable
word-level measure; and c) QAP pairing is the most effective sentence-level method. In the
future, we plan to compare the performance of these semantic measures with all the
existing matching techniques.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Kuss</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al:
          <article-title>Probabilistic Evaluation of Process Model Matching Techniques</article-title>
          .
          <source>In: Proceedings of the 35th International Conference on Conceptual Modeling</source>
          , pp.
          <fpage>279</fpage>
          -
          <lpage>292</lpage>
          , Gifu, Japan, (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Cayoglu</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dijkman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dumas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.
          <source>: Report: The Process Model Matching Contest</source>
          <year>2013</year>
          .
          <source>In: Proceedings of the Business Process Management Workshops</source>
          , Springer LNBIP, pp.
          <fpage>442</fpage>
          -
          <lpage>463</lpage>
          , Beijing, China, (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Meilicke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al:
          <article-title>Overcoming Individual Process Model Matcher Weaknesses Using Ensemble Matching</article-title>
          . DSS,
          <volume>100</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>15</fpage>
          -
          <lpage>26</lpage>
          , (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kuss</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al:
          <article-title>Ranking-based evaluation of process model matching</article-title>
          .
          <source>In: Proceedings of the International Conference on Cooperative Information Systems</source>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>305</lpage>
          , Rhodes, Greece, (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Jabeen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reijers</surname>
            ,
            <given-names>H. A.</given-names>
          </string-name>
          :
          <article-title>How to make process model matching work better? An analysis of current similarity measures</article-title>
          .
          <source>In: 20th International Conference on BIS</source>
          , pp.
          <fpage>181</fpage>
          -
          <lpage>193</lpage>
          . Springer LNBIP, Poznan, Poland, (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Leacock</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chodorow</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Combining local context and WordNet similarity for word sense identification</article-title>
          . In: WordNet:
          <article-title>An electronic lexical database</article-title>
          , MIT Press, (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Resnik</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Using information content to evaluate semantic similarity in a taxonomy</article-title>
          .
          <source>In: Proceedings of the 14th International Joint Conference on Artificial Intelligence</source>
          , pp.
          <fpage>448</fpage>
          -
          <lpage>453</lpage>
          . Montreal, Canada, (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Schoknecht</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thaler</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fettke</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oberweis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laue</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Similarity of Business Process models - A State-of-the-Art Analysis</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>50</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          , (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>An information-theoretic definition of similarity</article-title>
          .
          <source>In: Proceedings of the 15th International Conference on Machine Learning</source>
          , pp.
          <fpage>296</fpage>
          -
          <lpage>304</lpage>
          , (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palmer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Verbs semantics and lexical selection</article-title>
          .
          <source>In: Proceedings of the 32nd Annual Meeting on ACL</source>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>138</lpage>
          ,
          Las Cruces, New Mexico, (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Budanitsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirst</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Evaluating WordNet-based measures of lexical semantic relatedness</article-title>
          .
          <source>Computational Linguistics</source>
          ,
          <volume>32</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>13</fpage>
          -
          <lpage>47</lpage>
          , (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Navigli</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Word sense disambiguation: A survey</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>41</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          , (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Androutsopoulos</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malakasiotis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A survey of paraphrasing and textual entailment methods</article-title>
          .
          <source>Journal of AI Research</source>
          ,
          <volume>38</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>135</fpage>
          -
          <lpage>187</lpage>
          , (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gabrilovich</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markovitch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Computing semantic relatedness using Wikipedia-based explicit semantic analysis</article-title>
          .
          <source>In: Proceedings of the 20th International Joint Conference on Artificial Intelligence</source>
          , pp.
          <fpage>1606</fpage>
          -
          <lpage>1611</lpage>
          , Hyderabad, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Lord</surname>
            ,
            <given-names>P.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stevens</surname>
            ,
            <given-names>R.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brass</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goble</surname>
            ,
            <given-names>C.A.</given-names>
          </string-name>
          :
          <article-title>Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation</article-title>
          .
          <source>Bioinformatics</source>
          ,
          <volume>19</volume>
          (
          <issue>10</issue>
          ), pp.
          <fpage>1275</fpage>
          -
          <lpage>1283</lpage>
          , (
          <year>2003</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Farhadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hejrati</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>Every picture tells a story: Generating sentences from images</article-title>
          .
          <source>In: Proceedings of the European Conference on Computer Vision</source>
          , Springer LNCS, pp.
          <fpage>15</fpage>
          -
          <lpage>29</lpage>
          , Crete, Greece, (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Androutsopoulos</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malakasiotis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A survey of paraphrasing and textual entailment methods</article-title>
          .
          <source>Journal of AI Research</source>
          ,
          <volume>38</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>135</fpage>
          -
          <lpage>187</lpage>
          , (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Sentiment analysis and opinion mining</article-title>
          . Morgan &amp; Claypool, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>McCallum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corrada-Emmanuel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Topic and role discovery in social networks with experiments on enron and academic email</article-title>
          .
          <source>Journal of AI Research</source>
          ,
          <volume>30</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>249</fpage>
          -
          <lpage>272</lpage>
          , (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boyd-Graber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerrish</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blei</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          :
          <article-title>Reading tea leaves: How humans interpret topic models</article-title>
          .
          <source>In: Proceedings of the 22nd International Conference on NIPS</source>
          , pp.
          <fpage>288</fpage>
          -
          <lpage>296</lpage>
          , BC, Canada, (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Corley</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mihalcea</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Measuring the semantic similarity of texts</article-title>
          .
          <source>In: Proceedings of the ACL Workshop on empirical modeling of semantic equivalence and entailment</source>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>18</lpage>
          , Michigan, USA, (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Rus</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lintean</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics</article-title>
          .
          <source>In: Proceedings of the Seventh Workshop on Building Educational Applications Using NLP</source>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>162</lpage>
          , Montreal, Canada, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Lintean</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rus</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>An Optimal Quadratic Approach to Monolingual Paraphrase Alignment</article-title>
          .
          <source>In: Proceedings of the 20th Nordic Conference of Computational Linguistics</source>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>134</lpage>
          , Vilnius, Lithuania, (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Antunes</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bakhshandeh</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borbinha</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cardoso</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dadashnia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di Francescomarino</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Hake</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>The process model matching contest 2015</article-title>
          . <source>GI-Edition: Lecture Notes in Informatics</source>,
          <volume>248</volume>
          ,
          <fpage>127</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Sebu</surname>
            ,
            <given-names>M. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ciocârlie</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Similarity of business process models in a modular design</article-title>
          .
          <source>In: Proceedings of the IEEE 11th Symposium on Applied Computational Intelligence and Informatics</source>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>36</lpage>
          , Timisoara, Romania, (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Sonntag</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hake</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fettke</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Loos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>An approach for semantic business process model matching using supervised learning</article-title>
          .
          <source>In: Proceedings of the European Conference on Information System</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          , Istanbul Turkey, (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Sebu</surname>
            ,
            <given-names>M. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ciocârlie</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Merging business processes for a common workflow in an organizational collaborative scenario</article-title>
          .
          <source>In: Proceedings of the 19th International Conference on STCC</source>
          , pp.
          <fpage>134</fpage>
          -
          <lpage>139</lpage>
          ,
          Cheile Gradistei, Romania, (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Makni</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haddar</surname>
            ,
            <given-names>N.Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ben-Abdallah</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Business process model matching: An approach based on semantics and structure</article-title>
          .
          <source>In: Proceedings of the 12th International Conference on e-Business and Telecommunications</source>
          , pp.
          <fpage>64</fpage>
          -
          <lpage>71</lpage>
          , (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Fengel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Semantic technologies for aligning heterogeneous business process models</article-title>
          .
          <source>Business Process Management Journal</source>
          ,
          <volume>20</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>549</fpage>
          -
          <lpage>570</lpage>
          , (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Pittke</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendling</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tamm</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Enabling reuse of process models through the detection of similar process parts</article-title>
          .
          <source>In: Proceedings of the BPM Workshops</source>
          , Springer LNBIP, pp.
          <fpage>586</fpage>
          -
          <lpage>597</lpage>
          , Tallinn, Estonia, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Klinkmüller</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al.:
          <article-title>Listen to me: Improving process model matching through user feedback</article-title>
          .
          <source>In: Proceedings of the International Conference on BPM</source>
          ,
          Springer LNCS
          , pp.
          <fpage>84</fpage>
          -
          <lpage>100</lpage>
          , Haifa, Israel, (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Klinkmüller</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weber</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendling</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.:
          <article-title>Increasing recall of process model matching by improved activity label matching</article-title>
          .
          <source>In: Proceedings of the International Conference on BPM</source>
          ,
          Springer LNCS
          , pp.
          <fpage>211</fpage>
          -
          <lpage>218</lpage>
          , Beijing, China (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>La Rosa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>ter Hofstede</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Efficient querying of large process model repositories</article-title>
          .
          <source>Computers in Industry</source>
          ,
          <volume>64</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>41</fpage>
          -
          <lpage>49</lpage>
          , (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Cayoglu</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oberweis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schoknecht</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullrich</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Triple-S: a matching approach for Petri nets on syntactic, semantic and structural level</article-title>
          .
          <source>In: Proceedings of PMMC'13</source>
          , co-located with BPM
          , Beijing, China (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Belhoul</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haddad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>String comparators based algorithms for process model matchmaking</article-title>
          .
          <source>In: Proceedings of the International Conference on Services Computing (SCC)</source>
          , pp.
          <fpage>649</fpage>
          -
          <lpage>656</lpage>
          , Honolulu, USA, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Niemann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siebenhaar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>Comparison and retrieval of process models using related cluster pairs</article-title>
          .
          <source>Computers in Industry</source>
          ,
          <volume>63</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>168</fpage>
          -
          <lpage>180</lpage>
          , (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niepert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weidlich</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>Probabilistic optimization of semantic process model matching</article-title>
          .
          <source>In: Proceedings of the International Conference on BPM</source>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>334</lpage>
          , Tallinn, Estonia, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Humm</surname>
            ,
            <given-names>B. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fengel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Semantics-based business process model similarity</article-title>
          .
          <source>In: Proceedings of the International Conference on Business Information Systems</source>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>47</lpage>
          , Vilnius, Lithuania, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Dijkman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dumas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>Similarity of business process models: Metrics and evaluation</article-title>
          .
          <source>Information Systems</source>
          ,
          <volume>36</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>498</fpage>
          -
          <lpage>516</lpage>
          . (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Dumas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Banuelos</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dijkman</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <article-title>Similarity search of business process models</article-title>
          .
          <source>IEEE Data Eng. Bull.</source>
          ,
          <volume>32</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>23</fpage>
          -
          <lpage>28</lpage>
          , (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Van Dongen</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dijkman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          et al.:
          <article-title>Measuring similarity between business process models</article-title>
          .
          <source>In: Proceedings of the CAiSE</source>
          , pp.
          <fpage>405</fpage>
          -
          <lpage>419</lpage>
          , Valencia, Spain, (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Koschmider</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oberweis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>How to detect semantic business process model variants?</article-title>
          <source>In: Proceedings of the ACM symposium on Applied computing</source>
          , pp.
          <fpage>1263</fpage>
          -
          <lpage>1264</lpage>
          , Seoul, Korea, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Ehrig</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koschmider</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>Measuring similarity between semantic business process models</article-title>
          .
          <source>In: Proceedings of the 4th Asia-Pacific conference on Conceptual modelling</source>
          , Springer LNCS, pp.
          <fpage>71</fpage>
          -
          <lpage>80</lpage>
          , Ballarat, Australia, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Corrales</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grigori</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          et al.:
          <article-title>BPEL processes matchmaking for service discovery</article-title>
          .
          <source>In: Proceedings of the CoopIS</source>
          , pp.
          <fpage>237</fpage>
          -
          <lpage>254</lpage>
          , Montpellier, France, (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>