<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Ontology Matching, October</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Omaima Fallatah</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ziqi Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Frank Hopfgartner</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Information School, The University of Sheffield</institution>
          ,
<addr-line>Regent Court, 211 Portobello, Sheffield S1 4DP</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Information Systems, Umm Al Qura University</institution>
          ,
          <addr-line>Mecca 24382</addr-line>
          ,
          <country country="SA">Saudi Arabia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universität Koblenz-Landau</institution>
          ,
          <addr-line>Mainz 55118</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>23</volume>
      <issue>2022</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>KGMatcher+ is a scalable and domain-independent matching system that matches the schema (classes) of large Knowledge Graphs by following a hybrid matching approach. KGMatcher+ is composed of an instance-based matcher, which uses only the annotated instances of knowledge graph classes to generate candidate class alignments, and a string-based matcher. This year is the second OAEI participation of KGMatcher+, formerly known as KGMatcher. Further improvements have been added to the matcher, particularly in terms of handling imbalanced class distribution, and it is the best-performing system in the Common Knowledge Graphs track this year.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graphs</kwd>
        <kwd>Instance-based Ontology Matching</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Schema Matching</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1.1. State, purpose, general statement</title>
      <p>
        This makes the matcher particularly useful for matching large KGs with numerous populated and
overlapping instances such as DBpedia [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and YAGO [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This is the second OAEI participation
of this matching system; KGMatcher first participated in the OAEI in 2021 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>1.2. Specific techniques used</title>
      <p>Given two input knowledge graphs KG and KG′, where KG has a set of classes C = {c0, c1, ..., cn},
each containing a set of instances, and KG′ has a set of classes C′ = {c′0, c′1, ..., c′m}, each likewise
containing a set of instances, KGMatcher+ applies
two main components: an instance-based matcher and a name matcher. The workflow of
KGMatcher+ is illustrated in Figure 1.</p>
      <sec id="sec-2-1">
        <title>1.2.1. Preprocessing</title>
        <p>Given the two input KGs, the matcher starts by parsing and indexing the lexical data of the two
KGs separately. Following the standard free-text search/index approach, an index is created
for each KG where each class is treated as a document and the content of each ‘document’
is the concatenation of the class’s instance labels. In addition to the standard text cleaning
processes, a word segmentation method is applied in order to separate multi-word entities, e.g.,
academicfield. Using a general dictionary, this method infers the boundaries between
words and inserts the missing spaces.</p>
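<p>The dictionary-based segmentation step can be sketched with a small dynamic program (a minimal illustration, not the matcher’s actual implementation; the toy vocabulary and function names are assumptions):</p>

```python
def segment(token, dictionary, max_word_len=20):
    """Split a concatenated token such as 'academicfield' into known words.

    best[i] holds a minimal-word segmentation of token[:i] (or None when
    token[:i] cannot be split into dictionary words).
    """
    n = len(token)
    best = [None] * (n + 1)
    best[0] = []
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            word = token[j:i]
            if best[j] is not None and word in dictionary:
                candidate = best[j] + [word]
                if best[i] is None or len(best[i]) > len(candidate):
                    best[i] = candidate
    # Leave the token unchanged when no segmentation exists.
    return " ".join(best[n]) if best[n] is not None else token

vocab = {"academic", "field", "train", "station"}
print(segment("academicfield", vocab))  # academic field
```

<p>A real system would use a large general dictionary; the dynamic program keeps the split with the fewest words when several splits are possible.</p>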
      </sec>
      <sec id="sec-2-2">
        <title>1.2.2. Instance-based Matcher</title>
        <p>
          The first matching component of KGMatcher+ belongs to the extensional matcher category. It
uses a self-supervised machine learning approach to map KG classes based on the overlap of their
instances. The matching is done in a two-way classification fashion where a KG classifier is
trained using one KG’s instance data. Later on, that classifier is used to classify any instance
name into one of its classes. Here, we summarize the matching process of KGMatcher+;
readers may refer to [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] for further details.
        </p>
        <p>
          • Exact Name Filtering. The matcher starts by applying an exact name filter to exclude class
labels that exist in both input KGs. Given the large number of classes in typical KGs, this
step works as a blocking strategy that reduces the search space of the instance-based
matcher.
• Resampling KG Instances. The class distribution in typical public KGs tends to be highly
imbalanced. The goal of this step is therefore to balance the number of populated instances
in the two input KGs to avoid biased classification results. The previous version of
the system only targeted the majority, i.e., large classes, by undersampling their
instances using a TF-IDF approach [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Different from the previous version, here we
introduce a new sampling strategy that combines undersampling the majority classes
with oversampling classes with fewer instances, i.e., the minority classes. In [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], we
studied six different strategies for handling class distribution, including state-of-the-art
methods such as SMOTE [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and using cost-sensitive learning [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The best-performing strategy combined the TF-IDF undersampling
method with random oversampling. The standard TF-IDF method, which
is often deployed to measure word relevance in a collection of documents, is used here to
undersample KG classes: the TF-IDF score of a word in a class represents the relevance
of that word to that class in comparison to the other classes in the KG. For
each majority class, the words with the highest TF-IDF scores are used to
undersample its instance names, and instance names that do not contain any of
these words are discarded. Random oversampling is then used to
generate repeated random samples of instances in the classes that fall in the minority
class category. As a result, a more balanced and indicative set of KG instances is obtained
to be used as training data. Readers interested in further details of the sampling strategies
incorporated in KGMatcher+ may refer to [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
• Training KG classifiers. A classifier is trained separately for each input KG
using the previously resampled instance data. Pre-trained word embeddings are used
as features to capture the semantics of KG instance names. Compared to
traditional feature representation methods, word embeddings and language models are
recognized as effective ways to capture the semantic similarity of words. KGMatcher+
is able to train two types of classifiers: a Deep Neural Network (DNN) model<sup>1,2</sup> similar
to those used in other successful NLP tasks such as [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], and a pre-trained BERT model [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
KGMatcher+ will automatically opt to use the BERT model if a GPU is available at
runtime. The output of this phase is two classifiers, f and f′, trained using
the instances from the two input KGs KG and KG′ respectively.
        </p>
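<p>The combined resampling strategy can be sketched as follows (a simplified, illustrative sketch: class ‘documents’ are bags of instance-name words, and the target size, top_k cut-off, and function names are assumptions, not the exact method of [2]):</p>

```python
import math
import random
from collections import Counter

def resample(classes, target, top_k=100, seed=0):
    """Balance the number of instances per class.

    classes: dict mapping a class name to a list of instance-name strings.
    target:  desired number of instances per class (an assumed parameter).
    Majority classes are undersampled by keeping only instance names that
    contain a high-TF-IDF word; minority classes are randomly oversampled
    with replacement.
    """
    rng = random.Random(seed)
    n_classes = len(classes)
    # Document frequency: in how many class 'documents' each word appears.
    df = Counter()
    for names in classes.values():
        df.update(set(w for name in names for w in name.lower().split()))
    balanced = {}
    for cls, names in classes.items():
        if len(names) > target:
            # TF-IDF of each word within this class's concatenated names.
            tf = Counter(w for name in names for w in name.lower().split())
            tfidf = {w: c * math.log(n_classes / df[w]) for w, c in tf.items()}
            top = set(sorted(tfidf, key=tfidf.get, reverse=True)[:top_k])
            # Keep only instance names containing a high-relevance word.
            kept = [n for n in names if not top.isdisjoint(n.lower().split())]
            balanced[cls] = kept[:target]
        else:
            # Random oversampling with replacement for minority classes.
            balanced[cls] = names + rng.choices(names, k=target - len(names))
    return balanced
```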
        <p>1The parameters selected for the DNN model: an input layer of a pre-trained word embedding model followed
by four fully connected hidden layers with 128, 128, 64, and 32 rectified linear units. A dropout layer of 0.2 is added
between each pair of dense layers for regularization. Finally, a softmax layer performs the multi-class classification,
with one output per class in the KG the classifier is trained for.</p>
        <p>2The input layer is the Google News token-based model https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1</p>
        <p>
          • KG1 alignment elicitation is the process of generating candidate pairs using the classifier
trained on the first input KG. Candidate pairs are generated by iteratively applying the
classifier f to instances in the other KG’s classes. As a result, each instance name in
KG′ is classified into a class in KG. The candidate pair (c, c′) is added to the first
candidate alignment set A(KG→KG′) if the majority of the instances of c′ were classified as
instances of c. A similarity score in [0, 1] is obtained as the percentage of instances that
voted for a particular class. For example, if 600 out of 1000 instance names in c′ voted
for c, the similarity score of that pair will be 0.6.
        </p>
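<p>The majority-voting score can be illustrated with a short sketch (hypothetical class labels; the classify callable stands in for the trained classifier):</p>

```python
from collections import Counter

def vote_similarity(instance_names, classify):
    """Classify every instance name of a source class; return the majority
    target class and its voting share in [0, 1]."""
    votes = Counter(classify(name) for name in instance_names)
    winner, count = votes.most_common(1)[0]
    return winner, count / len(instance_names)

# Toy stand-in classifier: 600 of 1000 instance names map to 'City'.
labels = ["City"] * 600 + ["Town"] * 400
winner, score = vote_similarity(labels, lambda name: name)
print(winner, score)  # City 0.6
```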
        <p>
          • KG2 alignment elicitation is similar to the above-illustrated elicitation process. However,
the roles of the two KGs are reversed: f′, i.e., the classifier trained on the second
KG (KG′), is applied to the instances of KG in order to obtain the second candidate alignment set
A(KG′→KG).
• Similarity computing is where KGMatcher+ combines the two candidate alignment sets
resulting from the two-way classification method. First, the matcher separately stores
each directional alignment in an alignment matrix of dimension |C| × |C′|. The two
matrices are then aggregated into one matrix by taking the average similarity score of
each pair. For example, if (c6, c′3, 0.88) is in A(KG→KG′) and (c6, c′3, 0.64) is in A(KG′→KG), their
aggregated similarity value will be 0.76. Consequently, the final alignments for this
matcher are chosen by following the state-of-the-art automatic final alignment selection
approach introduced in [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Given an alignment matrix, this method iteratively selects
the pair with the highest similarity score for each class in both KGs.
        </p>
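<p>The aggregation and final selection steps can be sketched as follows (a simplified greedy one-to-one selection; the exact iterative algorithm of [13] may differ, and the sparse-dict representation of the alignment matrices is an assumption):</p>

```python
def aggregate_and_select(align_ab, align_ba):
    """Average the two directional similarity scores per class pair, then
    greedily pick the highest-scoring pair per class (one-to-one selection)."""
    pairs = set(align_ab).union(align_ba)
    avg = {p: (align_ab.get(p, 0.0) + align_ba.get(p, 0.0)) / 2 for p in pairs}
    final, used_a, used_b = [], set(), set()
    for (a, b), score in sorted(avg.items(), key=lambda kv: -kv[1]):
        if a not in used_a and b not in used_b:
            final.append((a, b, score))
            used_a.add(a)
            used_b.add(b)
    return final

ab = {("c6", "c3'"): 0.88}
ba = {("c6", "c3'"): 0.64}
print(aggregate_and_select(ab, ba))  # one pair, averaged score 0.76
```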
      </sec>
      <sec id="sec-2-3">
        <title>1.2.3. Name Matcher</title>
        <p>The second component of KGMatcher+ is an element-level matcher, which measures the
similarity of KG class labels with two measures: edit distance and
semantic similarity under a word embedding method. First, the Levenshtein distance
is calculated for each class pair. Then, for the word embedding similarity, a pre-trained
word2vec model is used to represent class labels before measuring their cosine similarities.
The semantic similarity is measured in a Vector Space Model, where words with high semantic
relatedness are represented closer to each other. In the case of multi-word labels, the vector
representation of the label is obtained as the element-wise average
of the composing word vectors. Finally, the maximum of the two similarity measures is chosen
as the name similarity of that pair. The threshold value of the name matcher is set to 0.8. To
illustrate, if the word embedding similarity of (RailwayStation,TrainStation) is 0.83 while
their Levenshtein similarity is 0.56, the maximum of the two, i.e., the word embedding
similarity, is selected, and since it also exceeds the 0.8 threshold, the pair is kept. If both
similarity scores are lower than the threshold, that pair is excluded from the candidate alignment.</p>
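<p>A minimal sketch of the name matcher logic (the normalized Levenshtein similarity is standard; embed_sim stands in for the word2vec cosine similarity and is stubbed below, so the values are illustrative):</p>

```python
def levenshtein_sim(a, b):
    """Normalized edit-distance similarity: 1 - distance / max length."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return 1 - prev[-1] / max(len(a), len(b), 1)

def name_similarity(label_a, label_b, embed_sim, threshold=0.8):
    """Take the max of edit-distance and embedding similarity; return None
    when both fall below the threshold."""
    score = max(levenshtein_sim(label_a, label_b), embed_sim(label_a, label_b))
    return score if score >= threshold else None

# Stubbed embedding similarity standing in for the word2vec cosine score.
print(name_similarity("RailwayStation", "TrainStation", lambda a, b: 0.83))  # 0.83
```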
      </sec>
      <sec id="sec-2-4">
        <title>1.2.4. Post Processing</title>
        <p>KGMatcher+ combines the results generated from the two matching components by following
the same method described earlier to combine the two instance classification alignments.</p>
      </sec>
      <sec id="sec-2-5">
        <title>1.2.5. Instance Matching</title>
        <p>For the OAEI participation, we have adapted KGMatcher+ to also match the instances of KGs.
The instance matching component is deliberately simple. First, standard text preprocessing techniques
such as lowercasing and removing stopwords and non-alphanumeric characters are applied.
Then, KGMatcher+ generates candidate instance pairs based on the existence of an identical label in the
opposite knowledge graph.</p>
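<p>This exact-label instance matching can be sketched as follows (the stopword list and function names are illustrative assumptions):</p>

```python
import re

def preprocess(label):
    """Lowercase, strip non-alphanumeric characters, and drop stopwords."""
    stopwords = {"the", "a", "an", "of"}
    tokens = re.sub(r"[^a-z0-9 ]", " ", label.lower()).split()
    return " ".join(t for t in tokens if t not in stopwords)

def match_instances(labels_a, labels_b):
    """Pair instances whose preprocessed labels are identical."""
    index_b = {preprocess(lb): lb for lb in labels_b}
    return [(la, index_b[preprocess(la)])
            for la in labels_a if preprocess(la) in index_b]

print(match_instances(["The Matrix!", "Foo"], ["matrix", "Bar"]))
# [('The Matrix!', 'matrix')]
```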
      </sec>
    </sec>
    <sec id="sec-3">
      <title>1.3. Adaptations made for the evaluation</title>
      <p>
        KGMatcher+ is mainly developed in Python. To facilitate reusing and evaluating KGMatcher+,
and for the OAEI submission, it was packaged using a SEALS client. The wrapping service from
the Matching EvaLuation Toolkit (MELT) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] was used to wrap the system’s Python process
and to generate the SEALS package.
      </p>
      <sec id="sec-3-1">
        <title>2. Results</title>
        <p>In this section, we present and discuss the results for each of the OAEI tracks where KGMatcher+
was able to produce a non-empty alignment file: the Conference, Knowledge Graph, and Common
Knowledge Graphs tracks.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>2.1. Conference</title>
      <p>In the Conference track, under the rar2-M3 evaluation, the F1 score of KGMatcher+ (0.52)
is slightly lower than both baselines, i.e., StringEquiv (0.53) and edna (0.56). This particular
evaluation, i.e., M3, takes into consideration both class and property matches. The fact that
KGMatcher+ does not match properties explains the negative impact of the undiscovered property
alignments on the matcher’s performance on this task. Further, given that the Conference
track datasets do not include enough instances to apply the instance-based matcher, the name
matcher is the only matcher applied to map classes. In the cross-domain test case of
mapping DBpedia and OntoFarm, KGMatcher+ is the second-best-performing system.</p>
    </sec>
    <sec id="sec-5">
      <title>2.2. Knowledge Graph</title>
      <p>In the Knowledge Graph track, KGMatcher+ was able to generate results for all five test cases at
both class and instance level. In terms of class matching, the matcher yields satisfactory
results, with an F1 score of 0.79. The added instance matcher has positively impacted the overall
matcher result on this task, with a precision of 0.94, a recall of 0.66, and an F1 of 0.82. KGMatcher+
is the second-best-performing system in this track.</p>
    </sec>
    <sec id="sec-6">
      <title>2.3. Common Knowledge Graphs</title>
      <p>In this track, KGMatcher+ was able to complete the task of matching the classes of four
cross-domain, common KGs. On the task of matching NELL and DBpedia, the matcher obtained
the highest F1 score of 0.95. On the second task, which maps classes from Yago and
Wikidata, KGMatcher+ is also the best-performing matching system, with a recall of 0.83 and an
F1 score of 0.91. Overall, KGMatcher+ yields the best performance results on this track.</p>
      <sec id="sec-6-1">
        <title>3. General comments</title>
        <p>
          The results of KGMatcher+ have been very encouraging. In the Common Knowledge Graphs track,
it achieves outstanding results. This indicates that our hybrid approach, utilizing instance data
to map KG classes, is able to outperform systems that use other matcher combinations. It is
important to note that the performance of the instance-based component of KGMatcher+ depends on
the nature of the dataset. Since KGMatcher+ learns KG classifiers using general pre-trained
word embedding models, the more representative the KG instances are of real-world entities, the
better the instance classification results. Figure 2 shows the difference in performance
when classifying instances from common KGs, e.g., NELL, compared to a single-domain KG from
the Knowledge Graph track. Note that the latter mainly annotates classes in the entertainment
domain [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
      </sec>
      <sec id="sec-6-2">
        <title>4. Conclusion</title>
        <p>As part of OAEI 2022, this paper presents KGMatcher+, a system for matching the schema of
large-scale and domain-independent KGs by utilizing instance data. The process is done by
learning KG classifiers, which are able to classify instances into a particular KG class. The results
suggest that a hybrid approach that incorporates an instance-based technique can be highly
effective, particularly for matching large cross-domain KGs with imbalanced class distribution,
such as Yago and Wikidata.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Otero-Cerdeira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Rodríguez-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gómez-Rodríguez</surname>
          </string-name>
          ,
          <article-title>Ontology matching: A literature review</article-title>
          ,
          <source>Expert Systems with Applications</source>
          (
          <year>2015</year>
          )
          <fpage>949</fpage>
          -
          <lpage>971</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Fallatah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hopfgartner</surname>
          </string-name>
          ,
          <article-title>The impact of imbalanced class distribution on knowledge graphs matching</article-title>
          ,
          <source>in: Proceedings of the 17th International Workshop on Ontology Matching (OM</source>
          <year>2022</year>
          ), CEUR-WS,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Fallatah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hopfgartner</surname>
          </string-name>
          ,
          <article-title>A hybrid approach for large knowledge graphs matching</article-title>
          ,
          <source>in: Proceedings of the 16th International Workshop on Ontology Matching (OM</source>
          <year>2021</year>
          ), CEUR-WS,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Isele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jakob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jentzsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kontokostas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. N.</given-names>
<surname>Mendes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hellmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Morsey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>van Kleef</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <article-title>DBpedia - A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia</article-title>
          ,
          <source>Semantic Web</source>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Tanon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Weikum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          ,
          <article-title>Yago 4: A reason-able knowledge base</article-title>
          ,
          <source>in: European Semantic Web Conference</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>583</fpage>
          -
          <lpage>596</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Fallatah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hopfgartner</surname>
          </string-name>
          ,
          <article-title>KGMatcher results for OAEI 2021</article-title>
          , in: CEUR Workshop Proceedings, volume
          <volume>3063</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>160</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Schütze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          , Introduction to information retrieval, volume
          <volume>39</volume>
          , Cambridge University Press Cambridge,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N. V.</given-names>
            <surname>Chawla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Bowyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. O.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. P.</given-names>
            <surname>Kegelmeyer</surname>
          </string-name>
          ,
          <article-title>Smote: synthetic minority over-sampling technique</article-title>
          ,
          <source>Journal of artificial intelligence research 16</source>
          (
          <year>2002</year>
          )
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Elkan</surname>
          </string-name>
          ,
          <article-title>The foundations of cost-sensitive learning</article-title>
          ,
          <source>in: International joint conference on artificial intelligence</source>
          , volume
          <volume>17</volume>
          , Lawrence Erlbaum Associates Ltd,
          <year>2001</year>
          , pp.
          <fpage>973</fpage>
          -
          <lpage>978</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Minaee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kalchbrenner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nikzad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chenaghlu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>Deep learning based text classification: A comprehensive review</article-title>
          ,
          <source>arXiv preprint arXiv:2004.03705</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Collobert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bottou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Karlen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kavukcuoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kuksa</surname>
          </string-name>
          ,
          <article-title>Natural language processing (almost) from scratch</article-title>
          ,
          <source>Journal of machine learning research 12</source>
          (
          <year>2011</year>
          )
          <fpage>2493</fpage>
          -
          <lpage>2537</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Maiya</surname>
          </string-name>
          ,
          <article-title>ktrain: A low-code library for augmented machine learning</article-title>
          ,
          <source>arXiv preprint arXiv:2004.10703</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gulić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vrdoljak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Banek</surname>
          </string-name>
          ,
          <article-title>CroMatcher: An ontology matching system based on automated weighted aggregation and iterative final alignment</article-title>
          ,
          <source>Journal of Web Semantics</source>
          <volume>41</volume>
          (
          <year>2016</year>
          )
          <fpage>50</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hertling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Portisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          , MELT
          <article-title>- matching evaluation toolkit</article-title>
          ,
          <source>in: Semantic Systems. The Power of AI and Knowledge Graphs - 15th International Conference</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>231</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hertling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <article-title>The knowledge graph track at oaei</article-title>
          ,
          <source>in: European Semantic Web Conference</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>343</fpage>
          -
          <lpage>359</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>