<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>RDF2Vec Light - A Lightweight Approach for Knowledge Graph Embeddings</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Data and Web Science Group, University of Mannheim</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SAP SE Product Engineering Financial Services</institution>
          ,
          <addr-line>Walldorf</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Knowledge graph embedding approaches represent the nodes and edges of graphs as mathematical vectors. Current approaches focus on embedding complete knowledge graphs, i.e., all nodes and edges. This leads to very high computational requirements on large graphs such as DBpedia or Wikidata. However, for most downstream application scenarios, only a small subset of concepts is of actual interest. In this paper, we present RDF2Vec Light, a lightweight embedding approach based on RDF2Vec which generates vectors for only a subset of entities. To that end, RDF2Vec Light traverses and processes only a subgraph of the knowledge graph. Thanks to its significantly lower runtime and significantly reduced hardware requirements, our method enables the use of embeddings of very large knowledge graphs in scenarios where computing such embeddings was not feasible before.</p>
      </abstract>
      <kwd-group>
        <kwd>RDF2Vec</kwd>
        <kwd>knowledge graph embeddings</kwd>
        <kwd>knowledge graphs</kwd>
        <kwd>data mining</kwd>
        <kwd>scalability</kwd>
        <kwd>resource efficient embeddings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Public knowledge graphs (KGs), such as DBpedia or Wikidata, provide deep
background knowledge that can be exploited for downstream tasks such as
question answering or recommender systems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. KG embeddings (KGEs) represent the vertices
and, depending on the approach, also the edges of a KG as numeric vectors. This
representation is easily consumable by most algorithms and can be exploited in
downstream tasks. Advantages of KGEs, once they have been trained, include simple
applicability, fast run time, good performance on multiple tasks, and reusability in
downstream applications. On the downside, KGEs produce very large models (for
example, the 200-dimensional DBpedia RDF2Vec embedding model available at
KGvec2go [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] requires more than 10 GB of disk storage) and are
very expensive to train and re-train in the case of evolving knowledge bases. For very
large knowledge graphs, such as Wikidata, computing a complete embedding
typically takes up to a day or longer [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>In this paper, we address the scalability aspect of knowledge graph embeddings:
Our novel approach, RDF2Vec Light, allows training partial, task-specific models with
only a fraction of the computation requirements of other embedding approaches, while
retaining high performance on multiple tasks. The resulting models contain vectors
only for the entities of interest. Internally, RDF2Vec Light traverses only a subset of
the underlying knowledge graph, which leads to processing times that are much
shorter than those of the original RDF2Vec approach, which always processes the
entire knowledge graph. Moreover, the resulting models are much smaller: RDF2Vec
Light models are typically only a few kilobytes in size, compared to the multiple
gigabytes of disk space required to persist classic embedding models.</p>
    </sec>
    <sec id="sec-2">
      <title>2 RDF2Vec Light</title>
      <p>
        RDF2Vec is based on performing random walks on a graph [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The underlying idea
of RDF2Vec Light is to generate only local walks for the entities of interest
given a predefined task. After the walk generation has been completed, the training
of the vectors can be performed as in the original approach.
      </p>
      <p>
        Rather than only walking forwards from the entities of interest, it is randomly
decided for each depth iteration whether to go backwards, i.e., to one of the node’s
predecessors, or forwards, i.e., to one of the node’s successors (line 9 of Algorithm 1).
As a result, an entity of interest can appear at the beginning, at the end, or in the
middle of a walk, which better captures the context of the entity. This generation
process is described in Algorithm 1. The RDF2Vec method as well as the RDF2Vec
Light extension have been implemented in Java and Python
(https://github.com/dwslab/jRDF2Vec). The implementation can handle
various RDF formats such as N-Triples, RDF/XML, Turtle, or HDT [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In addition, a
REST API has been implemented and is provided at http://www.kgvec2go.org.
      </p>
      <p>[Algorithm 1: Walk generation algorithm for RDF2Vec Light; pseudocode not reproduced in this version.]</p>
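      <p>
        Since the pseudocode of Algorithm 1 is not reproduced here, the following is a
minimal Python sketch of such a backward/forward walk generation, assuming the
graph is given as simple predecessor and successor lookup tables; all names are
illustrative and do not reflect the actual jRDF2Vec implementation.
      </p>
      <preformat>
import random

def generate_walks(successors, predecessors, entities_of_interest,
                   walks_per_entity=500, depth=4):
    """Sketch of RDF2Vec Light walk generation: walks are grown around each
    entity of interest, extending them backwards or forwards at random in
    each depth iteration, so the entity can end up at the beginning, the
    end, or the middle of a walk.

    successors / predecessors: dicts mapping a node to a list of
    (edge_label, node) pairs; illustrative stand-ins for a triple store.
    """
    walks = []
    for entity in entities_of_interest:
        for _ in range(walks_per_entity):
            walk = [entity]
            for _ in range(depth):
                # Random direction decision (cf. line 9 of Algorithm 1).
                if random.random() > 0.5 and predecessors.get(walk[0]):
                    # Go backwards: prepend a predecessor and its edge label.
                    edge, node = random.choice(predecessors[walk[0]])
                    walk = [node, edge] + walk
                elif successors.get(walk[-1]):
                    # Go forwards: append an edge label and its successor.
                    edge, node = random.choice(successors[walk[-1]])
                    walk = walk + [edge, node]
            walks.append(walk)
    return walks
      </preformat>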
    </sec>
    <sec id="sec-3">
      <title>3 Evaluation</title>
      <p>
        In order to evaluate the approach presented in this paper, the classification and
regression experiments, as well as the entity and document relatedness experiments of
Ristoski et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] have been repeated. The evaluation follows the setup defined in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Six classic and six light embedding spaces have been trained, each with the
following parameters held constant: window size = 5, negative samples = 25. The
parameters that were varied are the training mode (CBOW and SG) as well as
the dimension of the embedding space (50, 100, 200). All walks have been generated
with 500 walks per entity and a depth of 4. For the evaluation, the DBpedia
knowledge graph as of 2016-10 (https://wiki.dbpedia.org/downloads-2016-10) has been used.</p>
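      <p>
        As an illustration of this training setup, a minimal sketch using gensim's
Word2Vec (assuming gensim 4.x; the concrete toolkit is not prescribed here)
could look as follows:
      </p>
      <preformat>
from gensim.models import Word2Vec

def train_embeddings(walks, dimension=100, use_sg=True):
    """Train word2vec vectors on the generated walks with the fixed
    evaluation parameters: window size 5 and 25 negative samples."""
    model = Word2Vec(
        sentences=walks,        # each walk is a list of string tokens
        vector_size=dimension,  # 50, 100, or 200 in the evaluation
        window=5,               # window size held constant at 5
        negative=25,            # negative samples held constant at 25
        sg=1 if use_sg else 0,  # 1 = skip-gram (SG), 0 = CBOW
        min_count=0,            # keep every entity, even rare ones
    )
    return model.wv             # one vector per entity / edge label
      </preformat>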
      <p>
        For the classification and regression tasks, we follow the same setup as in the
original RDF2Vec paper [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]: For the classification tasks, four classifiers have been
evaluated: Naïve Bayes, C4.5 (a decision tree algorithm), k-NN with k = 3, and
Support Vector Machines (SVM) with C ∈ {10⁻³, 10⁻², 0.1, 1, 10, 10², 10³}, where the
best C is chosen. A 10-fold cross validation has been used to calculate the performance
statistics. For the regression tasks, three approaches have been evaluated: linear
regression, k-NN, and M5Rules. For the sake of brevity, we only report results for the
best performing approaches (SVM and LR); the complete result tables are available
at http://www.rdf2vec.org/rdf2vec_light.
      </p>
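      <p>
        For reproducibility, the SVM part of this protocol can be sketched with
scikit-learn (an illustrative choice; the original evaluation relies on the framework of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]):
      </p>
      <preformat>
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def evaluate_svm(X, y):
    """Pick the best C from the grid used in the paper, scored by
    10-fold cross validation; X holds the embedding vectors of the
    entities, y the class labels of the dataset at hand."""
    grid = GridSearchCV(
        SVC(),
        param_grid={"C": [1e-3, 1e-2, 0.1, 1, 10, 1e2, 1e3]},
        cv=10,                  # 10-fold cross validation
        scoring="accuracy",
    )
    grid.fit(X, y)
    return grid.best_params_["C"], grid.best_score_
      </preformat>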
      <p>In the results tables, strategy refers to the configuration with which the
embeddings have been obtained. The structure can be read as follows:
&lt;mode&gt;_&lt;number_of_walks_per_entity&gt;_&lt;walk_depth&gt;_&lt;training_mode&gt;_&lt;dimension&gt; where mode is either Light or Classic. For example, Light_500_4_CBOW_100
refers to RDF2Vec Light embeddings with 500 walks per entity, a walk depth of 4,
CBOW configuration, and an embedding space dimensionality of 100.</p>
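      <p>
        A hypothetical helper that decomposes such a strategy identifier into its
components (purely for illustration, not part of the evaluation code) could look
like this:
      </p>
      <preformat>
def parse_strategy(strategy):
    """Split an identifier such as 'Light_500_4_CBOW_100' into mode,
    walks per entity, walk depth, training mode, and dimension."""
    mode, walks, depth, training, dim = strategy.split("_")
    return {
        "mode": mode,                    # 'Light' or 'Classic'
        "walks_per_entity": int(walks),
        "walk_depth": int(depth),
        "training_mode": training,       # 'CBOW' or 'SG'
        "dimension": int(dim),
    }

assert parse_strategy("Light_500_4_CBOW_100")["dimension"] == 100
      </preformat>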
      <p>For classification and regression, we can observe that, except for the cities dataset,
the difference between the two approaches is rather marginal. For entity and
document relatedness, the results are less conclusive. Here, we see that the RDF2Vec Light
approach is on par with the classic approach for the CBOW variant, but the results are
reversed when looking at the SG variant, which also yields the best results globally.</p>
      <p>In order to analyze those results more deeply, and to distinguish the cases where
RDF2Vec Light is on par with classic RDF2Vec from those where it is clearly inferior,
we looked at the linkage degree of the entities at hand, as well as the homogeneity of
the entities of interest.</p>
      <p>
        For the linkage degree, we can observe that a higher degree of the entities of
interest leads to a worse performance of RDF2Vec Light. This can be seen in the inferior
performance of RDF2Vec Light for the cities datasets in classification and regression.
Cities are among the most strongly interlinked entities in DBpedia [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. At the same
time, the document and entity similarity datasets contain a larger number of strongly
interlinked head entities.
      </p>
      <p>While for classification and regression problems, the set of entities is rather
homogeneous (i.e., all are cities, albums, etc.), the homogeneity is lower for the
document and entity relatedness tasks, where the entities of interest are scattered across many
classes. Both degree and homogeneity contribute to the density of the considered
subgraphs, as depicted in Fig. 1. From the plots, we can observe a correlation between
the performance of RDF2Vec Light and the density of the graph spanned by the random
walks: the denser the graph (i.e., the fewer head entities there are and the more
homogeneous the entity set at hand), the better the performance of RDF2Vec Light.</p>
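      <p>
        This density analysis can be approximated as follows: build the graph spanned
by the generated walks and compute its density (sketched here with networkx; the
exact analysis code is not part of this paper):
      </p>
      <preformat>
import networkx as nx

def walk_graph_density(walks):
    """Build the directed graph spanned by a set of walks (alternating
    entity and edge-label tokens) and return its density, i.e., the
    number of edges divided by the number of possible edges."""
    graph = nx.DiGraph()
    for walk in walks:
        # Entities sit at even positions, edge labels at odd positions.
        for i in range(0, len(walk) - 2, 2):
            graph.add_edge(walk[i], walk[i + 2], label=walk[i + 1])
    return nx.density(graph)
      </preformat>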
      <p>The runtime of RDF2Vec Light is linear in the number of entities of interest. On
commodity hardware, the runtime is roughly 1 minute per 10 nodes. In comparison,
training RDF2Vec on the full DBpedia graph takes a few days.</p>
    </sec>
    <sec id="sec-4">
      <title>4 Conclusion and Outlook</title>
      <p>In this paper, we presented RDF2Vec Light, an approach for learning latent
representations of knowledge graph entities that requires only a fraction of the computing
power of other embedding approaches. Rather than embedding the whole
knowledge graph, RDF2Vec Light trains vectors for only a few entities of interest and
their context. For this approach, the walk generation algorithm has been adapted to
better represent the context of the entities. Our experiments show that the results
achieved with RDF2Vec Light are comparable to those obtained with the standard
RDF2Vec, while requiring only a fraction of the runtime.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Fernández</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martínez-Prieto</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gutiérrez</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polleres</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arias</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Binary RDF representation for publication and exchange (HDT)</article-title>
          .
          <source>Web Semantics</source>
          <volume>19</volume>
          ,
          <fpage>22</fpage>
          -
          <lpage>41</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xin</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>OpenKE: An open toolkit for knowledge embedding</article-title>
          .
          <source>In: Proceedings of EMNLP</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Heist</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hertling</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ringler</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paulheim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Knowledge graphs on the web - an overview</article-title>
          . In:
          <string-name>
            <surname>Tiddi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lécué</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hitzler</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (eds.)
          <source>Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges</source>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>22</lpage>
          . IOS Press (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pellegrino</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cochez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garofalo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ristoski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A configurable evaluation framework for node embedding techniques</article-title>
          .
          <source>In: The Semantic Web: ESWC 2019 Satellite Events</source>
          . pp.
          <fpage>156</fpage>
          -
          <lpage>160</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Portisch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hladik</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paulheim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>KGvec2go - knowledge graph embeddings as a service</article-title>
          .
          <source>LREC</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ristoski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Noia</surname>
            ,
            <given-names>T.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>R.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paulheim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>RDF2Vec: RDF graph embeddings and their applications</article-title>
          .
          <source>Semantic Web</source>
          <volume>10</volume>
          (
          <issue>4</issue>
          ),
          <fpage>721</fpage>
          -
          <lpage>752</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ristoski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Vries</surname>
            ,
            <given-names>G.K.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paulheim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>A collection of benchmark datasets for systematic evaluations of machine learning on the semantic web</article-title>
          .
          <source>In: ISWC</source>
          . pp.
          <fpage>186</fpage>
          -
          <lpage>194</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>