<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Corresponding author: richard.nordsieck@xitaso.com (R. Nordsieck); michael.heider@informatik.uni-augsburg.de (M. Heider); anton.hummel@xitaso.com (A. Hummel); joerg.haehner@informatik.uni-augsburg.de (J. Hähner)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Closer Look at Sum-based Embedding Aggregation for Knowledge Graphs Containing Procedural Knowledge</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Richard Nordsieck</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Heider</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Hummel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jörg Hähner</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Organic Computing Group, University of Augsburg</institution>
          ,
          <addr-line>Am Technologiezentrum 8, 86159 Augsburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>XITASO GmbH IT &amp; Software Solutions</institution>
          ,
          <addr-line>Austraße 35, 86153 Augsburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
        <p>While knowledge graphs and their embedding into low-dimensional vectors are established fields of research, they mostly cover factual knowledge. However, to improve downstream models, e. g. for predictive quality in real-world industrial use cases, embeddings of procedural knowledge, available in the form of rules, could be utilized. As such, we investigate which properties of embedding algorithms could prove beneficial in this scenario and evaluate which established embedding methodologies are suited to form the basis of sum-based embeddings of different representations of procedural knowledge.</p>
      </abstract>
      <kwd-group>
        <kwd>knowledge graph embedding</kwd>
        <kwd>industrial knowledge graph</kwd>
        <kwd>expert knowledge</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Knowledge graphs are frequently used to represent a multitude of heterogeneous information
from different sources [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Since they are often non-exhaustive, knowledge graph completion,
e. g. through link prediction, is a widely researched field which usually relies on knowledge
graph embeddings, i. e. low dimensional vector representations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. A separate strand of research
utilizes knowledge graph embeddings to infuse background knowledge into downstream models
such as information retrieval, recommender systems or predicting variables [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However,
most of these use cases deal with factual (i. e. terminology, specific details or elements [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) or
conceptual (i. e. classifications, categories, principles and generalisations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) knowledge, such
as located in or president of relations.
      </p>
      <p>
        In contrast to this, procedural (i. e. knowledge of skills, techniques, methods and ‘criteria for
determining when to use appropriate procedures’ [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) and metacognitive (i. e. strategic,
contextual and conditional) knowledge plays a significant role in industries, such as manufacturing,
where parameters have to be adjusted to achieve target quality criteria or mitigate occurring
quality defects. Training learning systems for these predictive quality use cases is hampered by
a scarcity of observable data: since the processes have been manually optimised to a high degree,
parametrisation processes are executed relatively seldom, which leaves few data points containing
relevant information compared to overall process iterations. Still, utilising
learning systems as assistance for relatively untrained labour could be one factor in adapting to the
rapidly changing workplace and to demographic changes. To mitigate the previously mentioned
challenges, we envision a combination of procedural knowledge with learning systems. Building
on Nordsieck et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], who explored how procedural knowledge can be represented in
knowledge graphs and provided a proof-of-concept embedding methodology, we now investigate
which combination of representations and embedding methods is best suited for embedding
knowledge graphs containing procedural knowledge. To this end, we analyse the following
research questions from a theoretical as well as an empirical perspective:
• RQ 1: What is the impact of representing quantified values as literals or entities?
• RQ 2: What is the impact of chained binary relations on the benefit of quantified values?
• RQ 3: Which embedding methodologies are able to deal with the indirections introduced
through more detailed representations?
• RQ 4: Which embedding method is best suited to embed procedural knowledge?
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Industrial knowledge graphs are gaining traction in practice but frequently deal only with
factual knowledge [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Nordsieck et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] address this problem, providing modelling patterns
for procedural knowledge. However, the investigated modelling pattern is not applicable to
knowledge graphs with multiple kinds of relations, which are common in practice. We address
this shortcoming by providing alternative and more detailed modelling patterns.
      </p>
      <p>
        Knowledge graph embedding methods for link prediction have been extensively researched
[
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
        ]. Each of these approaches exhibits strengths and weaknesses. Gutiérrez-Basulto and
Schockaert [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] mention that previous approaches are not suited to embed rules, which are an
integral part of representing procedural knowledge. Furthermore, they provide theoretical
considerations on how this could be alleviated. In contrast to this, Abboud et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] present an
implemented approach that is able to represent existential rules. Of special interest to embed
quantifiable procedural knowledge are methods that explicitly cater for literals [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. However,
whether direct transfer of the results achieved using these methods to graphs containing
procedural knowledge is possible needs to be validated. Portisch and Paulheim [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] relate
link prediction embedding methods to those encountered in data mining, e. g. RDF2Vec, and
conclude that link prediction methods could also be used for downstream tasks.
      </p>
    </sec>
    <sec id="sec-2a">
      <title>3. Embedding Procedural Knowledge Graphs</title>
      <p>
        In the following, we extend and evaluate an approach presented by Nordsieck et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to embed procedural knowledge, allowing for its use in downstream predictive scenarios. The
approach is shown in Figure 1. To be able to use knowledge graph embedding methods, a suitable
modelling pattern for a graph representation, which determines how to represent the different
elements of procedural knowledge in a graph, has to be chosen (see Section 3.1).
      </p>
      <p>Figure 1: Knowledge Graph → Graph → Quality Characteristics Propagation → Subknowledge Graph (schematic overview of the approach).</p>
      <sec id="sec-2-4">
        <title>Subknowledge Graph</title>
        <p>(a) Graphical representation of τ̂_rel, showing quality characteristic, process parameter and quantified process parameter nodes with implies relations (Figure 2a legend).</p>
        <p>
          (b) Graphical representation of τ̂_ch,e.
        </p>
        <p>
          Based on this representation, the procedural knowledge, which is usually available in the form of if-then rules
[
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], is transferred into a knowledge graph. For this graph, node embeddings can be calculated
using standard knowledge graph embedding methods, e. g. TransE, BoxE or RDF2Vec. In our
prospective downstream task of predicting parameter adjustments, it is necessary to select a
portion of the graph that is relevant for the specific input to the system by propagating from
the node representing the input. The resulting subgraph is then embedded using a sum-based
approach utilising the previously computed node embeddings.
        </p>
        <sec id="sec-2-4-1">
          <title>3.1. Representation</title>
          <p>
            Procedural knowledge can be available in diferent levels of detail, ranging from high-level
implies notions between conditions or conclusions of a rule to quantified versions of it, i. e. implies
relations between quality defects and the responsible process parameters, to relations between
quantified parameters and the inclusion of production data underlying the quantifications in a
manufacturing scenario [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]. These diferent levels of detail have increasing demands on the
embedding methods due to their properties, e. g. arity of relations or literals. In the following,
we explore modelling alternatives for graph representations of diferent levels of detail of
procedural knowledge. As more fine-granular levels of detail do not introduce new properties
but merely increase their relevance, we focus on the following levels of details: unquantified
procedural knowledge and quantified conclusions. Unquantified knowledge describes high-level
implies relations or unquantified rules, e. g. If quality characteristic q is unsatisfactory then adjust
process parameter p, and can be directly represented as a triple  = (, ⟨implies⟩, ) for  ∈ 
and  ∈  [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].
          </p>
          <p>
            In contrast to this, procedural knowledge with quantified conclusions, e. g. If quality
characteristic q is unsatisfactory then adjust process parameter p by ω with ω ∈ ℝ, results in a quadruple
τ̂ = (q, ⟨implies⟩, p, ω) [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]. This means that the relation underlying the quantified
conclusions is ternary, as opposed to the binary relations that are usually represented in knowledge
graphs and, subsequently, in knowledge graph embedding methods apart from BoxE [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]. Nordsieck
et al. [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] dealt with this by treating the quantification ω, a numeric literal, as a weight of the
implies relation, a representation we will refer to as τ̂_rel. However, since common knowledge
graph embedding methods do not support literals on relations, the quantifications were modelled
as separate relations. This incurs a loss of semantics, which is especially disadvantageous if
knowledge from multiple sources or of multiple levels of detail is represented in the graph, as is
usually the case in practice.
          </p>
          <p>To address this issue, we rely on the fact that the ternary relation can be written as a chain of
two binary relations, i. e. τ̂_ch = (q, ⟨implies⟩, (ω, ⟨quantifies⟩, p)), which can then be directly
represented in the graph, as can be seen in Figure 2b. However, this indirection could be
challenging for embedding methods to pick up.</p>
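          <p>As a minimal sketch, the rewriting of the ternary relation into a chain of two binary relations can be illustrated as follows; the naming scheme for the entity standing in for the quantification is a hypothetical choice for illustration, not a prescribed one:</p>

```python
def to_chained_triples(q, p, omega):
    """Rewrite the quadruple (q, implies, p, omega) as the chain
    (q, implies, omega_node) and (omega_node, quantifies, p)."""
    # Hypothetical naming scheme for the quantification entity.
    omega_node = "adjust_{}_by_{}".format(p, omega)
    return [
        (q, "implies", omega_node),
        (omega_node, "quantifies", p),
    ]

# Example rule: "If quality characteristic 'warping' is unsatisfactory,
# then adjust process parameter 'bed_temperature' by +5."
triples = to_chained_triples("warping", "bed_temperature", "+5")
```

          <p>The same rewriting applies to every quantified rule in the graph, so the resulting triple set stays binary throughout.</p>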
          <p>
            At the quantification level, we are confronted with two modelling alternatives: treating
quantifications as entities, τ̂_ch,e, which diminishes the semantic value of the respective nodes, or
explicitly modelling them as literals, τ̂_ch,l, which keeps their semantics largely intact but
poses further challenges for established knowledge graph embedding methods, which lack
explicit support for literals [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. The second alternative brings with it the need to keep the
high-level implies relation between the q and p entities, as in the unquantified case, since otherwise there is
no direct connection between them in the graph. We denote this literal-based representation as
τ̂_ch,l,i. This modelling decision leads to this representation exhibiting a compositional property,
since (q, ω) ∈ ⟨implies⟩ ∧ (ω, p) ∈ ⟨quantifies⟩ =⇒ (q, p) ∈ ⟨implies⟩. For consistency, we
introduce a modelling alternative with the same characteristic for the entity-based modelling
alternative, resulting in τ̂_ch,e,i. Figure 2 allows a graphical comparison of representatives of
the previously described modelling alternatives.
          </p>
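          <p>The compositional property can be sketched as a closure computation over the triple set (a minimal illustration; the entity names are hypothetical):</p>

```python
def composition_closure(triples):
    """Add the high-level (q, implies, p) edge implied by each chain
    (q, implies, omega) and (omega, quantifies, p)."""
    implies = {(h, t) for h, r, t in triples if r == "implies"}
    quantifies = {(h, t) for h, r, t in triples if r == "quantifies"}
    closed = set(triples)
    for q, omega in implies:
        for omega2, p in quantifies:
            if omega == omega2:
                closed.add((q, "implies", p))
    return closed

graph = [
    ("warping", "implies", "omega_1"),
    ("omega_1", "quantifies", "bed_temperature"),
]
closed = composition_closure(graph)
```

          <p>Applying the closure to the two chained triples above yields the additional high-level edge connecting the quality characteristic directly to the process parameter.</p>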
          <p>An overview of properties exhibited by at least one of the representations can be seen in
Table 1. While the properties composition, indirection and literals have been previously described,
asymmetry directly results from the modelling decisions made, i. e. (q, p) ∈ ⟨implies⟩ =⇒
(p, q) ∉ ⟨implies⟩. Also, all representations exhibit heterogeneous entities, since process
parameters, quality characteristics and quantifications belong to different classes.</p>
          <p>Table 1: Overview of properties (asymmetry, composition, literals, indirection) which the
representations (τ̂_rel, τ̂_ch,e, τ̂_ch,e,i, τ̂_ch,l,i) exhibit and which the embedding methods
(TransE, ComplEx, ComplEx-LiteralE, DistMult, DistMult-LiteralE, RotatE, BoxE, RDF2Vec)
address on a theoretical level. For the pattern underlying the representation names refer to
Section 3.1.</p>
        </sec>
        <sec id="sec-2-4-2">
          <title>3.2. Individual Embeddings</title>
          <p>
            Individual node embeddings are calculated by established knowledge graph embedding methods.
Since Portisch et al. [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] concluded that embeddings for link prediction are, to a certain extent,
also suited for downstream tasks, we include both methods initially intended for
link prediction and methods intended for downstream tasks in our investigation, to determine
which is best suited for embedding procedural knowledge. We decided on a set of knowledge graph embedding
methods to investigate, namely TransE, ComplEx, ComplEx-LiteralE, DistMult, DistMult-LiteralE,
RotatE, BoxE and RDF2Vec, which includes embedding methods of different complexity
that satisfy one or more of the properties exhibited by the representations (cf. Section 3.1). As
such, we added LiteralE [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] to our evaluation, which builds on ComplEx and gated DistMult and for
which [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] reports good results on numeric literals. The other properties are shared with the
respective underlying embedding method.
          </p>
          <p>
            As we can see, there is not a single embedding method that is a perfect fit for the properties
exhibited by the representations. Therefore, we rely on the experimental evaluation in Section 4
for indications of the importance of the different properties.
          </p>
        </sec>
        <sec id="sec-2-4-3">
          <title>3.3. Sum-based Embedding Aggregation</title>
          <p>To allow for the computation of sum-based embeddings, the individual embeddings to aggregate have to be
identified. Section 3.3.1 presents the employed methods for selecting the relevant subgraphs,
while Section 3.3.2 details the aggregation methodology.</p>
          <sec id="sec-2-4-3-1">
            <title>3.3.1. Node Selection Strategy</title>
            <p>To select the most relevant nodes to aggregate for a given input, i. e. a quality
characteristic in our case, a subgraph 𝒢_q is generated by propagating from the node corresponding to
the input. The depth until which the propagation is executed is limited by d, which is chosen
depending on the respective representation. In our case, d = 1 for unquantified conclusions,
τ, and for quantified conclusions modelled through relations, τ̂_rel, while d = 2 for entity-based
representations that model the ternary relation through a combination of two relations. For
the literal-based representation, τ̂_ch,l,i, d = 1, since no node embedding for the literal is
computed; it only influences the embeddings of the nodes connected to it.</p>
          </sec>
          <sec id="sec-2-4-3-2">
            <title>3.3.2. Aggregation</title>
            <p>Individual node embeddings e_v of nodes v, with v ∈ 𝒢_q, where 𝒢_q is the subgraph corresponding
to the input q, form the basis of the subgraph's embedding and are aggregated following a
sum-based approach. Nordsieck et al. [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] argue that, since the head node does not provide
additional semantic information, it should not be considered. This results in
h̄ = ∑_{(h,t) ∈ 𝒢_q} e_t ⊗ δ(e_h, e_t),
where (h, t) are the pairs of head and tail nodes resulting from the graph propagation and
δ(e_h, e_t) is a distance measure between the node embeddings of h and t. However, one could
also include the head node in the subgraph embedding. Therefore, we define an aggregation
variant including the head node embedding e_0, via:
h = e_0 + ∑_{(h,t) ∈ 𝒢_q} e_t ⊗ δ(e_h, e_t).</p>
            <p>Due to the pattern-based nature of the knowledge graphs in our scenario, we expect the relations
to be the same independent of the starting node. Also, they have previously been considered
by the knowledge graph embedding methods. Therefore, we do not consider the relations'
embeddings in the aggregation.</p>
            <p>Since it is not clear whether the head node contains semantic information and should
consequently be included in the computation of the subgraph embedding, we treat its inclusion as a parameter,
along with the distance measure, and conduct an experiment in Section 4 to evaluate its effect.</p>
          </sec>
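          <p>The node selection and the sum-based aggregation can be sketched as follows. Since the combination operator ⊗ is not spelled out here, the sketch reads e_t ⊗ δ(e_h, e_t) as scaling the tail embedding by the scalar distance; this reading, as well as the euclidean default, is an assumption for illustration:</p>

```python
from collections import deque

def select_subgraph(graph, start, depth):
    """Collect (head, tail) pairs by propagating from `start` up to `depth` hops."""
    pairs, frontier, seen = [], deque([(start, 0)]), {start}
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for h, _rel, t in graph:
            if h == node:
                pairs.append((h, t))
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, d + 1))
    return pairs

def aggregate(pairs, emb, head=None, dist=None):
    """Sum-based subgraph embedding: sum tail embeddings e_t scaled by
    delta(e_h, e_t); if `head` is given, its embedding e_0 is added."""
    if dist is None:
        # euclidean distance as the default delta
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    dim = len(next(iter(emb.values())))
    total = [0.0] * dim
    for h, t in pairs:
        d = dist(emb[h], emb[t])
        total = [acc + d * x for acc, x in zip(total, emb[t])]
    if head is not None:
        total = [acc + x for acc, x in zip(total, emb[head])]
    return total

# Toy graph and embeddings (illustrative values only).
graph = [("q1", "implies", "p1"), ("q1", "implies", "p2")]
emb = {"q1": [1.0, 0.0], "p1": [0.0, 1.0], "p2": [0.0, 2.0]}
pairs = select_subgraph(graph, "q1", depth=1)
h_vec = aggregate(pairs, emb, head="q1")
```

          <p>Passing head=None yields the variant h̄ without the head node, so the two aggregation variants differ only in this parameter.</p>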
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Evaluation</title>
      <p>
        To evaluate which method performs best for embedding procedural knowledge for downstream
tasks, we deviate from metrics frequently encountered in link prediction scenarios (i. e. hits@k,
which measures the fraction of hits for which an entity appears among the first k entries of the
sorted list of individual rank scores) and utilize the metric matches@k, which measures the
mean overlap between the k closest quality characteristics in embedding and graph space [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
As such, it is suited to establish whether subgraphs are close in embedding and graph space.
Closest is in this case defined as the highest overlap in related process parameters. A high
value—up to a maximum of k—indicates that the k nearest quality characteristics of the graph
space have been correctly identified in the embedding space.
      </p>
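      <p>A minimal sketch of the metric, assuming the k nearest neighbours of each quality characteristic have already been computed in both spaces (the neighbour lists below are illustrative):</p>

```python
def matches_at_k(graph_neighbours, emb_neighbours, k):
    """matches@k: mean overlap between the k closest quality characteristics
    in graph space and in embedding space, averaged over all queries."""
    overlaps = []
    for q, g_list in graph_neighbours.items():
        g = set(g_list[:k])
        e = set(emb_neighbours[q][:k])
        overlaps.append(len(g & e))
    return sum(overlaps) / len(overlaps)

# Illustrative neighbour lists, sorted by closeness.
graph_nn = {"q1": ["q2", "q3"], "q2": ["q1", "q3"]}
emb_nn = {"q1": ["q2", "q4"], "q2": ["q3", "q1"]}
score = matches_at_k(graph_nn, emb_nn, k=2)
```

      <p>Note that the per-query overlap is an unnormalised count between 0 and k, so the mean is likewise bounded by k.</p>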
      <p>To evaluate the representations we rely on a synthetic dataset that simulates parametrisation
processes in manufacturing scenarios1, i. e. process iterations with parameters and the resulting
quality characteristics that can be used as target variables in predictive quality systems. This
allows us to control the effect of noise that is typical for many real-world datasets and might
lead to misjudging methods due to the influence of said noise. Since, in the envisioned
use case, generalisation of the learnt embeddings to new knowledge is not needed, we do not need
to employ a separate test set and can validate convergence in-sample. The dataset results
in a knowledge graph containing 42 to 90 vertices with 48 to 144 edges, depending on the
representation. The average neighbour degrees range between 1.67 and 4.11.</p>
      <p>The embeddings have been trained using PyKEEN2 for the link prediction node embeddings with
an Adam optimizer with learning rate 4 × 10⁻⁴ and weight decay 1 × 10⁻⁵ as well as the default
parametrization of the embedding methods, while RDF2Vec was trained using pyRDF2Vec3,
with a random walker with maximal depth and maximal walks set to 4 and 100, respectively.
All embedding methods apart from BoxE and RDF2Vec were trained for 750 epochs. BoxE and
RDF2Vec were trained for 1500 and 1000 epochs, respectively, as they had not yet converged after
750. We chose to evaluate 48-dimensional embeddings since these allow for direct utilisation in
a downstream predictive system.</p>
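      <p>The PyKEEN part of this training setup can be sketched with its pipeline API (a configuration sketch under the stated hyperparameters, not our exact script; the triples file path is hypothetical):</p>

```python
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

# Hypothetical path to a TSV export of the procedural knowledge graph.
tf = TriplesFactory.from_path("procedural_kg.tsv")

result = pipeline(
    training=tf,
    testing=tf,  # convergence is validated in-sample, so no separate test split
    model="RotatE",
    model_kwargs=dict(embedding_dim=48),
    optimizer="Adam",
    optimizer_kwargs=dict(lr=4e-4, weight_decay=1e-5),
    training_kwargs=dict(num_epochs=750),
)
entity_embeddings = result.model.entity_representations[0]
```

      <p>The resulting entity representations then serve as the individual node embeddings for the sum-based aggregation.</p>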
      <p>Inspecting the results shown in Table 2, we can observe that the theoretically best-suited
method, RDF2Vec, produces uncompetitive results in practice, the exception being
representation τ̂_rel. Interestingly, it is also unable to cope with modelling ternary relations through
chained binary relations (present from τ̂_ch,e onward), which we expected to be the case, as
the random walks conducted can span more than the required two relations. Overall, BoxE
produced the worst results, performing slightly worse than RDF2Vec for most configurations.
Regarding the variations in the sum-based aggregation, euclidean distance outperforms jaccard
for all representations and embedding methods apart from RDF2Vec. Whether to include the
head node makes no discernible difference for representations without indirections or chained
binary relations (τ and τ̂_rel). For representations with chained binary relations, the head node
is included in a majority of the best results per representation. This indicates that the
indirections introduced by chained binary relations profit from a stronger sense of context, which is
provided by adding the head node. This can be interpreted as an indication that the
quantification and its connections to the respective parameter can be embedded. Regarding RQ4, we
conclude that a combination of RotatE with euclidean distance and included head node is likely
to be the embedding method best suited to embed procedural knowledge.</p>
      <p>1 Code, data and RDF representations are available at https://github.com/0x14d/embedding-operator-knowledge
2 https://pykeen.readthedocs.io/
3 https://pyRDF2Vec.readthedocs.io/</p>
      <p>Table 2: … representation in bold. Results for τ̂_ch,l,i are only available for embedding
methods supporting literals. (Rows: aggregation variants h and h̄ with euclidean and jaccard
distance; columns: representations τ, τ̂_rel, τ̂_ch,e, τ̂_ch,e,i and τ̂_ch,l,i.)</p>
      <p>To answer RQ1, whether quantified values should better be represented as literals or entities,
we refer to RotatE's performance on τ̂_ch,e,i versus DistMult-LiteralE's on τ̂_ch,l,i. Here, RotatE
outperforms DistMult-LiteralE by 15.03% for the respective best configuration. In general,
we observe that the literal-enabled method DistMult-LiteralE performs consistently worse
than its literal-agnostic base DistMult, with ComplEx-LiteralE showing the same behaviour.
Therefore, we conclude that representing quantified values as entities is the better choice for
the investigated methods.</p>
      <p>Regarding RQ2, no benefit of including quantified values in the respective representations
can be discerned using the matches@k metric. As such, RQ2 cannot be definitively answered.
Consequently, this question should be re-evaluated on an actual downstream scenario which
includes a stronger reliance on quantified values than matches@k.</p>
      <p>The representation containing only chained binary relations, τ̂_ch,e, provides mostly worse
results compared to representations (a) not considering the quantification, (b) representing the
quantification as a separate relation or (c) adding the high-level implies relation, for all evaluated
embedding methods. As such, we conclude that none of the evaluated embedding methods is able
to sufficiently deal with the indirections (RQ3). Consequently, alternative approaches to represent
ternary relations in knowledge graphs should be explored.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Future Work</title>
      <p>On a conceptual level, the presented representations all concern non-temporal aspects of
procedural knowledge. However, in practice the order in which actions are executed often plays
a significant role. As such, we intend to provide representations that address this aspect of
procedural knowledge and evaluate whether ordered RDF2Vec provides a benefit over other
embedding methods in this scenario. Also, since BoxE is able to deal with higher arities, it could
be used directly to gain a better understanding of the implications of modelling the ternary
relation as chained binary relations. As the detrimental effect of lower-dimensional embeddings
on matches@k visible in preliminary experiments was not evenly distributed over embedding
methods, e. g. RDF2Vec was able to handle it better than its competitors, a more thorough
investigation in this direction is planned. Furthermore, aggregation methods other than
sum-based aggregation will be considered. While we argued that a specific metric, i. e. matches@k,
is beneficial to evaluate the quality of the embeddings in a downstream scenario, an
evaluation on standard link prediction metrics, i. e. hits@k, could be used to quantitatively
evaluate the node embedding quality. In addition, an evaluation with an actual downstream
task might give further insights into the expressiveness of matches@k and the achieved quality
of the respective embedding method. Finally, we plan to evaluate the approach on more
datasets.</p>
    </sec>
    <sec id="sec-5">
      <title>6. Conclusion</title>
      <p>
        In this paper, we took a closer look at embeddings of procedural knowledge: from exploring more
detailed modelling patterns than were previously published [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], over analysing the properties
they exhibit, to evaluating, on a theoretical and experimental level, which embedding methods
are best suited to embed them. Additionally, subgraph selection strategies were adapted and
alternative aggregation strategies evaluated. We discovered that the theoretical inspection of
the properties of embedding methods is not confirmed by the experimental results. In particular,
methods capable of embedding literals did not provide a benefit over literal-agnostic
methods. Most of the evaluated methods seem to have difficulties capturing the indirections
introduced by modelling ternary relations as chained binary relations. However, explicitly
adding the implied high-level relation seems to mitigate this problem. An evaluation on an
actual downstream task will provide further insights into our findings.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Buchgeher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martinez-Gil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ehrlinger</surname>
          </string-name>
          ,
          <article-title>Knowledge graphs in manufacturing and production: A systematic literature review</article-title>
          ,
          <source>IEEE Access 9</source>
          (
          <year>2021</year>
          )
          <fpage>55537</fpage>
          -
          <lpage>55554</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Abboud</surname>
          </string-name>
          , I. Ceylan,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          , T. Salvatori,
          <article-title>BoxE: A box embedding model for knowledge base completion</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>9649</fpage>
          -
          <lpage>9661</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Portisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Heist</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <article-title>Knowledge graph embedding for data mining vs. knowledge graph embedding for link prediction-two sides of the same coin?</article-title>
          ,
          <source>Semantic Web</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>399</fpage>
          -
          <lpage>422</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Krathwohl</surname>
          </string-name>
          ,
          <article-title>A revision of bloom's taxonomy: An overview</article-title>
          ,
          <source>Theory into practice 41</source>
          (
          <year>2002</year>
          )
          <fpage>212</fpage>
          -
          <lpage>218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Nordsieck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hummel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hähner</surname>
          </string-name>
          ,
          <article-title>Towards conceptual and procedural models of operator knowledge in industrial information models</article-title>
          ,
          <source>First International Workshop On Semantic Industrial Information Modelling (SemIIM) at the 19th Extended Semantic Web Conference (ESWC 2022)</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bordes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Usunier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcia-Duran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Yakhnenko</surname>
          </string-name>
          ,
          <article-title>Translating embeddings for modeling multi-relational data</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>26</volume>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Trouillon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Welbl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Riedel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>É.</given-names>
            <surname>Gaussier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bouchard</surname>
          </string-name>
          ,
          <article-title>Complex embeddings for simple link prediction</article-title>
          ,
          <source>in: International Conference on Machine Learning</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>2071</fpage>
          -
          <lpage>2080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.-H.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>RotatE: Knowledge graph embedding by relational rotation in complex space</article-title>
          ,
          <source>arXiv preprint arXiv:1902.10197</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gutiérrez-Basulto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schockaert</surname>
          </string-name>
          ,
          <article-title>From knowledge graph embedding to ontology embedding? An analysis of the compatibility between vector space representations and rules</article-title>
          ,
          <source>in: Sixteenth International Conference on Principles of Knowledge Representation and Reasoning</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Gesese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sack</surname>
          </string-name>
          ,
          <article-title>A survey on knowledge graph embeddings with literals: Which model links better literal-ly?</article-title>
          ,
          <source>Semantic Web</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>617</fpage>
          -
          <lpage>647</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Portisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <article-title>Walk this way! entity walks and property walks for rdf2vec</article-title>
          ,
          <source>arXiv preprint arXiv:2204.02777</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Hörner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schamberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bodendorf</surname>
          </string-name>
          ,
          <article-title>Externalisierung von prozess-spezifischem Mitarbeiterwissen im Produktionsumfeld</article-title>
          ,
          <source>Zeitschrift für wirtschaftlichen Fabrikbetrieb</source>
          <volume>115</volume>
          (
          <year>2020</year>
          )
          <fpage>413</fpage>
          -
          <lpage>417</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Portisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <article-title>Putting rdf2vec in order</article-title>
          ,
          <source>arXiv preprint arXiv:2108.05280</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Quaternion knowledge graph embeddings</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kristiadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lukovnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <article-title>Incorporating literals into knowledge graph embeddings</article-title>
          ,
          <source>in: International Semantic Web Conference</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>363</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>