<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enhanced GAT: Expanding Receptive Field with Meta Path-Guided RDF Rules for Two-Hop Connectivity</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Julie Loesch</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michel Dumontier</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Remzi Celebi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Advanced Computing Sciences, Maastricht University</institution>
          ,
          <addr-line>Paul-Henri Spaaklaan 1, Maastricht, 6229 EN</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Neuro-Symbolic Artificial Intelligence is an emerging field that combines neural networks and symbolic reasoning to tackle complex tasks, such as ontology reasoning, i.e., inferring new facts that are not explicitly expressed in an ontology. However, the integration of symbolic reasoning into neural networks for efficient reasoning over very large and complex ontologies remains relatively unexplored. Therefore, this paper introduces a scalable neural-symbolic method called 2-Hop GAT for reasoning over large and complex ontologies; it extends the Graph Attention Network (GAT), leveraging two-hop meta paths to capture transitivity. By extending GAT to include nodes that are two hops away, the proposed method achieves enhanced reasoning capabilities. Additionally, the Filtered 2-Hop GAT variant is presented, which adds a filtering mechanism that guides two-hop meta paths to include two RDF rules, namely (1) subclass transitivity: if A is a subclass of B, and B is a subclass of C, then A is also a subclass of C; and (2) if a is a type of B and B is a subclass of C, then a is also a type of C. This paper reports experimental results using the datasets from the SemRec Challenge at ISWC 2023, demonstrating the effectiveness of the proposed methods. The latter approach shows promising results, achieving a Hits@5 score of 0.752 and a Hits@10 score of 0.803 for the class subsumption task.</p>
      </abstract>
      <kwd-group>
        <kwd>Neuro-Symbolic AI</kwd>
        <kwd>Ontology Reasoner</kwd>
        <kwd>Graph Neural Networks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>A knowledge graph (KG) can take the form of an ontology, which is a formal and explicit specification of
concepts, relationships, constraints, and rules within a specific domain, and which can represent more
complex relationships (e.g., negation, conjunction, disjunction). However, the use of Graph Neural Networks (GNNs) for
efficient reasoning over very large and complex ontologies remains relatively unexplored. One
possible reason is that GNNs, in their current form, operate in a data-driven manner and lack
the explicit symbolic reasoning capabilities that are essential for ontologies. The receptive field
of a single GNN layer is restricted to the one-hop network neighborhood, which means that a node
can only attend to its immediate neighbors to compute its next-layer representation. Although
stacking multiple GNN layers can enlarge the receptive field, such deeper networks are prone
to over-smoothing, where node representations converge to
indistinguishable vectors (i.e., all node embeddings converge to the same value) [6].</p>
      <p>
        In this work, we propose the 2-Hop Graph Attention Network (2-Hop GAT), a scalable and efficient
neural-symbolic method to reason over large and complex ontologies, with a focus on transitivity,
a fundamental property for ontology reasoning. 2-Hop GAT is an extension of GAT [5], a neural
network architecture that leverages masked self-attentional layers. In this way, the receptive
field includes nodes that are two hops away. We also explore a Filtered 2-Hop GAT, which adds a
filtering mechanism such that two-hop meta paths are guided to include two RDF rules [8]
that capture transitivity. The rules are: (1) subclass transitivity: if A is a subclass of B, and B is a subclass of C,
then A is also a subclass of C; and (2) if a is a type of B and B is a subclass of C, then a is also
a type of C. Evaluated against CaLiGraph10e5, the system achieves promising results for the
class subsumption reasoning task, with a Hits@5 score of 0.752 and a Hits@10 score of 0.803.
      </p>
      <p>The main contributions of this paper are:
• The development of 2-Hop GAT and Filtered 2-Hop GAT, which, unlike metapath2vec, do not generate random walks,
thereby significantly reducing time complexity.
• The integration of symbolic reasoning into Graph Neural Networks in the form of a filtering
mechanism that adds two RDF rules (i.e., RDFS entailment rules, https://www.w3.org/TR/rdf-mt/#rules).
• The application of Graph Neural Networks to address the challenges of two key ontology
reasoning tasks, namely the subclass and type reasoning tasks.</p>
      <p>The remainder of the paper is organized as follows: previous related studies and the datasets
provided by the SemRec Challenge at ISWC 2023 are presented in Section 2 and Section 3,
respectively. Section 4 introduces the neural-symbolic methods for reasoning over very large
ontologies. Section 5 explains the experimental setup and reports the obtained results, which are
followed by a discussion in Section 6. Finally, Section 7 concludes this paper. The implementation
used to generate the analysis is available in our GitHub repository: https://github.com/jloe2911/SemREC2023.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Wang et al. [9] proposed the Multi-hop Attention Graph Neural Network (MAGNA), an
effective multi-hop self-attention mechanism for graph-structured data. MAGNA captures
long-range interactions between nodes that are not directly connected but may be multiple
hops away. In contrast, our proposed model simply enlarges the receptive field to nodes that
are two hops away using the Graph Attention Network [5].</p>
      <p>Dong et al. [10] proposed metapath2vec for scalable representation learning in heterogeneous
networks. Metapath2vec aims to maximize the likelihood of preserving both the structure and
semantics of a given heterogeneous graph by simultaneously learning low-dimensional,
latent embeddings for multiple types of nodes and edges. The approach is based on
meta-path-based random walks; however, Sun et al. [11] showed that heterogeneous random walks are
biased toward node types with a dominant number of paths and toward concentrated nodes, which
have a governing percentage of paths pointing to a small set of nodes. To address this issue, the
authors demonstrated how to use meta paths to guide heterogeneous random walkers. Thus,
in metapath2vec, the random walk is driven by a meta path that defines the node type order.
The random walks are then used to learn an embedding vector for each node in the graph,
employing the Word2Vec algorithm [12]. To overcome this bias problem and, at the
same time, to integrate symbolic reasoning into Graph Neural Networks, we added a filtering
mechanism such that two-hop meta paths are guided to include two RDF rules [8].</p>
      <p>Multi-hop approaches have also been used in neural reasoning methods other than GNNs. In particular,
Mehryar and Celebi [13] extended TransE and rTransE to make subsumption and
instance-checking reasoning possible by leveraging transitive relations. The authors further improved the
quality of the embeddings using multi-hop samples generated by an agent's policy network. The
agent is a neural network that takes as input an entity vector embedding (i.e., a state) and outputs
a relationship (i.e., an action). The agent learns a policy for choosing actions that lead to a more
meaningful and longer sequence of translations. Employing rTransE on the CaLiGraph10e5
dataset provided by the SemRec Challenge 2022, they achieved a Hits@5 score of 0.678 and
a Hits@10 score of 0.713 for the class subsumption task, and a Hits@5 score of 0.017 and a
Hits@10 score of 0.095 for the class membership task.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Datasets</title>
      <p>This paper utilizes three ontologies that were provided by the SemRec Challenge at ISWC 2023:
OWL2Bench [14], ORE [15], and CaLiGraph [16]. Usually, an ontology comprises
Terminological Box (TBox) and Assertional Box (ABox) axioms. The TBox axioms describe
the relationships between classes, such as subclass relationships and property restrictions. The
ABox axioms represent individuals that are instances of the classes defined in the TBox. Table 1
lists the frequency of each axiom. The OWL2Bench and ORE TBoxes encompass a wide range
of axioms, including Class Expression Axioms, Object Property Axioms, Data Property Axioms,
and Assertions, among others. In contrast, CaLiGraph's TBox is primarily composed of class
restrictions. In addition, the number of nodes/classes and edges is reported in Table 2, and
Table 3 indicates the number of edges for each edge type and their proportion.</p>
      <p>OWL2Bench [14] is a benchmark for evaluating the coverage, scalability, and query
performance of ontology reasoners across the four OWL 2 profiles (EL, QL, RL, and DL). The authors
extended the well-known University Ontology Benchmark (UOBM) to create four TBoxes, one for
each of the four OWL 2 profiles. Furthermore, OWL2Bench has an ABox generator and comes
with a set of 22 SPARQL queries that involve reasoning. OWL2Bench is provided as two separate
sub-datasets. OWL2Bench1 includes 7,989 (99%) assertion relations in the training data and
2,283 (99%) in the test data, as well as 105 (1%) subclass relations in the training data and
30 (1%) in the test data. OWL2Bench2, which is approximately twice the size of OWL2Bench1,
contains 15,526 (99%) assertion relations in the training data and 4,437 (99%) in the test
data, as well as 105 (1%) subclass relations in the training data and 30 (1%) in the test data. In
addition, both sub-datasets have very few equivalence relations between pairs of classes (i.e., the
number of all other edges). Thus, OWL2Bench1 and OWL2Bench2 mainly comprise assertion edges
(i.e., around 99%).</p>
      <p>The second ontology that was provided by the challenge is OWL Reasoner Evaluation (ORE)
2014 [15]. The dataset is larger than OWL2Bench but similar in the sense that it mainly includes
subclass and assertion relations, and only very few equivalence relations between pairs of
classes. Compared to OWL2Bench, ORE comprises more subclass edges (i.e., around 13%)
and fewer assertion relations (i.e., around 87%). ORE 2014 was released by the OWL Reasoner
Evaluation (ORE) competition, an annual competition (with an associated workshop)
that pits OWL 2-compliant reasoners against each other on various standard reasoning tasks
over naturally occurring problems.</p>
      <p>Finally, the third dataset that we used is CaLiGraph, which was curated by Heist and Paulheim
[16]. The dataset was generated from Wikipedia by exploiting the category system, list pages,
and other list structures in Wikipedia, and contains more than 15 million typed entities and
around 10 million relation assertions. In addition, CaLiGraph has a large ontology, comprising
more than 200,000 class restrictions. The authors also introduced differently sized subsets of
CaLiGraph, allowing for scalability experiments. Since CaLiGraph has a large ABox
and ontology, it is an interesting benchmark dataset for measuring the trade-off between
scalability and accuracy. In contrast to the two previously mentioned datasets, CaLiGraph is
more challenging because it contains more relation types (i.e., edges that represent simple facts)
and poses high scalability requirements.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>
        This section introduces the 2-Hop Graph Attention Network (2-Hop GAT) and the Filtered 2-Hop Graph
Attention Network (Filtered 2-Hop GAT), whose primary goal is to reason over two types
of ontology relations, namely subclass and type relations. Subsection 4.1 provides a short
background on the Graph Attention Network (GAT). Subsection 4.2 presents 2-Hop GAT, an
extension of GAT that follows a message passing schema to update node representations using
information from nodes that are two hops away. To add explicit symbolic reasoning to 2-Hop
GAT, which is essential for ontology reasoning, we propose the Filtered 2-Hop GAT presented in
Subsection 4.3. In short, Filtered 2-Hop GAT adds a filtering mechanism that guides two-hop meta
paths to include two RDF rules [8]. Specifically, (1) subclass transitivity: if A is a subclass
of B, and B is a subclass of C, then A is also a subclass of C; and (2) if a is a type of B and B
is a subclass of C, then a is also a type of C.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Graph Attention Network</title>
        <p>Graph Attention Networks (GATs) are neural network architectures that operate on
graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior
methods based on graph convolutions [5]. By stacking layers in which nodes can
attend over their neighborhoods' features, GATs enable assigning different weights to different
nodes in a neighborhood without requiring any kind of costly matrix operation or depending on
knowing the graph structure upfront. GATs feature an attention mechanism in the aggregation
process, learning extra attention weights for the neighbors of each node, since not all neighbors
$u \in N(v)$ are equally important to a node $v$. Moreover, GATs have demonstrated promising
results across four established transductive and inductive graph benchmarks: the Cora, Citeseer,
and Pubmed citation networks, as well as a protein-protein interaction dataset. Formally, the $l$-th
GAT layer can be defined as $h_v^{(l)} = \sigma\left(\sum_{u \in N(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)}\right)$, where $\alpha_{uv}$ are the attention
weights.</p>
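        <p>To make the layer concrete, the following is a minimal, single-head PyTorch sketch of the attention layer defined above (a simplified, dense-adjacency illustration, not the authors' implementation; the LeakyReLU/ELU choices follow the GAT paper [5]):</p>
        <preformat>
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head GAT layer: h_v = sigma(sum over u in N(v) of alpha_uv * W h_u)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, h, adj):
        # h: [N, in_dim] node features; adj: [N, N] adjacency with self-loops
        z = self.W(h)                                     # [N, out_dim]
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)              # target node v
        zj = z.unsqueeze(0).expand(n, n, -1)              # source node u
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))        # mask non-edges
        alpha = torch.softmax(e, dim=1)                   # attention weights alpha_uv
        return F.elu(alpha @ z)                           # weighted aggregation
        </preformat>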
      </sec>
      <sec id="sec-4-2">
        <title>4.2. 2-Hop Graph Attention Network</title>
        <p>Standard Graph Neural Networks (GNNs) follow a message passing schema to iteratively update node
representations using information from one-hop neighborhoods. In general, a GNN layer
compresses a set of vectors into a single vector through a two-step process, namely message and
aggregation. In the first step, each node computes a message: for each node $u \in N(v) \cup \{v\}$,
the message function is defined as $m_u^{(l)} = \mathrm{MSG}^{(l)}(h_u^{(l-1)})$. In the second step, each
node aggregates the messages from its neighbors: $h_v^{(l)} = \mathrm{AGG}^{(l)}(\{m_u^{(l)}, u \in N(v)\}, m_v^{(l)})$.
Hence, the $l$-th GNN layer takes as input the node embedding of the node itself, $h_v^{(l-1)}$, and the node
embeddings of the neighboring nodes, $h_u^{(l-1)}$ for $u \in N(v)$, and outputs the node embedding $h_v^{(l)}$.</p>
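        <p>As an illustration, the following is a minimal sketch of this two-step schema, assuming a sum over incoming messages; msg_fn and agg_fn are placeholders for the layer's learned functions:</p>
        <preformat>
import torch

def gnn_layer(h, edge_index, msg_fn, agg_fn):
    # h: [N, d] node embeddings; edge_index: [2, E] directed edges (u, v)
    src, dst = edge_index
    m = msg_fn(h)                    # step 1: m_u = MSG(h_u) for every node
    agg = torch.zeros_like(m)
    agg.index_add_(0, dst, m[src])   # step 2: sum messages from neighbors at v
    return agg_fn(agg, m)            # combine the aggregate with the node's own message

# usage: h_next = gnn_layer(h, edge_index, msg_fn=linear,
#                           agg_fn=lambda nbr, own: torch.relu(nbr + own))
        </preformat>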
        <p>However, to explicitly capture transitivity, a fundamental property for ontology reasoning, it
is essential to extend the Graph Attention Network such that it follows a message passing schema
incorporating information from nodes that are two hops away to learn node representations.
Thus, in this paper, we propose an extended version of the Graph Attention Network [5], which
we call 2-Hop Graph Attention Network (2-Hop GAT). Two hop is crucial for an ontology reasoner
so as to capture longer-range dependencies within the graph. By considering information beyond
immediate neighbors, the model gains a broader context of the graph structure and can better
understand how distant nodes influence each other.</p>
        <p>The graph attention layer is expanded to two hops, enhancing the attention mechanism to
incorporate not only the direct neighbors of a node but also their neighbors up to two hops
away. Figure 1 illustrates how the embedding of a node (i.e., the yellow node) is calculated
by aggregating all the messages from its direct neighbors as well as from their neighbors that are
two hops away. Concretely, the inputs of the two-hop graph attention layer are randomly
initialized node features/embeddings and adjacency matrices representing the graph's one-hop
and two-hop connectivity structures. Then, for each node, the node embeddings with respect
to its direct neighbors (nodes up to one hop away) and the node embeddings with respect to
its neighbors' neighbors (nodes up to two hops away) are computed by the message passing
schema. Finally, the node representations obtained from the direct neighbors are aggregated
with the node representations acquired from the neighbors' neighbors.</p>
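        <p>A minimal sketch of this construction, reusing the GATLayer sketch from Subsection 4.1 (using a sum to combine the two views is an assumption, consistent with the formulas in Subsection 4.3):</p>
        <preformat>
import torch.nn as nn

class TwoHopGATLayer(nn.Module):
    """Attend over one-hop and two-hop neighbors, then aggregate both views."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.gat_1hop = GATLayer(in_dim, out_dim)  # GATLayer sketch from Subsection 4.1
        self.gat_2hop = GATLayer(in_dim, out_dim)

    def forward(self, h, adj_1hop, adj_2hop):
        # adj_2hop encodes the graph's two-hop connectivity structure
        h1 = self.gat_1hop(h, adj_1hop)            # messages from direct neighbors
        h2 = self.gat_2hop(h, adj_2hop)            # messages from two-hop neighbors
        return h1 + h2                             # aggregate the two views
        </preformat>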
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Filtered 2-Hop Graph Attention Network</title>
        <p>We modified 2-Hop GAT to guide the message passing schema within the two-hop graph attention
layer to include two RDF rules [8]. By introducing this filtered version of 2-Hop GAT, we direct
the attention mechanism to specific two-hop paths. This mechanism helps the model focus on
relevant information and reduces noise and irrelevant signals from the graph. The two RDF rules
that were built into the graph attention layer are applied as follows.</p>
        <p>In our implementation, we captured subclass transitivity by taking one-hop and two-hop
subclass relations (corresponding to the path A rdfs:subClassOf B &amp; B rdfs:subClassOf C) as
input to the two-hop graph attention layer. Additionally, we considered one-hop assertion
and two-hop subclass relations for the other rule (representing the path A rdf:type B &amp; B
rdfs:subClassOf C). In particular, we trained a 2-Hop GAT for each rule separately and then
combined the models by concatenating them at the final stage.</p>
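        <p>The rule-guided two-hop connectivity can be derived by composing edge-type-specific adjacency matrices, as in the following sketch (an illustrative reconstruction of the meta path-guided filtering, not necessarily the authors' exact preprocessing):</p>
        <preformat>
import torch

def two_hop_adjacency(adj_first, adj_second):
    """Compose two edge-type-specific adjacency matrices: a node pair (a, c) is
    connected iff some b satisfies the meta path via edges (a, b) and (b, c)."""
    return ((adj_first.float() @ adj_second.float()) > 0).long()

# Rule 1 (subclass transitivity): rdfs:subClassOf followed by rdfs:subClassOf
# adj_sub2 = two_hop_adjacency(adj_subclass, adj_subclass)
# Rule 2: rdf:type followed by rdfs:subClassOf
# adj_type2 = two_hop_adjacency(adj_type, adj_subclass)
        </preformat>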
        <p>Mathematically, given a graph $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of
edges, and using the notation $h_v^{(l)}$ to represent the features of node $v$ at layer $l$, the filtered
2-hop graph attention layer concatenates the node embeddings obtained from:
• The standard graph attention layer: $h_v^{(l)} = \sigma(\sum_{u \in N(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)})$.
• The 2-hop graph attention layer taking as input one-hop and two-hop subclass relations:
$h_v^{(l)} = \sigma(\sum_{u \in N_{sub}(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)}) + \sigma(\sum_{u \in N^2_{sub}(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)})$, where $N_{sub}(v)$
is the set of neighboring nodes connected to $v$ by edges of type subclass and $N^2_{sub}(v)$
includes the nodes that are two hops away from $v$ through second-order edges of type
subclass.
• The 2-hop graph attention layer taking as input one-hop assertion and two-hop subclass
relations: $h_v^{(l)} = \sigma(\sum_{u \in N_{type}(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)}) + \sigma(\sum_{u \in N^2_{sub}(v)} \alpha_{uv} W^{(l)} h_u^{(l-1)})$, where
$N_{type}(v)$ is the set of neighboring nodes connected to $v$ by edges of type assertion.</p>
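        <p>Putting the pieces together, a minimal sketch of the final concatenation, reusing the GATLayer and TwoHopGATLayer sketches above (module and argument names are illustrative assumptions, not the authors' code):</p>
        <preformat>
import torch
import torch.nn as nn

class FilteredTwoHopGAT(nn.Module):
    """Concatenate the standard GAT view with the two rule-guided 2-hop views."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.gat = GATLayer(in_dim, out_dim)
        self.rule1 = TwoHopGATLayer(in_dim, out_dim)  # subclass + two-hop subclass
        self.rule2 = TwoHopGATLayer(in_dim, out_dim)  # assertion + two-hop subclass

    def forward(self, h, adj, adj_sub, adj_sub2, adj_type):
        h0 = self.gat(h, adj)                         # standard attention layer
        h1 = self.rule1(h, adj_sub, adj_sub2)         # rule 1: subclass transitivity
        h2 = self.rule2(h, adj_type, adj_sub2)        # rule 2: type propagation
        return torch.cat([h0, h1, h2], dim=-1)        # final concatenation
        </preformat>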
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>In the context of a link prediction task, computing the Hits@k metric is a common way to
evaluate the performance of a link prediction model. Hits@k measures the percentage of positive
examples that appear in the top-k ranked predictions. Specifically, for each test instance, we
predicted the tail entity $t$ given the head entity $h$ and the relation $r$, and assessed whether the
rank of the correct tail entity is at most $k$.</p>
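      <p>For reference, a minimal sketch of this computation, assuming the model produces a score for every candidate tail entity:</p>
      <preformat>
import torch

def hits_at_k(scores, true_tail, k):
    # scores: [num_test, num_entities] scores for every candidate tail entity
    # true_tail: [num_test] index of the gold tail entity for each test triple
    topk = scores.topk(k, dim=1).indices                # top-k ranked predictions
    hits = (topk == true_tail.unsqueeze(1)).any(dim=1)  # gold tail in the top k?
    return hits.float().mean().item()                   # fraction of test hits
      </preformat>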
      <p>We calculated Hits@5 and Hits@10 by employing the Graph Attention Network (GAT) [5]
and our two proposed models 2-Hop GAT and Filtered 2-Hop GAT on the datasets provided by
the SemRec Challenge at ISWC 2023, namely OWL2Bench [14], ORE [15], and CaLiGraph [16].
We trained GAT, 2-Hop GAT, and Filtered 2-Hop GAT on all the training edges and reported
Hits@k by splitting the test edges into Class Subsumption edges, Class Membership edges, and
all the edges.</p>
      <p>Table 4 reveals that no specific model performs either very well or very poorly for
OWL2Bench. On the other hand, for ORE, we can observe that 2-Hop GAT and Filtered
2-Hop GAT outperform GAT for the class subsumption and class membership reasoning tasks.</p>
      <p>For CaLiGraph10e4, the best results were achieved using GAT, while for CaLiGraph10e5, the
highest results were obtained employing Filtered 2-Hop GAT. Precisely, for CaLiGraph10e5 with
the Filtered 2-Hop GAT approach, we achieved a Hits@5 score of 0.752 and a Hits@10 score
of 0.803 for the class subsumption task, and a Hits@5 score of 0.230 and a Hits@10 score of
0.424 for the class membership task, outperforming the results of Mehryar and Celebi [13].</p>
      <p>Overall, the results suggest that the performance of different models varies depending on
the dataset and the type of relations considered, but it can be observed that Filtered 2-Hop GAT
generally performs well across several datasets and relation types.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>The performance of the various models was found to vary with the dataset owing to the distribution
of edge types. GAT outperforms the other two models for the class membership reasoning task
on OWL2Bench, which has a high proportion of assertion relations (99%) and a low proportion
of subclass relations (1%). The two-hop model may not be appropriate here because few instances follow
the rule: if a is a type of B and B is a subclass of C, then a is also a type of C. In contrast,
2-Hop GAT and Filtered 2-Hop GAT performed better than GAT for the subsumption reasoning
task, suggesting that they capture subclass transitivity.</p>
      <p>For CaLiGraph10e4, the best results were achieved using GAT, and for CaLiGraph10e5, the
highest results were obtained by employing Filtered 2-Hop GAT. CaLiGraph10e4 consists of 47%
subclass edges, 40% assertion edges, and 13% simple facts, whilst CaLiGraph10e5 contains 36%
subclass relations, 11% assertion relations, and 52% simple facts.</p>
      <p>The results for ORE are more consistent: 2-Hop GAT and Filtered 2-Hop GAT outperformed
GAT for the class subsumption and class membership reasoning tasks. This can be explained by
the fact that the (relative) numbers of subclass and assertion edges are similar across the different
ORE datasets.</p>
      <p>Furthermore, it is important to mention that we relied solely on the provided triplets to
train and test the models, without incorporating any axioms from the TBoxes. Consequently,
significant and relevant information necessary for ontology reasoning is missing.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>We developed an extension of the Graph Attention Network that allows the receptive field to
reach nodes that are two hops away. We introduced a filtering mechanism to guide two-hop meta paths
to include two RDF rules. We demonstrated that our approaches, 2-Hop GAT and
Filtered 2-Hop GAT, perform well across several datasets and relation types. For CaLiGraph10e5,
Filtered 2-Hop GAT achieved a Hits@5 score of 0.752 and a Hits@10 score of 0.803 for the
class subsumption task, and a Hits@5 score of 0.230 and a Hits@10 score of 0.424 for the class
membership task. Our results suggest that 2-Hop GAT and Filtered 2-Hop GAT enable transitive
reasoning over two types of ontology relations, namely subclass and type relations.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <article-title>A review: Knowledge reasoning over knowledge graph</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>141</volume>
          (
          <year>2020</year>
          )
          <article-title>112948</article-title>
          . URL: https://www.sciencedirect. com/science/article/pii/S0957417419306669. doi:https://doi.org/10.1016/j.eswa.
          <year>2019</year>
          .
          <volume>112948</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bordes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Usunier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcia-Durán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Yakhnenko</surname>
          </string-name>
          ,
          <article-title>Translating embeddings for modeling multi-relational data</article-title>
          ,
          <source>in: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, Curran Associates Inc., Red Hook, NY, USA, 2013, pp. 2787–2795</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] T. Trouillon, J. Welbl, S. Riedel, Éric Gaussier, G. Bouchard, Complex embeddings for simple link prediction, 2016. arXiv:1606.06357.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, CoRR abs/1609.02907 (2016). URL: http://arxiv.org/abs/1609.02907. arXiv:1609.02907.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, in: International Conference on Learning Representations, 2018. URL: https://openreview.net/forum?id=rJXMpikCZ.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Q. Li, Z. Han, X. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, CoRR abs/1801.07606 (2018). URL: http://arxiv.org/abs/1801.07606. arXiv:1801.07606.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] N. Dehmamy, A.-L. Barabasi, R. Yu, Understanding the representation power of graph neural networks in learning graph topology, in: H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, R. Garnett (Eds.), Advances in Neural Information Processing Systems, volume 32, Curran Associates, Inc., 2019. URL: https://proceedings.neurips.cc/paper_files/paper/2019/file/73bf6c41e241e28b89d0fb9e0c82f9ce-Paper.pdf.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] J. Urbani, S. Kotoulas, E. Oren, F. van Harmelen, Scalable distributed reasoning using MapReduce, volume 5823, 2009, pp. 634–649. doi:10.1007/978-3-642-04930-9_40.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] G. Wang, R. Ying, J. Huang, J. Leskovec, Direct multi-hop attention based graph neural network, CoRR abs/2009.14332 (2020). URL: https://arxiv.org/abs/2009.14332. arXiv:2009.14332.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Y. Dong, N. Chawla, A. Swami, metapath2vec: Scalable representation learning for heterogeneous networks, 2017, pp. 135–144. doi:10.1145/3097983.3098036.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Y. Sun, J. Han, X. Yan, P. Yu, T. Wu, PathSim: Meta path-based top-k similarity search in heterogeneous information networks, PVLDB 4 (2011) 992–1003. doi:10.14778/3402707.3402736.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, 2013. arXiv:1301.3781.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] S. Mehryar, R. Celebi, Improving transitive embeddings in neural reasoning tasks via knowledge-based policy networks, ????, pp. 16–27. URL: https://ceur-ws.org/Vol-3337/semrec_paper3.pdf.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] G. Singh, S. Bhatia, R. Mutharaju, OWL2Bench: A benchmark for OWL 2 reasoners, 2020, pp. 81–96. doi:10.1007/978-3-030-62466-8_6.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] N. Matentzoglu, B. Parsia, ORE 2014 reasoner competition dataset, 2014. URL: https://doi.org/10.5281/zenodo.10791. doi:10.5281/zenodo.10791.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] N. Heist, H. Paulheim, The CaLiGraph ontology as a challenge for OWL reasoners, CoRR abs/2110.05028 (2021). URL: https://arxiv.org/abs/2110.05028. arXiv:2110.05028.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>