<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>Raising the Efficiency of Knowledge Graph Embeddings While Respecting Logical Rules</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandar Pavlović</string-name>
          <email>aleksandar.pavlovic@tuwien.ac.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuel Sallinger</string-name>
          <email>emanuel.sallinger@tuwien.ac.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Eficiency, Scalability, Data Management, Logical Rules, Knowledge Graph Completion</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>TU Wien</institution>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Oxford</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>Knowledge graphs (KGs) are highly incomplete. As a result, researchers have proposed mostly machine-learning-based methods for knowledge graph completion (KGC), which is the task of predicting missing links from the information kept in the KG. Geometric KG embedding models (gKGEs) have demonstrated strong KGC results while providing the ability to respect major characteristics of KGs, typically represented in the form of logical rules by the data management community. However, for strong KGC performance, most gKGEs require high embedding dimensionalities or complex embedding spaces, severely restricting their time and space efficiency. This work addresses these challenges by proposing SpeedE, a lightweight Euclidean gKGE that (1) respects a set of core logical rules relevant to the data management community; (2) outperforms state-of-the-art gKGEs, particularly on YAGO3-10 and WN18RR; and (3) greatly boosts their efficiency, in particular requiring only a quarter of the parameters and a fifth of the training time of the state-of-the-art ExpressivE model on WN18RR to achieve competitive KGC performance. This extended abstract is based on our recently published NAACL 2024 paper [1].</p>
      </abstract>
      <kwd-group>
        <kwd>Efficiency</kwd>
        <kwd>Scalability</kwd>
        <kwd>Data Management</kwd>
        <kwd>Logical Rules</kwd>
        <kwd>Knowledge Graph Completion</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Geometric knowledge graph embedding models (gKGEs) represent entities and relations of a
knowledge graph (KG) as geometric shapes in the semantic vector space. gKGEs have achieved
promising performance on knowledge graph completion (KGC) and knowledge-driven
applications [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], while allowing for an intuitive geometric interpretation of their captured patterns
[
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ]. Recently, gKGEs with increasingly complex embedding spaces have been explored
[
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
        ]. However, more complex embedding spaces typically require more costly operations or
more parameters, lowering their time and space efficiency compared to Euclidean gKGEs [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Moreover, most gKGEs require high-dimensional embeddings to reach good KGC performance,
increasing their time and space requirements [
        <xref ref-type="bibr" rid="ref11 ref10">11, 10</xref>
        ]. Thus, the need for (1) complex embedding
spaces and (2) high-dimensional embeddings lowers the efficiency of gKGEs, hindering their
application in resource-constrained environments, especially in mobile smart devices [
        <xref ref-type="bibr" rid="ref7 ref8 ref10">7, 8, 10</xref>
        ].
      </p>
      <p>
        Challenge and Methodology. Although there has been much work on scalable gKGEs, such
work has focused exclusively on either reducing the embedding dimensionality [
        <xref ref-type="bibr" rid="ref12 ref11 ref13">12, 11, 13</xref>
        ]
or using simpler embedding spaces [
        <xref ref-type="bibr" rid="ref14 ref15 ref6">14, 15, 6</xref>
        ], thus addressing only one side of the efficiency
problem. Facing these challenges, this work aims to design a Euclidean gKGE that performs
well on KGC under low-dimensional conditions, reducing its storage requirements, inference time,
and training time. To reach this goal, we analyze ExpressivE [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a Euclidean gKGE that has
shown promising performance on KGC under high-dimensional conditions.
      </p>
      <p>Contribution. Based on ExpressivE, we propose the lightweight SpeedE model that (1)
halves ExpressivE’s inference time and (2) significantly improves its KGC performance. We
evaluate SpeedE on the three standard KGC benchmarks, WN18RR, FB15k-237, and YAGO3-10,
finding that it (3) is competitive with SotA gKGEs on FB15k-237 and even outperforms them
significantly on WN18RR and YAGO3-10. Moreover, we find that (4) on WN18RR SpeedE
requires only a quarter of ExpressivE’s parameters and only a fifth of its training
time to reach the same KGC performance (see Table 2 in Section 4).</p>
    </sec>
    <sec id="sec-3">
      <title>2. Preliminaries</title>
      <p>
        KGs can be viewed as sets of triples $r_j(e_h, e_t)$ over a finite set of relations $r_j \in \mathbf{R}$ and entities
$e_h, e_t \in \mathbf{E}$. Given a triple $r_j(e_h, e_t)$, $e_h$ is called its head and $e_t$ its tail. Henceforth, we use the
standard definition of capturing rules [
        <xref ref-type="bibr" rid="ref7 ref16 ref6">7, 16, 6</xref>
        ], which intuitively states that a KGE captures a
rule if there is a parameter set such that the KGE captures the rule exactly (i.e., it predicts any
logically inferable triple) and exclusively (i.e., it does not capture any undesired rule).
      </p>
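      <p>
        To make this notation concrete, the following minimal Python sketch (our own, not taken from [1]; all entity, relation, and triple names are illustrative assumptions) represents a KG as a set of (head, relation, tail) triples over finite entity and relation sets.
      </p>
      <preformat>
# A toy KG as a set of (head, relation, tail) triples; the entity and
# relation names are made up for illustration only.
triples = {
    ("vienna", "located_in", "austria"),
    ("austria", "neighbor_of", "germany"),
}
entities = {e for (h, _, t) in triples for e in (h, t)}   # finite set E
relations = {r for (_, r, _) in triples}                  # finite set R

# KGC asks whether a missing triple, e.g. ("germany", "neighbor_of",
# "austria"), should be predicted as true; a gKGE scores such candidates.
print(sorted(entities), sorted(relations))
      </preformat>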
    </sec>
    <sec id="sec-4">
      <title>3. Min_SpeedE and SpeedE</title>
      <p>
        Min_SpeedE and SpeedE are Euclidean gKGEs based on ExpressivE [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Similarly to Pavlović
and Sallinger [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], Min_SpeedE embeds entities $e_h \in \mathbf{E}$ via vectors $\mathbf{e}_h \in \mathbb{R}^d$ and relations $r_j \in \mathbf{R}$
via hyper-parallelograms in $\mathbb{R}^{2d}$. In contrast to ExpressivE, which parameterizes the
hyper-parallelogram of a relation $r_j$ with three vectors, Min_SpeedE uses only a scalar width parameter
$w_j$ and two vectors: a slope vector $\mathbf{s}_j \in \mathbb{R}^{2d}$ representing the slopes of its boundaries and a center
vector $\mathbf{c}_j \in \mathbb{R}^{2d}$ representing its center. The main difference between Min_SpeedE and ExpressivE
is that Min_SpeedE uses a width parameter that is constant across all dimensions, thereby halving ExpressivE's inference
time, as we shall see soon. At an intuitive level, a triple $r_j(e_h, e_t)$ is captured to be true by
a Min_SpeedE embedding if the concatenation of its head and tail embeddings lies within $r_j$'s
hyper-parallelogram. Formally, a triple $r_j(e_h, e_t)$ is true if the following is satisfied:
$$\tau_{r_j}(h,t) := (\mathbf{e}_{ht} - \mathbf{c}_j - \mathbf{s}_j \odot \mathbf{e}_{th})^{|\cdot|} \preceq w_j \quad (1)$$
where $\mathbf{e}_{xy} := (\mathbf{e}_x \| \mathbf{e}_y) \in \mathbb{R}^{2d}$ with $\|$ denoting concatenation and $e_x, e_y \in \mathbf{E}$. Furthermore,
the inequality uses the following operators: the element-wise less-or-equal operator $\preceq$, the
element-wise absolute value $\mathbf{x}^{|\cdot|}$ of a vector $\mathbf{x}$, and the element-wise (i.e., Hadamard) product $\odot$.
      </p>
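      <p>
        As an illustration of Equation (1), the following NumPy sketch (our own; the parameter values are random stand-ins for trained embeddings, not values from [1]) checks whether a single triple is captured to be true by a Min_SpeedE embedding.
      </p>
      <preformat>
import numpy as np

d = 4                                     # entity embedding dimensionality
rng = np.random.default_rng(0)

# Entity embeddings e_h, e_t in R^d (random stand-ins for trained vectors).
e_h, e_t = rng.normal(size=d), rng.normal(size=d)

# Min_SpeedE relation embedding: center c_j and slope s_j in R^(2d), plus a
# single scalar width w_j (ExpressivE would use a full width *vector* here).
c_j, s_j = rng.normal(size=2 * d), rng.normal(size=2 * d)
w_j = 0.5

e_ht = np.concatenate([e_h, e_t])         # (e_h || e_t)
e_th = np.concatenate([e_t, e_h])         # (e_t || e_h)

# Equation (1): the triple is captured as true iff every component of tau
# lies within the scalar width w_j.
tau = np.abs(e_ht - c_j - s_j * e_th)
print("captured as true:", bool(np.all(np.less_equal(tau, w_j))))
      </preformat>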
      <p>
        Scoring. SpeedE further enhances Min_SpeedE by adding the following two carefully
designed scalar parameters to each relation embedding: (1) the inside distance slope $s_j^i \in [0, 1]$
and (2) the outside distance slope $s_j^o$ with $s_j^i \le s_j^o$. Let $a_j := \frac{2 w_j}{s_j^i + 1}$, $b_j := \frac{s_j^o + 1}{2 w_j}$, and
$\kappa_j := (s_j^o - 1)/2 - (s_j^i - 1)/2$, then SpeedE defines the following distance function:
$$\delta(h, r_j, t) = \begin{cases} \tau_{r_j}(h,t) \oslash a_j, &amp; \text{if } \tau_{r_j}(h,t) \preceq w_j \\ \tau_{r_j}(h,t) \odot b_j - \kappa_j, &amp; \text{otherwise} \end{cases} \quad (2)$$
The distance function is separated into two piece-wise linear parts: (1) the inside distance
$\delta^i(h, r_j, t) = \tau_{r_j}(h,t) \oslash a_j$ for triples that are captured to be true (i.e., $\tau_{r_j}(h,t) \preceq w_j$)
and (2) the outside distance $\delta^o(h, r_j, t) = \tau_{r_j}(h,t) \odot b_j - \kappa_j$ for triples that are captured
to be false (i.e., $\tau_{r_j}(h,t) \not\preceq w_j$). Based on this function, SpeedE defines the score as
$s(h, r_j, t) = -\|\delta(h, r_j, t)\|_2$. The intuition of $s_j^i$ and $s_j^o$ is that they control the slopes of
the respective linear inside and outside distance functions. However, without any constraints on $s_j^i$ and
$s_j^o$, SpeedE would lose ExpressivE's intuitive geometric interpretation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], as $s_j^i$ and $s_j^o$ could be chosen in such a way that distances of
embeddings within the hyper-parallelogram are larger than those outside. By constraining these
parameters to $s_j^i \in [0, 1]$ and $s_j^i \le s_j^o$, we preserve lower distances within hyper-parallelograms
than outside and, thereby, the intuitive geometric interpretation of our embeddings.
      </p>
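      <p>
        The distance and score of Equation (2) can be sketched directly in NumPy. The snippet below is our own minimal sketch, assuming the reconstruction of $a_j$, $b_j$, and $\kappa_j$ given above (with $\kappa_j$ chosen so that the inside and outside distances agree at the boundary $\tau_{r_j}(h,t) = w_j$); all hyperparameter values are placeholders, not tuned settings from [1].
      </p>
      <preformat>
import numpy as np

def speede_score(e_h, e_t, c_j, s_j, w_j, s_i, s_o):
    """Score s(h, r_j, t) under SpeedE; s_i and s_o are the inside and
    outside distance slopes, with s_i in [0, 1] and s_i at most s_o."""
    e_ht = np.concatenate([e_h, e_t])
    e_th = np.concatenate([e_t, e_h])
    tau = np.abs(e_ht - c_j - s_j * e_th)        # tau_{r_j}(h, t), Eq. (1)

    a = 2.0 * w_j / (s_i + 1.0)                  # inside divisor a_j
    b = (s_o + 1.0) / (2.0 * w_j)                # outside multiplier b_j
    kappa = (s_o - s_i) / 2.0                    # offset keeping delta continuous

    inside = np.less_equal(tau, w_j)             # element-wise case split of Eq. (2)
    delta = np.where(inside, tau / a, tau * b - kappa)
    return -np.linalg.norm(delta)                # s(h, r_j, t) = -||delta||_2

rng = np.random.default_rng(1)
d = 4
print(speede_score(rng.normal(size=d), rng.normal(size=d),
                   rng.normal(size=2 * d), rng.normal(size=2 * d),
                   w_j=0.5, s_i=0.8, s_o=1.2))
      </preformat>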
    </sec>
    <sec id="sec-5">
      <title>4. Theoretical &amp; Empirical Results</title>
      <p>
        A gKGE's inference capability is analyzed by studying which logical rules it captures. The set of
core logical rules, commonly studied in the gKGE literature [
        <xref ref-type="bibr" rid="ref7 ref16 ref6">7, 16, 6</xref>
        ], consists of (1) symmetry $r_1(x,y) \Rightarrow r_1(y,x)$, (2) anti-symmetry $r_1(x,y) \wedge r_1(y,x) \Rightarrow \bot$,
(3) inversion $r_1(x,y) \Leftrightarrow r_2(y,x)$, (4) composition $r_1(x,y) \wedge r_2(y,z) \Rightarrow r_3(x,z)$,
(5) hierarchy $r_1(x,y) \Rightarrow r_2(x,y)$, (6) intersection $r_1(x,y) \wedge r_2(x,y) \Rightarrow r_3(x,y)$, and
(7) mutual exclusion $r_1(x,y) \wedge r_2(x,y) \Rightarrow \bot$. Surprisingly,
we find in Theorem 4.1 that SpeedE still captures all core logical rules (see [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], Appendix H).
      </p>
      <p>Theorem 4.1. SpeedE captures the set of core logical rules.</p>
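      <p>
        Rule capture can also be probed numerically; the sketch below is our own sanity check, not the formal proof of Theorem 4.1 in [1], Appendix H. Building on our reconstruction of Equation (1), it hand-picks a relation embedding with zero center and unit slope, for which $\tau_{r_j}(h,t) = \tau_{r_j}(t,h)$ holds for all entity pairs, so the truth condition is symmetric.
      </p>
      <preformat>
import numpy as np

def tau(e_h, e_t, c_j, s_j):
    """tau_{r_j}(h, t) from Equation (1)."""
    e_ht = np.concatenate([e_h, e_t])
    e_th = np.concatenate([e_t, e_h])
    return np.abs(e_ht - c_j - s_j * e_th)

rng = np.random.default_rng(3)
d = 4
# Zero center and unit slope make tau invariant under swapping head and
# tail, so r_j(x, y) is captured as true iff r_j(y, x) is: symmetry.
c_j, s_j = np.zeros(2 * d), np.ones(2 * d)

e_h, e_t = rng.normal(size=d), rng.normal(size=d)
print(np.allclose(tau(e_h, e_t, c_j, s_j), tau(e_t, e_h, c_j, s_j)))
      </preformat>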
      <p>
        Inference Time. The most costly operations during inference are operations on vectors. Thus,
we can estimate ExpressivE's and SpeedE's inference time by counting the number of vector
operations necessary for computing a triple's score: by reducing the width vector to a scalar,
many operations shrink from vector to scalar operations. In particular, ExpressivE needs
15 vector operations to compute a triple's score, whereas SpeedE needs only 8. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we
empirically measure the inference time of SpeedE, ExpressivE, RotH, and AttH under the same
parameter configurations on each benchmark, finding that SpeedE halves ExpressivE's inference
time as expected and even requires only about a sixth of RotH's and AttH's inference time.
      </p>
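      <p>
        The effect of the scalar width can be sanity-checked with a rough micro-benchmark such as the sketch below. It is our own illustration, contrasting only a vector-width distance (in the style of ExpressivE, as we read its continuity constant) against the scalar-width variant reconstructed above; it is no substitute for the controlled measurements in [1].
      </p>
      <preformat>
import timeit
import numpy as np

d, n = 500, 10_000                            # dimensionality, no. of triples
rng = np.random.default_rng(2)
tau = np.abs(rng.normal(size=(n, 2 * d)))     # precomputed tau for n triples
w_vec = np.abs(rng.normal(size=2 * d)) + 0.1  # width vector (ExpressivE-style)
w = 0.5                                       # scalar width (SpeedE-style)

def dist_vector_width():
    # Every width-dependent operation is a full vector operation.
    inside = np.less_equal(tau, w_vec)
    return np.where(inside, tau / w_vec, tau * w_vec - (w_vec - 1) * (w_vec + 1))

def dist_scalar_width(s_i=0.8, s_o=1.2):
    # Width-dependent constants collapse to cheap scalar arithmetic.
    a, b, kappa = 2 * w / (s_i + 1), (s_o + 1) / (2 * w), (s_o - s_i) / 2
    inside = np.less_equal(tau, w)
    return np.where(inside, tau / a, tau * b - kappa)

print("vector width:", timeit.timeit(dist_vector_width, number=50))
print("scalar width:", timeit.timeit(dist_scalar_width, number=50))
      </preformat>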
      <p>
        KGC Results. Following [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], we evaluate each gKGE's performance under low
dimensionalities with $d = 32$. Table 1 reports their MRR and H@1 scores (for the complete results, see [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]).
It reveals that on YAGO3-10, the largest benchmark, SpeedE outperforms any SotA gKGE by
a relative difference of 7% on H@1, providing strong evidence for SpeedE's scalability to large
KGs. Furthermore, it shows that our enhanced SpeedE model is competitive with SotA gKGEs
on FB15k-237 and even outperforms any competing gKGE on WN18RR by a large margin.
      </p>
      <p>Convergence Time &amp; Model Size. To quantify the convergence time, we measure for each
gKGE the time to reach a validation MRR score of 0.490, i.e., approximately 1% less than the
worst reported MRR score of Table 2. As outlined in Table 2, SpeedE converges after only six minutes.
Thus, while keeping strong KGC performance on WN18RR, SpeedE speeds up ExpressivE's
convergence time by a factor of 5, ConE's by 15, and RotH's by 20. Furthermore, the table shows
that SpeedE ($d = 50$) needs only a quarter of ExpressivE's ($d = 200$) and a tenth of ConE's and
RotH's ($d = 500$) parameters to achieve similar or slightly better KGC performance.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>In this work, we introduce SpeedE, a lightweight gKGE that (1) captures the set of core logical
rules, (2) is competitive with SotA gKGEs, even significantly outperforming them on YAGO3-10
and WN18RR, and (3) dramatically increases the efficiency of current gKGEs, needing only
a fifth of the training time and a quarter of the parameters of the SotA ExpressivE
model on WN18RR to reach the same KGC performance. To facilitate the reproducibility of our
results and the use of our model, we provide SpeedE’s code in a public GitHub repository
(https://github.com/AleksVap/SpeedE).</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>Financial support for this research has been provided by the Vienna Science and Technology
Fund (WWTF) under grants [10.47379/VRG18013, 10.47379/NXT22018, 10.47379/ICT2201], as
well as the Christian Doppler Research Association (CDG) JRC LIVE.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pavlović</surname>
          </string-name>
          , E. Sallinger,
          <article-title>SpeedE: Euclidean geometric knowledge graph embedding strikes back</article-title>
          , in: K. Duh,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gomez</surname>
          </string-name>
          , S. Bethard (Eds.),
          <source>Findings of the Association for Computational Linguistics: NAACL</source>
          <year>2024</year>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Mexico City, Mexico,
          <year>2024</year>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>92</lpage>
          . URL: https://aclanthology.org/
          <year>2024</year>
          .findings-naacl.
          <volume>6</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <article-title>Knowledge graph embedding: A survey of approaches and applications</article-title>
          ,
          <source>IEEE Trans. Knowl. Data Eng</source>
          .
          <volume>29</volume>
          (
          <year>2017</year>
          )
          <fpage>2724</fpage>
          -
          <lpage>2743</lpage>
          . URL: https://doi.org/ 10.1109/TKDE.
          <year>2017</year>
          .
          <volume>2754499</volume>
          . doi:
          <volume>10</volume>
          .1109/TKDE.
          <year>2017</year>
          .
          <volume>2754499</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Broscheit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gashteovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gemulla</surname>
          </string-name>
          ,
          <article-title>Can we predict new facts with open knowledge graph embeddings? A benchmark for open link prediction</article-title>
          , in: D.
          <string-name>
            <surname>Jurafsky</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Chai</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Schluter</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          <string-name>
            <surname>Tetreault</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July</source>
          <volume>5</volume>
          -
          <issue>10</issue>
          ,
          <year>2020</year>
          , Association for Computational Linguistics,
          <year>2020</year>
          , pp.
          <fpage>2296</fpage>
          -
          <lpage>2308</lpage>
          . URL: https://doi.org/10.18653/v1/
          <year>2020</year>
          . acl-main.
          <volume>209</volume>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2020</year>
          .acl- main.209.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pavlović</surname>
          </string-name>
          , E. Sallinger,
          <article-title>Building bridges: Knowledge graph embeddings respecting logical rules (short paper)</article-title>
          , in: B.
          <string-name>
            <surname>Kimelfeld</surname>
            ,
            <given-names>M. V.</given-names>
          </string-name>
          <string-name>
            <surname>Martinez</surname>
          </string-name>
          , R. Angles (Eds.),
          <source>Proceedings of the 15th Alberto Mendelzon International Workshop on Foundations of Data Management (AMW</source>
          <year>2023</year>
          ), Santiago de Chile, Chile, May
          <volume>22</volume>
          -26,
          <year>2023</year>
          , volume
          <volume>3409</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2023</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3409</volume>
          /paper9.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pavlović</surname>
          </string-name>
          , E. Sallinger,
          <article-title>Expressive and geometrically interpretable knowledge graph embedding (extended abstract)</article-title>
          ,
          <source>in: The First Austrian Symposium on AI</source>
          ,
          <string-name>
            <surname>Robotics</surname>
          </string-name>
          , and
          <string-name>
            <surname>Vision</surname>
          </string-name>
          (AIROV24),
          <year>2024</year>
          . URL: https://semantic-systems.org/sites/KG-NeSy/papers/P60. pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pavlović</surname>
          </string-name>
          , E. Sallinger,
          <article-title>ExpressivE: A spatio-functional embedding for knowledge graph completion</article-title>
          ,
          <source>in: The Eleventh International Conference on Learning Representations, ICLR</source>
          <year>2023</year>
          , Kigal, Rwanda, May 1-
          <issue>5</issue>
          ,
          <year>2023</year>
          ,
          <year>2023</year>
          . URL: https://openreview.net/pdf?id=xkev3_
          <fpage>np08z</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>Rotate: Knowledge graph embedding by relational rotation in complex space</article-title>
          ,
          <source>in: 7th International Conference on Learning Representations, ICLR</source>
          <year>2019</year>
          ,
          <article-title>New Orleans</article-title>
          , LA, USA, May 6-
          <issue>9</issue>
          ,
          <year>2019</year>
          , OpenReview.net,
          <year>2019</year>
          . URL: https: //openreview.net/forum?id=HkgEQnRqYQ.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Quaternion knowledge graph embeddings</article-title>
          , in: H.
          <string-name>
            <surname>M. Wallach</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Larochelle</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Beygelzimer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <article-title>d'Alché-</article-title>
          <string-name>
            <surname>Buc</surname>
            ,
            <given-names>E. B.</given-names>
          </string-name>
          <string-name>
            <surname>Fox</surname>
          </string-name>
          , R. Garnett (Eds.),
          <source>Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems</source>
          <year>2019</year>
          ,
          <article-title>NeurIPS 2019</article-title>
          , December 8-
          <issue>14</issue>
          ,
          <year>2019</year>
          , Vancouver, BC, Canada,
          <year>2019</year>
          , pp.
          <fpage>2731</fpage>
          -
          <lpage>2741</lpage>
          . URL: https://proceedings.neurips.cc/paper/2019/hash/ d961e9f236177d65d21100592edb0769-Abstract.html.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Dual quaternion knowledge graph embeddings</article-title>
          ,
          <source>Proceedings of the AAAI Conference on Artificial Intelligence</source>
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>6894</fpage>
          -
          <lpage>6902</lpage>
          . URL: https://doi.org/10.1609/aaai.v35i8.16850. doi:
          <volume>10</volume>
          .1609/aaai.v35i8.
          <fpage>16850</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheng</surname>
          </string-name>
          ,
          <article-title>Hyperbolic geometry is not necessary: Lightweight Euclidean-based models for low-dimensional knowledge graph embeddings, in: Findings of the Association for Computational Linguistics: EMNLP 2021</article-title>
          , Association for Computa-
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>