<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Leveraging Literals for Knowledge Graph Embeddings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Genet Asefa Gesese</string-name>
          <email>Genet-Asefa.Gesese@fiz-karlsruhe.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>FIZ Karlsruhe - Leibniz Institute for Information Infrastructure, Karlsruhe Institute of Technology, Institute AIFB</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <fpage>9</fpage>
      <lpage>16</lpage>
      <abstract>
<p>Nowadays, Knowledge Graphs (KGs) have become invaluable for various applications such as named entity recognition, entity linking, and question answering. However, these KG-based applications incur a huge computational and storage cost. It therefore becomes necessary to transform the high-dimensional KGs into low-dimensional vector spaces, i.e., to learn representations for the KGs. Since a KG represents facts both as interrelations between entities and as attributes of entities, the semantics present in both forms should be preserved while transforming the KG into a vector space. Hence, the main focus of this thesis is to deal with the multimodality and multilinguality of literals when utilizing them for the representation learning of KGs. A further task is to extract benchmark datasets with a high level of difficulty for tasks such as link prediction and triple classification. These datasets could be used for evaluating both kinds of KG embeddings, those which use literals and those which do not.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graph Embedding</kwd>
        <kwd>Knowledge Graph Completion</kwd>
        <kwd>Link Prediction</kwd>
        <kwd>Literals</kwd>
        <kwd>Benchmark Datasets</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Knowledge Graphs (KGs) consist of facts about any discipline in the real world
in the form of entities, attributes of entities, and interrelations between entities.
Various KGs have been published so far such as Wikidata [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], DBpedia [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and
YAGO [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which have become crucial for different applications in the area of
natural language processing, machine learning, and information retrieval.
As discussed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], due to the rigorous symbolic frameworks used by KGs, it is
difficult to use their data in other systems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and the complexity of
several important graph mining algorithms on KGs is proven to be NP-complete.
Hence, to deal with these issues, it is beneficial to learn latent representations of
the KGs while preserving the semantics present in these graphs.
      </p>
      <p>When learning latent representations of KGs, it is necessary to capture the
semantics contained in all elements of the KGs, i.e., from both relational and
attributive triples. Relational triples are triples with relations between entities
(object properties) whereas attributive triples are those with attributes (datatype
properties). Figure 1 presents, as an example, a graph which is part of the
content about the entity Covid-19 from Wikidata and Wikipedia (https://www.wikipedia.org/). In this graph,
Covid-19 and Wuhan are entities connected by the relation location of
discovery, forming the relational triple &lt;Covid-19 location-of-discovery Wuhan&gt;. The
properties official name, short name, and also known as can be considered
attributes taking short text literals as values, whereas the description from Wikipedia
takes long text literals. The other attributes, except image, take datetime values
or measurements with or without units. These attribute values are either numeric
or can easily be converted to numeric.</p>
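<p>As a minimal illustration of this split, the example KG can be held in two sets of triples (hypothetical Python data structures; attribute names and values loosely follow the Covid-19 example rather than actual Wikidata records):</p>

```python
# A toy KG split into relational triples (entity, object property, entity)
# and attributive triples (entity, datatype property, literal value).
relational_triples = [
    ("Covid-19", "location_of_discovery", "Wuhan"),
]
attributive_triples = [
    ("Covid-19", "official_name", "coronavirus disease 2019"),  # short text
    ("Covid-19", "short_name", "COVID-19"),                     # short text
    ("Covid-19", "time_of_discovery", "2019-12-01"),            # datetime
    ("Covid-19", "incubation_period", (14.0, "day")),           # measurement + unit
]

def entities(rel_triples):
    """Collect the set of entities appearing in relational triples."""
    return {h for h, _, _ in rel_triples} | {t for _, _, t in rel_triples}

def literals_of(entity, attr_triples):
    """All (attribute, literal value) pairs attached to an entity."""
    return [(a, v) for e, a, v in attr_triples if e == entity]
```

<p>A Multimodal KGE would learn from both sets at once, whereas a Unimodal KGE sees only the first.</p>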
      <p>The different types of literals associated with the entity Covid-19, as shown
in the example KG, hold important information which cannot be found in
relational triples alone. Hence, a KG Embedding (KGE) model which makes use
of all these literal values (i.e., a Multimodal KGE) would be able to learn
representations that are richer in semantics for the entities Covid-19 and Wuhan,
as compared to models that do not use literals (Unimodal KGEs). Therefore, this
thesis focuses on i) conducting a survey of current KGE models by
performing an experimentally supported comparative analysis as discussed in Section 5.1,
ii) building benchmark datasets which would be appropriate for evaluating both
Unimodal KGEs and Multimodal KGEs (refer to Section 5.2 for details), and
iii) proposing a new KGE model which addresses the shortcomings of
existing models in terms of utilizing literals.</p>
    </sec>
    <sec id="sec-2">
      <title>Importance</title>
      <p>Most KGs contain a significant amount of information represented in the form
of literals. It is common to find different types of literals such as measurements
and date values in various KGs. For instance, in Wikidata, DBpedia, and YAGO
there are date values associated with different events (birth date, date of death,
...) and measurements of length, size, density, and so on. When learning KGEs,
it is important to handle literals effectively as they contain additional or
complementary information to what is already present in the relational triples.</p>
      <p>There are even more specific areas that would highly benefit from properly
incorporating literals into the representation learning. One of them is KGs for
IoT (Internet of Things), where there exists an enormous amount of literals such
as measurements collected from sensors, e.g., datetime, latitude, longitude, and
temperature values. Unimodal KGEs, which do not make use of literals,
would not perform well in such cases. Therefore, it is necessary to design a
Multimodal KGE model to capture the semantics present in literals.</p>
    </sec>
    <sec id="sec-3">
      <title>Related Work</title>
      <p>
        Some attempts have been made to incorporate literals into the representation
learning of KGs. Detailed analysis of these approaches is presented in a survey [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
conducted as part of this thesis. These KGE approaches can be grouped into
the following categories based on the kinds of literals they use:
Text literals: The approaches that make use of text literals are DKRL [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ],
Jointly [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], SSP [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], KDCoE [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and KGloVe with literals [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. DKRL is an
extension of TransE [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], combining relational triples with textual descriptions
to learn KG representations by encoding the descriptions with a CNN. Similarly,
Jointly extends TransE by capturing semantics from entity descriptions but it
uses an Attentive LSTM instead of a CNN. SSP also combines relational triples and
textual descriptions of entities for the embedding task by applying first-order
constraints to capture the correlations of the triples and the descriptions. On the
other hand, KDCoE learns KG representations using an entity alignment task. It
applies a multilingual KG embedding model and a multilingual entity description
embedding model over a weakly aligned multilingual KG for semi-supervised
cross-lingual learning. KGloVe with literals is an attempt to incorporate entity
descriptions into the KGloVe KG embedding approach. One common drawback
of these KGEs is that they focus on long text literals and do not
give attention to short text literals such as names and labels.
      </p>
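<p>Since DKRL, Jointly, and SSP all build on TransE [1], it is worth recalling its scoring function, which rates a triple as plausible when the relation vector translates the head embedding close to the tail embedding. A minimal sketch with illustrative (untrained) vectors:</p>

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: negative distance between h + r and t.
    Higher (less negative) means the triple is more plausible."""
    return -np.linalg.norm(h + r - t, ord=norm)

# Illustrative 3-d embeddings: r translates h (almost) exactly onto t,
# so the true triple scores higher than a corrupted one.
h = np.array([0.1, 0.2, 0.3])
r = np.array([0.4, 0.0, -0.1])
t = np.array([0.5, 0.2, 0.2])
t_corrupt = np.array([-0.9, 0.8, 0.0])

assert transe_score(h, r, t) > transe_score(h, r, t_corrupt)
```

<p>Text-based extensions such as DKRL add a second score in which the description encoding of an entity takes the place of its structural embedding.</p>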
      <p>
        Numeric literals: MT-KGNN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], KBLRN [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], LiteralE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and TransEA [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]
are the ones using numeric literals. MT-KGNN learns KG embeddings by jointly training a binary (pointwise)
classifier for relational triple prediction and a regression task for non-discrete attribute
value prediction. KBLRN combines relational, latent,
and numerical feature types and trains them jointly end-to-end via a
probabilistic Product of Experts (PoE) method. LiteralE works by incorporating literals
into other existing unimodal KGE models. In this approach, two kinds of
vectors are created for each entity: an entity vector from the given unimodal KGE and a
literal vector from the entity's corresponding attribute values. These two
vectors are then mapped to a new literal-enriched entity vector using a learnable
transformation function. On the other hand, TransEA extends TransE with an
attributive embedding model based on a linear regression task. One of
the drawbacks of these models is that they fail to interpret the datatypes of
attributes. Besides, most of these models do not handle multi-valued attributes
properly.
      </p>
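<p>The learnable transformation in LiteralE can be pictured as a gated combination of the entity vector and its literal vector. The following is a simplified numpy sketch in the spirit of the gating function described in [8]; randomly initialized weights stand in for learned parameters, and the dimensions are arbitrary:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_l = 4, 2  # entity and literal vector dimensions

# Randomly initialized parameters standing in for learned weights.
W_ze = rng.normal(size=(d_e, d_e))
W_zl = rng.normal(size=(d_e, d_l))
b_z  = np.zeros(d_e)
W_h  = rng.normal(size=(d_e, d_e + d_l))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def literal_enriched(e, l):
    """Map (entity vector, literal vector) to a literal-enriched entity
    vector of the same dimension, via a learnable gate z."""
    z = sigmoid(W_ze @ e + W_zl @ l + b_z)       # gate, elementwise in (0, 1)
    h = np.tanh(W_h @ np.concatenate([e, l]))    # candidate literal-aware update
    return z * h + (1.0 - z) * e                 # gated combination

e = rng.normal(size=d_e)       # entity vector from a unimodal KGE
l = np.array([37.5, 14.0])     # e.g. numeric attribute values of the entity
e_lit = literal_enriched(e, l)
assert e_lit.shape == e.shape  # enriched vector replaces the original
```

<p>The enriched vector then replaces the plain entity vector inside the scoring function of the base model, which is what makes the approach pluggable into different unimodal KGEs.</p>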
      <p>
        Others: IKRL [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and MTKGRL [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] use images of entities in addition to the
relational triples. On the other hand, MKBE [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] leverages both text and numeric
literals along with images. As in the models with numeric literals, MKBE is not
capable of capturing the semantics present in the data types/units of attribute
values. For more details about all of the KGE models discussed in this section,
refer to the survey [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Research Questions</title>
      <p>The previous sections have discussed the advantages of using literals for KGEs and the current
embedding models leveraging literals. Here, based on the
shortcomings of the existing approaches in making use of literals, the following research
questions are formulated.</p>
      <p>
        – RQ1: Which of the current KGEs using literals perform better on the
task of link prediction?
It is beneficial to perform an experiment-based comparison in order to better
analyze and understand the capability of the existing Multimodal KGE
models for the task of link prediction. Hence, experiments have been conducted
on KGE models which use numeric and/or text literals and the results
obtained are reported in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. More details on the models and evaluations are
provided in Section 5.1.
– RQ2: How to extract benchmark datasets from popular KGs such as
Wikidata, focusing primarily on literals?
High quality benchmark datasets containing literals are required in order to
properly evaluate Multimodal KGE models. Hence, some details about the
collection of benchmark datasets LiterallyWikidata [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] are given in Section 5.2.
– RQ3: How to effectively use literals together with relations between entities
to learn KG representations?
Here, the main goal is to properly deal with the multimodality and
multilinguality of literals when combining relational triples and attributive triples
into the representation learning. Providing such a Multimodal KGE model
which also addresses the weaknesses of the current models is yet to be done.
– RQ4: How to evaluate KG embeddings on downstream tasks?
Evaluating KG embeddings on tasks different from those they are trained
on would give insight into the reusability of the embeddings for other tasks.
Therefore, the main contributions of this thesis would be:
– Providing an extensive survey of existing Multimodal KGE approaches,
which includes experiments on the task of link prediction.
– A set of benchmark datasets, LiterallyWikidata, extracted from Wikidata and
Wikipedia.
– A novel Multimodal KGE approach which leverages both relational triples
and attributive triples for the link prediction and triple classification tasks.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Preliminary Results</title>
      <p>In this section, the work done towards solving the research
questions defined in Section 4 is presented.</p>
      <p>
        <bold>5.1 Comparative analysis of existing approaches on link prediction.</bold>
We have conducted an extensive survey of KG embedding models which use
literals [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This survey presents a detailed analysis of the models in terms of the
scoring function, the tasks used for training or evaluation, and model complexity.
Furthermore, experimental results on the task of link prediction with models
taking numeric and/or text literals are also presented. The models using numeric
literals are the different varieties of LiteralE (e.g., DistMult-LiteralEg
and ComplEx-LiteralEg), KBLN, MT-KGNN, and TransEA, whereas the
model DKRLBern is the one with text literals. In addition to these models,
DistMult-LiteralEg-text (i.e., another variety of LiteralE) is also included in the
experiments as a model making use of both numeric and text literals. Table 1
presents the results reported in the survey with these models on the dataset
FB15K-237 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]; refer to the survey for details about the dataset. As the
results indicate, DistMult-LiteralEg-text performs better than DistMult-LiteralEg,
which means that combining numeric and text literals gives better results than
using numeric literals alone.
      </p>
      <p>
        <bold>5.2 Benchmark datasets for Knowledge Graph Completion (KGC) with literals.</bold>
The ways existing KGC datasets such as FB15K-237 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and CoDEx [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] are
created do not give attention to literals. In order to address this problem, we have
created a collection of KGC benchmark datasets named LiterallyWikidata [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
with a primary focus on numeric and text literals. LiterallyWikidata contains
three datasets varying in size and structure, namely, LitWD1K, LitWD19K, and
LitWD48K. These datasets contain relational triples and attributive (numerical)
triples together with entity/relation/attribute labels, aliases, and descriptions
from Wikidata. Furthermore, LiterallyWikidata contains textual descriptions for
the entities from their corresponding summary sections of English, German,
Chinese, and Russian Wikipedia pages. Benchmarking experiments are conducted
on the task of link prediction with the models DistMult [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], ComplEx [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ],
and DistMultLiteral [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The statistics of the datasets are given in Table 2. The
LiterallyWikidata benchmark paper is currently under review at a conference.
When conducting the experiments on link prediction with existing Multimodal
KGEs for addressing the research question RQ1 as discussed in Section 5.1, the
datasets FB15K and FB15K-237 are used to evaluate the models. The
evaluation metrics used are MR, MRR, and Hits@K. The same procedure is followed
to evaluate the quality of the benchmark datasets LiterallyWikidata which is
created to solve RQ2. On the other hand, for evaluating the solution that will
be provided for RQ3, additional tasks beyond link prediction, such as triple
classification, would be used and evaluated with the same metrics.
      </p>
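<p>The metrics MR, MRR, and Hits@K mentioned above are all computed from the rank each true test triple receives among its corrupted alternatives. A standard sketch (illustrative code, not taken from the thesis):</p>

```python
def rank_metrics(ranks, ks=(1, 3, 10)):
    """Compute MR, MRR, and Hits@K from the ranks of the true triples.
    `ranks` holds, for each test triple, its 1-based rank among candidates."""
    n = len(ranks)
    mr = sum(ranks) / n                                   # mean rank (lower is better)
    mrr = sum(1.0 / r for r in ranks) / n                 # mean reciprocal rank
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}  # fraction ranked in top k
    return mr, mrr, hits

# Example: ranks of four test triples among their corruptions.
mr, mrr, hits = rank_metrics([1, 2, 10, 50])
```

<p>With these example ranks, MR is 15.75, MRR is 0.405, and Hits@10 is 0.75; the same computation applies unchanged to Unimodal and Multimodal KGEs.</p>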
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>As the results discussed in Section 5.1 indicate, the current Multimodal KGEs
suffer from various drawbacks such as not handling multi-valued attributes well
and failing to capture the semantics in datatypes/units. Hence, it becomes
necessary to design and implement a Multimodal KGE model which addresses
these drawbacks and leverages literals for better KG embeddings. Besides, the
discussion in Section 5.2 shows the need for better benchmark datasets for
Multimodal KGEs. Therefore, this thesis work provides a collection of benchmark
datasets extracted from Wikidata and Wikipedia, named LiterallyWikidata.</p>
      <p>Developing the Multimodal KGE model to address research question
RQ3 is yet to be done and is hence part of the future work. Besides, the
model would be evaluated on downstream tasks using KGs from other domains, such as
KGs of scholarly articles. This would be a solution for research question RQ4.</p>
      <p><bold>Acknowledgements.</bold> I would like to thank my supervisors Prof. Dr. Harald
Sack and Dr. Mehwish Alam for their invaluable mentoring and support.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bordes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Usunier</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Duran</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weston</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yakhnenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Translating Embeddings for Modeling Multi-Relational Data</article-title>
          . In: NIPS (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bordes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weston</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Collobert</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Learning structured embeddings of knowledge bases</article-title>
          .
          <source>In: AAAI</source>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tian</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>K.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skiena</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaniolo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment</article-title>
          . arXiv preprint arXiv:1806.06478 (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cochez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garofalo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenßen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pellegrino</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>A first experiment on including text literals in kglove</article-title>
          .
          <source>In: Joint Proceedings of ISWC 2018 Workshops SemDeep-4 and NLIWOD-4</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>García-Durán</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niepert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>KBLRN: End-to-end learning of knowledge base representations with latent, relational, and numerical features</article-title>
          . In: Globerson, A., Silva, R. (eds.)
          <source>Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence</source>
          . pp.
          <fpage>372</fpage>
          -
          <lpage>381</lpage>
          . AUAI Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Gesese</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alam</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sack</surname>
          </string-name>
          , H.:
          <article-title>LiterallyWikidata - A Benchmark for Knowledge Graph Completion using Literals</article-title>
          (Apr
          <year>2021</year>
          ). https://doi.org/10.5281/zenodo.4701190
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Gesese</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Biswas</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alam</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sack</surname>
          </string-name>
          , H.:
          <article-title>A survey on knowledge graph embeddings with literals: Which model links better literal-ly?</article-title>
          arXiv preprint arXiv:1910.12507
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kristiadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lukovnikov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Incorporating literals into knowledge graph embeddings</article-title>
          .
          <source>In: International Semantic Web Conference</source>
          . pp.
          <fpage>347</fpage>
          -
          <lpage>363</lpage>
          . Springer (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Isele</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jakob</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jentzsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kontokostas</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendes</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hellmann</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morsey</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Kleef</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , et al.:
          <article-title>Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia</article-title>
          .
          <source>Semantic Web</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <fpage>167</fpage>
          -
          <lpage>195</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Mousselly-Sergieh</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Botschen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurevych</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A multimodal translation-based approach for knowledge graph representation learning</article-title>
          .
          <source>In: Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics</source>
          . pp.
          <fpage>225</fpage>
          -
          <lpage>234</lpage>
          . Association for Computational Linguistics, New Orleans, Louisiana (Jun
          <year>2018</year>
          ). https://doi.org/10.18653/v1/S18-2027, https://www.aclweb.org/anthology/S18-2027
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Pezeshkpour</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Embedding multimodal relational data for knowledge base completion</article-title>
          .
          <source>In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</source>
          . pp.
          <fpage>3208</fpage>
          -
          <lpage>3218</lpage>
          .
          Association for Computational Linguistics (Oct-Nov
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Safavi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koutra</surname>
            ,
            <given-names>D.:</given-names>
          </string-name>
          <article-title>CoDEx: A Comprehensive Knowledge Graph Completion Benchmark</article-title>
          .
          <source>In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Nov</source>
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Suchanek</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kasneci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weikum</surname>
          </string-name>
          , G.:
          <article-title>Yago: A Core of Semantic Knowledge</article-title>
          .
          <source>In: 16th International Conference on the World Wide Web</source>
          . pp.
          <fpage>697</fpage>
          -
          <lpage>706</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Tay</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuan</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phan</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hui</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          :
          <article-title>Multi-task neural network for nondiscrete attribute prediction in knowledge graphs</article-title>
          .
          <source>In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</source>
          . pp.
          <fpage>1029</fpage>
          -
          <lpage>1038</lpage>
          . Association for Computing Machinery (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Toutanova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Observed versus latent features for knowledge base and text inference</article-title>
          .
          <source>In: Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Trouillon</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welbl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riedel</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaussier</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouchard</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Complex embeddings for simple link prediction</article-title>
          . pp.
          <fpage>2071</fpage>
          -
          <lpage>2080</lpage>
          .
          <source>ICML'16, JMLR.org</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Vrandečić</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krötzsch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Wikidata: a free collaborative knowledgebase</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>57</volume>
          (
          <issue>10</issue>
          ),
          <fpage>78</fpage>
          -
          <lpage>85</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Knowledge graph embedding with numeric attributes of entities</article-title>
          .
          <source>In: Proceedings of The Third Workshop on Representation Learning for NLP</source>
          . pp.
          <fpage>132</fpage>
          -
          <lpage>136</lpage>
          . Association for Computational Linguistics (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>SSP: semantic space projection for knowledge graph embedding with text descriptions</article-title>
          .
          <source>In: Thirty-First AAAI Conference on Artificial Intelligence</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jia</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Representation learning of knowledge graphs with entity descriptions</article-title>
          .
          <source>In: AAAI</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Image-embodied knowledge representation learning</article-title>
          .
          <source>In: Proceedings of the 26th International Joint Conference on Artificial Intelligence</source>
          . pp.
          <fpage>3140</fpage>
          -
          <lpage>3146</lpage>
          . IJCAI'17, AAAI Press (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qiu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Knowledge graph representation with jointly structural and textual encoding</article-title>
          . pp.
          <fpage>1318</fpage>
          -
          <lpage>1324</lpage>
          (Aug
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yih</surname>
            ,
            <given-names>W.t.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Embedding entities and relations for learning and inference in knowledge bases</article-title>
          .
          <source>In: International Conference on Learning Representations (ICLR)</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>