<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Knowledge Graph Embeddings or Bias Graph Embeddings? A Study of Bias in Link Prediction Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Rossi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Donatella Firmani</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Merialdo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Roma Tre University</institution>
          ,
          <addr-line>Roma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sapienza University</institution>
          ,
          <addr-line>Roma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>2</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>Link Prediction aims at tackling Knowledge Graph incompleteness by inferring new facts based on the existing, already known ones. Nowadays most Link Prediction systems rely on Machine Learning and Deep Learning approaches; this results in inherently opaque models in which assessing the robustness to data biases is not trivial. We define 3 specific types of Sample Selection Bias and estimate their presence in the 5 best-established Link Prediction datasets. We then verify how these biases affect the behaviour of 9 systems representative of every major family of Link Prediction models. We find that these models do indeed learn and incorporate each of the presented biases, with a heavily negative effect on their behaviour. We thus advocate for the creation of novel, more robust datasets and of more effective evaluation practices.</p>
      </abstract>
      <kwd-group>
        <kwd>Link Prediction</kwd>
        <kwd>Knowledge Graph Embeddings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Knowledge Graphs (KGs) are structured repositories of information where nodes modeling
real-world entities are linked by labeled directed edges; each label represents a semantic relation,
therefore each edge linking a pair of nodes represents a fact conveying that the corresponding
two entities are connected via that relation.</p>
      <p>
        KGs have recently achieved widespread popularity in a variety of contexts. Large open KGs,
such as DBpedia [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], YAGO [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and Wikidata [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], are used on a daily basis for Semantic Web
projects [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and question answering [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Meanwhile, a growing number of companies rely on
private KGs to support their services. Google and Microsoft use, respectively, the Google KG [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
and Satori [7] to enhance their search engines; Amazon [8] and eBay [9] use product graphs to
improve their recommendations; social networks like Facebook [10] and LinkedIn [11] use KGs
for user profiling and advertising.
      </p>
      <p>It is therefore unsurprising that many KGs have achieved web-scale dimensions, featuring
millions of entities and billions of facts. Nonetheless, it is well known that even the largest
and richest KGs suffer from incompleteness, as they only hold a small portion of the real-world
knowledge they should encompass [12].</p>
      <p>Link Prediction (LP) tackles this issue by leveraging the already known facts in the KG to infer
new ones. Nowadays the vast majority of LP models use Machine Learning (ML) techniques to
learn vectorized representations of entities and relations called embeddings; many of them rely
on Deep Learning architectures, featuring sequences of neural layers interspersed with activation
functions. These models have been shown to achieve state-of-the-art performance [13, 14].
In the last decade embedding-based LP has become a flourishing research area, with dozens of
novel models being proposed every year (see the work by Wang et al. [15] for a comprehensive
survey). The pioneering model TransE [16] has established the practice of evaluating these
systems by computing global metrics of their predictive performance over datasets obtained from
real-world KGs.</p>
      <p>LP datasets are usually obtained by extracting the most mentioned entities from real-world
KGs. We find that this policy leads to various forms of imbalances and biases. For instance,
the datasets sampled from Freebase [17] tend to only feature people with the same, over-represented nationality.
Furthermore, the same biases observed in training are also present in validation and testing: this
indirectly incentivizes models to incorporate the biases, as they can be instrumental to produce
the correct predictions and boost the evaluation results. In short, our models are yielding
the correct answers for the wrong reasons. For instance, in FB15k-237, which is considered
the most reliable dataset [13], among the 1210 training facts and 151 test facts with relation
film_budget_currency, 1164 and 146 facts respectively feature the same tail USA_dollar. We find
that in those test facts, models always predict the correct tail on the first try when the answer
is USA_dollar, whereas they never manage to guess it when it is a different currency.</p>
      <p>A few studies in the past have highlighted criticalities in LP benchmarks, mainly with regard
to test leakage [18, 19] or unnatural distributions and structures [20, 13]. Nonetheless, to
the best of our knowledge the presence of straight-up biases had gone almost unnoticed so
far. This motivates a systematic analysis of how they affect datasets and models, especially
considering that KG embeddings have been shown to be just as vulnerable to biases as word
embeddings [21, 22]. We report an accurate comparison with these other works in Section 5.</p>
      <p>We provide a formal definition of 3 types of data bias that can affect LP models. We focus on the
5 best-established LP datasets and estimate for each of them the number of test samples affected
by each type of bias. We then conduct an extensive re-evaluation of 9 models representative of
all the major families of LP systems, showing how removing the biased test facts affects their
overall predictive performance. All the code and resources generated in our work are available
at our public repository.1</p>
      <p>The paper is organized as follows. In Section 2 we overview how embedding-based models
perform the LP task; in Section 3 we discuss the main types of data bias we analyze in our work;
in Section 4 we report our experimental findings on how they affect LP models; in Section 5 we
present works related to ours, and in Section 6 we provide concluding remarks.</p>
      <p>1https://github.com/merialdo/research.lpbias</p>
    </sec>
    <sec id="sec-2">
      <title>2. Link Prediction on Knowledge Graphs</title>
      <p>We define any Knowledge Graph as 𝒢 = (ℰ, ℛ, ℱ), where ℰ is a set of entities, ℛ is a set of
relations, and ℱ ⊆ ℰ × ℛ × ℰ is a set of facts connecting the entities via relations. Each fact
can be formulated as a triple ⟨h, r, t⟩, where h is the head, r is the relation, and t is the tail.</p>
      <p>Most LP models nowadays map entities and relations to vectorized representations called
KG embeddings. These models usually define a scoring function φ to estimate the plausibility of
facts based on the embeddings of their elements. Embeddings are initialized randomly; then,
they are trained with ML methods to optimize the scores of the known facts. When the training
is over, the learned embeddings should be able to generalize and yield good φ values for unseen
true facts as well. Models may also feature deep architectures of neural layers, which can be
used in φ to process the embeddings of the elements of the facts to score. The weights of the
neural layers are trained jointly with the KG embeddings.</p>
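      <p>As a concrete illustration of a scoring function, the following minimal sketch (not the paper's code) uses a TransE-style φ, where embeddings are random vectors that training would push towards h + r ≈ t; the entity and relation names are invented placeholders.</p>

```python
# Minimal sketch of a KG-embedding scoring function (TransE-style):
# embeddings start random; training would optimize them so that, for a
# true fact <h, r, t>, the translation h + r lands close to t.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50
ENTITIES = ["USA", "USA_dollar", "film_X"]        # invented examples
RELATIONS = ["film_budget_currency"]

# Random initialization, as described in the text.
ent_emb = {e: rng.normal(size=DIM) for e in ENTITIES}
rel_emb = {r: rng.normal(size=DIM) for r in RELATIONS}

def score(h, r, t):
    """TransE-style plausibility: higher (less negative) = more plausible."""
    return -np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t])
```

Any other model family fits the same interface: only the internals of φ change, while training and evaluation stay identical.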
      <p>Given a trained model, a tail prediction ⟨h, r, t⟩ is the process that finds t to be the
best-scoring entity to complete the triple ⟨h, r, ?⟩: i.e., t is the answer to the question «What is the
most likely tail for head h and relation r?»2:
t = argmax_{e ∈ ℰ} φ(h, r, e).    (1)</p>
      <p>A formulation for head predictions can be defined analogously.</p>
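      <p>Equation (1) can be sketched directly in code: a tail prediction scores every candidate entity and keeps the argmax. The stand-in scoring function and entity names below are invented for illustration.</p>

```python
# Sketch of Equation (1): rank every entity e as a candidate tail for
# <h, r, ?> and return the best-scoring one. The score here is a
# placeholder TransE-style function; any model's phi fits in its place.
import numpy as np

rng = np.random.default_rng(1)
DIM = 16
ENTITIES = ["e1", "e2", "e3", "e4"]
emb = {x: rng.normal(size=DIM) for x in ENTITIES + ["r"]}

def score(h, r, t):
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

def predict_tail(h, r):
    # t = argmax over e in the entity set of phi(h, r, e)
    return max(ENTITIES, key=lambda e: score(h, r, e))
```

A head prediction is the symmetric loop, maximizing φ(e, r, t) over candidate heads e.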
      <p>LP research typically relies on datasets sampled from real-world KGs. Any dataset has its
own sets ℰ, ℛ, and ℱ, and ℱ is usually split into a training set, a validation set, and a test set.
To evaluate the predictive performance of a model on a dataset, head and tail
predictions are performed on each test fact. For each prediction, the target entity
featured in the test fact (i.e., the expected prediction) is ranked against all other entities in ℰ.
Given any test fact ⟨h, r, t⟩, the tail rank can thus be computed as:
rank(h, r, t) = |{e ∈ ℰ | φ(h, r, e) ≥ φ(h, r, t)}|    (2)
Head ranks can be computed analogously. An ideal model, or an oracle, would obtain rank 1 in
the head and tail predictions of all test facts. The set Q of all the head and tail ranks obtained in
testing is then gathered into global metrics:
• Hits@K (H@K): the fraction of ranks in Q less than or equal to a value K.</p>
      <p>• Mean Reciprocal Rank (MRR): the average of the inverse values of all the ranks in Q.
Both H@K and MRR are always between 0 and 1; the higher their value, the better the result
they convey. These metrics can be computed in two separate settings: in the raw setting, correct
answers that outrank the target one are still deemed wrong; in the filtered setting they are not
considered mistakes and do not contribute to rank computation. Raw metrics can sometimes be
misleading, so filtered metrics are generally preferred in literature [16]; therefore, in this work
we always use filtered metrics.</p>
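      <p>The rank of Equation (2) and the two global metrics can be sketched in a few lines; the scores and ranks below are toy values for illustration only.</p>

```python
# Sketch of Equation (2) plus the global metrics: the rank of the target
# tail counts how many entities score at least as well as it does;
# MRR and Hits@K then aggregate the ranks collected over all test facts.

def tail_rank(scores, target):
    """scores: dict mapping each candidate entity to phi(h, r, entity)."""
    return sum(1 for s in scores.values() if s >= scores[target])

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    return sum(1 for r in ranks if r <= k) / len(ranks)

scores = {"a": 0.9, "b": 0.4, "c": 0.7}   # toy phi values for one prediction
rank_c = tail_rank(scores, "c")           # "a" and "c" score >= 0.7, so rank 2
ranks = [1, 2, 4]                         # toy ranks over three predictions
```

The filtered setting differs only in that other known-correct entities are removed from `scores` before counting.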
      <p>2In this formulation we assume higher φ scores convey better plausibility; analogous formulations are defined
for models where higher scores convey worse plausibility.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Forms of Data Bias</title>
      <p>In this section, we define 3 main types of bias commonly found in LP datasets. All of them are
forms of Sample Selection Bias [23], i.e., unwanted, unrealistic patterns in a dataset caused by
imbalances in the processes and sources used to construct it. We provide definitions from the
perspective of a tail prediction ⟨h, r, t⟩; analogous definitions can be used for head predictions.
Type 1 Bias. A tail prediction ⟨h, r, t⟩ is prone to Type 1 Bias if the training facts mentioning
r tend to always feature t as tail. For example, the tail prediction of male as the gender of a
person is prone to this type of bias if the vast majority of gendered entities in the training set
are males: this artificially favours the prediction of male genders. In practice, we verify if the
fraction between the number of training facts featuring both r and t and the number of training
facts featuring r exceeds a threshold τ1. In our experiments we set τ1 = 0.75.
Type 2 Bias. A tail prediction ⟨h, r, t⟩ in which r is a one-to-many or a many-to-many relation
is prone to Type 2 Bias if, whenever an entity e is seen as head for relation r, the fact ⟨e, r, t⟩ also
exists in the training set. Type 2 Bias affects relations that have a "default" correct answer. Differently
from Type 1, facts mentioning r may feature a variety of tails different from t; however, for each
entity e seen as head in these facts, t tends to always be among the correct tails too. This makes
⟨h, r, t⟩ artificially easier to predict. For instance, the tail prediction of English as the language
spoken by a person is prone to Type 2 Bias if most people, in addition to other languages, also
speak English. In practice, we verify if the fraction of entities e seen as heads for relation r
that also display a fact ⟨e, r, t⟩ exceeds a threshold τ2. In our experiments we use τ2 = 0.5.
Type 3 Bias. A tail prediction ⟨h, r, t⟩ is prone to Type 3 Bias if a relation s exists such that:
(i) whenever s links two entities, r links them as well; and (ii) the fact ⟨h, s, t⟩ is present in the
training set. For example, in the FB15k dataset the producer of a TV program is almost always
its creator too; this may lead models to assume that creating a program implies being its producer. In
practice, to verify if r and s share this correlation we check if the fraction of s mentions in which
s also co-occurs with r is greater than a threshold τ3. In our experiments we set τ3 = 0.5.</p>
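      <p>The Type 1 check is the simplest of the three and can be sketched as follows; the toy training facts and the relation/tail names are invented for illustration, and the threshold matches the one stated above.</p>

```python
# Hedged sketch of the Type 1 Bias check: over a toy training set, measure
# the fraction of facts with relation r whose tail is t, and flag the tail
# prediction as bias-prone when that fraction exceeds tau1 = 0.75.
# All facts below are invented, not taken from a real dataset.

train = [
    ("anna", "gender", "male"),
    ("bob", "gender", "male"),
    ("carl", "gender", "male"),
    ("dave", "gender", "male"),
    ("dora", "gender", "female"),
]

def type1_prone(r, t, facts, tau1=0.75):
    rel_facts = [f for f in facts if f[1] == r]
    if not rel_facts:
        return False
    frac = sum(1 for f in rel_facts if f[2] == t) / len(rel_facts)
    return frac > tau1
```

Here 4 of the 5 gender facts have tail male (fraction 0.8 &gt; 0.75), so predicting male is flagged as Type 1 prone, while predicting female is not. The Type 2 and Type 3 checks follow the same pattern, counting heads sharing a default tail and co-occurring relation pairs respectively.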
    </sec>
    <sec id="sec-4">
      <title>4. Data Bias in Link Prediction Benchmarks</title>
      <p>We discuss in this section our experimental findings on the three forms of data bias defined
in Section 3. We first describe how these biases affect LP datasets; we then analyze their
consequences on the behaviour of LP models.</p>
      <sec id="sec-4-1">
        <title>4.1. Data Biases in LP Datasets</title>
        <p>We take into account the 5 most popular LP datasets in literature: FB15k, WN18, FB15k-237,
WN18RR and YAGO3-10. FB15k and WN18 have been built by Bordes et al. [16] by selecting the
facts featuring the richest entities in Freebase [17] and WordNet [24]. Toutanova and Chen [18]
and Dettmers et al. [19] have observed that such datasets suffer from test leakage due to the
pervasive presence of inverse and equivalent relations; they have filtered away such relations to
create the more challenging subsamples FB15k-237 and WN18RR. Finally, Dettmers et al. [19]
have also created the YAGO3-10 dataset by selecting the facts featuring entities linked by at least
10 different relations in the YAGO3 KG [25].</p>
        <p>In these datasets the training, validation and test sets are sampled from the same distributions,
so any bias seen in training will also be featured in validation and testing. For each test prediction
in each dataset we verify if it is prone to any type of bias by applying the definitions in Section 3.
Table 1 reports the main statistics and the number of test predictions not affected by Type 1
Bias (w/o B1), Type 2 Bias (w/o B2), Type 3 Bias (w/o B3), and by any type of bias at all (w/o
B⋆). Note that these numbers are not cumulative because the same prediction may be affected
by multiple types of bias.</p>
        <p>The most affected dataset is YAGO3-10, with 54.1% of its test facts being prone to bias,
especially of Type 3. This confirms the finding of Akrami et al. [13] that the two most common
relations in the dataset (affiliated_to and plays_for) are almost interchangeable. FB15k and
FB15k-237 are noticeably affected too, with 26.7% and 9.8% of their test facts prone to bias
respectively. Type 3 Bias is not present at all in FB15k-237; this is not surprising, because
this dataset was built by design with no inverse or equivalent relations. WN18 and WN18RR,
finally, seem completely immune to any type of bias; this is possibly due to the nature of WordNet,
which is a lexical ontology rather than a KG.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Data Biases and LP models</title>
        <p>In this section we study how data bias affects the behaviour of LP models. As mentioned in
Section 4.1, LP benchmarks display the same biases across their training and test sets. Hence, in
addition to being exposed to bias, unbeknownst to developers, LP models are actually encouraged
to incorporate it, because such bias can help to identify the correct answers in testing. In the
long run this may favour the architectures most vulnerable to bias over the most robust ones.</p>
        <p>To assess the severity of this condition we remove from each dataset the test predictions
prone to each type of bias, and measure how this affects the evaluation results of LP models
relying on diverse architectures. We focus on H@1 and MRR, usually considered the most
characterizing metrics in LP. We stress that we do not modify the training sets of the datasets:
we just filter away the bias-prone test predictions, in the quantities reported in Table 1.</p>
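        <p>This re-evaluation protocol amounts to a simple filter over the recorded test ranks; the ranks and bias flags below are invented for illustration and do not reproduce any number from our tables.</p>

```python
# Illustrative sketch of the re-evaluation protocol: the trained model and
# the training set are untouched; we only drop the bias-prone test
# predictions and recompute H@1 on the remainder. All values are invented.

# (rank obtained by the model, whether the prediction is bias-prone)
test_predictions = [(1, True), (1, True), (3, False), (1, False), (7, False)]

def hits_at_1(preds):
    return sum(1 for rank, _ in preds if rank == 1) / len(preds)

original = hits_at_1(test_predictions)                           # 3/5 = 0.6
debiased = hits_at_1([p for p in test_predictions if not p[1]])  # 1/3
```

In this toy case H@1 drops from 0.6 to about 0.33 once the bias-prone predictions are excluded, mirroring the kind of degradation our tables report at scale.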
        <p>We use as a starting point the evaluation results of 19 LP models publicly shared by
Rossi et al. [14]. Due to space constraints, we report our findings on 9 models representing
all the families in their work: ComplEx [26] and TuckER [27] for the Matrix Decomposition
models; TransE [16], CrossE [28] and HAKE [29] for the Geometric models; InteractE [30],
RSN [31] and CapsE [32] for the Deep Learning models; and the rule-based AnyBURL [33] as a
baseline. Complete results for all the 19 models are available in our online repository.</p>
        <p>Table 2 shows the results for datasets FB15k, FB15k-237 and YAGO3-10. If models were
immune to biases, removing bias-prone test predictions should not heavily affect their metrics.
On the contrary, we observe impressive performance drops across all models. YAGO3-10 results
display the largest worsening, with models losing 50% to 70% of their H@1 and MRR. In
FB15k-237, often considered the most reliable dataset, their decrease is still between 20% and 35%. In
FB15k they lose around 5% of the original metrics, but this apparent robustness is likely due to
the presence of inverse relations facilitating predictions.</p>
        <p>Table 3 reports the results for WN18 and WN18RR. As already described in Section 4.1, in
these datasets test predictions do not appear prone to bias. Nonetheless this does not make
them more reliable; on the contrary, their test predictions seem only enabled by the presence
of symmetric and (in WN18) inverse relations. Table 3 also displays that removing the test
predictions featuring symmetric or inverse relations results in plummeting H@1 and MRR
values, with a decrease usually between 50% and 75%.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Key Takeaways</title>
        <p>We find that the policies used to generate LP datasets have led to severe forms of selection bias;
this, in turn, has made significant fractions of their test sets artificially easier to predict than the
others. Our results prove that LP models are indeed sensitive to these forms of bias, as filtering
away the affected test predictions heavily worsens their evaluation metrics. Both neural and
rule-based LP models appear equally vulnerable to this phenomenon: this proves that the issue
is not rooted in the technique used to learn the facts, but rather in the data sources themselves.
Not all datasets are affected by this condition to the same extent. YAGO3-10 and FB15k-237
are the most affected, and show the heaviest drop in performance when removing the biased
test facts. WordNet-based datasets, on the other hand, do not display bias at all.</p>
        <p>The bias problem, in itself, can probably be avoided by just skipping the test facts prone to
bias during evaluation. However, suggesting to just adopt this practice would be naive on our
part. Quite worryingly, even in bias-free datasets we observe that most correct predictions are
just enabled by the presence of inverse and symmetric relations. In other words, in all datasets,
when removing the test predictions affected by either bias or inverse/symmetric relations, the
predictive performance of models plummets.</p>
        <p>This makes us wonder how many of the remaining test facts are actually predictable. If we
just removed the bias-prone test predictions, we may mostly end up with test sets that not even
humans, with the information available in training, can infer; if this was the case, the whole
task would become pointless. We intend to conduct further studies in this regard, asking this
question directly to human workers. If the outcome proves that most non-biased test facts
are indeed unpredictable, then it will be painfully necessary to replace the current datasets
with novel ones extracted in more sensible ways, possibly keeping humans in the loop for the
selection of training and test facts.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Related Works</title>
      <p>The works most related to ours consist of analyses that highlight criticalities of the current LP
benchmarking techniques. So far, compared to the wide body of literature proposing new LP
models, a relatively small effort has been devoted to analyzing their evaluation practices.</p>
      <p>Toutanova and Chen [18] have been the first to notice the presence of test leakage in FB15k
due to inverse relations. They have assessed the severity of the issue by proving that a simple
model based on observable features achieves competitive performance on the dataset; they have
proceeded to remove these relations from FB15k generating the more challenging FB15k-237.</p>
      <p>A similar study has been carried out by Dettmers et al. [19] on FB15k and WN18, showing
that a trivial system based on inverse relations can achieve state-of-the-art results on both
datasets; the authors have then generated WN18RR as a more challenging subsample of WN18.</p>
      <p>Akrami et al. [13] have carried out an extensive analysis quantifying the effect of various
artificial patterns in LP datasets, such as inverse relations and Cartesian product relations.
In addition to confirming the above-mentioned observations on FB15k and WN18, they have
found that LP performance is boosted in FB15k and FB15k-237 by the redundant structures
of Cartesian product relations, and in YAGO3-10 by the presence of equivalent relations.</p>
      <p>Nayyeri et al. [34] refer to KGs in which facts also feature additional numerical weights to
convey various meanings, e.g., the confidence of each fact. They acknowledge that the presence
of bias in these values hinders the effectiveness of the learned embeddings, and propose a
Weighted Triple Loss that, while taking advantage of these weights, is also robust to their biases.</p>
      <p>Rossi and Matinata [20] have shown that the distributions of entities in all LP datasets are
wildly skewed: a few entities are featured in thousands of training facts, making them easier to
learn and predict, whereas the others may only occur a handful of times. The rich and easily
predictable entities are over-represented in testing, thus affecting the fairness of such benchmarks.</p>
      <p>The same concerns have been shared by Mohamed et al. [35]; to overcome this issue, they
have proposed novel stratified versions of the Hits@K and MRR metrics, called Strat-Hits@K
and Strat-MRR respectively. These metrics should estimate the predictive performance of
models in a fairer way, unbiased by the popularity of over-represented entities.</p>
      <p>All these works share the same spirit as ours in their goal to identify the shortcomings of
current LP evaluation approaches, in order to drive research towards more realistic and healthier
practices. Our main difference lies in our definition and identification of bias structures that had
so far gone unnoticed, as well as our systematic methodology of re-computing the predictive
performances of a wide variety of models after removing the biased test facts.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>We have reported an analysis on the presence of bias across the 5 best-established datasets
for Link Prediction on KGs. We have defined 3 main types of Sample Selection Bias, and we
have observed that they affect significant portions of the test predictions in 3 datasets out of
5. We have then analyzed how removing such bias-prone predictions alters the evaluation
results of 9 models representing the main families of Link Prediction systems. The result is
generally a significant drop in predictive performance. This proves that a large part of the
correct predictions output by models on those datasets is indeed facilitated by the presence of
bias. The very low values obtained in this de-biased test scenario suggest that many of the
remaining test facts may not be predictable at all.</p>
      <p>We thus call for the production of more effective and robust datasets for Link Prediction, and
for the definition of more thorough evaluation methods that take into account their properties.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kobilarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ives</surname>
          </string-name>
          ,
          <article-title>Dbpedia: A nucleus for a web of open data, in: The semantic web</article-title>
          , Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Tanon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Weikum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          ,
          <article-title>YAGO 4: A reason-able knowledge base</article-title>
          ,
          <source>in: ESWC</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Vrandecic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krötzsch</surname>
          </string-name>
          ,
          <article-title>Wikidata: a free collaborative knowledge base</article-title>
          ,
          <source>CACM</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Ponzetto</surname>
          </string-name>
          ,
          <article-title>Collaboratively Built Semi-Structured Content and Artificial Intelligence: The Story So Far</article-title>
          ,
          <source>Artif. Intell.</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>W.</given-names>
            <surname>Yih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base</article-title>
          , in: ACL,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singhal</surname>
          </string-name>
          ,
          <article-title>Introducing the knowledge graph: things, not strings</article-title>
          ,
          <year>2012</year>
          , 2012. URL: https://www.blog.google/products/search/introducing-knowledge-graph-things-not/,
          blog post in the Official Google Blog.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>