<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Are knowledge graph embedding models biased, or is it the data that they are trained on?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wessel Radstok</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Melisachew Wudage Chekol</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mirko Tobias Schäfer</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Data Intensive Systems Group, Utrecht University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Media and Culture Studies, Utrecht University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recent studies on bias analysis of knowledge graph (KG) embedding models focus primarily on altering the models such that sensitive features are dealt with differently from other features. The underlying implication is that the models cause bias, or that it is their task to solve it. In this paper we argue that the problem is not caused by the models but by the data, and that it is the responsibility of the expert to ensure that the data is representative for the intended goal. To support this claim, we experiment with two different knowledge graphs and show that the bias is not only present in the models, but also in the data. Next, we show that by adding new samples to balance the distribution of facts with regard to specific sensitive features, we can reduce the bias in the models.</p>
        <p>For several days in early July 2018, Google and Apple's search assistants wrongfully reported that the man behind the Marvel comic books, Stan Lee, had passed away. It did not take long for news articles to start popping up noting the unjustified death declaration. Although Google and Apple never officially reported on this issue, its source is likely traced back to Wikidata. On June 27th, a Wikidata user ran their own script made to parse data from Wikipedia and insert it as claims into Wikidata. This script mistakenly pronounced Stan Lee dead. Other users soon corrected the error, which resulted in an edit war so severe that the page had to be temporarily locked against vandalism. This is not the only occurrence of incorrect information in knowledge graphs causing issues in downstream search queries. In the second half of 2018, the former Guantanamo Bay detainee Omar Khadr was incorrectly returned by Google search for the query 'Canadian Soldiers'. Again the cause was a script written by the aforementioned user. Although the issue was quickly resolved after online outrage, it cropped up twice more over a period of several months. It eventually led Google to take manual action to fix the knowledge graph.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In addition to the presence of incorrect information in knowledge graphs due
to either an error in the KG construction or intentionally supplied by content
curators, KGs can also be incomplete. As an example, in Freebase [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], over 70%
of person entities have no known place of birth and over 75% have no known
nationality [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In Wikidata [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], we observe a similar behavior, for instance,
over 97% of humans have no known religion and over 83% of humans have
no known spoken, written or signed languages. Subsets of both Wikidata and
Freebase have been widely used for testing knowledge graph completion
models. However, these subsets do not take the incompleteness of the
KGs into account and are prepared solely to test the accuracy of models. Yet
if the subsets are incomplete (or unbalanced), the models can be biased. For
instance, the Wikidata12K [
        <xref ref-type="bibr" rid="ref11 ref6">11, 6</xref>
        ] dataset contains 80% male and 20% female
politicians. Clearly, this dataset is unbalanced and a model trained on it will
likely overrepresent men in its predictions.
      </p>
      <p>
        Indeed, this is shown in our experiments using the TransE [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] model; when
asked to predict the people most likely to be politicians, the top 100 ranked
answers contain just 12.4% women while the remaining 87.6% are male. In order
to mitigate such biased predictions, there has recently been a growing effort towards
adapting/extending KG completion models [
        <xref ref-type="bibr" rid="ref10 ref4 ref9">4, 9, 10</xref>
        ]. These studies on bias analysis of
KG embedding models focus primarily on altering the models such that sensitive
features (such as gender, sexual orientation, etc.) are dealt with differently from
other features. The underlying implication is that the models cause bias, or that
it is their task to solve it. However, we found that the datasets on which
the models are trained are biased/unbalanced. Although algorithms for the
automatic balancing of data do exist [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], these are not trivial to apply to graph
datasets. Our experiments showed unsatisfactory results using these methods.
      </p>
      <p>Furthermore, adapting models to remove bias means that the resulting
embeddings will only be bias-neutral with regard to the strength of the model used.
That is, removing bias requires a bias detection model, and the extent of the
bias removed depends on how much bias is detected. As a result, embeddings
are never truly neutral: a more powerful model might still be able to detect biases.
Therefore we argue that a domain expert must remain in the loop.</p>
      <p>In this work, we address the problem by working directly on the data rather
than altering KG embedding models. In other words, we investigate a new
approach in order to balance (mitigate bias) a given dataset: we automatically
extend a dataset by extracting additional facts to complete missing values of
sensitive features. Moreover, so as to motivate the proposed approach, we
carried out a comprehensive analysis of the distribution of sensitive features in
Wikidata, highlighting various skewed data distributions.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        We group the related work into two classes of bias analysis: (i) knowledge graphs
and (ii) embedding models.
Bias analysis of knowledge graphs. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposes methods to trace the provenance
of crowdsourced fact checking to enable bias transparency rather than aiming
at eliminating bias from a KG. Furthermore, they investigate how paid
crowdsourcing can be used to understand contributors' implicit bias. Specifically, they
recruit click workers to verify controversial facts and study them as they do so,
tracking which search engines are used and at which position the validating
URL was ranked in the results page. An example verification task is the
question of whether Catalonia is a part of Spain or an independent country.
The paper proposes adding both facts to the knowledge graph, with a statement
testifying how much support there is for each fact.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] introduces ProWD, a framework and tool for profiling the completeness
of Wikidata. Its completeness measure is based on Class-Facet-Attribute (CFA)
profiles. For example, one could compare how often the attributes "educated at"
or "date of birth" occur among male German computer scientists versus
female Indonesian computer scientists.
      </p>
      <p>
        Bias analysis of embedding models. Bourli et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] present an analysis method
for investigating gender bias with regard to occupation in entity embeddings.
Specifically, they subtract the male entity embedding from the female entity
embedding to get the bias vector. Projecting an occupation onto this vector then gives
them the bias in this occupation. Furthermore, they introduce a de-biasing
approach that generates new de-biased embedding vectors from the existing ones
by subtracting out the bias vector.
      </p>
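      <p>The projection step described above can be sketched in a few lines; the vectors below are toy values for illustration, not the embeddings used by Bourli et al.</p>
      <p>
```python
import numpy as np

def occupation_bias(e_female, e_male, e_occupation):
    # Bias direction: female entity embedding minus male entity embedding.
    # Projecting an occupation embedding onto this (normalized) direction
    # yields a scalar gender-bias score for that occupation.
    bias_vector = e_female - e_male
    return float(np.dot(e_occupation, bias_vector) / np.linalg.norm(bias_vector))

# Toy 2-dimensional embeddings.
score = occupation_bias(np.array([1.0, 0.0]),
                        np.array([-1.0, 0.0]),
                        np.array([3.0, 1.0]))
```
      </p>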
      <p>
        [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] conduct experiments on Wikidata and Freebase, and show that harmful
social biases related to professions are encoded in the embeddings with respect
to gender, religion, ethnicity and nationality. They first explain how traditional
word embedding metrics do not apply to KG embeddings due to the
transformations applied. They then provide a method for evaluating bias. Their method
operates by increasing/decreasing an entity's score for a sensitive attribute
(e.g., making it more male and less female) and then recording how the likelihood
of a certain target triple being true changes (e.g., whether they are a nurse or a
lawyer). As a follow-up, the authors present a novel approach to KG embedding
where embeddings are trained to be neutral with respect to sensitive features
using an adversarial loss function [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. To achieve this, they add a neural-network
based classifier to the scoring function: scores are penalized when this classifier
can predict the value of the sensitive attribute from the existing embedding.
However, this means that the embeddings are only neutral with respect to the
power of the model: a more powerful model might be able to infer the sensitive
values.
      </p>
      <p>These (and other) initiatives indicate that there is growing attention to
bias in knowledge graphs, and efforts to make bias visible. As knowledge graphs
are often collaborative repositories, it is relevant to provide users with accessible
means for identifying possible bias. The examples above are helpful but limited
in two ways: they are either valid only for a specific knowledge graph, and/or for a
limited number of attributes. A general framework might provide more possibilities
to map bias in knowledge graphs and enable users to become aware of the
distribution of items and attributes in a given knowledge graph. With their own
subject-specific expertise, these users can then decide which bias is problematic,
and how to address it.</p>
    </sec>
    <sec id="sec-3">
      <title>Wikidata Completeness Analysis</title>
      <p>Wikidata is a large, open knowledge graph which acts as central storage for the
structured data of other Wikimedia projects such as Wikipedia. Data is stored
as claims or triples, containing a subject item, a property and a value. Values
are entities or literals such as a quantity, a string or even a coordinate. Items
are identified through URIs starting with 'Q' (e.g., Q22686 for Donald Trump)
and properties are identified through URIs starting with 'P' (e.g., P40 for Child).
Claims can be contextualized with additional data such as sources (for the data),
ranks (in case of multiple values for a property) and qualifiers (e.g., to note that
a fact was true at a specific point in time, or that a fact is disputed). A claim
and its additional data are collectively referred to as a statement.</p>
      <sec id="sec-3-1">
        <title>Completeness</title>
        <p>We investigated how several properties are distributed among the class of
humans in Wikidata. An item x is a human when it is an instance of (P31) human
(Q5), i.e., item x must have the claim (x, P31, Q5). Using the Wikidata dump
from 2021/03/31, we extracted 9,028,271 such items. We will now give a brief
overview of some of our preliminary findings.</p>
        <p>To begin, for each item in the subset we counted whether or not a property
occurs among its claims. This gives us an overview of how often a property
occurs at least once. The result is displayed in Figure 1. Ignoring the predicate
instance of, which per our definition is present on all humans, the most frequent
predicates are sex or gender (P21), occupation (P106), and given name (P735).
These occur on 7,079,543 (78%), 6,359,256 (70%) and 5,635,238 (62%) humans,
respectively.</p>
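        <p>The counting procedure above can be sketched in a few lines of Python. This is an illustrative reimplementation over hypothetical in-memory triples, not the script we ran on the full dump.</p>
        <p>
```python
from collections import defaultdict

def property_coverage(claims):
    """Count, for each property, how many human items use it at least once.

    `claims` is a list of (item, property, value) triples, e.g. parsed from
    a Wikidata dump. An item is human when it has the claim (item, P31, Q5).
    """
    humans = {s for s, p, o in claims if p == "P31" and o == "Q5"}
    seen = defaultdict(set)  # property -> set of human items using it
    for s, p, o in claims:
        if s in humans:
            seen[p].add(s)
    return {p: len(items) for p, items in seen.items()}

# Hypothetical toy claims: two humans and one non-human item.
claims = [
    ("Q1", "P31", "Q5"), ("Q1", "P21", "male"), ("Q1", "P106", "physicist"),
    ("Q2", "P31", "Q5"), ("Q2", "P21", "female"),
    ("Q3", "P31", "Q16521"),  # a taxon, not a human
]
coverage = property_coverage(claims)
```
        </p>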
        <p>Additionally, we counted the number of languages each item had a label in.
This gives us an overview of how complete Wikidata is over several languages.
The result of this is displayed in Figure 2. As expected, the most common
language is English, with 8,517,283 (94%) humans having an English label. More
unexpected, however, is that, despite the Netherlands being a small country with
only 17 million inhabitants, the second most common label language is Dutch,
with 7,785,518 (86%) humans having a Dutch label.</p>
        <p>Next, we can look at the distribution of object entities for a given predicate.
I.e., given a predicate such as place of death (P20), we can count how many
people have object values such as Moscow or Paris. From this data, we have
created a bar graph for a selection of predicates in Figure 3.</p>
        <p>Looking at this data, it is immediately clear that it is not representative
of the general population. For instance, the most common occupation by far
is researcher (20%). Yet in reality, even in the USA only around 2% of the
population has a PhD (https://data.worldbank.org/indicator/SE.TER.CUAT.DO.ZS?locations=US).
We of course understand that an encyclopedia covers
persons of interest and not the general population. Hence it is logical that there
is a bias. However, the problematic bias is not the overrepresentation of scholars
but the overrepresentation of white male scholars at western universities. If
we want to inquire to what extent the population of researchers in Wikipedia is
skewed, we need to inquire about the presence of other occupations for persons
of interest for an encyclopedia, such as athletes, activists, politicians, engineers
and inventors.</p>
        <p>We hypothesize that there are two main sources of bias present in this data.
The first is availability bias, i.e., much of the data present in Wikidata is there
because it could be easily imported, for instance through the use of bots. The
second is interest bias, where the interests of the people who work on Wikidata
end up deciding what content will dominate the dataset. Examples of this bias are
the most common occupation being researcher (imported through article papers)
and the second most common place of death being a concentration camp.</p>
        <p>(a) Most common place of birth (left) and place of death (right) entities.
(b) Most common ethnic groups (left) and languages spoken (right) entities.</p>
        <p>(c) Most common occupations (left) and religions (right) entities.</p>
        <p>Temporal information in Wikidata presents itself in two ways. Firstly, predicates
can directly have timestamps as their object value, for instance the date of
birth of a person. All predicates that can have a timestamp as object value
must be instances of (P31) Wikidata property with datatype 'time' (Q18636219).
There are 34 such predicates. Secondly, temporal information can be included
in any other predicate through the use of qualifiers. I.e., the qualifiers start time
(P580) and end time (P582) can be applied to a triple through reification to add
temporal information to that triple.</p>
        <p>Since we are interested in how humans are represented in Wikidata, we
restrict the spatiotemporal analysis to the human class. Specifically, we ground data
in space by looking at a person's place of birth (P19) and in time by looking
at the date of birth (P569). Through this we can analyse the completeness of
Wikidata over time. Some results are displayed in Figure 4: (a) the number of
countries with at least one fact in the given century, and (b) a comparison between
the most common ethnic groups listed in Wikidata in the 18th and 21st century.
We observe that the further we go back in time, the fewer distinct countries are
observed in Wikidata, i.e., facts seem to be based on fewer countries. Additionally,
we investigate the occurrences of the most common ethnic groups listed in Wikidata.
Interestingly, the use of ethnic group seems to have fallen out of favour for people born
more recently. In the 18th century the most common ethnic group was Greeks,
with over 400 occurrences, whereas in the 21st century the most common ethnic
group is African American, with just over 100 occurrences.</p>
        <p>
          <bold>Bias Analysis of Knowledge Graph Embedding Models</bold>
        </p>
        <p>
          In this section we perform a bias analysis of knowledge graph embedding
models. Specifically, we analyze the effect of balancing the data on link prediction
performance. For this task we utilize two popular models, TransE [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and
DistMult [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. We perform our experiments on two well-known knowledge graphs.
The first is Wikidata12k, a subset of Wikidata extracted by [
          <xref ref-type="bibr" rid="ref11 ref6">11, 6</xref>
          ]. The second
is DBP15k, a subset of DBpedia [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] originally created by [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] to test entity-alignment
models. As we are interested in link prediction rather than entity
alignment, we select a single instance of the dataset (the English version) and
perform our experiments on it. All of our code is available on GitHub:
https://github.com/wradstok/KGE-bias-analyzer.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Embedding Models</title>
        <p>For a triple (s, p, o), let (e_s, e_p, e_o) denote its embedding vectors. Taking a KG
and a random initialization of the vectors as input, a vector representation
of the KG is gradually learned using a scoring function f(s, p, o). The scoring
function should reflect how well the embedding captures the semantics of the KG.
The learned embeddings can be used in tasks such as classification, clustering,
and link prediction. In this work, we focus on the last. Link prediction is
the task of predicting the most likely element for a triple where one element
is missing, e.g., given a query (s, p, ?), to predict the most likely object entity.</p>
        <p>The most popular embedding model is TransE (Translating Embeddings).
Its scoring function is based on the intuition that the subject and object
vectors should be close together after adding the predicate vector. It is
written as f(s, p, o) = -||e_s + e_p - e_o||, using either the L1 or the L2 norm.
While very popular, TransE has limited expressiveness due to its simplicity.
Therefore, we also perform experiments with DistMult, a multiplicative model,
whose scoring function is the trilinear product f(s, p, o) = e_s^T diag(e_p) e_o.
In our experiments we do not use pre-trained
models and instead train the embeddings from scratch.</p>
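        <p>For concreteness, the two scoring functions and a link-prediction query (s, p, ?) can be sketched as follows with numpy. The vectors and entity names are toy values, not trained embeddings.</p>
        <p>
```python
import numpy as np

def transe_score(es, ep, eo, norm_ord=1):
    # TransE: e_s + e_p should land near e_o, so the negated L1 (or L2)
    # distance acts as the plausibility score of the triple.
    return -float(np.linalg.norm(es + ep - eo, ord=norm_ord))

def distmult_score(es, ep, eo):
    # DistMult: trilinear product e_s^T diag(e_p) e_o.
    return float(np.sum(es * ep * eo))

def predict_object(es, ep, candidates, score_fn=transe_score):
    # Link prediction for (s, p, ?): rank every candidate object entity
    # by its score under the chosen model, best first.
    scores = {name: score_fn(es, ep, eo) for name, eo in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)

es, ep = np.array([1.0, 0.0]), np.array([0.0, 1.0])
candidates = {"Q1": np.array([1.0, 1.0]), "Q2": np.array([4.0, -2.0])}
ranking = predict_object(es, ep, candidates)
```
        </p>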
      </sec>
      <sec id="sec-3-3">
        <title>SMOTE</title>
        <p>
          One way of balancing datasets is to use the Synthetic Minority Over-sampling
Technique (SMOTE) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. SMOTE is an over-sampling technique that allows one to
construct new examples of a given class based on existing examples, in order to
address imbalances in the dataset. An example would be oversampling female
football players. However, SMOTE is not intended for graph datasets and as
such is not trivial to apply to knowledge graphs while maintaining the
underlying structure.
        </p>
        <p>
          One way to apply SMOTE to graph data is to first embed the graph.
We use this approach to evaluate how well SMOTE is suited to our
scenario. Our method is as follows. After obtaining the embeddings, we create a
categorical variable with a category for each possible combination of sensitive
features. In the case of 5 occupations and 2 genders, this implies 10 categories.
Each embedding vector is then combined with the categorical value associated
with the sample it represents. Finally, we instruct SMOTE to generate the
maximum number of samples for each possible entry using the Python
imbalanced-learn library [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], i.e., given that there are 1,850 male association football players,
we create both male and female physicists until there are 1,850 of each.
        </p>
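        <p>The interpolation step that SMOTE performs on the embedding vectors can be sketched as follows. This is a simplified stand-in for the imbalanced-learn implementation, with hypothetical minority-class vectors: each synthetic sample lies on the segment between an existing minority embedding and one of its nearest minority neighbours.</p>
        <p>
```python
import numpy as np

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """SMOTE-style interpolation: each synthetic vector lies between an
    existing minority embedding and one of its k nearest minority
    neighbours, so new samples never leave the existing clusters."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = int(rng.integers(len(minority)))
        x = minority[i]
        dists = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = int(rng.choice(neighbours))
        u = rng.random()                         # interpolation factor in [0, 1)
        out.append(x + u * (minority[j] - x))
    return np.stack(out)

# Toy 'minority' embeddings clustered inside the unit square.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like_oversample(minority, n_new=4)
```
        </p>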
        <p>However, preliminary experiments found that this method did not suffice
for generating a balanced set of embeddings: applying our evaluation method
to datasets produced by the above procedure did not yield balanced predictions.
We hypothesize that this is because SMOTE generates new examples based on
existing biases. By interpolating new 'female' examples from existing female
embeddings, we only create new examples in the same cluster. That means
that locations in the embedding space which are already female become even
more so. Therefore, we instead extend the datasets by sampling additional triples
from the original knowledge graphs.</p>
        <p>The table below summarizes the original and balanced datasets:

Dataset                  # Triples  # Entities  # Pred.  # Men  # Women
Wikidata12k (original)      38,970      12,848       25   4,905      717
Wikidata12k (balanced)      51,682      15,957       25   4,905    3,610
DBpedia15k (original)       92,746      18,716      206   6,767    1,087
DBpedia15k (balanced)       95,827      27,459      206   5,916    5,917</p>
        <p>We enrich the original knowledge graphs by adding female triples, i.e., extra
triples with female entities as subject. The data is enriched in such a way that
the number of men and women associated with each of the top 5 most common
occupations becomes approximately equal. The triples are obtained from the
complete Wikidata and DBpedia datasets.</p>
        <p>To ensure that the new triples have healthy connectivity with the
rest of the graph, this is done in a three-step process. Firstly, all women with the
required occupations are selected from the complete knowledge graph. Secondly,
from this selection, the women which have the largest number of predicates that
are also in the original dataset are picked. Finally, we select the women whose
object values are already in the graph. The last step ensures that we do not add
object entities which occur only a few times, and only with women.</p>
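        <p>The three steps can be sketched as follows. The data structures (a dict mapping each entity to its (predicate, object) pairs) and the entity names are hypothetical, not the actual extraction code.</p>
        <p>
```python
def select_balancing_women(kg, subset_preds, subset_entities, women,
                           occupation, n):
    """Pick up to n women with the given occupation whose triples connect
    well to the existing subset. `kg` maps an entity to its list of
    (predicate, object) pairs in the complete knowledge graph."""
    # Step 1: all women with the required occupation.
    cands = [w for w in women if ("P106", occupation) in kg.get(w, [])]
    # Step 2: prefer women using many predicates already in the subset.
    cands.sort(key=lambda w: sum(1 for p, o in kg[w] if p in subset_preds),
               reverse=True)
    # Step 3: keep only women whose object values already occur in the
    # subset, so we do not add rarely-connected object entities.
    keep = [w for w in cands
            if all(o in subset_entities for p, o in kg[w])]
    return keep[:n]

# Toy knowledge graph: w2's birthplace is unknown to the subset.
kg = {"w1": [("P106", "politician"), ("P19", "Paris")],
      "w2": [("P106", "politician"), ("P19", "Ruritania")],
      "w3": [("P106", "actor")]}
picked = select_balancing_women(kg, {"P106", "P19"},
                                {"politician", "Paris"},
                                ["w1", "w2", "w3"], "politician", 5)
```
        </p>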
      </sec>
      <sec id="sec-3-4">
        <title>Wikidata12k</title>
        <p>Wikidata12k does not contain any information about gender or occupation.
However, we can look up this data by querying the original Wikidata knowledge
graph. As Wikidata12k is originally a temporal knowledge graph, we strip out
the temporal information and remove any duplicate triples that may be created
by this process.</p>
        <p>The five most common occupations are association football player Q937857
(1867), politician Q82955 (918), actor Q33999 (211), writer Q36180 (184) and
physicist Q169470 (143). These occupations are not uniformly distributed with
regard to gender: there are only a handful of women football players, and there
is not a single woman physicist in the entire dataset.</p>
        <p>In total, we add around 10,000 triples with female entities as subject to the
Wikidata12k knowledge graph, resulting in a new graph of over 50,000 triples. This
increases the average number of mentions as subject (i.e., the average number of
outlinks) per female entity from 3.49 in the original graph to 4.74 in the balanced
graph. However, the number of outlinks still falls short of that of men, which is
5.41.</p>
        <p>
          <bold>DBP15k</bold>
        </p>
        <p>
          DBP15k is a subset of DBpedia [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] created by [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] to test entity-alignment
models. The majority of predicates in DBP15k have very few triples associated
with them. To prevent the graph from being too sparse for an embedding model
to learn, we delete all predicates which occur fewer than 50 times.
        </p>
        <p>Results on the original DBP15k dataset:

               Data                      Prediction
              Men  Women  Women (%)   Men  Women  Women (%)  Diff (p.p.)
Officeholder  1803   180       9.1%    85     15      15.0%          5.9
Athlete       1142     6       0.5%   100      0       0.0%         -0.5
Royalty        569   235      29.2%    60     40      40.0%         10.8
Sportsmanager  225     0       0.0%   100      0       0.0%          0.0
Scientist      216     6       2.7%    95      5       5.0%          2.3
Total         3955   427       9.7%   440     60      12.0%</p>
        <p>Results on the balanced DBP15k dataset:

               Data                      Prediction
              Men  Women  Women (%)   Men  Women  Women (%)  Diff (p.p.)
Officeholder  1498  1770      54.2%    16     83      83.8%         29.7
Athlete        990  1320      57.1%    55     45      45.0%        -12.1
Royalty        472   596      55.8%    25     74      74.7%         18.9
Sportsmanager  188    31      14.2%    85     15      15.0%          0.8
Scientist      195   225      53.6%    32     67      67.7%         14.1
Total         3343  3942      54.1%   213    284      57.1%</p>
        <p>DBpedia does not store any information about people's sex or gender in a
structured way: although a person can be of rdf:type Man or Woman,
manual inspection of the data did not reveal that this information was
consistently present. However, most entities do contain their Wikidata identifiers. Since
Wikidata does list people's gender, we determine a person's gender by querying
Wikidata with the given identifiers.</p>
        <p>The five most common occupations are OfficeHolder (2508), Athlete (1436),
Royalty (1002), SportsManager (288), and Scientist (282). As in Wikidata12k, the
male/female ratio in these occupations is unbalanced, skewing heavily towards
men. In addition to balancing the data by adding additional samples, we remove
some male entities and their triples to create the balanced dataset.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Evaluation</title>
        <p>To evaluate whether an embedding model contains bias with regard to
gender and occupation, we perform the following procedure. Firstly, we count the
fraction of men and women that have a certain occupation (P106) x. Then, we
ask the model to predict the n most likely entities for the query (?, P106, x).
If the fraction of men or women returned is consistently larger than the
fraction present in the data, the model is biased. Specifically, when more men
are predicted the model is biased against women, and vice versa. If this bias is
only present in the unbalanced dataset and not in the balanced dataset, then the
model reflects the data it has been trained on. However, if the bias is present
in both scenarios, the models are either inherently biased or manage to pick up
some form of bias in the data which is not reflected in our analysis.</p>
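        <p>This evaluation can be sketched as a small helper. The entity names and gender map below are illustrative; the data counts match the Officeholder row of our tables.</p>
        <p>
```python
def prediction_bias(men_in_data, women_in_data, ranked_entities,
                    gender_of, n=100):
    """Difference (in percentage points) between the share of women in the
    top-n predictions for (?, P106, x) and their share in the data.
    Positive values mean women are overrepresented in the predictions."""
    data_share = 100.0 * women_in_data / (men_in_data + women_in_data)
    top = ranked_entities[:n]
    predicted_women = sum(1 for e in top if gender_of.get(e) == "female")
    predicted_share = 100.0 * predicted_women / len(top)
    return predicted_share - data_share

# 1803 men and 180 women in the data; 15 women among the top-100 predictions.
ranking = [f"w{i}" for i in range(15)] + [f"m{i}" for i in range(85)]
genders = {f"w{i}": "female" for i in range(15)}
diff = prediction_bias(1803, 180, ranking, genders)
```
        </p>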
      </sec>
      <sec id="sec-3-6">
        <title>Results</title>
        <p>
          Our results are displayed in Tables 2 and 3 for DBpedia and Wikidata12k,
respectively, using TransE [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], and in Tables 4 and 5 using DistMult [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. We observe
that in both original datasets, the percentage of women predicted is very low
and close to the percentage of women in the dataset. The largest difference is
observed for the occupation Royalty in the DBpedia15k dataset, where the difference
is just over 10 percentage points.
        </p>
        <p>When we extend our view to the balanced datasets, we find that the
percentage of women predicted has moved upwards with the percentage of women in the
dataset. Balancing the datasets thus helps to improve the representation of
minority classes in the model output. However, we do observe that the absolute
difference between the number of men in the dataset and the number of men
predicted (and likewise for women) has increased, suggesting that the model has become
less accurate.</p>
        <p>Even so, we believe a more likely explanation to be that the larger number
of entities predicted induces more variance in the predictions. This explanation
is strengthened by the fact that the difference is smaller when using DistMult,
which is a more expressive model and can thus model the information more
accurately.</p>
        <p>Results on the original dataset:

               Data                      Prediction
              Men  Women  Women (%)   Men  Women  Women (%)  Diff (p.p.)
Officeholder  1803   180       9.1%    87     13      13.0%          3.9
Athlete       1142     6       0.5%   100      0       0.0%         -0.5
Royalty        569   235      29.2%    68     32      32.0%          2.8
Sportsmanager  225     0       0.0%   100      0       0.0%          0.0
Scientist      216     6       2.7%    96      4       4.0%          1.3
Total         3955   427       9.7%   451     49       9.8%</p>
        <p>Results on the balanced dataset:

               Data                      Prediction
              Men  Women  Women (%)   Men  Women  Women (%)  Diff (p.p.)
Officeholder  1498  1770      54.2%    47     53      53.0%         -1.2
Athlete        990  1320      57.1%    78     22      22.0%        -35.1
Royalty        472   596      55.8%    39     61      61.0%          5.2
Sportsmanager  188    31      14.2%    88     12      12.0%         -2.2
Scientist      195   225      53.6%    51     49      49.0%         -4.6
Total         3343  3942      54.1%   303    197      39.4%</p>
        <p>Another point of note is the observation that on both datasets and for almost
all occupations, the difference between the percentage of women predicted
and the percentage of women in the dataset is positive. This means
that women are actually overrepresented in the model's predictions, indicating
that the model is actually less biased than the data.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>In this paper we proposed a new approach to mitigating bias in knowledge graph
embedding models by leveraging the distribution of the datasets on which the
models are trained. Specifically, rather than adapting models to mitigate bias,
we instead analyze and augment the data that is fed into the model. We carried
out several experiments using state-of-the-art embedding models (namely, TransE
and DistMult) and two knowledge graphs (namely, DBpedia and Wikidata) and
showed that balancing the data with regard to specific sensitive features (e.g.,
gender and occupation) improves the overall prediction capabilities of the
models. Additionally, to motivate our work, we have carried out a completeness analysis of
Wikidata using a number of sensitive features.</p>
      <p>As future work, we will extend the proposed approach to build a system that
takes as input a dataset and a selection of sensitive features and automatically
balances the data with respect to the given features.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bizer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kobilarov</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Becker</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cyganiak</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hellmann</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>DBpedia - A crystallization point for the Web of Data</article-title>
          .
          <source>Journal of Web Semantics</source>
          <volume>7</volume>
          (
          <issue>3</issue>
          ),
          <fpage>154</fpage>
          –
          <lpage>165</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bollacker</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paritosh</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sturge</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Freebase: a collaboratively created graph database for structuring human knowledge</article-title>
          .
          <source>In: Proceedings of the 2008 ACM SIGMOD international conference on Management of data</source>
          . pp.
          <fpage>1247</fpage>
          –
          <lpage>1250</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bordes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Usunier</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Duran</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weston</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yakhnenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Translating embeddings for modeling multi-relational data</article-title>
          .
          <source>Advances in neural information processing systems</source>
          <volume>26</volume>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bourli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitoura</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Bias in knowledge graph embeddings</article-title>
          .
          <source>In: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)</source>
          . pp.
          <fpage>6</fpage>
          –
          <lpage>10</lpage>
          (
          <year>2020</year>
          ). https://doi.org/10.1109/ASONAM49781.2020.9381459
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Chawla</surname>
            ,
            <given-names>N.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowyer</surname>
            ,
            <given-names>K.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>L.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kegelmeyer</surname>
            ,
            <given-names>W.P.</given-names>
          </string-name>
          :
          <article-title>SMOTE: synthetic minority over-sampling technique</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          <volume>16</volume>
          ,
          <fpage>321</fpage>
          –
          <lpage>357</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Dasgupta</surname>
            ,
            <given-names>S.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ray</surname>
            ,
            <given-names>S.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talukdar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>HyTE: Hyperplane-based temporally aware knowledge graph embedding</article-title>
          .
          <source>In: Proceedings of the 2018 conference on empirical methods in natural language processing</source>
          . pp.
          <fpage>2001</fpage>
          –
          <lpage>2011</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Demartini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Implicit bias in crowdsourced knowledge graphs</article-title>
          .
          <source>In: Companion Proceedings of The 2019 World Wide Web Conference</source>
          . pp.
          <fpage>624</fpage>
          –
          <lpage>630</lpage>
          . WWW '19, Association for Computing Machinery, New York, NY, USA (
          <year>2019</year>
          ). https://doi.org/10.1145/3308560.3317307
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dong</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabrilovich</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heitz</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horn</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lao</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murphy</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strohmann</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Knowledge vault: A web-scale approach to probabilistic knowledge fusion</article-title>
          .
          <source>In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          . pp.
          <fpage>601</fpage>
          –
          <lpage>610</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Fisher</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mittal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palfrey</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Christodoulopoulos</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Debiasing knowledge graph embeddings</article-title>
          .
          <source>In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          . pp.
          <fpage>7332</fpage>
          –
          <lpage>7345</lpage>
          . Association for Computational Linguistics, Online (Nov
          <year>2020</year>
          ). https://doi.org/10.18653/v1/2020.emnlp-main.595, https://www.aclweb.org/anthology/2020.emnlp-main.595
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Fisher</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palfrey</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Christodoulopoulos</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mittal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Measuring social bias in knowledge graph embeddings</article-title>
          . arXiv preprint arXiv:1912.02761 (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Leblay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chekol</surname>
            ,
            <given-names>M.W.</given-names>
          </string-name>
          :
          <article-title>Deriving validity time in knowledge graph</article-title>
          .
          <source>In: Companion Proceedings of The Web Conference 2018</source>
          . pp.
          <fpage>1771</fpage>
          –
          <lpage>1776</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lemaître</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nogueira</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aridas</surname>
            ,
            <given-names>C.K.</given-names>
          </string-name>
          :
          <article-title>Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          <volume>18</volume>
          (
          <issue>17</issue>
          ),
          <fpage>1</fpage>
          –
          <lpage>5</lpage>
          (
          <year>2017</year>
          ), http://jmlr.org/papers/v18/16-365
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Cross-lingual entity alignment via joint attribute-preserving embedding</article-title>
          . In:
          <string-name>
            <surname>d'Amato</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fernandez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tamma</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lecue</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cudré-Mauroux</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sequeda</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lange</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heflin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (eds.)
          <source>The Semantic Web - ISWC 2017</source>
          . pp.
          <fpage>628</fpage>
          –
          <lpage>644</lpage>
          . Springer International Publishing, Cham (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Vrandečić</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krötzsch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Wikidata: a free collaborative knowledgebase</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>57</volume>
          (
          <issue>10</issue>
          ),
          <fpage>78</fpage>
          –
          <lpage>85</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Wisesa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Darari</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krisnadhi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nutt</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Razniewski</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Wikidata completeness profiling using ProWD</article-title>
          .
          <source>In: Proceedings of the 10th International Conference on Knowledge Capture</source>
          . pp.
          <fpage>123</fpage>
          –
          <lpage>130</lpage>
          . K-CAP '19, Association for Computing Machinery, New York, NY, USA (
          <year>2019</year>
          ). https://doi.org/10.1145/3360901.3364425
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yih</surname>
            ,
            <given-names>W.t.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Embedding entities and relations for learning and inference in knowledge bases</article-title>
          .
          <source>arXiv preprint arXiv:1412.6575</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>