<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Responsible Data Management for Human Resources</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Olivia</forename><surname>Kyriakidou</surname></persName>
							<email>okyriakidou@acg.edu</email>
							<affiliation key="aff1">
<orgName type="institution">The American College of Greece</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Responsible Data Management for Human Resources</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B2C4431010237F5C1CF0D51725FB77D1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human Resources</term>
					<term>Fairness</term>
					<term>Bias</term>
					<term>Data Ecosystems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Human resources (HR) departments rely increasingly on recommender systems (RS) for most of their processes, such as recruiting, selecting and developing their employees. However, RS often discriminate unfairly due to biases in the data, which may perpetuate and amplify existing biases in the workplace. An important part of an HR department is its data ecosystem, comprising raw and derived data related to potentially different stakeholders and subject to laws and regulations. In this work we propose the characteristics of a data ecosystem that will facilitate data transparency through traceability, as a way of detecting potential biases in the data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CCS CONCEPTS</head><p>• Information systems → Data management systems; • Social and professional topics → Employment issues; User characteristics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Recommender systems (RS) are widely used by Human Resources (HR) departments to facilitate their business processes, both from the point of view of time optimization and from the perspective of minimizing human intervention in an effort to achieve fairness. RS can be applied in the recruitment, hiring, and promotion of employees. For instance, they can be used to match CVs against job posts, to rank CVs (which determines the order of interviews), or to compare CVs against the past CVs of employees deemed successful. An RS can also be used for segmenting job applications into categories, detecting and recording long-term trends, etc. Moreover, RS can be used by prospective employees who seek employment.</p><p>Although RS seem to remove human intervention by automating HR processes, often but not exclusively through advanced machine learning algorithms, segments of the population can still be discriminated against. The data upon which the analysis is based may contain biases towards age groups, gender, ethnic origin, etc. The bias may stem from specific data items, specific data features, data distributions, or data sampling methods. Bias in data can also be very subtle and difficult to detect, as it may appear in derived data stemming from an analytics process.</p><p>Ultimately, an RS in HR is an information system that is directly related to the professional life of people, and as such it should be subject to ethical and legal regulations, apart from technical ones like prediction accuracy. ACM <ref type="foot" target="#foot_0">1</ref> and IEEE have issued codes of ethics that refer to the need for fairness in Information Systems. 
In particular, section 1.4 of the ACM code of ethics is entitled "Be fair and take action not to discriminate", and section II of the IEEE code of ethics states: "To treat all persons fairly and with respect, to not engage in harassment or discrimination, and to avoid injuring others".</p><p>In reality, RS are often fraught with elements of discrimination and unfairness. The unfairness may stem from biases in the data that misrepresent the actual population, and subsequent analytic algorithms often amplify these data biases. Lack of fairness can have legal consequences, especially in employment, as it might violate anti-discrimination laws. It might also have financial consequences, as usage of such systems might drop. See also <ref type="bibr" target="#b10">[10]</ref> for a recent tutorial on the origin and forms of fairness in RS.</p><p>A data ecosystem is a network of data in potentially many forms (e.g. unstructured, structured), together with accompanying rules that permit their acquisition, storage, maintenance, and retrieval. A data ecosystem is of potential interest to many stakeholders, including data providers and data users who try to create value out of the ecosystem. A data ecosystem includes metadata, as well as legal, organizational or ethical regulations. Moreover, ecosystems evolve as their constituent components change. Finally, derived data also form part of the ecosystem; for instance, clusters and predictions produced by statistical or machine learning methods are examples of derived data. See <ref type="bibr" target="#b15">[15]</ref> for an overview of data ecosystems.</p><p>Our contribution is to focus on the data component of an RS and examine how a data ecosystem would facilitate data transparency through data traceability, so that potential biases are made explicit or at least easier to track and detect. 
Our approach is based on similar work for data transparency in the biomedical domain <ref type="bibr" target="#b11">[11]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>Responsible data management has been discussed in the context of automated decision systems (ADS), systems that make decisions about humans that might affect their socioeconomic life <ref type="bibr" target="#b17">[17]</ref>, <ref type="bibr" target="#b18">[18]</ref>. The authors refer to the ethical challenges faced in all phases of a data science pipeline and the need for fair, transparent and responsible data management.</p><p>Our current work focuses on the data representation part of an ADS, which is essentially an RS. In particular, we refer to the features of a data ecosystem, how it can be supported with semantic web technologies, and its relevance to the issues of fairness in HR.</p><p>An approach very similar to the one we propose in the current work has been developed for a biomedical system in the context of the EU-funded project BigMedilytics (https://www.bigmedilytics.eu/) for a lung-cancer pilot application. The pilot integrates structured and unstructured information, as well as open and sensitive data, in a knowledge graph. This constitutes an example of a data ecosystem.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">MOTIVATING EXAMPLES</head><p>Next we mention some examples that indicate the forms bias can take in raw data, in data associations, and in derived data produced by machine learning algorithms. The examples refer to HR department cases.</p><p>Biased data based on human behavioral biases. HR algorithmic recommendations may sustain existing inequities when they are trained on data that do not include specific groups of individuals <ref type="bibr" target="#b9">[9]</ref>. For example, many selection algorithms try to identify the criteria that characterize the ideal employee and use them for the selection of newcomers. For this task they utilize performance data to identify the best performing employees within the organization and then identify the traits that distinguish them. However, there is the danger that if the performance data favor men due to existing biases within the organization <ref type="bibr" target="#b16">[16]</ref>, then the selection algorithm might include gender as a preferred characteristic of the ideal candidate and prefer men over women applicants. In this sense, existing biases could be reified by limiting the number of members of certain, possibly underrepresented, groups who are alerted, selected, and hired for specific job openings <ref type="bibr" target="#b5">[6]</ref>. Moreover, HR recommendation systems utilized for the automated screening of candidates' CVs against certain preferred selection criteria may also generate biased results when they are trained on data from past hiring decisions that are based on individual, organizational and structural biases against certain underrepresented groups of employees <ref type="bibr" target="#b14">[14]</ref>, <ref type="bibr" target="#b3">[4]</ref>. 
The use of natural language processing (NLP) tools in chatbots that evaluate candidates' competencies and fit to the job and the organization may also preserve existing societal inequities when they are trained on biased data and exclude certain categories of candidates. The association of African-American names with negative feelings, and of female names with the household and non-technical jobs, has already been documented in the literature <ref type="bibr" target="#b19">[19]</ref>.</p><p>Proxies. Recommendation systems can replicate biases in other, subtler ways, especially through the use of proxies. Certain hiring criteria could serve as proxies for categorizing individuals into specific groups and drive discrimination. For example, the use of gaps in employment as a hiring criterion could discriminate against women applicants, as women disproportionately leave the workplace to provide child or elderly care <ref type="bibr" target="#b0">[1]</ref>. Moreover, job matching platforms and job recommendation systems use proxies for "relevance" that reproduce biases. Such systems, for example, could show women specific jobs at specific hierarchical levels (e.g., senior or junior positions in management) according to their own search history, but also according to the search history of women similar to them. Accordingly, they might end up with fewer recommendations for senior positions if they themselves and similar others tend to look for lower-level jobs <ref type="bibr" target="#b3">[4]</ref>. Proxies are also included in the HR data that train employee selection recommendation systems in order to offer the most appropriate remuneration package to prospective employees. 
Such suggestions, however, may reinforce gender or racial pay gaps, especially when they reflect strong proxies that signal certain gender representations (e.g., male employees as breadwinners) and status inequalities.</p><p>Facial analysis used in virtual interviews may also create disparate impact on specific sub-groups of employees across gender and racial lines. In <ref type="bibr" target="#b4">[5]</ref> it was shown that facial analysis systems cannot reliably recognize the faces of women with darker skin, nor the emotions of people with disabilities or of people in different cultural contexts <ref type="bibr" target="#b2">[3]</ref>. Finally, in employee selection, most recruiters use a number of candidate characteristics as proxies of culture fit <ref type="bibr" target="#b6">[7]</ref>, defined as the degree to which the values of the individual match those of the organization. However, there is the danger that these proxies will become hard rules, ignoring their subjective character, and in this way exclude certain individuals who are thought a priori not to "fit" the organizational culture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Segregation of individuals.</head><p>Biases could also persist when algorithms segregate employees into groups, drawing inferences about individuals from their group memberships. Selection recommendation systems, for instance, may erroneously attribute certain characteristics to people with disabilities <ref type="bibr" target="#b20">[20]</ref> based on their group membership without properly assessing the candidates, and consequently offer them lower-status job positions. Moreover, categorizing individuals into certain gender groups could unfairly marginalize non-binary and transgender employees, while their classification into certain race groups could signify status inequalities <ref type="bibr" target="#b12">[12]</ref>.</p><p>Human computer interaction. Most HR recommendation systems run on platforms that require employees' and candidates' active involvement, which is determined solely by the rules set by the platform that controls all processes <ref type="bibr" target="#b13">[13]</ref>. For instance, job candidates do not have any control over how their application will be presented to possible employers, and they have to provide all the information required by the platform if they want to be considered for future job opportunities. Moreover, employee selection recommendation systems tend to present numerical rankings of candidates to employers, generating the perception that there are substantial differences between the candidates for a certain position, while in reality the differences might be minimal <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">REQUIREMENTS FOR A DATA ECOSYSTEM IN HR</head><p>Next, based on the examples mentioned above, we sketch the requirements for a data ecosystem in HR.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data management requirements:</head><p>The data ecosystem should allow data sharing for structured (e.g. CSV files) and unstructured data (e.g. text). The data should be accessible and retrievable by all stakeholders. The data also has to be of high quality: for instance, data items with missing values, or very old data items, could be rejected. As an example we could mention a CV that does not contain any information about education or past employment. The data management requirements fall into the following categories: DM1: Data management of multiple document types should be supported. DM2: Quality of data items should be supported at all levels of the data pipeline, e.g. for the raw data but also for derived data. Organizational requirements: The data should be stored, accessed and processed according to the organization's rules and regulations. The organizational requirements fall into the following categories: O1: Data governance should be enforced by the organization. Thus the data acquisition process, data storage and retention, access rights, and data obsolescence are items related to data governance. The HR department may have business rules that stipulate the recruitment policy and the requested documents. O2: Data sovereignty, which specifies who owns the original and the derived data, and for what purpose. This will increase trust in the system; for instance, it will be clearer how a submitted CV will be handled.</p></div>
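Requirement DM2 can be illustrated with a minimal Python sketch of a data operator that rejects CVs with missing key sections or stale timestamps. The record fields and the recency threshold are illustrative assumptions, not part of the ecosystem specification.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical record structure for an extracted CV; the field names
# are illustrative, not prescribed by the data ecosystem.
@dataclass
class CVRecord:
    name: str
    education: list = field(default_factory=list)
    past_employment: list = field(default_factory=list)
    last_updated: Optional[date] = None

def quality_check(cv: CVRecord, max_age_days: int = 730) -> bool:
    """Data operator (DM2): reject CVs with missing key sections
    or CVs that are too old to be reliable."""
    if not cv.education and not cv.past_employment:
        return False  # neither education nor employment information
    if cv.last_updated is None:
        return False  # unknown recency
    return (date.today() - cv.last_updated).days <= max_age_days

stale = CVRecord("A. Applicant", education=["BSc"], last_updated=date(2015, 1, 1))
print(quality_check(stale))  # an old CV fails the recency check
```

The same operator shape applies to derived data, e.g. rejecting low-confidence extracted triples.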
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Legal &amp; Ethical requirements:</head><p>The data management should be in accordance with the requirements of the European GDPR (https://gdpr-info.eu/). Moreover, the data management should address bias. For instance, the execution of algorithms should be independent of sensitive attributes (like ethnicity, age, gender).</p><p>In addition, the data should be owned and used for the intended purposes. For instance, CVs of job applicants should not be used to generate business value by selling them without the applicants' consent. Finally, traceability is an important aspect of the data, which essentially allows one to know where the data were obtained from and how. The above can be summarized into the following ethical requirements: E1: Data protection &amp; ownership, which specifies the extent of ownership for each stakeholder. E2: Sensitive attributes, which clearly states the sensitive attributes, with the provision that they should not be used by prediction algorithms. Typically they represent age, gender, ethnic background etc. The sensitive attributes are typically associated with the provisions of GDPR. E3: Discrimination attributes, which may lead to discrimination in non-obvious ways. For instance, the name of a job applicant might inadvertently facilitate discrimination, as it may reveal ethnic origin. Moreover, some derived attributes fall into this category, for instance employment gaps.</p></div>
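Requirements E2 and E3 amount to a feature-selection policy enforced before any prediction algorithm runs. A minimal sketch, with an illustrative (non-normative) attribute list:

```python
# Illustrative attribute classification per requirements E2 and E3;
# the attribute names are examples, not a normative list.
SENSITIVE = {"age", "gender", "ethnic_origin"}        # E2: excluded from predictors
DISCRIMINATION_PRONE = {"name", "employment_gaps"}    # E3: may leak sensitive info

def select_features(record: dict) -> dict:
    """Drop sensitive and discrimination-prone attributes before a
    record is handed to a prediction algorithm."""
    blocked = SENSITIVE | DISCRIMINATION_PRONE
    return {k: v for k, v in record.items() if k not in blocked}

applicant = {"name": "J. Doe", "gender": "F", "skills": ["SQL"], "education": "MSc"}
print(select_features(applicant))  # {'skills': ['SQL'], 'education': 'MSc'}
```

In the ecosystem itself this split would be represented declaratively (e.g. as ontology classes), with the filter acting as one data operator among others.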
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">DESIGN OF A DATA ECOSYSTEM</head><p>We present in detail the concept of a data ecosystem that will serve as the infrastructure for an HR department. A data ecosystem (DE) can be defined as a 4-tuple: DE=&lt;Data Sets, Data Operators, Meta-Data, Mappings&gt; <ref type="bibr" target="#b7">[8]</ref>.</p><p>Data sets: the ecosystem is composed of potentially multiple data sets. Data sets can comprise structured or unstructured information; they may also have different formats, e.g., CSV, JSON or tabular relations, and can be managed using different management systems. Data operators: the set of operators that can be executed against the data sets. For instance, anonymization, data quality checks, and recency checks can be considered data operators. Meta-Data: provide the semantics of the data stored in the data sets of the data ecosystem. They comprise: (1) A domain ontology, which provides a unified view of the concepts, relationships, and constraints of the domain of knowledge, and associates formal elements from the domain ontology to concepts. For instance, a specific job post and a specific applicant can be part of the concepts in a domain ontology.</p><p>(2) Properties, which enable the definition of data quality, provenance, and data access regulations for the data in the ecosystem.</p><p>For instance, last updated and other non-domain properties (quality etc.). (3) Descriptions of the main characteristics of a data set. No specific formal language or vocabulary is required; in fact, a data set could be described using natural language. For instance: Data set D is a collection of CVs and cover letters. Mappings: correspondences among the different components of a data ecosystem. The mappings are as follows: Mappings between ontologies: they represent associations between the concepts in the different ontologies that compose the domain ontology of the ecosystem. 
For instance, there can be a mapping between the personnel ontology and the candidate-employees ontology. Mappings between data sets: they represent relations among the data sets of the ecosystem and the domain ontology.</p></div>
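The 4-tuple of [8] can be sketched as a plain data structure. This is a minimal illustration, assuming toy data-set names and a lambda as a stand-in for a real anonymization operator:

```python
from dataclasses import dataclass, field

# Minimal sketch of DE = <Data Sets, Data Operators, Meta-Data, Mappings>;
# all concrete names below are illustrative.
@dataclass
class DataEcosystem:
    data_sets: dict = field(default_factory=dict)   # name -> data in any format
    operators: dict = field(default_factory=dict)   # name -> callable over data sets
    meta_data: dict = field(default_factory=dict)   # descriptions, properties, ontology terms
    mappings: list = field(default_factory=list)    # (source, target, kind) correspondences

de = DataEcosystem()
de.data_sets["D"] = ["cv_001.pdf", "cv_002.pdf"]
de.meta_data["D"] = "Data set D is a collection of CVs and cover letters."
de.operators["anonymize"] = lambda items: [i.replace(".pdf", ".anon") for i in items]
de.mappings.append(("personnel_ontology", "candidate_ontology", "concept-mapping"))

print(de.operators["anonymize"](de.data_sets["D"]))  # ['cv_001.anon', 'cv_002.anon']
```

In a real deployment the meta-data would live in an ontology (e.g. OWL) and the mappings in a rule language, but the four roles stay the same.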
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">DATA ECOSYSTEM IN HR</head><p>The role of a data ecosystem (DE) is to provide an explicit description of the data and the applicable operations on them through metadata and mapping rules. Next we provide some examples that refer to the usage of the elements of the DE described in the previous section. The data ecosystem we describe next does not cover all the activities of HR; rather, it addresses some essential parts that refer to the recruitment and hiring of employees. Thus we will assume a scenario where there are applicants' CVs and job posts. The DE for this example is depicted in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>One of the data sources is the CVs of the applicants. Typically, they contain textual information, possibly with some keywords (e.g. education, past employment) which can be helpful as annotations. Thus a CV represents a piece of unstructured or partially structured information.</p><p>A Data Operator can implement an NLP process to extract structured information from the CV. Typically, named entity recognition (NER) and relation extraction (RE) will have to be performed, resulting in triplets comprising two entities and a relation. The named entities (NE) in CVs can be things like skills, past employment, educational achievements and demographic data. The relations connect the NE to the person in question, while being labeled with time annotations. This will form the job applicant's graph. The extracted entities will then be annotated with meta-data derived from a domain ontology, commonly described in OWL. The NER and RE processes can on occasion be of low precision; the Meta-data properties can represent the quality of the NLP process as a numerical score per NE or per relation.</p><p>To the best of our knowledge, there is no single ontology that is complete enough to annotate a CV for the requirements of an HR department. 
For instance, it may be necessary, apart from NAICS (the North American Industry Classification System, https://www.census.gov/naics/), to also use resumeRDF (http://rdfs.org/resume-rdf/) and the Human Resources Ontology (https://github.com/motapinto/cv-ontology/blob/main/cv-ontology.owl). This also results in the need to have mappings between the ontologies for the common concepts (i.e. classes) and for the common object properties. The mapping rules can be stated in RML (https://rml.io/specs/rml/). The second major source of information is the job post, which is typically in textual form, possibly split into sections, each with a meaningful keyword (like company culture, required skills etc.). This usually constitutes a partially structured piece of information. As with CVs, information has to be extracted in the form of triplets, resulting in the job posts graph. However, it may not be necessary to extract structure from all parts of the document. For instance, a company's culture could fall under the Meta-Data Descriptions.</p><p>Finally, the merging of the two graphs into the integrated knowledge graph can also be achieved with mapping rules. The mapping rules, as well as the ontology selection and its possible expansion, have to be designed in cooperation between a knowledge engineer and a representative of the HR department.</p><p>The issue of detecting possible bias in the data can be assisted through data transparency, especially at the stage of NE annotation. Thus CV attributes can be split into sensitive and non-sensitive ones, the former comprising name, gender, ethnic origin, age etc., whereas the latter would comprise entities like education or skills. The distinction between attributes can be represented, for instance, as classes by expanding one of the existing ontologies. 
Thus it will be clearer which attributes may be used by subsequent machine learning algorithms that perform job recommendations.</p><p>Similarly, a distinction can be made between soft and hard skills in job posts. Naturally, this will require Data operators to split the skills into two classes. This will facilitate an association of soft and hard skills with the level of seniority of the position, and with the applicants' gender, which can reveal subtle forms of bias.</p><p>Finally, business regulations and regulations derived from ethical data management can be set as constraints on the integrated knowledge graph and be expressed in the SHACL <ref type="foot" target="#foot_1">9</ref> language.</p><p>Typically the DE can be accessed via SPARQL endpoints. Normally, the end user has access via web services accessible through a dashboard. The web services can also allow for different user roles, thus implementing data access control.</p></div>
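The kind of constraint that would be expressed in SHACL over the integrated graph can be sketched in plain Python over a toy triple store. The triple vocabulary and predicate names below are illustrative assumptions, not taken from any of the ontologies mentioned above:

```python
# Toy knowledge graph as subject-predicate-object triples; the
# vocabulary (hasSkill, derivedFrom, ...) is purely illustrative.
TRIPLES = [
    ("applicant:42", "hasSkill", "SQL"),
    ("applicant:42", "hasGender", "F"),
    ("job:7", "recommendedFor", "applicant:42"),
    ("job:7", "derivedFrom", "hasSkill"),
]

SENSITIVE_PREDICATES = {"hasGender", "hasEthnicOrigin", "hasAge"}

def violations(triples):
    """Analogue of a SHACL constraint: flag recommendation edges
    whose derivation relies on a sensitive predicate."""
    return [(s, o) for s, p, o in triples
            if p == "derivedFrom" and o in SENSITIVE_PREDICATES]

print(violations(TRIPLES))  # [] -- this graph derives only from hasSkill
```

In the actual ecosystem the same check would run as SHACL shapes validated against the RDF graph, with the violation report fed back to the HR dashboard.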
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">CONCLUSIONS</head><p>In the current work we proposed a framework for responsible data management for a human resources department. The framework is based on the concept of a DE, which comprises data sets, data operators, meta-data and mappings. It can be implemented with semantic technologies (RDF Schema, OWL, RML rules, etc.). The implementation of a data ecosystem will require a substantial investment, both from the knowledge engineering and from the HR perspective. The benefits, however, can be significant, especially with respect to data transparency. Moreover, a DE can also facilitate the deployment of explainable machine learning algorithms.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The HR Data Ecosystem</figDesc><graphic coords="4,161.85,83.68,288.29,188.71" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.acm.org/code-of-ethics</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_1">https://www.w3.org/TR/shacl/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>The authors would like to acknowledge the support of the Deree -The American College of Greece in the current article.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">The Paradox of Automation as Anti-Bias Intervention</title>
		<author>
			<persName><forename type="first">Ifeoma</forename><surname>Ajunwa</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">41</biblScope>
			<pubPlace>Cardozo, L</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work</title>
		<author>
			<persName><forename type="first">Ifeoma</forename><surname>Ajunwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Greene</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Work and labor in the digital age</title>
				<imprint>
			<publisher>Emerald Publishing Limited</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements</title>
		<author>
			<persName><forename type="first">Lisa</forename><forename type="middle">Feldman</forename><surname>Barrett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralph</forename><surname>Adolphs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stacy</forename><surname>Marsella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aleix</forename><forename type="middle">M</forename><surname>Martinez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Seth</forename><forename type="middle">D</forename><surname>Pollak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological science in the public interest</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="68" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Help wanted: An examination of hiring algorithms, equity, and bias</title>
		<author>
			<persName><forename type="first">Miranda</forename><surname>Bogen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aaron</forename><surname>Rieke</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Gender shades: Intersectional accuracy disparities in commercial gender classification</title>
		<author>
			<persName><forename type="first">Joy</forename><surname>Buolamwini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Timnit</forename><surname>Gebru</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on fairness, accountability and transparency</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="77" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Balanced neighborhoods for multi-sided fairness in recommendation</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Burke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nasim</forename><surname>Sonboli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aldo</forename><surname>Ordonez-Gauger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on Fairness, Accountability and Transparency. PMLR</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="202" to="214" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Stereotypes of Norwegian social groups</title>
		<author>
			<persName><forename type="first">Hege</forename><forename type="middle">H</forename><surname>Bye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henrik</forename><surname>Herrebrøden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gunnhild</forename><forename type="middle">J</forename><surname>Hjetland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guro</forename><forename type="middle">Ø</forename><surname>Røyset</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Linda</forename><forename type="middle">L</forename><surname>Westby</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scandinavian Journal of Psychology</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="469" to="476" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">Cinzia</forename><surname>Cappiello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Avigdor</forename><surname>Gal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthias</forename><surname>Jarke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><surname>Rehof</surname></persName>
		</author>
		<title level="m">Data ecosystems: sovereign data exchange among organizations</title>
				<imprint>
			<date type="published" when="2020">2020. 19391</date>
		</imprint>
	</monogr>
	<note>Dagstuhl Seminar</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m">Dagstuhl Reports</title>
				<imprint>
			<publisher>Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik</publisher>
			<biblScope unit="volume">9</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The hidden biases in big data</title>
		<author>
			<persName><forename type="first">Kate</forename><surname>Crawford</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Harvard Business Review</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">4</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Fairness and discrimination in recommendation and retrieval</title>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">D</forename><surname>Ekstrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robin</forename><surname>Burke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fernando</forename><surname>Diaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th ACM Conference on Recommender Systems</title>
				<meeting>the 13th ACM Conference on Recommender Systems</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="576" to="577" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">Sandra</forename><surname>Geisler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maria-Esther</forename><surname>Vidal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cinzia</forename><surname>Cappiello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bernadette</forename><forename type="middle">Farias</forename><surname>Lóscio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Avigdor</forename><surname>Gal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthias</forename><surname>Jarke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maurizio</forename><surname>Lenzerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Missier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Boris</forename><surname>Otto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elda</forename><surname>Paja</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2105.09312</idno>
		<title level="m">Knowledge-driven Data Ecosystems Towards Data Transparency</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The misgendering machines: Trans/HCI implications of automatic gender recognition</title>
		<author>
			<persName><forename type="first">Os</forename><surname>Keyes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the ACM on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">CSCW</biblScope>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Designing against discrimination in online markets</title>
		<author>
			<persName><forename type="first">Karen</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Solon</forename><surname>Barocas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Berkeley Technology Law Journal</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1183" to="1238" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Ethical implications and accountability of algorithms</title>
		<author>
			<persName><forename type="first">Kirsten</forename><surname>Martin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Business Ethics</title>
		<imprint>
			<biblScope unit="volume">160</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="835" to="850" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">What is a data ecosystem?</title>
		<author>
			<persName><forename type="first">Marcelo</forename><forename type="middle">Iury S</forename><surname>Oliveira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bernadette</forename><forename type="middle">Farias</forename><surname>Lóscio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age</title>
				<meeting>the 19th Annual International Conference on Digital Government Research: Governance in the Data Age</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Go with your gut: Emotion and evaluation in job interviews</title>
		<author>
			<persName><forename type="first">Lauren</forename><forename type="middle">A</forename><surname>Rivera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American Journal of Sociology</title>
		<imprint>
			<biblScope unit="volume">120</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1339" to="1389" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Fides: Towards a platform for responsible data science</title>
		<author>
			<persName><forename type="first">Julia</forename><surname>Stoyanovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bill</forename><surname>Howe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Serge</forename><surname>Abiteboul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gerome</forename><surname>Miklau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Arnaud</forename><surname>Sahuguet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gerhard</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th International Conference on Scientific and Statistical Database Management</title>
				<meeting>the 29th International Conference on Scientific and Statistical Database Management</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Responsible data management</title>
		<author>
			<persName><forename type="first">Julia</forename><surname>Stoyanovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bill</forename><surname>Howe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Jagadish</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the VLDB Endowment</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="3474" to="3488" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Biased embeddings from wild data: Measuring, understanding and removing</title>
		<author>
			<persName><forename type="first">Adam</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Lansdall-Welfare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nello</forename><surname>Cristianini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Intelligent Data Analysis</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="328" to="339" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">Shari</forename><surname>Trewin</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1811.10670</idno>
		<title level="m">AI fairness for people with disabilities: Point of view</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
