<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Leveraging Ontologies to Document Bias in Data</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Mayra</forename><surname>Russo</surname></persName>
							<email>mrusso@l3s.de</email>
							<affiliation key="aff0">
								<orgName type="department">L3S Research Center</orgName>
								<orgName type="institution">Leibniz University of Hannover</orgName>
								<address>
									<settlement>Hannover</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maria-Esther</forename><surname>Vidal</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">L3S Research Center</orgName>
								<orgName type="institution">Leibniz University of Hannover</orgName>
								<address>
									<settlement>Hannover</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">TIB Leibniz Information Center for Science and Technology</orgName>
								<address>
									<settlement>Hannover</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Leveraging Ontologies to Document Bias in Data</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8EFEFF2F154E8BF9B05A75B06A8D5744</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Bias</term>
					<term>Ontology</term>
					<term>Machine Learning</term>
					<term>Trustworthy AI</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This underscores the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, and has prompted the emergence of documentation frameworks built on the idea that "any remedy for bias starts with awareness of its existence". However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between them. Following ontology engineering best practices, we re-use existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability between the actors concerned with their research, development, and regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology in bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The breakthroughs and benefits attributed to big data and, consequently, to machine learning (ML) -or AI -systems <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>, have also made prevalent the capacity of these systems to produce unexpected, biased, and, in some cases, undesirable output <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>. Seminal work on bias (i.e., prejudice for, or against, one person or group, especially in a way considered to be unfair) in the context of ML systems demonstrates how facial recognition tools and popular search engines can exacerbate demographic disparities, worsening the marginalization of minorities at the individual and group level <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Further, biases in news recommenders and social media feeds actively play a role in conditioning and manipulating people's behavior and in amplifying individual and public opinion polarization <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. In this context, the last few years have seen the consolidation of the Trustworthy AI framework, led in large part by regulatory bodies <ref type="bibr" target="#b9">[10]</ref>, with the objective of guiding commercial AI development to proactively account for ethical, legal, and technical dimensions <ref type="bibr" target="#b10">[11]</ref>. Furthermore, this framework is accompanied by the call to establish standards across the field in order to ensure AI systems are safe, secure, and fair upon deployment <ref type="bibr" target="#b10">[11]</ref>. In terms of AI bias, many efforts have been concentrated on devising methods that can improve its identification, understanding, measurement, and mitigation <ref type="bibr" target="#b11">[12]</ref>. 
For example, the special publication prepared by the National Institute of Standards and Technology (NIST) proposes a thorough, though not exhaustive, categorization of different types of bias in AI beyond common computational definitions (see Figure <ref type="figure">1</ref> for the core hierarchy) <ref type="bibr" target="#b12">[13]</ref>. In this same direction, some scholars advocate for practices that account for the characteristics of ML pipelines (i.e., datasets, ML algorithms, and the user interaction loop) <ref type="bibr" target="#b13">[14]</ref> to enable the actors concerned with their research, development, regulation, and use to inspect all the actions performed across the engineering process, with the objective of increasing the trust placed not only in the development processes, but also in the systems themselves <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>. In addition to human-readable (i.e., textual descriptions in a format that humans can read and understand) documentation frameworks for machine learning pipelines <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b19">20]</ref>, semantic data models (e.g., ontologies, knowledge graphs) can also play a crucial role in enhancing the accuracy and interpretability of ML systems <ref type="bibr" target="#b20">[21]</ref>, as well as in performing "bias assessment, representation, and mitigation" tasks <ref type="bibr" target="#b21">[22]</ref>, in a way that is also machine-readable (i.e., makes available a fine-grained description of data in a format manageable by computers). This characteristic improves the findability, accessibility, interoperability, and reusability (FAIR) of data-centric resources on the Web <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24]</ref>. 
Ontologies to model existing ML fairness metrics <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref>, as well as semantic specifications to catalog risks in terms of the compliance and conformance of AI systems under the EU's AI Act<ref type="foot" target="#foot_0">1</ref>  <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b27">28]</ref>, have been proposed; however, a resource that can formally describe ML pipelines and provide a vocabulary to characterize them in terms of measured biases is still missing.</p><p>Proposed Solution We propose an ontology-driven approach to describe and document biases detected across machine learning pipelines. Here, we refer to documentation as the process of generating metadata represented in formats understandable by humans and also by machines <ref type="bibr" target="#b28">[29]</ref>, where formal data models like ontologies and controlled vocabularies provide standardized concepts for expressing this metadata. Our ontology, Doc-BiasO, is a resource developed with the objectives of introducing an integrated vocabulary system of AI-related biases as defined in the literature and their measures; representing their relationships with other relevant terminology, i.e., datasets, ML systems, fairness, harm, risk; and semantically annotating ML pipelines based on bias measure values. The version presented here has 389 classes, 72 object properties, and 28 data properties.</p><p>Contributions: Concisely, our contributions are the following:</p><p>1. Doc-BiasO, an integrated vocabulary system of ML-related biases; 2. an ontology-based approach to document bias in ML pipelines; 3. a technical evaluation of Doc-BiasO.</p><p>The remainder of this paper is structured as follows: Section 2 introduces relevant Semantic Web concepts and presents a review of related literature. Section 3 details the design of Doc-BiasO. The results of the evaluation are reported in Section 4. 
Finally, Section 5 outlines our conclusions and future lines of work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Figure <ref type="figure">1</ref>: Types of Bias. Core categories of bias in relation to AI systems as per the NIST <ref type="bibr" target="#b12">[13]</ref>. At the center, the Bias concept is depicted as the most abstract, highest category; it branches into three main sub-categories (Systemic Bias, Human Bias, and Statistical Bias), which in turn branch into further specialized sub-categories (e.g., Processing and Validation, Selection and Sampling, Use and Interpretation, Group Bias, Individual Bias).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background and Related Work</head><p>Ontologies and Machine Learning Gruber <ref type="bibr" target="#b29">[30]</ref> defines an ontology as a formal, explicit specification of a shared conceptualization, characterized by the high semantic expressiveness required to capture complex domains. Ontologies include abstract concepts, or classes, represented as nodes, and predicates, represented as edges, which capture the relations between these classes; the meaning of the predicates is represented using rules. Ontologies are specified using knowledge representation models, making the expressiveness of the ontology dependent on the expressive power of the representation model. The Resource Description Framework (RDF)<ref type="foot" target="#foot_1">2</ref> enables the description of entities in terms of classes and properties, while subsumption relations between classes and properties can be modelled with the RDF Schema (RDFS). <ref type="foot" target="#foot_2">3</ref> More expressive formalisms like the Web Ontology Language (OWL) <ref type="foot" target="#foot_3">4</ref> make available a larger number of operators, which enable the representation not only of classes, properties, and subsumption relations, but also of class and property constraints, general equivalence relations, and cardinality restrictions. Several examples of the usefulness of context-aware ontologies for bias awareness and mitigation in ML systems are explored in the work presented in <ref type="bibr" target="#b21">[22]</ref>.</p><p>In the context of bias modelling, the Bias Ontology Design Pattern (BODP) <ref type="bibr" target="#b30">[31]</ref> is one of the first works to propose a formalization of the bias concept. Its objective is to capture a high-level representation of bias as an abstract term, not necessarily in the context of ML systems. 
We re-use part of BODP as a building block, repurposing it for the scope and intended use of Doc-BiasO, which is to document bias in AI pipelines. Similar to our work, the fairness metrics ontology (FMO) <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref> models fairness metrics (fmo:fairness_metric) from the literature and relates them to their use-case. The conceptualizations of bias and fairness in relation to ML systems are often intertwined; however, distinctions between both concepts need to be made explicit, as they are not always used in conjunction, nor to study the same phenomena <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b32">33]</ref>. Fairness, in relation to ML (fair-ML), takes the form of algorithmic interventions that incorporate mathematical formalizations of moral or legal notions for the fair treatment of different populations into ML pipelines. These interventions aim to prompt ML models to satisfy statistical non-discrimination criteria for a given subpopulation <ref type="bibr" target="#b3">[4]</ref>. In our case, we focus on modelling biases in data identified in the literature and the existing measures defined to detect them. Specifically, we propose a descriptive bias vocabulary that can be used and incorporated into varying frameworks as needed and that can be extended to further semi-automate documentation tasks. Concepts and relations pertaining to bias are not made explicit in the current version of FMO; however, we consider both ontologies to be complementary, and thus re-use FMO to foster the development of a comprehensive vocabulary that covers terminology pertaining to the responsible development of ML systems. We follow a similar approach with the AI Risk Ontology (AIRO) <ref type="bibr" target="#b26">[27]</ref> and, by effect, the Vocabulary of AI Risks (VAIR) <ref type="bibr" target="#b27">[28]</ref>. 
In this case, risk in relation to ML systems, under the broader label of AI, is defined in terms of systems that are likely to cause serious harms to the health, safety, or fundamental rights of individuals as per European Union (EU) law. These works are ontology-driven approaches to account for the compliance and conformance of AI systems with the EU AI Act's specifications. <ref type="foot" target="#foot_4">5</ref> Specifically, AIRO is a modular ontology created to identify whether an AI system is classified as high-risk, whilst VAIR provides semantic specifications for cataloging AI risks, re-using core concepts in AIRO (e.g., airo#Risk, airo#Consequence). Lastly, <ref type="bibr" target="#b33">[34]</ref> proposes a descriptive framework (ACROCPoLis) to describe ML systems and their societal impact by making explicit the interrelations and diverging perspectives of relevant stakeholders (individuals, groups of people, institutions). While this is beyond the scope of our work, should the conceptual model be formalized and made publicly available, a study on re-using and extending the Doc-BiasO ontology would be undertaken in a future iteration. The Semantic Web community has also proposed other technical solutions to improve the interpretability and transparency of machine learning pipelines. The provenance ontology, PROV-O <ref type="bibr" target="#b34">[35]</ref>, enables the representation of provenance information generated by different entities, and can be easily applied to multiple contexts (e.g., training datasets). Standard schemas for data mining and machine learning algorithms, such as the Machine Learning Schema (MLS) ontology <ref type="bibr" target="#b35">[36]</ref> and the Description of a Model (DOAM) <ref type="foot" target="#foot_5">6</ref> ontology, provide fine-grained vocabularies to represent ML model characteristics. Moreover, the issue of reproducibility in ML has also been addressed <ref type="bibr" target="#b36">[37]</ref>. Correspondingly, the Data Catalog Vocabulary (DCAT) <ref type="bibr" target="#b37">[38]</ref> enables the fine-grained description of datasets and data services in a catalog using a controlled and rich vocabulary. Adhering to ontology engineering best practices <ref type="bibr" target="#b29">[30]</ref>, all these ontologies and vocabularies have been re-used in the composition of Doc-BiasO.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1: Core concepts defined in Doc-BiasO.</head><table>
<row><cell>Concept</cell><cell>Definition</cell></row>
<row><cell>Bias</cell><cell>A concentration on, or interest in, one particular area or subject. A more value-laden definition conceptualizes bias as prejudice for, or against, one person or group, especially in a way considered to be unfair <ref type="bibr" target="#b44">[45]</ref>.</cell></row>
<row><cell>Application</cell><cell>The use, purpose, or application of a machine learning system. Examples include recommenders, speech recognition, etc.</cell></row>
<row><cell>ML Task</cell><cell>Task, or ML Problem, is the formal description of a process that needs to be completed (e.g., based on inputs and outputs) <ref type="bibr" target="#b35">[36]</ref>.</cell></row>
<row><cell>Dataset</cell><cell>A collection of data, published or curated by a single source, and available for access or download in one or more representations <ref type="bibr" target="#b35">[36]</ref>.</cell></row>
<row><cell>Harm</cell><cell>Adverse lived experiences resulting from an ML system's deployment and operation in the world <ref type="bibr" target="#b45">[46,</ref><ref type="bibr" target="#b46">47]</ref>.</cell></row>
<row><cell>Bias Measure</cell><cell>A quantitative metric or indicator that assesses the presence and extent of bias in a particular context, via predefined thresholds <ref type="bibr" target="#b47">[48]</ref>.</cell></row>
</table></figure>
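To make the expressiveness levels discussed in Section 2 concrete, the following Turtle sketch contrasts what RDFS alone can state with what requires OWL-level operators. The `ex:` namespace and all class and property names here are illustrative assumptions, not Doc-BiasO's published IRIs:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/sketch#> .   # hypothetical namespace

# Subsumption is expressible with RDFS alone:
ex:RepresentationBias rdfs:subClassOf ex:Bias .

# Equivalence relations and cardinality restrictions require OWL operators:
ex:Bias owl:equivalentClass ex:SystematicDistortion .
ex:BiasEvaluation rdfs:subClassOf
    [ a owl:Restriction ;
      owl:onProperty ex:usesMeasure ;
      owl:minCardinality 1 ] .
```

The last axiom, for instance, states that every evaluation must use at least one measure, a constraint an RDFS-only vocabulary cannot express.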
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Documentation Frameworks and Machine Learning</head><p>The opaqueness of the inner processes of ML systems can hinder the understanding of how they work. The works in <ref type="bibr" target="#b18">[19]</ref>, Gebru et al. <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b19">[20]</ref>, and Hupont et al. <ref type="bibr" target="#b38">[39]</ref> thus advocate for the production of value-oriented, human-readable documentation for datasets (Data Statements for Natural Language Processing, Datasheets for Datasets) and ML models (Model Cards for Model Reporting, Use Case Cards). Doc-BiasO aims to follow their stride by combining the different components of the AI pipeline (input, model, output data) to produce comprehensive descriptions of data-driven pipelines in human- and machine-readable format. Among other documentation approaches, Sun et al. <ref type="bibr" target="#b39">[40]</ref> introduce a tool to assess the fitness for use of datasets. This automated data exploration tool delimits its focus to three dimensions: representativeness, bias, and correctness. In a similar line, <ref type="bibr" target="#b40">[41]</ref> introduces a bias visualization tool for computer vision datasets. This exploration tool narrows its assessment down to three sets of metrics: object-based, gender-based, and geography-based dimensions. Further, interactive tools developed by industry (e.g., <ref type="bibr" target="#b41">[42,</ref><ref type="bibr" target="#b42">43,</ref><ref type="bibr" target="#b43">44]</ref>) enable dataset exploration, visualization, and comparison. The extensible and modular design of Doc-BiasO allows users to describe and document their data-driven pipelines, and to seamlessly incorporate additional descriptive dimensions and components as needed. 
Further, the underlying knowledge-driven framework prompts the integration and fine-grained description of multiple data sources, and leverages reasoning capabilities for enhanced data analytics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Design and Implementation</head><p>In this section, we describe the design stages of Doc-BiasO. We also describe its implementation and include an example of an instance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Scoping out the Coverage of Doc-BiasO</head><p>To determine the scope of our ontology, we perform a domain and content analysis following a hybrid strategy. On the one hand, through our own position within a research project on bias in relation to the development and regulation of AI <ref type="foot" target="#foot_6">7</ref> , we have held fruitful discussions with experts researching different dimensions of bias from a multidisciplinary and critical point of view, i.e., <ref type="bibr" target="#b48">[49,</ref><ref type="bibr" target="#b11">12]</ref>. Further, these discussions have helped identify what concepts make up our universe of discourse, for instance, bias, ML model, dataset, task, application, fairness, harms, risks; as well as how these concepts interact or relate to each other. In Table <ref type="table" target="#tab_0">1</ref>, we have summarized the core concepts defined in our ontology. Each of these concepts represents the top-most abstract concept in a hierarchy of terms, with less abstract or more concrete concepts being defined as the ontology grows to give broader coverage. For example, Bias is the most abstract representation, while Representation Bias is a more concrete type of bias.</p><p>The exchanges with researchers have also helped deepen our understanding and characterization of bias in data from a critical stance (e.g., there is never just one bias, bias detection is contextual, bias detection can depend on data modality, biases cannot be eradicated <ref type="bibr" target="#b11">[12]</ref>) and to identify challenges not only in modelling bias, but also in relation to the underlying documentation process, primarily regarding how it should not be fully automated. 
In developing a tool like our ontology, it is important to aim for a careful balance: an effective, useful, and comprehensive vocabulary that supports streamlining documentation tasks, while avoiding dissuading practitioners from critical thinking when engaging in both documentation and bias analysis. The aim of both of these practices is to mitigate negative consequences arising from the deployment of ML systems. However, it is always possible that enforcing standardization or automation on practitioners unintentionally creates new gateways that worsen the problem; influencing factors include a lack of experience, domain knowledge, or the right incentives <ref type="bibr" target="#b45">[46,</ref><ref type="bibr" target="#b49">50,</ref><ref type="bibr" target="#b50">51]</ref>. Ultimately, this rapport informs our design choices across all iterations of ontology engineering, makes us aware of the limitations of our technical tool, and creates opportunities for refinement in later versions.</p><p>On the other hand, the scope of our ontology is also informed by the growing body of literature on our topic of interest. In this case, we particularly rely on official reports, such as the NIST Special Publication 1270 <ref type="bibr" target="#b12">[13]</ref>, and periodically identify relevant work in order to gather background information for a rich vocabulary of biases (e.g., <ref type="bibr" target="#b51">[52,</ref><ref type="bibr" target="#b32">33,</ref><ref type="bibr" target="#b52">53,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b53">54]</ref>), while also considering emerging work on this topic published at venues such as ACM FAccT<ref type="foot" target="#foot_7">8</ref> , AAAI/ACM AIES<ref type="foot" target="#foot_8">9</ref> , and ACM EAAMO. 
<ref type="foot" target="#foot_9">10</ref> Concisely, we pay attention to discerning bias and its detection measures from fairness notions and their measures, combining keywords such as "machine learning", "artificial intelligence", "datasets", "bias", "metrics", and "bias mitigation".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Doc-BiasO Design</head><p>To design and model our ontology, we adhere to ontology engineering best practices <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b54">55]</ref>. As such, after the scope is determined and competency questions are defined, re-usable ontologies are identified following a layered approach (i.e., a foundational layer for general metadata and provenance, a domain-dependent layer to cover standards for the relevant area of use, and a domain-specific layer of ontologies tailored to our problem of interest) <ref type="bibr" target="#b54">[55]</ref>.</p><p>We first specify the competency questions that emerged during the analysis phase and that represent the intended use of Doc-BiasO: a tool that can be integrated into AI documentation frameworks and that offers the vocabulary required to characterize these pipelines; ideally, a resource that informs AI practitioners and researchers on the ways in which bias interacts with other components in the AI pipeline, and that serves as a controlled repository when they develop a new measure and wish to survey those that already exist.</p><p>We then lay the foundation of our ontology by re-using ontologies such as the SKOS data model <ref type="bibr" target="#b55">[56]</ref>, the PROV data model (PROV-O) <ref type="bibr" target="#b34">[35]</ref>, and the Friend of a Friend (FOAF) vocabulary <ref type="bibr" target="#b56">[57]</ref>. The next layer incorporates standard schemas for data mining and machine learning algorithms, such as the Machine Learning Schema (MLS) ontology <ref type="bibr" target="#b35">[36]</ref>. This schema provides fine-grained descriptions to represent the characteristics and intricacies of ML models. Similarly, the Data Catalog Vocabulary (DCAT) <ref type="bibr" target="#b37">[38]</ref> enables the fine-grained description of datasets and data services in a catalog using a controlled and rich vocabulary. 
By extension, the Data Quality Vocabulary (DQV) <ref type="bibr" target="#b57">[58]</ref> provides a framework and vocabulary to assess the quality of a dataset, offering an extensive catalog of quality metrics. For our third layer, we look at previous work on bias, specifically the BODP <ref type="bibr" target="#b30">[31]</ref> and the Artificial Intelligence Ontology (AIO). <ref type="foot" target="#foot_10">11</ref> The class AIO:Bias is our starting point, which we organize in hierarchies via rdfs:subClassOf, as per the AIO modelling, in order to represent different kinds of bias identified in the literature, e.g., representation bias, popularity bias, demography bias. We build on the pattern and the ontology; however, they do not suffice for our modelling needs. For this reason, all missing concepts are incorporated manually, as we set out to capture and explicitly document otherwise unstated assumptions about bias in relation to ML systems <ref type="bibr" target="#b58">[59]</ref>. Critical data studies <ref type="bibr" target="#b46">[47,</ref><ref type="bibr" target="#b58">59]</ref> maintain that for bias detection tasks to be meaningful, practitioners must reflect on the possible harms that can emerge upon the deployment of an ML system in dynamic societal and cultural contexts. Here, we thus emphasize both the importance of assisting practitioners via the development of tools that streamline tasks that may be perceived as a burden <ref type="bibr" target="#b49">[50]</ref>, and the need to avoid dissuading them from reflecting on the harms that could emerge from deploying these systems. For that reason, in our modelling we align scoped biases with harms, with the objective of making explicit the articulation of otherwise alleged, unstated negative consequences attributed to ML systems. 
However, our expectation and recommendation is that users will enrich the proposed vocabulary with the results derived from their own explorations, in a similar spirit to AI incident databases. Furthermore, bias is not singular and is highly context-dependent, meaning that most biases are studied and defined in association with a particular ML application. To represent both of these concepts, we model bias:Harm and bias:Application. The central concept in our ontology is bias:BiasMeasure. This class represents a measure defined in some foaf:Document, evaluated on a dcat:Dataset (with certain characteristics), and for a particular mls:MLTask. bias:BiasEvaluation is the class that represents the n-ary relationship between entities schematized in the extended entity relationship model completed at the start of the design phase. Figure <ref type="figure" target="#fig_0">2</ref> illustrates a conceptual overview of the core classes and relationships of Doc-BiasO.</p></div>
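The n-ary bias:BiasEvaluation pattern just described can be sketched in Turtle as follows. The class names come from the text; the namespaces and property IRIs (ex:usesMeasure, ex:onDataset, ex:forTask, ex:definedIn) are illustrative assumptions, not the ontology's actual identifiers:

```turtle
@prefix bias: <http://example.org/doc-biaso#> .   # placeholder, not the published IRI
@prefix ex:   <http://example.org/instances#> .

ex:eval1 a bias:BiasEvaluation ;    # n-ary relationship node
    ex:usesMeasure ex:measure1 ;    # a bias:BiasMeasure
    ex:onDataset   ex:dataset1 ;    # a dcat:Dataset
    ex:forTask     ex:task1 .       # an mls:MLTask

ex:measure1 a bias:BiasMeasure ;
    ex:definedIn ex:paper1 .        # a foaf:Document
```

Reifying the evaluation as its own node is the standard way to express a relationship that links more than two entities in RDF, which only supports binary predicates.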
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Towards a Comprehensive Vocabulary for Trustworthy AI</head><p>The Trustworthy AI framework requires a comprehensive formal vocabulary that unifies approaches and contemplates the terminology and concepts of ML pipelines, and of AI in broader terms, holistically. This type of resource can contribute to the generation of metadata that promotes the reproducibility and traceability of research results <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24]</ref>, a known issue in ML research and development <ref type="bibr" target="#b59">[60,</ref><ref type="bibr" target="#b36">37]</ref>. Moreover, it can help achieve a certain degree of standardization for the area. Motivated by this, we perform an analysis of the FMO <ref type="bibr" target="#b24">[25]</ref> and VAIR <ref type="bibr" target="#b27">[28]</ref> ontologies to determine their characteristics and how they fit into our model. We also do this with the aim of achieving a good balance between ontology re-use and the downstream overhead derived from doing so <ref type="bibr" target="#b54">[55]</ref>. Key takeaways are:</p><p>1. FMO complements Doc-BiasO by giving coverage to existing fairness metrics used to evaluate ML systems, specifically metrics pertaining to the machine learning problems of classification and regression; 2. VAIR captures a wider scope of AI system deployment to instill accountability in an AI provider (i.e., a party that places the system on the market) and thus captures specifications of risky applications of AI from a regulatory point of view; 3. both ontologies represent bias, however, with differing modelling objectives. FMO organizes fmo#Bias in a hierarchy with seven subclasses, two of these are used in relation to</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>To avoid constraining our modelling, we opt not to import either ontology in its entirety. When needed, we implement OWL axioms to assert class equivalence, i.e., owl:equivalentClass. Otherwise, we reference external concepts using annotation properties.</p></div>
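The two selective re-use strategies described for FMO and VAIR could be expressed as follows. Apart from fmo:fairness_metric, which is named earlier in the paper, every IRI in this sketch (both namespaces and all other class names) is a placeholder assumption:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix bias: <http://example.org/doc-biaso#> .   # placeholder namespace
@prefix fmo:  <http://example.org/fmo#> .         # placeholder namespace

# Strategy 1: a selective OWL equivalence axiom instead of a full import
bias:FairnessMeasure owl:equivalentClass fmo:fairness_metric .

# Strategy 2: a lightweight annotation pointing at the external concept
bias:PopularityBias rdfs:seeAlso fmo:recommender_fairness_metric .
```

An equivalence axiom lets a reasoner treat the two classes interchangeably, whereas an annotation merely records the link for human readers without any inferential consequences.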
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Doc-BiasO Specifications</head><p>Doc-BiasO Axiomatization The conceptualization of the Doc-BiasO ontology is specified using OWL logical axioms, given that OWL is formally grounded in Description Logics. By using OWL to formalize our ontology, we enable consistency checks and logical inferences on a resulting RDF knowledge graph. Further details can be found in the ontology documentation. <ref type="foot" target="#foot_11">12</ref> Instantiating Doc-BiasO To showcase an instantiation of Doc-BiasO, we look at an example of bias detection in recommender systems, commonly implemented in online social networks. The class Bias is instantiated as Popularity Bias. This bias is Associated With an instance of the class Application, Recommender System, and has a Bias Measure, "Gini coefficient of the in-degree distribution". In this example, Popularity Bias is Aligned With an instance of the class Harm, Erasure. We illustrate this in Figure <ref type="figure" target="#fig_1">3</ref>.</p></div>
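The instantiation described above can be rendered as Turtle instance data. This is a sketch under stated assumptions: the instance IRIs and the property IRIs bias:associatedWith and bias:alignedWith are illustrative renderings of the relations named in the text; only bias:measures and skos:definition appear verbatim in the paper's SPARQL listings.

```turtle
@prefix bias: <https://bias-project.x/bias/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# "Popularity Bias" as in Figure 3; property IRIs other than
# bias:measures and skos:definition are illustrative.
bias:PopularityBias a bias:Bias ;
    skos:definition "When collaborative filtering recommenders emphasize popular items (those with more ratings) over other \"long-tail\", less popular ones that may only be popular among small groups of users."@en ;
    bias:associatedWith bias:RecommenderSystem ;  # instance of Application
    bias:alignedWith bias:Erasure .               # instance of Harm

bias:GiniCoefficientOfInDegreeDistribution a bias:BiasMeasure ;
    bias:measures bias:PopularityBias .
```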
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Competency Questions</head><p>The domain analysis and scope definition of Doc-BiasO, described in Section 3.1, derived a set of competency questions that also conveyed the requirements guiding the engineering of our ontology. As part of the process, we tested and refined the Doc-BiasO ontology by formalizing the competency questions, originally expressed in natural language, as SPARQL queries. <ref type="foot" target="#foot_12">13</ref> The queries were tested to make sure they produced the expected results. The set of queries can be accessed through our GitHub. 12 To illustrate their adequacy, we continue with the example introduced earlier and start by posing Q1 "Given a particular bias, what is its definition?"; our example uses Popularity Bias. The query result is: "When collaborative filtering recommenders emphasize popular items (those with more ratings) over other "long-tail", less popular ones that may only be popular among small groups of users. "@en This expected result is expressed as an rdfs:Literal in English. We follow this question by posing Q4.1 "How many measures have been documented for it?". Executing the corresponding query, specified in Listing 1, shows that for Popularity Bias we have 3 documented measures. We choose the measure Gini coefficient of the in-degree distribution to learn more about it. We then execute the query corresponding to Q6 "What is its formalization?". This SPARQL query is specified in Listing 2; its execution projects the definition of the chosen measure and its formalization in natural language.</p><p>As part of the evaluation process, we also report on the quality of Doc-BiasO. Table <ref type="table" target="#tab_3">2</ref> summarizes the results obtained according to three indicators defined in <ref type="bibr" target="#b60">[61]</ref>.</p></div>
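By analogy with Listings 1 and 2, the query behind Q1 can be sketched as follows; this is our reconstruction, not the exact query from the repository, and it assumes bias definitions are stored as skos:definition literals, as Listing 2 does for measures.

```sparql
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX bias: <https://bias-project.x/bias/>

SELECT DISTINCT ?bias_1 ?definition_1
WHERE {
  ?bias_1 rdfs:subClassOf bias:Bias ;
          skos:definition ?definition_1 .
  FILTER ( REGEX(str(?bias_1), "Popularity", 'i') )
}
```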
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Automatic Ontology Evaluation</head><p>This version of Doc-BiasO has also been validated with online tools to verify its consistency and syntactic validity, as well as to check for modelling anomalies or errors. First, we checked that our ontology is syntactically correct using the W3C RDF validation service. <ref type="foot" target="#foot_13">14</ref> The results indicated a successful validation of our RDF document. Second, we checked for logical consistency by running the DL reasoning engine Pellet (v.2.2.0) as a plug-in for the Protégé open-source platform (v.5.6.1). <ref type="foot" target="#foot_14">15</ref> We chose this engine as it is a complete reasoner. The results determined that Doc-BiasO is logically coherent and consistent. Finally, we scanned our ontology with the "OOPS! Ontology Pitfall Scanner" <ref type="bibr" target="#b61">[62]</ref> to automatically rule out modelling pitfalls; the evaluation results were also positive, as the tool detected no bad practices.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Work</head><p>In this work, we presented Doc-BiasO, an ontology for bias measures found in the literature that can support the documentation of bias in machine learning pipelines. Our objective is to contribute to improving the interpretation of these pipelines in terms of the biases captured and the derived harms attributed to ML systems. Further, we make a call for a unified controlled vocabulary for the Trustworthy AI framework and assess existing relevant work. We technically evaluated Doc-BiasO and showcased an example of its instantiation. Notwithstanding, our work is not without limitations. First, research on bias in ML, and by extension AI, is a fast-moving field, so providing adequate and updated coverage with our tool is a challenge. Second, bias evaluation is a highly complex and context-dependent task. This means that our modelling cannot account for all potential biases and that, in general, bias analysis cannot be fully automated, requiring a human-in-the-loop. Third, our resources are yet to be evaluated by AI practitioners outside a research environment. Nevertheless, these limitations are an opportunity for future work. In particular, we intend to add and expand on aspects left unmodeled in this version, and we will liaise with AI practitioners to evaluate the suitability of our tool in real-world scenarios. We will also continue the development of a controlled vocabulary for Trustworthy AI, as this resource can foster effective communication between the different actors involved across the AI pipeline.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Conceptualization of the Doc-BiasO Ontology. Core concepts in the ontology are represented as classes, in color-coded boxes, to account for originating vocabularies.
Object properties are drawn as directed arrows between classes. Purple boxes indicate relevant and prominently re-used vocabularies implemented in the representation of the universe of discourse.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Conceptualization of an instance of Doc-BiasO. Instances of the Doc-BiasO ontology are represented with round-edge boxes and the color green. "Popularity Bias" is an instance of bias:Bias. Related classes are also exemplified.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 Core Concepts in Doc-BiasO.</head><label>1</label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt; PREFIX owl: &lt;http://www.w3.org/2002/07/owl#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX bias: &lt;https://bias-project.x/bias/&gt;</figDesc><table><row><cell>SELECT DISTINCT</cell></row><row><cell>?bias_1 (COUNT(DISTINCT ?biasMeasure_1) AS</cell></row><row><cell>?number_of_measures)</cell></row><row><cell>WHERE { ?bias_1 rdfs:subClassOf bias:Bias .</cell></row><row><cell>?biasMeasure_1 bias:measures ?bias_1}</cell></row><row><cell>GROUP BY ?bias_1</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>SPARQL Query for Competence Question Q6</head><label></label><figDesc>PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt; PREFIX owl: &lt;http://www.w3.org/2002/07/owl#&gt; PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; PREFIX bias: &lt;https://bias-project.x/bias/&gt;</figDesc><table><row><cell>SELECT DISTINCT</cell></row><row><cell>?biasMeasure_1 ?definition_1 ?formalization_1</cell></row><row><cell>WHERE {</cell></row><row><cell>?biasMeasure_1 rdfs:subClassOf bias:BiasMeasure ;</cell></row><row><cell>skos:definition ?definition_1 ;</cell></row><row><cell>bias:formalization ?formal_1</cell></row><row><cell>FILTER ( ( REGEX(str(?biasMeasure_1), "Gini", 'i')))}</cell></row><row><cell>Listing 2:</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 Quality Indicators for Doc-BiasO.</head><label>2</label><figDesc></figDesc><table><row><cell>Indicator</cell><cell>Results</cell></row><row><cell>Completeness</cell><cell></cell></row><row><cell>Bias</cell><cell>All 51 subclasses have verifiable definitions based on the NIST report, 59/51 = 115%.</cell></row><row><cell>Bias Measures</cell><cell>8 subclasses with verifiable definitions based on ongoing literature review, 24 instances based on 3 case studies.</cell></row><row><cell>Interoperability</cell><cell></cell></row><row><cell>Using external vocabulary</cell><cell>316/389 = 81%</cell></row><row><cell>Using proprietary vocabulary</cell><cell>73/389 = 19%</cell></row><row><cell>Accessibility</cell><cell>http://ontology.tib.eu/DocBIASO/visualization</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Annex III, European Council position</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://www.w3.org/RDF/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://www.w3.org/TR/rdf12-schema/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://www.w3.org/OWL/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">Annex III, European Council position</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">https://www.openriskmanual.org/ns/doam/index-en.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">https://nobias-project.eu/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_7">https://facctconference.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_8">https://www.aies-conference.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_9">https://eaamo.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_10">https://bioportal.bioontology.org/ontologies/AIO?p=summary</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_11">https://github.com/SDM-TIB/Doc-BIASO</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="13" xml:id="foot_12">https://www.w3.org/TR/sparql11-query/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="14" xml:id="foot_13">https://www.w3.org/RDF/Validator/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="15" xml:id="foot_14">https://protege.stanford.edu/software.php</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We thank Guillermo Climent-Gargallo, Sammy Sawischa and Yukti Sharma for their support during this research. Mayra Russo is supported by EU-Horizon 2020 research and innovation programme under the MCSA-grant agreement No. 860630, project: NoBIAS. Maria-Esther Vidal is partially supported by Leibniz Association, program "Leibniz Best Minds: Programme for Women Professors", project TrustKG-Transforming Data in Trustable Insights; Grant P99/2020.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A framework for understanding sources of harm throughout the machine learning life cycle</title>
		<author>
			<persName><forename type="first">H</forename><surname>Suresh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guttag</surname></persName>
		</author>
		<idno type="DOI">10.1145/3465416.3483305</idno>
		<ptr target="https://doi.org/10.1145/3465416.3483305" />
	</analytic>
	<monogr>
		<title level="m">Equity and Access in Algorithms, Mechanisms, and Optimization</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">21</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The elusive promise of ai: A second look</title>
		<author>
			<persName><forename type="first">J</forename><surname>Riley</surname></persName>
		</author>
		<idno type="DOI">10.1145/3458742</idno>
		<ptr target="https://doi.org/10.1145/3458742" />
	</analytic>
	<monogr>
		<title level="j">Ubiquity</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Big data or right data?</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Baeza-Yates</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:12577033" />
	</analytic>
	<monogr>
		<title level="m">Alberto Mendelzon Workshop on Foundations of Data Management</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
		<ptr target="http://www.fairmlbook.org" />
		<title level="m">Fairness and Machine Learning, fairmlbook</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Big data&apos;s disparate impact</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Selbst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">California Law Review</title>
		<imprint>
			<biblScope unit="volume">104</biblScope>
			<biblScope unit="page">671</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Buolamwini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<title level="m">Gender shades: Intersectional accuracy disparities in commercial gender classification</title>
				<imprint>
			<publisher>FAT</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Algorithms of oppression: How search engines reinforce racism</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Noble</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Bias on the web and beyond: an accessibility point of view</title>
		<author>
			<persName><forename type="first">R</forename><surname>Baeza-Yates</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th International Web for All Conference</title>
				<meeting>the 17th International Web for All Conference</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Biases on social media data (keynote extended abstract)</title>
		<author>
			<persName><forename type="first">R</forename><surname>Baeza-Yates</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Companion Proceedings of the Web Conference</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">2020</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">How empty is trustworthy ai? a discourse analysis of the ethics guidelines of trustworthy ai</title>
		<author>
			<persName><forename type="first">E</forename><surname>Stamboliev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Christiaens</surname></persName>
		</author>
		<idno type="DOI">10.1080/19460171.2024.2315431</idno>
		<ptr target="https://doi.org/10.1080/19460171.2024.2315431" />
	</analytic>
	<monogr>
		<title level="j">Critical Policy Studies</title>
		<imprint>
			<biblScope unit="volume">0</biblScope>
			<biblScope unit="page" from="1" to="18" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Ethics guidelines for trustworthy ai</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">H L E</forename><surname>Group</surname></persName>
		</author>
		<ptr target="https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Policy advice and best practices on bias and fairness in ai</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Alvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Colmenarejo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Elobaid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fabbrizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ferrara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ghodsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mougan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Reyero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Scott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>State</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10676-024-09746-w</idno>
		<ptr target="https://doi.org/10.1007/s10676-024-09746-w" />
	</analytic>
	<monogr>
		<title level="j">Ethics and Information Technology</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Towards a standard for identifying and managing bias in artificial intelligence</title>
		<author>
			<persName><forename type="first">R</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vassilev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">K</forename><surname>Greene</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Perine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Burt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hall</surname></persName>
		</author>
		<idno type="DOI">10.6028/NIST.SP.1270</idno>
		<ptr target="https://doi.org/10.6028/NIST.SP.1270" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A survey on bias and fairness in machine learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehrabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morstatter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Galstyan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys (CSUR)</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Datasheets for datasets</title>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Morgenstern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vecchione</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Vaughan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">D</forename><surname>Iii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Crawford</surname></persName>
		</author>
		<idno type="DOI">10.1145/3458723</idno>
		<ptr target="https://doi.org/10.1145/3458723" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="page" from="86" to="92" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">About ml: Annotation and benchmarking on understanding and transparency of machine learning lifecycles</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1912.06166</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Responsible data management</title>
		<author>
			<persName><forename type="first">J</forename><surname>Stoyanovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Abiteboul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Howe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Jagadish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schelter</surname></persName>
		</author>
		<idno type="DOI">10.1145/3488717</idno>
		<ptr target="https://doi.org/10.1145/3488717" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">65</biblScope>
			<biblScope unit="page" from="64" to="74" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Closing the ai accountability gap: Defining an end-to-end framework for internal algorithmic auditing</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Smart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">N</forename><surname>White</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hutchinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Smith-Loud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Theron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barnes</surname></persName>
		</author>
		<idno type="DOI">10.1145/3351095.3372873</idno>
		<ptr target="https://doi.org/10.1145/3351095.3372873" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* &apos;20</title>
				<meeting>the 2020 Conference on Fairness, Accountability, and Transparency, FAT* &apos;20</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="33" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Data statements for natural language processing: Toward mitigating system bias and enabling better science</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Bender</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Friedman</surname></persName>
		</author>
		<idno type="DOI">10.1162/tacl_a_00041</idno>
		<ptr target="https://aclanthology.org/Q18-1041" />
	</analytic>
	<monogr>
		<title level="j">Transactions of the Association for Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="587" to="604" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Model cards for model reporting</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zaldivar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vasserman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hutchinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Spitzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<idno type="DOI">10.1145/3287560.3287596</idno>
		<ptr target="https://doi.org/10.1145/3287560.3287596" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19</title>
				<meeting>the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="220" to="229" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Fides: An ontology-based approach for making machine learning systems accountable</title>
		<author>
			<persName><forename type="first">I</forename><surname>Fernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Aceta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gilabert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Esnaola-Gonzalez</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.websem.2023.100808</idno>
		<ptr target="https://doi.org/10.1016/j.websem.2023.100808" />
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page">100808</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Reyero-Lobo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Daga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fernández</surname></persName>
		</author>
		<title level="m">Semantic web technologies and bias in artificial intelligence: A systematic literature review</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Are we cobblers without shoes? Making computer science data FAIR</title>
		<author>
			<persName><forename type="first">N</forename><surname>Noy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Goble</surname></persName>
		</author>
		<idno type="DOI">10.1145/3528574</idno>
		<ptr target="https://doi.org/10.1145/3528574" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">66</biblScope>
			<biblScope unit="page" from="36" to="38" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The FAIR Guiding Principles for scientific data management and stewardship</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Wilkinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific data</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="9" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">An ontology for fairness metrics</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Franklin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bhanot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghalwash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Bennett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mccusker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Mcguinness</surname></persName>
		</author>
		<idno type="DOI">10.1145/3514094.3534137</idno>
		<ptr target="https://doi.org/10.1145/3514094.3534137" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;22</title>
				<meeting>the 2022 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="265" to="275" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">An ontology for reasoning about fairness in regression and machine learning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Franklin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Powers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Erickson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Mccusker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Mcguinness</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Bennett</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Iberoamerican Conference on Knowledge Graphs and Semantic Web</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">AIRO: an ontology for representing AI risks based on the proposed EU AI act and ISO risk management standards</title>
		<author>
			<persName><forename type="first">D</forename><surname>Golpayegani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Pandit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lewis</surname></persName>
		</author>
		<idno type="DOI">10.3233/SSW220008</idno>
		<ptr target="https://doi.org/10.3233/SSW220008" />
	</analytic>
	<monogr>
		<title level="m">Towards a Knowledge-Aware AI: SEMANTiCS 2022, Proceedings of the 18th International Conference on Semantic Systems</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Dimou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Neumaier</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Pellegrini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vahdati</surname></persName>
		</editor>
		<meeting><address><addrLine>Vienna, Austria</addrLine></address></meeting>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2022-09-15">13-15 September 2022</date>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="51" to="65" />
		</imprint>
	</monogr>
	<note>Studies on the Semantic Web</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">To be high-risk, or not to be: semantic specifications and implications of the AI Act&apos;s high-risk AI applications and harmonised standards</title>
		<author>
			<persName><forename type="first">D</forename><surname>Golpayegani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Pandit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lewis</surname></persName>
		</author>
		<idno type="DOI">10.1145/3593013.3594050</idno>
		<ptr target="https://doi.org/10.1145/3593013.3594050" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="905" to="915" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Are we cobblers without shoes? Making computer science data FAIR</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">F</forename><surname>Noy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Goble</surname></persName>
		</author>
		<idno type="DOI">10.1145/3528574</idno>
		<ptr target="https://doi.org/10.1145/3528574" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">66</biblScope>
			<biblScope unit="page" from="36" to="38" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Toward principles for the design of ontologies used for knowledge sharing?</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Gruber</surname></persName>
		</author>
		<idno type="DOI">10.1006/ijhc.1995.1081</idno>
		<ptr target="https://doi.org/10.1006/ijhc.1995.1081" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Studies</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="907" to="928" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">An ontology design pattern for modeling bias</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Kaushik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mutharaju</surname></persName>
		</author>
		<idno type="DOI">10.3233/ssw210024</idno>
		<ptr target="https://doi.org/10.3233/ssw210024" />
	</analytic>
	<monogr>
		<title level="m">Studies on the Semantic Web</title>
				<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Social-minded measures of data quality: Fairness, diversity, and lack of bias</title>
		<author>
			<persName><forename type="first">E</forename><surname>Pitoura</surname></persName>
		</author>
		<idno type="DOI">10.1145/3404193</idno>
		<ptr target="https://doi.org/10.1145/3404193" />
	</analytic>
	<monogr>
		<title level="j">J. Data and Information Quality</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Representation bias in data: A survey on identification and resolution techniques</title>
		<author>
			<persName><forename type="first">N</forename><surname>Shahbazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Asudeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Jagadish</surname></persName>
		</author>
		<idno type="DOI">10.1145/3588433</idno>
		<ptr target="https://doi.org/10.1145/3588433" />
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">ACROCPoLis: A descriptive framework for making sense of fairness</title>
		<author>
			<persName><forename type="first">A</forename><surname>Aler Tubella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Coelho Mollo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dahlgren Lindström</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Devinney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dignum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ericson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jonsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kampik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lenaerts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Mendez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Nieves</surname></persName>
		</author>
		<idno type="DOI">10.1145/3593013.3594059</idno>
		<ptr target="https://doi.org/10.1145/3593013.3594059" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1014" to="1025" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Lebo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sahoo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mcguinness</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Belhajjame</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cheney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Corsar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Garijo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Soiland-Reyes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zednik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<title level="m">PROV-O: The PROV Ontology, W3C Recommendation</title>
				<meeting><address><addrLine>United States</addrLine></address></meeting>
		<imprint>
			<publisher>World Wide Web Consortium</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">ML-Schema: Exposing the semantics of machine learning with schemas and ontologies</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">C</forename><surname>Publio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Esteves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ławrynowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Panov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Soldatova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Soru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanschoren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zafar</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.1807.05351</idno>
		<ptr target="https://arxiv.org/abs/1807.05351" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Albertoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Colantonio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Skrzypczynski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stefanowski</surname></persName>
		</author>
		<idno>ArXiv abs/2302.12691</idno>
		<title level="m">Reproducibility of machine learning: Terminology, recommendations and open issues</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Albertoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Browning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J D</forename><surname>Cox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gonzalez-Beltran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Perego</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Winstanley</surname></persName>
		</author>
		<idno>ArXiv abs/2303.08883</idno>
		<title level="m">The W3C Data Catalog Vocabulary, Version 2: Rationale, design principles, and uptake</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Use case cards: a use case reporting framework inspired by the European AI Act</title>
		<author>
			<persName><forename type="first">I</forename><surname>Hupont</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fernández-Llorca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Baldassarri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gómez</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10676-024-09757-7</idno>
		<ptr target="https://doi.org/10.1007/s10676-024-09757-7" />
	</analytic>
	<monogr>
		<title level="j">Ethics and Information Technology</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">MithraLabel: Flexible dataset nutritional labels for responsible data science</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Asudeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Jagadish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Howe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stoyanovich</surname></persName>
		</author>
		<idno type="DOI">10.1145/3357384.3357853</idno>
		<ptr target="https://doi.org/10.1145/3357384.3357853" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM &apos;19</title>
				<meeting>the 28th ACM International Conference on Information and Knowledge Management, CIKM &apos;19</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2893" to="2896" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">REVISE: A tool for measuring and mitigating bias in visual datasets</title>
		<author>
			<persName><forename type="first">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kleiman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shirai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Russakovsky</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11263-022-01625-5</idno>
		<ptr target="https://doi.org/10.1007/s11263-022-01625-5" />
	</analytic>
	<monogr>
		<title level="j">Int. J. Comput. Vision</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="page" from="1790" to="1810" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<title level="m" type="main">Know your data</title>
		<author>
			<persName><surname>Google PAIR Research</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>Accessed 10 June 2022</note>
</biblStruct>

<biblStruct xml:id="b42">
	<monogr>
		<author>
			<persName><surname>Hugging Face Research</surname></persName>
		</author>
		<ptr target="https://huggingface.co/spaces/huggingface/data-measurements-tool" />
		<title level="m">Data Measurements Tool</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<author>
			<persName><surname>Facebook AI</surname></persName>
		</author>
		<ptr target="https://ai.facebook.com/blog/how-were-using-fairness-flow-to-help-build-ai-that-works-better-for-everyone/" />
		<title level="m">Fairness Flow</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Social-minded measures of data quality</title>
		<author>
			<persName><forename type="first">E</forename><surname>Pitoura</surname></persName>
		</author>
		<idno type="DOI">10.1145/3404193</idno>
		<ptr target="https://doi.org/10.1145/3404193" />
	</analytic>
	<monogr>
		<title level="j">Journal of Data and Information Quality</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="8" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">&quot;Fairness toolkits, a checkbox culture?&quot; On the factors that fragment developer practices in handling algorithmic harms</title>
		<author>
			<persName><forename type="first">A</forename><surname>Balayn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yurrita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Gadiraju</surname></persName>
		</author>
		<idno type="DOI">10.1145/3600211.3604674</idno>
		<idno>doi:10.1145/3600211.3604674</idno>
		<ptr target="https://doi.org/10.1145/3600211.3604674" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;23</title>
				<meeting>the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;23</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="482" to="495" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Shelby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rismani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Henne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Rostamzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nicholas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">F</forename><surname>Yilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gallegos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Smart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Virk</surname></persName>
		</author>
		<title level="m">Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Fairness through awareness</title>
		<author>
			<persName><forename type="first">C</forename><surname>Dwork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pitassi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Reingold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Zemel</surname></persName>
		</author>
		<idno type="DOI">10.1145/2090236.2090255</idno>
	</analytic>
	<monogr>
		<title level="m">Innovations in Theoretical Computer Science 2012</title>
				<meeting><address><addrLine>Cambridge, MA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">January 8-10, 2012</date>
			<biblScope unit="page" from="214" to="226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">A multidisciplinary lens of bias in hate speech</title>
		<author>
			<persName><forename type="first">P</forename><surname>Reyero-Lobo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kwarteng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Scott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ferrara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fernandez</surname></persName>
		</author>
		<idno type="DOI">10.1145/3625007.3627491</idno>
		<ptr target="https://doi.org/10.1145/3625007.3627491" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Advances in Social Networks Analysis and Mining, ASONAM &apos;23</title>
				<meeting>the International Conference on Advances in Social Networks Analysis and Mining, ASONAM &apos;23<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="121" to="125" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Documenting computer vision datasets: An invitation to reflexive data practices</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miceli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Naudts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schuessler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Serbanescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hanna</surname></persName>
		</author>
		<idno type="DOI">10.1145/3442188.3445880</idno>
		<ptr target="https://doi.org/10.1145/3442188.3445880" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;21</title>
				<meeting>the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;21</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="161" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Understanding machine learning practitioners&apos; data documentation perceptions, needs, challenges, and desiderata</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Heger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">B</forename><surname>Marquis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vorvoreanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Wortman</forename><surname>Vaughan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3555760</idno>
		<ptr target="https://doi.org/10.1145/3555760" />
	</analytic>
	<monogr>
		<title level="j">Proc. ACM Hum.-Comput. Interact.</title>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">6</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">A survey on bias in visual datasets</title>
		<author>
			<persName><forename type="first">S</forename><surname>Fabbrizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ntoutsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kompatsiaris</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cviu.2022.103552</idno>
		<ptr target="https://doi.org/10.1016/j.cviu.2022.103552" />
	</analytic>
	<monogr>
		<title level="j">Comput. Vis. Image Underst</title>
		<imprint>
			<biblScope unit="volume">223</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">A survey on bias and fairness in machine learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehrabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morstatter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Galstyan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3457607</idno>
		<ptr target="https://doi.org/10.1145/3457607" />
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Social data: Biases, methodological pitfalls, and ethical boundaries</title>
		<author>
			<persName><forename type="first">A</forename><surname>Olteanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Castillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kiciman</surname></persName>
		</author>
		<ptr target="https://www.microsoft.com/en-us/research/publication/social-data-biases-methodological-pitfalls-and-ethical-boundaries/" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Big Data</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">F</forename><surname>Kendall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>McGuinness</surname></persName>
		</author>
		<idno type="DOI">10.2200/S00834ED1V01Y201802WBE018</idno>
		<ptr target="https://doi.org/10.2200/S00834ED1V01Y201802WBE018" />
		<title level="m">Ontology Engineering</title>
		<title level="s">Synthesis Lectures on the Semantic Web: Theory and Technology</title>
				<imprint>
			<publisher>Morgan &amp; Claypool Publishers</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Miles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bechhofer</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:58835891" />
		<title level="m">SKOS Simple Knowledge Organization System Reference</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b56">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Yu</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:60893017" />
		<title level="m">FOAF: Friend of a Friend</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<analytic>
		<title level="a" type="main">Introducing the data quality vocabulary (DQV)</title>
		<author>
			<persName><forename type="first">R</forename><surname>Albertoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Isaac</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="81" to="97" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">Language (technology) is power: A critical survey of &quot;bias&quot; in NLP</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Blodgett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Daumé</surname><genName>III</genName></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.acl-main.485</idno>
		<ptr target="https://aclanthology.org/2020.acl-main.485" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 58th Annual Meeting of the Association for Computational Linguistics</meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="5454" to="5476" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">The fallacy of ai functionality</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Horowitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Selbst</surname></persName>
		</author>
		<idno type="DOI">10.1145/3531146.3533158</idno>
		<ptr target="https://doi.org/10.1145/3531146.3533158" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22</title>
				<meeting>the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="959" to="972" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<analytic>
		<title level="a" type="main">Linked data quality of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO</title>
		<author>
			<persName><forename type="first">M</forename><surname>Färber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bartscherer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Menne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rettinger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="77" to="129" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b61">
	<analytic>
		<title level="a" type="main">OOPS! (OntOlogy Pitfall Scanner!): An On-line Tool for Ontology Evaluation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Poveda-Villalón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gómez-Pérez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Suárez-Figueroa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal on Semantic Web and Information Systems (IJSWIS)</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="7" to="34" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
