<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Using Protected Attributes to Consider Fairness in Multi-Agent Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gabriele</forename><forename type="middle">La</forename><surname>Malfa</surname></persName>
							<email>gabriele.la_malfa@kcl.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">UKRI Centre for Doctoral Training in Safe and Trusted AI</orgName>
								<orgName type="institution">King&apos;s College London</orgName>
								<address>
									<postCode>WC2B 4BG</postCode>
									<settlement>London</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jie</forename><forename type="middle">M</forename><surname>Zhang</surname></persName>
							<email>jie.zhang@kcl.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">UKRI Centre for Doctoral Training in Safe and Trusted AI</orgName>
								<orgName type="institution">King&apos;s College London</orgName>
								<address>
									<postCode>WC2B 4BG</postCode>
									<settlement>London</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Luck</surname></persName>
							<email>michael.luck@sussex.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="institution">University of Sussex</orgName>
								<address>
									<postCode>BN1 9RH</postCode>
									<settlement>Brighton</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Elizabeth</forename><surname>Black</surname></persName>
							<email>elizabeth.black@kcl.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">UKRI Centre for Doctoral Training in Safe and Trusted AI</orgName>
								<orgName type="institution">King&apos;s College London</orgName>
								<address>
									<postCode>WC2B 4BG</postCode>
									<settlement>London</settlement>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Using Protected Attributes to Consider Fairness in Multi-Agent Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">EFC22FBA532269AFA5CD4DE534FE3093</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Fairness</term>
					<term>bias</term>
					<term>Multi-Agent Systems (MAS)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Fairness in Multi-Agent Systems (MAS) has been extensively studied, particularly in reward distribution among agents in scenarios such as goods allocation, resource division, lotteries, and bargaining systems. Fairness in MAS depends on various factors, including the system's governing rules, the behaviour of the agents, and their characteristics. Yet, fairness in human society often involves evaluating disparities between disadvantaged and privileged groups, guided by principles of Equality, Diversity, and Inclusion (EDI). Taking inspiration from the work on algorithmic fairness, which addresses bias in machine learning-based decision-making, we define protected attributes for MAS as characteristics that should not disadvantage an agent in terms of its expected rewards. We adapt fairness metrics from the algorithmic fairness literature, namely demographic parity, counterfactual fairness, and conditional statistical parity, to the multi-agent setting, where self-interested agents interact within an environment. These metrics allow us to evaluate the fairness of MAS, with the ultimate aim of designing MAS that do not disadvantage agents based on protected attributes.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Multi-Agent Systems (MAS) consist of agents interacting with each other and their surrounding environment to achieve their individual or shared goals. The achievement of an agent's goals may depend on the actions it takes, the actions of other agents, the environment they are situated in, and the rules that govern the MAS. Similarly, fairness in MAS depends on multiple factors. Fairness can be influenced by agents' decision-making processes, as evidenced by research in reinforcement learning focused on developing fair and efficient policies <ref type="bibr" target="#b0">[1]</ref>. It can also hinge on mechanism design, as seen in scenarios like goods allocation games <ref type="bibr" target="#b1">[2]</ref> or cake-cutting problems <ref type="bibr" target="#b2">[3]</ref>, where rules can ensure fair reward distribution among agents. Additionally, fairness can be affected by factors such as an agent's utility <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> or its priority in accessing resources <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>, among others.</p><p>In human societies, fairness is often defined in terms of characteristics that should not disadvantage an individual or group, such as age, race, disability or gender. For example, in the UK Equality Act 2010 these are identified as protected characteristics, and UK law states that individuals cannot be discriminated against on the basis of these. These protected characteristics typically define subgroups of the population who have historically been disadvantaged in particular situations, such as age discrimination in the workplace, unequal access to healthcare, barriers in education for people with disabilities, and gender disparities in political representation, among others. 
Driven by the bias that often exists in the training data as a result of these systemic inequalities, machine learning approaches often produce biased results (e.g., discrimination in credit market <ref type="bibr" target="#b7">[8]</ref> or justice <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref> algorithms); there is a growing body of work (often referred to as algorithmic fairness) that aims to identify and mitigate such bias by applying a range of fairness metrics that compare the outcomes achieved by what are identified as advantaged and disadvantaged subgroups of the population (see, e.g., <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref> for a review).</p><p>Taking inspiration from the UK Equality Act 2010, we define the concept of protected attributes within a multi-agent system, which are any attributes that have been identified as ones that should not disadvantage an agent in terms of its performance within that system. For example, consider a multi-agent setting that includes both artificial agents in the form of autonomous vehicles and human agents who drive their own cars; we may want to ensure that the human agents are not disadvantaged in such a setting. We adapt the following fairness metrics from the algorithmic fairness literature to our multi-agent setting.</p><p>• Demographic parity -Agents with and without protected attributes should obtain the same expected rewards. • Counterfactual fairness -In both a factual and a counterfactual scenario, where the only difference is whether the protected attributes hold for an agent, agents should obtain the same expected rewards. 
• Conditional statistical parity -Within a group of agents characterised by a legitimate factor influencing rewards, agents with and without protected attributes should obtain the same expected rewards.</p><p>We are able to evaluate different MAS according to these metrics, with the ultimate aim of designing fairer MAS (for example, by configuring the environment in which agents operate to optimise for fairness). Such an approach is inspired by other works outside MAS, such as designing accessible buildings <ref type="bibr" target="#b12">[13]</ref> or safe urban environments <ref type="bibr" target="#b13">[14]</ref>. Further studies explore environment configurations to optimise rescue operations and autonomous vehicle planning <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>. However, none of them deal with fairness. Hence, we hope this research can offer valuable insights into domains beyond MAS.</p><p>To summarise, the contributions of this paper are as follows. We introduce protected attributes to MAS -characteristics that should not impact an agent's expected rewards, all other things being equal. We adapt the concepts of demographic parity, counterfactual fairness and conditional statistical parity from the algorithmic fairness literature to the MAS context. The future aim of this work is to use these metrics to evaluate and optimise MAS for fairness.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Motivating example.</head><p>In future urban environments, we may see vehicles operated by humans and vehicles operated by AI undertaking journeys within the same road network. These human and AI agents navigate city streets to reach their destinations, with the rewards they receive dependent on factors such as the time taken and the cost of the journey. AI-driven vehicles excel by analysing traffic data in real time, optimising routes, and communicating with other AI vehicles, providing them with an advantage over the human agents in the system, who are generally less efficient at route optimisation and less well-equipped to coordinate with other road users. To mitigate this advantage of AI agents, we might consider altering the road infrastructure, for example, by providing dedicated lanes for human-controlled vehicles.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>Fairness has attracted the attention of Game Theory and MAS researchers for decades, alongside psychologists and economists <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref>. Factors such as the rules that govern the system can influence fairness in MAS. For instance, this can be seen in the Ultimatum Game, where fairness is influenced by the dynamics between proposers and responders <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref>. In goods allocation or cake-cutting games, the rules depend on the type of good being allocated, for example, whether they are divisible or indivisible, goods or chores <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>, and fairness depends on the distribution of goods among the agents <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b25">26,</ref><ref type="bibr" target="#b26">27]</ref>.</p><p>Agent behaviour can also influence fairness. Fair behaviours often balance the rewards collected by the community and individuals. For example, Zhang and Shah <ref type="bibr" target="#b27">[28]</ref> propose a minimum reward for the worst-performing agent while improving the overall rewards of the whole community of agents. However, fairness and reward optimisation can be in tension, and compromises must be made between the two objectives. Jiang and Lu <ref type="bibr" target="#b28">[29]</ref> propose a two-step solution consisting of a single policy for each agent based on fair and optimal rewards, with a controller agent who decides which sub-policies to implement to maximise environmental rewards and fairness. 
Other works <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31,</ref><ref type="bibr" target="#b31">32]</ref> implement fair optimisation policies within cooperative multi-agent systems, aiming to integrate individualistic and altruistic behaviours. Grupen et al. <ref type="bibr" target="#b32">[33]</ref> introduce a new measure of team fairness, demonstrating how maximising team rewards in cooperative MAS can lead to unfair outcomes for individual agents.</p><p>In contrast to these works, which do not distinguish agents that may be particularly disadvantaged within a system, we consider fairness across agents who do or do not possess protected attributes. We adapt demographic parity <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b34">35]</ref>, counterfactual fairness <ref type="bibr" target="#b34">[35]</ref> and conditional statistical parity <ref type="bibr" target="#b35">[36]</ref> fairness metrics from the algorithmic fairness literature to the MAS setting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Preliminaries</head><p>A multi-agent system consists of multiple decision-making agents who act and interact in an environment to achieve their goals. A multi-agent system S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) is characterised by: the set of possible environment states 𝐸; the starting state 𝑒 0 ; the set of available actions that may be performed by an agent in the environment 𝐴𝑐 (including a null action); a population 𝑃 = {𝑎 1 , . . . , 𝑎 𝑛 } of agents; the attributes 𝐴𝑡 = {𝑎𝑡 1 , . . . , 𝑎𝑡 𝑚 } available to the agents in 𝑃 ; the protected attributes 𝐴𝑡 𝑝𝑟 ⊂ {𝑎𝑡 1 , . . . , 𝑎𝑡 𝑚 }; and the non-deterministic state transformer function 𝜏 : 𝐸 × 𝐴𝑐 1 × . . . × 𝐴𝑐 𝑛 → 𝐸 × [0, 1] that specifies the probability distribution over the possible resulting states that can occur when each agent in the population performs an action (where the possible null action reflects that an agent chooses not to act).</p><p>An agent 𝑎 𝑥 within a multi-agent system (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) (where 𝑎 𝑥 ∈ 𝑃 ) is defined as a tuple (𝐴𝑡 𝑥 , 𝐴𝑐 𝑥 , 𝜋 𝑥 , 𝜌 𝑥 ) where: the attribute evaluation function 𝐴𝑡 𝑥 : 𝐴𝑡 → {0, 1} specifies which attributes hold true for the agent; 𝐴𝑐 𝑥 ⊆ 𝐴𝑐 are the actions available to the agent; the non-deterministic policy 𝜋 𝑥 : 𝐸 → 𝐴𝑐 𝑥 × [0, 1] specifies how an agent will act in any given state (represented as a probability distribution over the possible actions); and the reward function 𝜌 𝑥 : 𝐸 × 𝐸 → R specifies the reward the agent receives for moving between two states.</p><p>A possible run within a multi-agent system S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) (where 𝑃 consists of 𝑛 agents) is denoted 𝑟 = (𝑒 0 , (𝑎𝑐 1 1 , . . . , 𝑎𝑐 𝑛 1 ), 𝑒 1 , . . . , (𝑎𝑐 1 𝑗 , . . . , 𝑎𝑐 𝑛 𝑗 ), 𝑒 𝑗 ) where: for each 𝑎 𝑥 ∈ 𝑃 and for each 𝑖 such that 0 &lt; 𝑖 ≤ 𝑗, (𝑎𝑐 𝑥 𝑖 , 𝑝) ∈ 𝜋 𝑥 (𝑒 𝑖−1 ) and 𝑝 &gt; 0; and for each 𝑖 such that 0 ≤ 𝑖 &lt; 𝑗, (𝑒 𝑖+1 , 𝑝) ∈ 𝜏 (𝑒 𝑖 , (𝑎𝑐 1 𝑖+1 , . . . , 𝑎𝑐 𝑛 𝑖+1 )) and 𝑝 &gt; 0. 
The set of all possible runs within a multi-agent system S is denoted ℛ S .</p><p>Let 𝑟 = (𝑒 0 , (𝑎𝑐 1 1 , . . . , 𝑎𝑐 𝑛 1 ), 𝑒 1 , . . . , (𝑎𝑐 1 𝑗 , . . . , 𝑎𝑐 𝑛 𝑗 ), 𝑒 𝑗 ) ∈ ℛ S where S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ). We can determine the probability 𝑟 will occur, denoted 𝑝(𝑟 | S ), as follows.</p></div>
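The run-probability calculation 𝑝(𝑟 | S ) can be sketched in code. The following toy two-agent system is entirely invented for illustration (its states, actions, policies and transformer probabilities are assumptions, not values from the paper): at every step, each agent's probability of choosing its action is multiplied by the environment's transition probability.

```python
# Hypothetical two-agent system; all names and probabilities are invented.
# pi[agent][state][action]: probability that the agent picks the action.
pi = {
    "a1": {"e0": {"go": 0.8, "wait": 0.2}},
    "a2": {"e0": {"go": 0.5, "wait": 0.5}},
}
# tau[(state, joint_action)][next_state]: transition probability.
tau = {
    ("e0", ("go", "go")): {"e1": 1.0},
    ("e0", ("go", "wait")): {"e0": 0.3, "e1": 0.7},
    ("e0", ("wait", "go")): {"e0": 0.3, "e1": 0.7},
    ("e0", ("wait", "wait")): {"e0": 1.0},
}

def run_probability(run):
    """p(r | S): at each step, multiply every agent's probability of its
    chosen action by the environment's transition probability."""
    prob = 1.0
    steps = len(run) // 2  # run alternates: state, joint action, state, ...
    for k in range(steps):
        e, joint, e_next = run[2 * k], run[2 * k + 1], run[2 * k + 2]
        for agent, act in zip(("a1", "a2"), joint):
            prob = prob * pi[agent][e][act]
        prob = prob * tau[(e, joint)].get(e_next, 0.0)
    return prob

# A two-step run: both agents act twice and the system ends in state e1.
p = run_probability(["e0", ("go", "wait"), "e0", ("go", "go"), "e1"])
```

Runs containing an action the policy never takes, or an impossible transition, evaluate to probability 0, mirroring the requirement that every step of a possible run has positive probability.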
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_0">𝑝(𝑟 | S ) = ( ∏ 𝑗−1 𝑖=0 ∏ 𝑛 𝑥=1 𝑝 𝑥 where (𝑎𝑐 𝑥 𝑖+1 , 𝑝 𝑥 ) ∈ 𝜋 𝑥 (𝑒 𝑖 ) ) · ( ∏ 𝑗−1 𝑖=0 𝑝 𝑖 where (𝑒 𝑖+1 , 𝑝 𝑖 ) ∈ 𝜏 (𝑒 𝑖 , (𝑎𝑐 1 𝑖+1 , . . . , 𝑎𝑐 𝑛 𝑖+1 )) )</formula><p>For a run 𝑟 = (𝑒 0 , (𝑎𝑐 1 1 , . . . , 𝑎𝑐 𝑛 1 ), 𝑒 1 , . . . , (𝑎𝑐 1 𝑗 , . . . , 𝑎𝑐 𝑛 𝑗 ), 𝑒 𝑗 ), the reward achieved by an agent 𝑎 𝑥 is 𝑅𝑒𝑤(𝑎 𝑥 , 𝑟) = ∑ 𝑗 𝑖=1 𝜌 𝑥 (𝑒 𝑖−1 , 𝑒 𝑖 ). The expected reward of an agent 𝑎 𝑥 within a system S , denoted 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ), is thus 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) = ∑ 𝑟∈ℛ S 𝑅𝑒𝑤(𝑎 𝑥 , 𝑟) · 𝑝(𝑟 | S ).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Motivating example, continued.</head><p>The city traffic consists of a population of cars, each capable of steering, accelerating or braking. Cars also possess attributes like speed or safety features. Cars are either driven by AI or humans, and we consider being driven by humans to be a protected attribute of cars. AI-driven cars can find optimal paths to reach their destination more efficiently than human-driven ones. If we consider agents reaching a hospital, we can foresee fairness problems, as AI-driven cars would be advantaged. When the cars act, the environment changes state with some probability. Also, each car obtains a reward when reaching its destination. A car's policy is a decision rule based on the state of the crossroads.</p></div>
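The expected reward 𝐸𝑥𝑝𝑅𝑒𝑤 can be illustrated by exhaustively enumerating runs. The sketch below uses an invented one-step, two-agent system (every name, probability and reward value is an assumption for illustration) and weights each run's reward by its probability:

```python
from itertools import product

# Invented one-step system with two agents.
pi = {
    "a1": {"e0": {"go": 0.6, "wait": 0.4}},
    "a2": {"e0": {"go": 0.5, "wait": 0.5}},
}
tau = {
    ("e0", ("go", "go")): {"e1": 1.0},
    ("e0", ("go", "wait")): {"e1": 0.5, "e2": 0.5},
    ("e0", ("wait", "go")): {"e1": 0.5, "e2": 0.5},
    ("e0", ("wait", "wait")): {"e2": 1.0},
}
# rho[agent][(e, e_next)]: reward for moving between the two states.
rho = {
    "a1": {("e0", "e1"): 10.0, ("e0", "e2"): 2.0},
    "a2": {("e0", "e1"): 10.0, ("e0", "e2"): 2.0},
}

def expected_reward(agent):
    """ExpRew(a_x, S): sum of Rew(a_x, r) * p(r | S) over every
    one-step run r starting from e0."""
    total = 0.0
    for joint in product(pi["a1"]["e0"], pi["a2"]["e0"]):
        p_act = pi["a1"]["e0"][joint[0]] * pi["a2"]["e0"][joint[1]]
        for e_next, p_env in tau[("e0", joint)].items():
            total += p_act * p_env * rho[agent][("e0", e_next)]
    return total
```

For longer horizons the same sum runs over all runs in ℛ S, which grows exponentially, so in practice one would estimate it by Monte Carlo sampling rather than enumeration.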
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Fairness in MAS</head><p>We define fairness by comparing, in different ways, the rewards gathered by individuals or groups of agents possessing and not possessing protected attributes. We adapt demographic parity <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b34">35]</ref>, counterfactual fairness <ref type="bibr" target="#b34">[35]</ref> and conditional statistical parity <ref type="bibr" target="#b35">[36]</ref> to MAS. Demographic parity in MAS is achieved when the expected rewards of agents are not influenced by whether or not they possess protected attributes, all else being equal.</p><p>Definition 1 (Demographic Parity). Let S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) be a system and let 𝑎𝑡 𝑝𝑟 ∈ 𝐴𝑡 𝑝𝑟 be the protected attribute under consideration. Demographic parity is satisfied for 𝑎𝑡 𝑝𝑟 in S if and only if: for all 𝑎 𝑥 , 𝑎 𝑦 ∈ 𝑃 , if 𝐴𝑡 𝑥 (𝑎𝑡 𝑝𝑟 ) = 1, 𝐴𝑡 𝑦 (𝑎𝑡 𝑝𝑟 ) = 0, and for all 𝑎𝑡 ′ ∈ 𝐴𝑡∖{𝑎𝑡 𝑝𝑟 }, 𝐴𝑡 𝑥 (𝑎𝑡 ′ ) = 𝐴𝑡 𝑦 (𝑎𝑡 ′ ), then 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) = 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑦 , S ).</p><p>Where demographic parity is not satisfied for a particular protected attribute, we can measure the extent to which this is the case, denoted 𝐷𝑒𝑚𝑃 𝑎𝑟(𝑎𝑡 𝑝𝑟 , S ), as follows.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula>𝐷𝑒𝑚𝑃 𝑎𝑟(𝑎𝑡 𝑝𝑟 , S ) = ∑ 𝑎𝑥,𝑎𝑦∈𝑃 such that 𝐴𝑡𝑥(𝑎𝑡 𝑝𝑟 )=1, 𝐴𝑡𝑦(𝑎𝑡 𝑝𝑟 )=0, and for all 𝑎𝑡 ′ ∈𝐴𝑡∖{𝑎𝑡 𝑝𝑟 },𝐴𝑡𝑥(𝑎𝑡 ′ )=𝐴𝑡𝑦(𝑎𝑡 ′ ) (𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) − 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑦 , S ))<label>(1)</label></formula><p>Note that if demographic parity holds for 𝑎𝑡 𝑝𝑟 in S then 𝐷𝑒𝑚𝑃 𝑎𝑟(𝑎𝑡 𝑝𝑟 , S ) = 0.</p><p>Counterfactual fairness in MAS is achieved when the expected rewards of agents remain the same in both a factual and a counterfactual world, where in the latter we change the protected attribute of the agents while keeping all other elements the same.</p></div>
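Computing the demographic-parity disparity 𝐷𝑒𝑚𝑃 𝑎𝑟 in (1) is straightforward once expected rewards are known. A minimal sketch, in which the agents, their attributes and their expected rewards are all invented for illustration:

```python
# Invented agents: attribute maps plus precomputed expected rewards.
agents = {
    "car1": {"attrs": {"human_driven": 1, "fast": 1}, "exp_rew": 6.0},
    "car2": {"attrs": {"human_driven": 0, "fast": 1}, "exp_rew": 9.0},
    "car3": {"attrs": {"human_driven": 1, "fast": 0}, "exp_rew": 4.0},
    "car4": {"attrs": {"human_driven": 0, "fast": 0}, "exp_rew": 5.0},
}

def dem_par(protected):
    """DemPar(at_pr, S): sum of ExpRew(a_x) minus ExpRew(a_y) over pairs
    that agree on every attribute except the protected one, which a_x
    possesses and a_y lacks."""
    total = 0.0
    for ax in agents.values():
        for ay in agents.values():
            if ax["attrs"][protected] == 1 and ay["attrs"][protected] == 0:
                others_equal = all(
                    ax["attrs"][at] == ay["attrs"][at]
                    for at in ax["attrs"]
                    if at != protected
                )
                if others_equal:
                    total += ax["exp_rew"] - ay["exp_rew"]
    return total
```

With these invented numbers, dem_par("human_driven") is negative, indicating that human-driven cars collect lower expected rewards than their otherwise-identical AI-driven counterparts; when demographic parity holds, it is 0.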
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2 (Counterfactual Fairness).</head><p>Let S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) be a system where 𝑃 = {(𝐴𝑡 1 , 𝐴𝑐 1 , 𝜋 1 , 𝜌 1 ), . . . , (𝐴𝑡 𝑛 , 𝐴𝑐 𝑛 , 𝜋 𝑛 , 𝜌 𝑛 )}, and let 𝑎𝑡 𝑝𝑟 ∈ 𝐴𝑡 𝑝𝑟 be the protected attribute under consideration. Let S ′ = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃 ′ , 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) be the counterfactual of S such that 𝑃 ′ = {(𝐴𝑡 ′ 1 , 𝐴𝑐 1 , 𝜋 1 , 𝜌 1 ), . . . , (𝐴𝑡 ′ 𝑛 , 𝐴𝑐 𝑛 , 𝜋 𝑛 , 𝜌 𝑛 )} where for all 𝑖 such that 1 ≤ 𝑖 ≤ 𝑛: if 𝐴𝑡 𝑖 (𝑎𝑡 𝑝𝑟 ) = 0, then 𝐴𝑡 ′ 𝑖 (𝑎𝑡 𝑝𝑟 ) = 1; if 𝐴𝑡 𝑖 (𝑎𝑡 𝑝𝑟 ) = 1, then 𝐴𝑡 ′ 𝑖 (𝑎𝑡 𝑝𝑟 ) = 0; and for all 𝑎𝑡 ∈ 𝐴𝑡 ∖ {𝑎𝑡 𝑝𝑟 }, 𝐴𝑡 𝑖 (𝑎𝑡) = 𝐴𝑡 ′ 𝑖 (𝑎𝑡). Counterfactual fairness is satisfied for 𝑎𝑡 𝑝𝑟 in S if and only if: for all 𝑎 𝑥 = (𝐴𝑡 𝑥 , 𝐴𝑐 𝑥 , 𝜋 𝑥 , 𝜌 𝑥 ) ∈ 𝑃 , for all 𝑎 ′ 𝑥 = (𝐴𝑡 ′ 𝑥 , 𝐴𝑐 𝑥 , 𝜋 𝑥 , 𝜌 𝑥 ) ∈ 𝑃 ′ , 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) = 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 ′ 𝑥 , S ′ ). Where counterfactual fairness is not satisfied, we can measure the extent to which this is the case, denoted 𝐶𝑜𝑢𝑛𝑡𝐹 𝑎𝑖𝑟(𝑎𝑡 𝑝𝑟 , S ), as follows.</p><formula xml:id="formula_1">𝐶𝑜𝑢𝑛𝑡𝐹 𝑎𝑖𝑟(𝑎𝑡 𝑝𝑟 , S ) = ∑ 𝑎𝑥∈𝑃 such that 𝐴𝑡𝑥(𝑎𝑡 𝑝𝑟 )=1 (𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) − 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 ′ 𝑥 , S ′ ))<label>(2)</label></formula><p>Note that if counterfactual fairness holds for 𝑎𝑡 𝑝𝑟 in S then 𝐶𝑜𝑢𝑛𝑡𝐹 𝑎𝑖𝑟(𝑎𝑡 𝑝𝑟 , S ) = 0.</p><p>Conditional statistical parity in MAS is achieved when the expected rewards of agents are not influenced by whether or not they possess protected attributes when conditioning on a legitimate factor, assuming all other elements are the same. A legitimate factor is an attribute that has been identified as one that may legitimately affect an agent's reward.</p></div>
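The counterfactual measure 𝐶𝑜𝑢𝑛𝑡𝐹 𝑎𝑖𝑟 in (2) can be sketched as follows. The reward model exp_rew here is an invented stand-in for fully simulating the factual system S and its counterfactual S ′; every name and number is an assumption:

```python
# Invented stand-in for an agent's expected reward as a function of its
# attributes; in a full treatment this would come from simulating S.
def exp_rew(attrs):
    base = 10.0
    if attrs["human_driven"] == 1:
        base = base - 3.0  # assumed penalty for human-driven cars
    return base

population = {
    "car1": {"human_driven": 1},
    "car2": {"human_driven": 0},
}

def count_fair(protected):
    """CountFair(at_pr, S): over agents possessing the protected
    attribute, sum the factual expected reward minus the expected
    reward once the attribute is flipped, all else unchanged."""
    total = 0.0
    for attrs in population.values():
        if attrs[protected] == 1:
            flipped = dict(attrs)
            flipped[protected] = 0
            total += exp_rew(attrs) - exp_rew(flipped)
    return total
```

Definition 2 flips the attribute for every agent in 𝑃 ′; because this toy reward model depends only on an agent's own attributes, flipping just the focal agent is equivalent here, but a full simulation of S ′ would flip the whole population.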
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 3 (Conditional Statistical Parity).</head><p>Let S = (𝐸, 𝑒 0 , 𝐴𝑐, 𝑃, 𝐴𝑡, 𝐴𝑡 𝑝𝑟 , 𝜏 ) be a system, let 𝐿𝐹 ⊆ (𝐴𝑡 ∖ 𝐴𝑡 𝑝𝑟 ) be the set of legitimate factors, and let 𝑎𝑡 𝑝𝑟 ∈ 𝐴𝑡 𝑝𝑟 be the protected attribute under consideration. Conditional statistical parity is satisfied for 𝑎𝑡 𝑝𝑟 with 𝐿𝐹 in S if and only if: for all 𝑎 𝑥 , 𝑎 𝑦 ∈ 𝑃 , if 𝐴𝑡 𝑥 (𝑎𝑡 𝑝𝑟 ) = 1, 𝐴𝑡 𝑦 (𝑎𝑡 𝑝𝑟 ) = 0, 𝐴𝑡 𝑥 (𝑎𝑡 𝑙𝑓 ) = 𝐴𝑡 𝑦 (𝑎𝑡 𝑙𝑓 ) = 1 for all 𝑎𝑡 𝑙𝑓 ∈ 𝐿𝐹 , and for all 𝑎𝑡 ′ ∈ 𝐴𝑡 ∖ {𝑎𝑡 𝑝𝑟 }, 𝐴𝑡 𝑥 (𝑎𝑡 ′ ) = 𝐴𝑡 𝑦 (𝑎𝑡 ′ ), then 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) = 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑦 , S ).</p><p>Where conditional statistical parity is not satisfied, we can measure the extent to which this is the case, denoted 𝐶𝑜𝑛𝑑𝑆𝑃 (𝑎𝑡 𝑝𝑟 , 𝐿𝐹, S ), as follows.</p><formula>𝐶𝑜𝑛𝑑𝑆𝑃 (𝑎𝑡 𝑝𝑟 , 𝐿𝐹, S ) = ∑ 𝑎𝑥,𝑎𝑦∈𝑃 such that 𝐴𝑡𝑥(𝑎𝑡 𝑝𝑟 )=1, 𝐴𝑡𝑦(𝑎𝑡 𝑝𝑟 )=0, 𝐴𝑡𝑥(𝑎𝑡 𝑙𝑓 )=𝐴𝑡𝑦(𝑎𝑡 𝑙𝑓 )=1 for all 𝑎𝑡 𝑙𝑓 ∈𝐿𝐹, and for all 𝑎𝑡 ′ ∈𝐴𝑡∖{𝑎𝑡 𝑝𝑟 }, 𝐴𝑡𝑥(𝑎𝑡 ′ )=𝐴𝑡𝑦(𝑎𝑡 ′ ) (𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑥 , S ) − 𝐸𝑥𝑝𝑅𝑒𝑤(𝑎 𝑦 , S ))<label>(3)</label></formula><p>Note that if conditional statistical parity holds for 𝑎𝑡 𝑝𝑟 with 𝐿𝐹 in S then 𝐶𝑜𝑛𝑑𝑆𝑃 (𝑎𝑡 𝑝𝑟 , 𝐿𝐹, S ) = 0.</p><p>Conditional statistical parity is demographic parity within subsets of the population characterised by legitimate factors. For example, in algorithmic fairness, such a metric is used to verify whether the probability of predicting re-offence for male and female prisoners is the same for similar age groups, which is the legitimate factor <ref type="bibr" target="#b36">[37]</ref>.</p><p>Motivating example, continued. In the city traffic example, demographic parity would be achieved if the sum of the expected rewards obtained by AI-driven cars and human-driven cars were equal, all other things being equal. In other words, the protected attribute should not affect the expected rewards gathered by the human-driven cars compared to the AI-driven ones. 
Counterfactual fairness is achieved if the sum of the expected rewards of the cars remains the same in both a factual and a counterfactual world, where in the latter the protected attribute is flipped for the cars (i.e., whether they are driven by humans), while all other factors are kept constant. Conditional statistical parity is achieved if the sum of the cars' expected rewards is not influenced by whether or not they possess protected attributes when conditioned on a legitimate factor, e.g., a certain range of speed capacity of the cars, assuming all other elements are the same.</p><p>We can use the metrics above to measure the fairness of different systems. Our ultimate goal is to optimise systems for these different fairness measures, for example by adjusting the starting state of the environment, or the way the environment responds to the agents' actions.</p></div>
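Conditional statistical parity restricts the demographic-parity comparison to the subgroup where every legitimate factor holds. A sketch of 𝐶𝑜𝑛𝑑𝑆𝑃 in (3), with agents, attributes and reward values all invented for illustration (here "fast" plays the role of a legitimate factor):

```python
# Invented agents: attribute maps plus precomputed expected rewards.
agents = {
    "car1": {"attrs": {"human_driven": 1, "fast": 1}, "exp_rew": 6.0},
    "car2": {"attrs": {"human_driven": 0, "fast": 1}, "exp_rew": 9.0},
    "car3": {"attrs": {"human_driven": 1, "fast": 0}, "exp_rew": 4.0},
    "car4": {"attrs": {"human_driven": 0, "fast": 0}, "exp_rew": 5.0},
}

def cond_sp(protected, legit_factors):
    """CondSP(at_pr, LF, S): sum ExpRew differences over pairs that agree
    on all non-protected attributes and for which every legitimate
    factor holds (value 1) for both agents."""
    total = 0.0
    for ax in agents.values():
        for ay in agents.values():
            if ax["attrs"][protected] == 1 and ay["attrs"][protected] == 0:
                in_group = all(ax["attrs"][lf] == 1 and ay["attrs"][lf] == 1
                               for lf in legit_factors)
                others_equal = all(ax["attrs"][at] == ay["attrs"][at]
                                   for at in ax["attrs"]
                                   if at != protected)
                if in_group and others_equal:
                    total += ax["exp_rew"] - ay["exp_rew"]
    return total
```

With an empty list of legitimate factors this reduces to the demographic-parity sum over the whole population, matching the observation above that conditional statistical parity is demographic parity within subgroups.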
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and future work</head><p>This paper is a first step towards ensuring that certain sub-groups of agents are not disadvantaged in multi-agent systems. We identify protected attributes, which are characteristics that should not disadvantage an agent in terms of its expected rewards. Inspired by algorithmic fairness, we adapt demographic parity, counterfactual fairness and conditional statistical parity to analyse fairness in MAS. Our metrics assess fairness from various perspectives in any multi-agent system where expected rewards are applicable. Additional metrics from the algorithmic fairness literature, such as equal opportunity, equalised odds <ref type="bibr" target="#b37">[38]</ref>, disparate impact <ref type="bibr" target="#b38">[39]</ref>, or other metrics based on causal reasoning <ref type="bibr" target="#b39">[40,</ref><ref type="bibr" target="#b40">41]</ref> could be adapted to this setting to capture other aspects of fairness. Our methodology applies to MAS involving both human and AI agents, as in our motivating example. It could also be used to improve the fairness of human societies by modelling these as multi-agent systems and examining how changes to the system affect the various fairness metrics defined here.</p><p>In future work, we plan to analyse these fairness metrics experimentally in different settings, both competitive and cooperative, to find system configurations that enhance fairness. We will use techniques such as Bayesian optimisation <ref type="bibr" target="#b41">[42]</ref>, evolutionary algorithms <ref type="bibr" target="#b42">[43]</ref> and sparse sampling techniques <ref type="bibr" target="#b43">[44]</ref> to try to identify system configurations that optimise for the different fairness metrics.</p></div>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by UK Research and Innovation [grant number EP/S023356/1], in the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (www.safeandtrustedai.org).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Gajane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tavakol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fletcher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pechenizkiy</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2205.10032</idno>
		<title level="m">Survey on fair reinforcement learning: Theory and practice</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Fair division of indivisible goods: Recent progress and open questions</title>
		<author>
			<persName><forename type="first">G</forename><surname>Amanatidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Aziz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Birmpas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Filos-Ratsikas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Moulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Voudouris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2023.103965</idno>
		<ptr target="https://doi.org/10.1016/j.artint.2023.103965" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">322</biblScope>
			<biblScope unit="page">103965</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Cake cutting: not just child&apos;s play</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Procaccia</surname></persName>
		</author>
		<idno type="DOI">10.1145/2483852.2483870</idno>
		<ptr target="https://doi.org/10.1145/2483852.2483870" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="78" to="87" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Welfare engineering in multiagent systems</title>
		<author>
			<persName><forename type="first">U</forename><surname>Endriss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Maudet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Engineering Societies in the Agents World IV</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Petta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Pitt</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="93" to="106" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The price of fairness</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bertsimas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Farias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Trichakis</surname></persName>
		</author>
		<idno type="DOI">10.1287/opre.1100.0865</idno>
	</analytic>
	<monogr>
		<title level="j">Operations Research</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="17" to="31" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Priority awareness: Towards a computational model of human fairness for multi-agent systems</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">De</forename><surname>Jong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tuyls</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbeeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Roos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adaptive Agents and Multi-Agent Systems III. Adaptation and Multi-Agent Learning</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Tuyls</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Nowe</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Z</forename><surname>Guessoum</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Kudenko</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="117" to="128" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Fair division with prioritized agents</title>
		<author>
			<persName><forename type="first">X</forename><surname>Bu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tao</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v37i5.25688</idno>
		<ptr target="https://doi.org/10.1609/aaai.v37i5.25688" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI&apos;23/IAAI&apos;23/EAAI&apos;23</title>
				<meeting>the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI&apos;23/IAAI&apos;23/EAAI&apos;23</meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Predictably unequal? the effects of machine learning on credit markets</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Goldsmith-Pinkham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ramadorai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Walther</surname></persName>
		</author>
		<idno type="DOI">10.1111/jofi.13090</idno>
		<ptr target="https://doi.org/10.1111/jofi.13090" />
	</analytic>
	<monogr>
		<title level="j">The Journal of Finance</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="page" from="5" to="47" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">&quot;Fair&quot; Risk Assessments: A Precarious Approach for Criminal Justice Reform</title>
		<author>
			<persName><forename type="first">B</forename><surname>Green</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<pubPlace>Stockholm, Sweden</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">An algorithm for removing sensitive information: Application to race-independent recidivism prediction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Johndrow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lum</surname></persName>
		</author>
		<idno type="DOI">10.1214/18-AOAS1201</idno>
	</analytic>
	<monogr>
		<title level="j">The Annals of Applied Statistics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">50 years of test (un)fairness: Lessons for machine learning</title>
		<author>
			<persName><forename type="first">B</forename><surname>Hutchinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<idno type="DOI">10.1145/3287560.3287600</idno>
		<ptr target="https://doi.org/10.1145/3287560.3287600" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19</title>
				<meeting>the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="49" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Algorithmic fairness: Choices, assumptions, and definitions</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Potash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>D'Amour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lum</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:228893833" />
	</analytic>
	<monogr>
		<title level="j">Annual Review of Statistics and Its Application</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Inclusion, diversity, equity and accessibility in the built environment: A study of architectural design practice</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zallio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Clarkson</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.buildenv.2021.108352</idno>
		<ptr target="https://doi.org/10.1016/j.buildenv.2021.108352" />
	</analytic>
	<monogr>
		<title level="j">Building and Environment</title>
		<imprint>
			<biblScope unit="volume">206</biblScope>
			<biblScope unit="page">108352</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A global analysis of urban design types and road transport injury: an image processing study</title>
		<author>
			<persName><forename type="first">J</forename><surname>Thompson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stevenson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Wijnands</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Nice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D P A</forename><surname>Aschwanden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Silver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nieuwenhuijsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rayner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Schofield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hariharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">N</forename><surname>Morrison</surname></persName>
		</author>
		<idno type="DOI">10.1016/S2542-5196(19)30263-3</idno>
		<ptr target="https://doi.org/10.1016/S2542-5196(19)30263-3" />
	</analytic>
	<monogr>
		<title level="j">The Lancet Planetary Health</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="e32" to="e42" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Military factors influencing path planning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kozůbek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Flasar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dumišinec</surname></persName>
		</author>
		<idno type="DOI">10.5772/intechopen.86421</idno>
		<ptr target="https://doi.org/10.5772/intechopen.86421" />
	</analytic>
	<monogr>
		<title level="m">Path Planning for Autonomous Vehicle</title>
				<editor>
			<persName><forename type="first">U</forename><forename type="middle">Z A</forename><surname>Hamid</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Sezer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Zakaria</surname></persName>
		</editor>
		<meeting><address><addrLine>Rijeka</addrLine></address></meeting>
		<imprint>
			<publisher>IntechOpen</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Use of unmanned vehicles in search and rescue operations in forest fires: Advantages and limitations observed in a field trial</title>
		<author>
			<persName><forename type="first">S</forename><surname>Karma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zorba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pallis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Statheropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Balta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mikedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vamvakari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chalaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Xanthopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Statheropoulos</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ijdrr.2015.07.009</idno>
		<ptr target="https://doi.org/10.1016/j.ijdrr.2015.07.009" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Disaster Risk Reduction</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="307" to="312" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A theory of fairness, competition, and cooperation</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fehr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Schmidt</surname></persName>
		</author>
		<ptr target="http://www.jstor.org/stable/2586885" />
	</analytic>
	<monogr>
		<title level="j">The Quarterly Journal of Economics</title>
		<imprint>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="page" from="817" to="868" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A theory of reciprocity</title>
		<author>
			<persName><forename type="first">A</forename><surname>Falk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Fischbacher</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.geb.2005.03.001</idno>
		<ptr target="https://doi.org/10.1016/j.geb.2005.03.001" />
	</analytic>
	<monogr>
		<title level="j">Games and Economic Behavior</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="293" to="315" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Fairness in multi-agent systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>De Jong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tuyls</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbeeck</surname></persName>
		</author>
		<idno type="DOI">10.1017/S026988890800132X</idno>
	</analytic>
	<monogr>
		<title level="j">The Knowledge Engineering Review</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="153" to="180" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Fairness versus reason in the ultimatum game</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Nowak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Page</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sigmund</surname></persName>
		</author>
		<idno type="DOI">10.1126/science.289.5485.1773</idno>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">289</biblScope>
			<biblScope unit="page" from="1773" to="1775" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Evolution of fairness in the one-shot anonymous ultimatum game</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Rand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Tarnita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ohtsuki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Nowak</surname></persName>
		</author>
		<idno type="DOI">10.1073/pnas.1214167110</idno>
		<ptr target="https://www.pnas.org/doi/pdf/10.1073/pnas.1214167110" />
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="page" from="2581" to="2586" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Cost-efficient interventions for promoting fairness in the ultimatum game</title>
		<author>
			<persName><forename type="first">T</forename><surname>Cimpeanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Perret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">A</forename><surname>Han</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.knosys.2021.107545</idno>
		<ptr target="https://doi.org/10.1016/j.knosys.2021.107545" />
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">233</biblScope>
			<biblScope unit="page">107545</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Evolution of fairness in the divide-a-lottery game</title>
		<author>
			<persName><forename type="first">J.-Y</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-M</forename><surname>Lee</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41598-023-34131-w</idno>
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Fair allocation of indivisible goods and chores</title>
		<author>
			<persName><forename type="first">H</forename><surname>Aziz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Caragiannis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Igarashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Walsh</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2019/8</idno>
		<ptr target="https://doi.org/10.24963/ijcai.2019/8" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19</title>
				<meeting>the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19</meeting>
		<imprint>
			<publisher>International Joint Conferences on Artificial Intelligence Organization</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="53" to="59" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Hosseini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mammadov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wąs</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2305.01786</idno>
		<title level="m">Fairly allocating goods and (terrible) chores</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Possible fairness for allocating indivisible resources</title>
		<author>
			<persName><forename type="first">H</forename><surname>Aziz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Xing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23</title>
				<meeting>the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23<address><addrLine>Richland, SC</addrLine></address></meeting>
		<imprint>
			<publisher>International Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="197" to="205" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Double-deck multi-agent pickup and delivery: Multi-robot rearrangement in large-scale warehouses</title>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ma</surname></persName>
		</author>
		<idno type="DOI">10.1109/LRA.2023.3272272</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Robotics and Automation Letters</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="3701" to="3708" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Fairness in multi-agent sequential decision-making</title>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Shah</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2014/file/792c7b5aae4a79e78aaeda80516ae2ac-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">Z</forename><surname>Ghahramani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Welling</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Cortes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Lawrence</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Weinberger</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">27</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1910.14472</idno>
		<title level="m">Learning fairness in multi-agent systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Inequity aversion improves cooperation in intertemporal social dilemmas</title>
		<author>
			<persName><forename type="first">E</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Z</forename><surname>Leibo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tuyls</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Duéñez-Guzmán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>García Castañeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dunning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mckee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Koster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Roff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Graepel</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2018/file/7fea637fd6d02b8f0adf6f7dc36aed93-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Grauman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Cesa-Bianchi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">31</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Evolving intrinsic motivations for altruistic behavior</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fernando</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M</forename><surname>Czarnecki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Duéñez-Guzmán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Z</forename><surname>Leibo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS &apos;19</title>
				<meeting>the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS &apos;19<address><addrLine>Richland, SC</addrLine></address></meeting>
		<imprint>
			<publisher>International Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="683" to="692" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">Learning fair policies in decentralized cooperative multi-agent reinforcement learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zimmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Glanois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Siddique</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Weng</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2012.09421</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Grupen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Selman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Lee</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2106.05727</idno>
		<title level="m">Cooperative multi-agent fairness and equivariant policies</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Fairness through awareness</title>
		<author>
			<persName><forename type="first">C</forename><surname>Dwork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pitassi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Reingold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zemel</surname></persName>
		</author>
		<idno type="DOI">10.1145/2090236.2090255</idno>
		<ptr target="https://doi.org/10.1145/2090236.2090255" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS &apos;12</title>
				<meeting>the 3rd Innovations in Theoretical Computer Science Conference, ITCS &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="214" to="226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Counterfactual fairness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kusner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Loftus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Silva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS&apos;17</title>
				<meeting>the 31st International Conference on Neural Information Processing Systems, NIPS&apos;17<address><addrLine>Red Hook, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Curran Associates Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4069" to="4079" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Algorithmic decision making and the cost of fairness</title>
		<author>
			<persName><forename type="first">S</forename><surname>Corbett-Davies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pierson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Feller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Huq</surname></persName>
		</author>
		<idno type="DOI">10.1145/3097983.3098095</idno>
		<ptr target="https://doi.org/10.1145/3097983.3098095" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;17</title>
				<meeting>the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;17<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="797" to="806" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Fairness in criminal justice risk assessments: The state of the art</title>
		<author>
			<persName><forename type="first">R</forename><surname>Berk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Heidari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jabbari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kearns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roth</surname></persName>
		</author>
		<idno type="DOI">10.1177/0049124118782533</idno>
	</analytic>
	<monogr>
		<title level="j">Sociological Methods &amp; Research</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="3" to="44" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Srebro</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1610.02413</idno>
		<title level="m">Equality of opportunity in supervised learning</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Feldman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Friedler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Moeller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Scheidegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Venkatasubramanian</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.3756</idno>
		<title level="m">Certifying and removing disparate impact</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Avoiding discrimination through causal reasoning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Kilbertus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rojas-Carulla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Parascandolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Janzing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schölkopf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS&apos;17</title>
				<meeting>the 31st International Conference on Neural Information Processing Systems, NIPS&apos;17<address><addrLine>Red Hook, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Curran Associates Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="656" to="666" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Fair inference on outcomes</title>
		<author>
			<persName><forename type="first">R</forename><surname>Nabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shpitser</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI&apos;18/IAAI&apos;18/EAAI&apos;18</title>
				<meeting>the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI&apos;18/IAAI&apos;18/EAAI&apos;18</meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<title level="m" type="main">Practical Bayesian optimization of machine learning algorithms</title>
		<author>
			<persName><forename type="first">J</forename><surname>Snoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Adams</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1206.2944</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Evolutionary algorithms: A critical review and its future prospects</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Vikhar</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICGTSPICC.2016.7955308</idno>
	</analytic>
	<monogr>
		<title level="m">2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="261" to="265" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">A sparse sampling algorithm for near-optimal planning in large Markov decision processes</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kearns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Mansour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Y</forename><surname>Ng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI&apos;99</title>
				<meeting>the 16th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI&apos;99<address><addrLine>San Francisco, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers Inc</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="1324" to="1331" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
