<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1017/S026988890800132X</article-id>
      <title-group>
        <article-title>Using Protected Attributes to Consider Fairness in Multi-Agent Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele La Malfa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jie M. Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Luck</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elizabeth Black</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>UKRI Centre for Doctoral Training in Safe and Trusted AI, King's College London</institution>
          ,
          <addr-line>London, WC2B 4BG</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Sussex</institution>
          ,
          <addr-line>Brighton, BN1 9RH</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Fairness in Multi-Agent Systems (MAS) has been extensively studied, particularly in reward distribution among agents in scenarios such as goods allocation, resource division, lotteries, and bargaining systems. Fairness in MAS depends on various factors, including the system's governing rules, the behaviour of the agents, and their characteristics. Yet, fairness in human society often involves evaluating disparities between disadvantaged and privileged groups, guided by principles of Equality, Diversity, and Inclusion (EDI). Taking inspiration from the work on algorithmic fairness, which addresses bias in machine learning-based decision-making, we define protected attributes for MAS as characteristics that should not disadvantage an agent in terms of its expected rewards. We adapt fairness metrics from the algorithmic fairness literature, namely demographic parity, counterfactual fairness, and conditional statistical parity, to the multi-agent setting, where self-interested agents interact within an environment. These metrics allow us to evaluate the fairness of MAS, with the ultimate aim of designing MAS that do not disadvantage agents based on protected attributes.</p>
      </abstract>
      <kwd-group>
        <kwd>Fairness</kwd>
        <kwd>bias</kwd>
        <kwd>Multi-Agent Systems (MAS)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Multi-Agent Systems (MAS) consist of agents interacting with each other and their surrounding
environment to achieve their individual or shared goals. The achievement of an agent’s goals may
depend on the actions it takes, the actions of other agents, the environment they are situated in, and
the rules that govern the MAS. Similarly, fairness in MAS depends on multiple factors. Fairness can be
influenced by agents’ decision-making processes, as evidenced by research in reinforcement learning
focused on developing fair and efficient policies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It can also hinge on mechanism design, as seen
in scenarios like goods allocation games [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] or cake-cutting problems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], where rules can ensure fair
reward distribution among agents. Additionally, fairness can be affected by factors such as an agent’s
utility [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] or their priority in accessing resources [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], among others.
      </p>
      <p>
        In human societies, fairness is often defined in terms of characteristics that should not disadvantage
an individual or group, such as age, race, disability or gender. For example, in the UK Equality Act 2010
these are identified as protected characteristics, and UK law states that individuals cannot be discriminated
against on the basis of these. These protected characteristics typically define subgroups of the population
who have historically been disadvantaged in particular situations, such as age discrimination in the
workplace, unequal access to healthcare, barriers in education for people with disabilities, and gender
disparities in political representation, among others. Driven by the bias that often exists in the training
data as a result of these systemic inequalities, machine learning approaches often produce biased results
(e.g., discrimination in credit markets [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or justice [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ] algorithms); there is a growing body of work
(often referred to as algorithmic fairness) that aims to identify and mitigate such bias by applying a
range of fairness metrics that compare the outcomes achieved by what is identified as advantaged and
disadvantaged subgroups of the population (see, e.g., [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] for a review).
      </p>
      <p>Taking inspiration from the UK Equality Act 2010, we define the concept of protected attributes within
a multi-agent system: any attributes that it has been deemed should not disadvantage an
agent in terms of its performance within that system. For example, consider a multi-agent setting that
includes both artificial agents in the form of autonomous vehicles and human agents who drive their
own cars; we may want to ensure that the human agents are not disadvantaged in such a setting. We
adapt the following fairness metrics from the algorithmic fairness literature to our multi-agent setting.
• Demographic parity – Agents with and without protected attributes should obtain the same
expected rewards.
• Counterfactual fairness – In both a factual and a counterfactual scenario, where the only difference
is whether the protected attributes hold for an agent, agents should obtain the same expected
rewards.
• Conditional statistical parity – Within a group of agents characterised by a legitimate factor
influencing rewards, agents with and without protected attributes should obtain the same expected
rewards.</p>
      <p>
        We are able to evaluate different MAS according to these metrics, with the ultimate aim of designing
fairer MAS (for example, by configuring the environment in which agents operate to optimise for
fairness). Such an approach is inspired by other works outside MAS, such as designing accessible
buildings [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] or safe urban environments [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Further studies explore environment configurations
to optimise rescue operations and autonomous vehicle planning [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ]. However, none of them deal
with fairness. Hence, we hope this research can offer valuable insights into domains beyond MAS.
      </p>
      <p>To summarise, the contributions of this paper are as follows. We introduce protected attributes to
MAS – characteristics that should not impact an agent’s expected rewards, all other things being equal.
We adapt the concepts of demographic parity, counterfactual fairness and conditional statistical parity
from the algorithmic fairness literature to the MAS context. The future aim of this work is to use these
metrics to evaluate and optimise MAS for fairness.</p>
      <p>Motivating example. In future urban environments, we may see vehicles operated by humans and
vehicles operated by AI undertaking journeys within the same road network. These human and AI
agents navigate city streets to reach their destinations, with the rewards they receive dependent on
things like time taken and cost of journey. AI-driven vehicles excel by analysing traffic data in real-time,
optimising routes, and communicating with other AI vehicles, providing them with an advantage
over the human agents in the system, who are generally less efficient at route optimisation and less
well-equipped to coordinate with other road users. To mitigate this advantage of AI agents, we might
consider altering the road infrastructure, for example, by providing dedicated lanes for human-controlled
vehicles.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Fairness has attracted the attention of Game Theory and MAS researchers for decades alongside
psychologists and economists [17, 18, 19, 20]. Factors such as the rules that govern the system can
influence fairness in MAS. For instance, this can be seen in the Ultimatum Game, where fairness is
influenced by the dynamics between proposers and responders [21, 22, 23]. In goods allocation or
cake-cutting games, the rules depend on the type of good being allocated, for example, whether they
are divisible or indivisible, goods or chores [24, 25], and fairness depends on the distribution of goods
among the agents [
        <xref ref-type="bibr" rid="ref2 ref3 ref7">2, 3, 7, 26, 27</xref>
        ].
      </p>
      <p>Agent behaviour can also influence fairness. Fair behaviours often balance the rewards collected by
the community and individuals. For example, Zhang and Shah [28] propose a minimum reward for
the worst-performing agent while improving the overall rewards of the whole community of agents.
However, fairness and reward optimisation can be in tension, and compromises must be made on
one side or the other. Jiang and Lu [29] propose a two-step solution consisting of a single policy for
each agent based on fair and optimal rewards, with a controller agent who decides which sub-policies
to implement to maximise environmental rewards and fairness. Other works [30, 31, 32] implement fair
optimisation policies within cooperative multi-agent systems, aiming to integrate individualistic and
altruistic behaviours. Grupen et al. [33] introduce a new measure of team fairness, demonstrating how
maximising team rewards in cooperative MAS can lead to unfair outcomes for individual agents.</p>
      <p>In contrast to these works, which do not distinguish agents that may be particularly disadvantaged
within a system, we consider fairness across agents who do or do not possess protected attributes. We
adapt demographic parity [34, 35], counterfactual fairness [35] and conditional statistical parity [36]
fairness metrics from the algorithmic fairness literature to the MAS setting.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Preliminaries</title>
      <p>
        A multi-agent system consists of multiple decision-making agents who act and interact in an
environment to achieve their goals. A multi-agent system S = (E, e_0, Ac, Ag, At, PA, τ) is
characterised by: the set of possible environment states E; the starting state e_0 ∈ E; the set of available
actions that may be performed by an agent in the environment, Ac (including a null action); a population
Ag = {ag_1, . . . , ag_n} of agents; the attributes At = {at_1, . . . , at_m} available to the agents in Ag; the
protected attributes PA ⊂ {at_1, . . . , at_m}; and the non-deterministic state transformer function
τ : E × Ac_1 × . . . × Ac_n → E × [0, 1], which specifies the probability distribution over the
possible resulting states that can occur when each agent in the population performs an action (where the
possible null action reflects that an agent chooses not to act).
      </p>
      <p>
        An agent ag_i within a multi-agent system (E, e_0, Ac, Ag, At, PA, τ) (where ag_i ∈ Ag) is defined as
a tuple (α_i, Ac_i, π_i, ρ_i) where: the attribute evaluation function α_i : At → {0, 1} specifies which
attributes hold true for the agent; Ac_i ⊆ Ac are the actions available to the agent; the non-deterministic
policy π_i : E → Ac_i × [0, 1] specifies how the agent will act in any given state (represented as a
probability distribution over the possible actions); and the reward function ρ_i : E × E → ℝ specifies
the reward the agent receives for moving between two states.
      </p>
      <p>A possible run within a multi-agent system S = (E, e_0, Ac, Ag, At, PA, τ) (where Ag consists of
n agents) is denoted r = (e_0, (a_1^1, . . . , a_n^1), e_1, . . . , (a_1^k, . . . , a_n^k), e_k) where: for each
ag_i ∈ Ag and for each j such that 0 &lt; j ≤ k, (a_i^j, p) ∈ π_i(e_{j−1}) and p &gt; 0; and for each j such that
0 ≤ j &lt; k, (e_{j+1}, q) ∈ τ(e_j, (a_1^{j+1}, . . . , a_n^{j+1})) and q &gt; 0. The set of all possible runs
within a multi-agent system S is denoted ℛ_S.</p>
      <p>Let r = (e_0, (a_1^1, . . . , a_n^1), e_1, . . . , (a_1^k, . . . , a_n^k), e_k) ∈ ℛ_S where
S = (E, e_0, Ac, Ag, At, PA, τ). We can determine the probability that r will occur, denoted P(r | S), as follows:
P(r | S) = ( ∏_{j=0}^{k−1} ∏_{i=1}^{n} p_i^{j+1} ) · ( ∏_{j=0}^{k−1} q^{j+1} ),
where (a_i^{j+1}, p_i^{j+1}) ∈ π_i(e_j) and (e_{j+1}, q^{j+1}) ∈ τ(e_j, (a_1^{j+1}, . . . , a_n^{j+1})).</p>
      <p>For a run r = (e_0, (a_1^1, . . . , a_n^1), e_1, . . . , (a_1^k, . . . , a_n^k), e_k), the reward achieved by an agent ag_i is
R(ag_i, r) = ∑_{j=1}^{k} ρ_i(e_{j−1}, e_j).</p>
      <p>The expected reward of an agent ag_i within a system S, denoted ER(ag_i, S), is thus
ER(ag_i, S) = ∑_{r ∈ ℛ_S} R(ag_i, r) · P(r | S).</p>
      <p>Motivating example, continued. The city traffic consists of a population of cars, each capable of
steering, accelerating or braking. Cars also possess attributes like speed or safety features. Cars are
either driven by AI or humans, and we consider being driven by humans to be a protected attribute of
cars. AI-driven cars can find optimal paths to reach their destination more efficiently than human-driven
ones. If we consider agents reaching a hospital, we can foresee fairness problems, as AI-driven cars
would be advantaged. When the cars act, the environment changes state with a specific probability.
Also, each car obtains a reward when reaching its destination. A car’s policy is a decision rule based on
the state of the crossroads.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Fairness in MAS</title>
      <p>We define fairness by comparing, in different ways, the rewards gathered by individuals or groups
of agents possessing and not possessing protected attributes. We adapt demographic parity [34, 35],
counterfactual fairness [35] and conditional statistical parity [36] to MAS.</p>
      <p>Demographic parity in MAS is achieved when the expected rewards of agents are not influenced by
whether or not they possess protected attributes, all else being equal.</p>
      <p>Definition 1 (Demographic Parity). Let S = (E, e_0, Ac, Ag, At, PA, τ) be a system and let
pat ∈ PA be the protected attribute under consideration. Demographic parity is satisfied for pat in S if and only
if: for all ag_i, ag_j ∈ Ag, if α_i(pat) = 1, α_j(pat) = 0, and for all at′ ∈ At ∖ {pat}, α_i(at′) = α_j(at′),
then ER(ag_i, S) = ER(ag_j, S).</p>
      <p>Where demographic parity is not satisfied for a particular protected attribute, we can measure the extent
to which this is the case, denoted DP(pat, S), as follows:
DP(pat, S) = ∑ |ER(ag_i, S) − ER(ag_j, S)| (1)
where the sum is over all ag_i, ag_j ∈ Ag such that α_i(pat) = 1, α_j(pat) = 0,
and for all at′ ∈ At ∖ {pat}, α_i(at′) = α_j(at′).
Note that if demographic parity holds for pat in S then DP(pat, S) = 0.</p>
      <p>Counterfactual fairness in MAS is achieved when the expected rewards of agents remain the same in
both a factual and a counterfactual world, where in the latter, we change the protected attribute of the
agents while keeping all other elements the same.</p>
      <p>Definition 2 (Counterfactual Fairness). Let S = (E, e_0, Ac, Ag, At, PA, τ) be a system where
Ag = {(α_1, Ac_1, π_1, ρ_1), . . . , (α_n, Ac_n, π_n, ρ_n)}, and let pat ∈ PA be the protected attribute
under consideration. Let S′ = (E, e_0, Ac, Ag′, At, PA, τ) be the counterfactual of S such that Ag′ =
{(α′_1, Ac_1, π_1, ρ_1), . . . , (α′_n, Ac_n, π_n, ρ_n)} where for all i such that 1 ≤ i ≤ n: if α_i(pat) = 0,
then α′_i(pat) = 1; if α_i(pat) = 1, then α′_i(pat) = 0; and for all at ∈ At ∖ {pat},
α′_i(at) = α_i(at). Counterfactual fairness is satisfied for pat in S if and only if: for all ag_i =
(α_i, Ac_i, π_i, ρ_i) ∈ Ag, for all ag′_i = (α′_i, Ac_i, π_i, ρ_i) ∈ Ag′, ER(ag_i, S) = ER(ag′_i, S′).</p>
      <p>Where counterfactual fairness is not satisfied, we can measure the extent to which this is the case, denoted
CF(pat, S), as follows:
CF(pat, S) = ∑ |ER(ag_i, S) − ER(ag′_i, S′)| (2)
where the sum is over all ag_i ∈ Ag such that α_i(pat) = 1.
Note that if counterfactual fairness holds for pat in S then CF(pat, S) = 0.</p>
      <p>Conditional statistical parity in MAS is achieved when the expected rewards of agents are not
influenced by whether or not they possess protected attributes when conditioning on a legitimate factor,
assuming all other elements are the same. A legitimate factor is an attribute that has been identified as
one that may legitimately affect an agent’s reward.</p>
      <p>Definition 3 (Conditional Statistical Parity). Let S = (E, e_0, Ac, Ag, At, PA, τ) be a system, let
L ⊆ (At ∖ PA) be the set of legitimate factors, and let pat ∈ PA be the protected attribute under
consideration. Conditional statistical parity is satisfied for pat with L in S if and only if: for all
ag_i, ag_j ∈ Ag, if α_i(pat) = 1, α_j(pat) = 0, α_i(l) = α_j(l) = 1 for all l ∈ L, and for all
at′ ∈ At ∖ {pat}, α_i(at′) = α_j(at′), then ER(ag_i, S) = ER(ag_j, S).</p>
      <p>Where conditional statistical parity is not satisfied, we can measure the extent to which this is the case,
denoted CSP(pat, L, S), as follows:
CSP(pat, L, S) = ∑ |ER(ag_i, S) − ER(ag_j, S)| (3)
where the sum is over all ag_i, ag_j ∈ Ag such that α_i(pat) = 1, α_j(pat) = 0,
α_i(l) = α_j(l) = 1 for all l ∈ L, and for all at′ ∈ At ∖ {pat}, α_i(at′) = α_j(at′).
Note that if conditional statistical parity holds for pat with L in S then CSP(pat, L, S) = 0.</p>
      <p>Conditional statistical parity is demographic parity within subsets of the population characterised
by legitimate factors. For example, in algorithmic fairness, such a metric is used to verify whether the
probability of predicting re-offence for male and female prisoners is the same for similar age groups,
which is the legitimate factor [37].</p>
      <p>Motivating example, continued. In the city traffic example, demographic parity would be achieved
if the sum of the expected rewards obtained by AI-driven cars and human-driven cars were equal,
all other things being equal. In other words, the protected attribute should not affect the expected
rewards gathered by the human-driven cars compared to the AI-driven ones. Counterfactual fairness
is achieved if the sum of the expected rewards of the cars remains the same in both a factual and a
counterfactual world, where in the latter, agents possess the protected attribute (i.e., cars are driven by
humans) while keeping all other factors constant. Conditional statistical parity is achieved if the sum of
the cars’ expected rewards is not influenced by whether or not they possess protected attributes when
conditioned on a legitimate factor, e.g., a certain range of speed capacity of the cars, assuming all other
elements are the same.</p>
      <p>We can use the metrics above to measure the fairness of different systems. Our ultimate goal is to
optimise systems for these different fairness measures, for example by adjusting the starting state of
the environment, or the way the environment responds to the agents’ actions.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and future work</title>
      <p>This paper is a first step towards ensuring that certain sub-groups of agents are not disadvantaged
in multi-agent systems. We identify protected attributes, which are characteristics that should not
disadvantage an agent in terms of its expected rewards. Inspired by algorithmic fairness, we adapt
demographic parity, counterfactual fairness and conditional statistical parity to analyse fairness in MAS.
Our metrics assess fairness from various perspectives in any multi-agent system where expected rewards
are applicable. Additional metrics from the algorithmic fairness literature, such as equal opportunity,
equalised odds [38], disparate impact [39], or other metrics based on causal reasoning [40, 41] could be
adapted to this setting to capture other aspects of fairness. Our methodology applies to MAS involving
both human and AI agents, as motivated by our example. It could also be used to improve the fairness
of human societies by modelling these as multi-agent systems and seeing how changes to the system
affect the various fairness metrics defined here.</p>
      <p>In future work, we plan to analyse these fairness metrics experimentally in different settings, both
competitive and cooperative, to find system configurations that enhance fairness. We will use techniques
such as Bayesian optimisation [42], evolutionary algorithms [43] and sparse sampling techniques [44]
to try to identify system configurations that optimise for the different fairness metrics.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was supported by UK Research and Innovation [grant number EP/S023356/1], in the UKRI
Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (www.safeandtrustedai.org).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Gajane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tavakol</surname>
          </string-name>
          , G. Fletcher,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Pechenizkiy, Survey on fair reinforcement learning:</article-title>
          <source>Theory and practice</source>
          ,
          <year>2022</year>
          . arXiv:
          <volume>2205</volume>
          .
          <fpage>10032</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Amanatidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Aziz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Birmpas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Filos-Ratsikas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Moulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Voudouris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Fair division of indivisible goods: Recent progress and open questions</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>322</volume>
          (
          <year>2023</year>
          )
          <article-title>103965</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S000437022300111X. doi:https://doi.org/10.1016/j.artint.
          <year>2023</year>
          .
          <volume>103965</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Procaccia</surname>
          </string-name>
          ,
          <article-title>Cake cutting: not just child's play</article-title>
          ,
          <source>Commun. ACM</source>
          <volume>56</volume>
          (
          <year>2013</year>
          )
          <fpage>78</fpage>
          -
          <lpage>87</lpage>
          . URL: https://doi.org/10.1145/2483852.2483870. doi:
          <volume>10</volume>
          .1145/2483852.2483870.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>U.</given-names>
            <surname>Endriss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Maudet</surname>
          </string-name>
          ,
          <article-title>Welfare engineering in multiagent systems</article-title>
          , in: A.
          <string-name>
            <surname>Omicini</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Petta</surname>
          </string-name>
          , J. Pitt (Eds.), Engineering Societies in the Agents World IV, Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2004</year>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bertsimas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Farias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Trichakis</surname>
          </string-name>
          ,
          <article-title>The price of fairness</article-title>
          ,
          <source>Operations Research</source>
          <volume>59</volume>
          (
          <year>2011</year>
          )
          <fpage>17</fpage>
          -
          <lpage>31</lpage>
          . doi:
          <volume>10</volume>
          .1287/opre.1100.0865.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>De Jong</surname>
          </string-name>
          , K. Tuyls,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbeeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Roos</surname>
          </string-name>
          , Priority awareness:
          <article-title>Towards a computational model of human fairness for multi-agent systems</article-title>
          , in: K.
          <string-name>
            <surname>Tuyls</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Nowe</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>Guessoum</surname>
          </string-name>
          , D. Kudenko (Eds.),
          <source>Adaptive Agents and Multi-Agent Systems III. Adaptation and Multi-Agent Learning</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2008</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Bu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <article-title>Fair division with prioritized agents</article-title>
          ,
          <source>in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence</source>
          , AAAI'23/IAAI'23/EAAI'23, AAAI Press,
          <year>2023</year>
          . URL: https: //doi.org/10.1609/aaai.v37i5.25688. doi:
          <volume>10</volume>
          .1609/aaai.v37i5.
          <fpage>25688</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goldsmith-Pinkham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ramadorai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Walther</surname>
          </string-name>
          ,
          <article-title>Predictably unequal? the efects of machine learning on credit markets</article-title>
          ,
          <source>The Journal of Finance</source>
          <volume>77</volume>
          (
          <year>2022</year>
          )
          <fpage>5</fpage>
          -
          <lpage>47</lpage>
          . doi:https://doi. org/10.1111/jofi.13090.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <article-title>[9] “Fair” Risk Assessments: A Precarious Approach for Criminal Justice Reform</article-title>
          , Stockholm, Sweden,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Johndrow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lum</surname>
          </string-name>
          ,
          <article-title>An algorithm for removing sensitive information: Application to raceindependent recidivism prediction</article-title>
          ,
          <source>The Annals of Applied Statistics</source>
          <volume>13</volume>
          (
          <year>2017</year>
          ). doi:
          <volume>10</volume>
          .1214/ 18-
          <fpage>AOAS1201</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hutchinson</surname>
          </string-name>
          , M. Mitchell,
          <article-title>50 years of test (un)fairness: Lessons for machine learning</article-title>
          ,
          <source>in: Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          , FAT* '
          <volume>19</volume>
          ,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          , p.
          <fpage>49</fpage>
          -
          <lpage>58</lpage>
          . URL: https://doi.org/10.1145/ 3287560.3287600. doi:
          <volume>10</volume>
          .1145/3287560.3287600.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          , E. Potash,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. D'Amour</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Lum</surname>
          </string-name>
          , Algorithmic fairness: Choices, assumptions, and definitions,
          <source>Annual Review of Statistics and Its Application</source>
          (
          <year>2021</year>
          ). URL: https://api.semanticscholar.org/CorpusID:228893833.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zallio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Clarkson</surname>
          </string-name>
          ,
          <article-title>Inclusion, diversity, equity and accessibility in the built environment: A study of architectural design practice</article-title>
          ,
          <source>Building and Environment</source>
          <volume>206</volume>
          (
          <year>2021</year>
          )
          <article-title>108352</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S0360132321007496. doi:https://doi.org/ 10.1016/j.buildenv.
          <year>2021</year>
          .
          <volume>108352</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Thompson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Wijnands</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A. A</given-names>
            <surname>Nice</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. DPA</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Silver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nieuwenhuijsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rayner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schofield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hariharan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. N.</given-names>
            <surname>Morrison</surname>
          </string-name>
          ,
          <article-title>A global analysis of urban design types and road transport injury: an image processing study</article-title>
          ,
          <source>The Lancet Planetary Health</source>
          <volume>4</volume>
          (
          <year>2020</year>
          )
          <fpage>e32</fpage>
          -
          <lpage>e42</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2542519619302633. doi:https: //doi.org/10.1016/S2542-
          <volume>5196</volume>
          (
          <issue>19</issue>
          )
          <fpage>30263</fpage>
          -
          <lpage>3</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kozůbek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Flasar</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Dumišinec</surname>
          </string-name>
          ,
          <article-title>Military factors influencing path planning</article-title>
          , in: U.
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Hamid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sezer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Zakaria</surname>
          </string-name>
          (Eds.), Path Planning for Autonomous Vehicle, IntechOpen, Rijeka,
          <year>2019</year>
          . URL: https://doi.org/10.5772/intechopen.86421. doi:
          <volume>10</volume>
          .5772/intechopen.86421.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Karma</surname>
          </string-name>
          , E. Zorba, G. Pallis, G. Statheropoulos, I. Balta,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mikedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vamvakari</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Pappa,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>