<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Toward a cost-effective fairness-aware ML lifecycle</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elisa Bussi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Basso</string-name>
          <email>andrea.basso@ius.to</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Maran</string-name>
          <email>elena.maran@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudia Chiavarino</string-name>
          <email>claudia.chiavarino@ius.to</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simone Severini</string-name>
          <email>severini.simone@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Istituto Universitario Salesiano Torino Rebaudengo, Piazza Conti di Rebaudengo</institution>
          ,
          <addr-line>22, 10155 Turin</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Modulos AG</institution>
          ,
          <addr-line>Technoparkstr. 1, 8005 Zurich</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Machine learning technology has a profoundly transformative impact on innovation, presenting challenges to existing regulatory frameworks. Trustworthiness principles play a pivotal role in both the EU AI Act and other relevant regulations, such as the GDPR. In a broader context, trustworthiness, which will be an essential requirement for AI systems, necessitates the integration of fairness considerations throughout the entire ML lifecycle. This integration is a complex yet crucial endeavor to ensure the responsible development and deployment of AI systems. The challenges associated with incorporating fairness into machine learning models arise not only from the lack of standardized fairness constraints but also from the hesitancy among practitioners to adopt existing fairness measures and from the cost associated with including fairness in the process. Overcoming these challenges is essential for building AI systems that are genuinely trustworthy, compliant with regulations, and ultimately beneficial to society.</p>
      </abstract>
      <kwd-group>
        <kwd>Machine Learning</kwd>
        <kwd>Fairness</kwd>
        <kwd>Machine Learning Lifecycle</kwd>
        <kwd>Algorithmic Bias</kwd>
        <kwd>Ethical AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Machine Learning has become one of the most
promising disruptive innovations of the last
decade: it has been estimated that AI will reach a
worldwide market value of 1,500 billion USD by
2030 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. ML deployment can be found in core
sectors such as finance [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], healthcare [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and
human resources [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], and so is set to impact
people’s lives rapidly and deeply. The rise of
machine learning has led to increasing concerns
about the propensity of these systems to amplify
existing social biases [
        <xref ref-type="bibr" rid="ref13 ref34">13, 34</xref>
        ], resulting in unfair
treatment of entire populations [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Unfortunately,
most of the models available in the industry today
are not designed to account for fairness [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
Numerous examples exist of systems
that discriminate based on protected
attributes like race [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], gender [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], or a
combination of both [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], as well as any of the
proxy features correlated with protected
attributes. The need to address these issues has
given rise to a new research field called
algorithmic fairness [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], focused on mitigating
bias and ensuring fair decision-making systems.
While fairness in AI has long been recognized as
an important concept [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], its significance in
machine learning has grown exponentially due to
the integration of trustworthiness principles in
modern legislation such as GDPR [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and the
upcoming EU AI Act [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However,
incorporating fairness into the development
process of machine learning is complex. Although
numerous fairness metrics and tools have been
developed to provide solutions, applying them in
real-world contexts often proves difficult.
These challenges arise from a lack of adaptability
to institutional realities [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], as well as
computationally expensive deployment [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Addressing these complexities is crucial in
achieving true fairness in AI systems and meeting
modern legal requirements.
      </p>
      <p>
        When it comes to development, ML practitioners
often face challenges in making fair decisions
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], identifying potential risks in their specific
context and domain area [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], and integrating the
available toolkits into their existing processes, as
they find these toolkits difficult to
understand and limited in their coverage
of the ML lifecycle [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Additionally, integrating these
tools into organizational structures can prove
challenging, with constraints that undermine practitioners'
motivation to consider fairness in their work [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
However, maintaining the status quo on
fairness will soon be untenable. With the
enactment of AI-related legislation [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ],
compliance with fairness regulations will become
a top priority. Moreover, incorporating fairness
can be a strategic business practice, offering
numerous benefits such as reduced risk of failure,
reinforced brand reputation, and enhanced user
trust [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ]. Embracing fairness in AI systems,
therefore, not only fulfills legal obligations but
also fosters long-term organizational success. The
cost of embracing fairness must therefore be treated
as a key element in driving large-scale adoption
of fairness-aware ML lifecycles.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The need for a new Machine Learning Lifecycle</title>
      <p>
        The development of a Machine Learning (ML)
lifecycle involves a complex sociotechnical
process that consists of several phases, each with
its own set of stakeholders. While there is no
standard or agreed-upon process for ML
production and release, most of the ML lifecycles
deployed today include similar phases [
        <xref ref-type="bibr" rid="ref35 ref9">9, 35</xref>
        ],
although some valuable phases, such as the
inclusion of fairness metrics in the pipeline, are
underrepresented. Such metrics, while
measurable and mathematically defined, are of
limited efficacy [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. They do not consider the
individual choices made by stakeholders
throughout the project, which can influence the
outcome of the model and hinder its long-term
efficacy [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. The ML development lifecycle, in
fact, involves a series of decisions that, apart from
algorithmic bias management, can lead to
unintended consequences [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Understanding
how each step influences the decision-making
process of pipeline workers may help address the
most harmful downstream consequences more
directly and meaningfully [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. For instance,
incomplete documentation detailing the choices
made in the lifecycle [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] can lead to biased
decision-making and the loss of accountability for
the choices made [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
      </p>
      <p>
        In order to prove our point effectively, let us
reflect on a real use case scenario [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] where a
fairness-aware intervention encompassing the
whole ML lifecycle can substantially benefit a
business practice. One of the sectors in which ML
deployment is attracting particular attention is loan
eligibility [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], whose problem statement perfectly
fits that of a prediction task. As decisions about
granting a loan to a customer are traditionally based
on variables like credit history or income, these
can be turned into features to train a model that
predicts from a customer profile whether the
customer will repay a loan if granted one. Three main
outcomes are possible in relation to a credit
decision: a creditworthy applicant receives the
loan and can repay it; a creditworthy applicant is
refused the loan; an applicant receives the loan
and defaults after its disbursement.
      </p>
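      <p>
        Framed as a binary classification task, these three
outcomes map directly onto confusion-matrix cells. The
minimal sketch below, with purely hypothetical arrays
and a positive class meaning "will repay", makes the
mapping explicit:
      </p>
      <preformat>
import numpy as np

# Hypothetical ground truth (1 = applicant would repay) and decision (1 = loan granted)
would_repay = np.array([1, 1, 0, 1, 0, 1])
grant_loan = np.array([1, 0, 1, 1, 0, 1])

granted_and_repaid = np.sum(np.logical_and(grant_loan == 1, would_repay == 1))    # true positives
refused_but_solvent = np.sum(np.logical_and(grant_loan == 0, would_repay == 1))   # false negatives
granted_but_defaulted = np.sum(np.logical_and(grant_loan == 1, would_repay == 0)) # false positives

print(granted_and_repaid, refused_but_solvent, granted_but_defaulted)
      </preformat>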
      <p>
        This use case has been chosen because the use
of ML in this sector without due fairness
considerations has already produced
episodes of discrimination [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which stem from a
specific kind of harm: allocative harm [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The
social and economic cost of a mistake in this
sector can be detrimental for the consumer, and
sometimes for the company itself: assuming the
AI system operates in a country with an official
credit score system aimed at determining the
likelihood of an applicant meeting her financial
obligations, a significant credit score
reduction would affect applicants who receive the
loan but subsequently default (false positives),
whilst a credit score reduction of a lesser extent
would apply to those applicants who are denied
access to the loan but would have repaid it (false
negatives). The latter group is also likely to see a
long-term deterioration of their credit score, as
being denied access to financial resources
may impair their ability
to meet future financial obligations. Basically,
what makes the problem critical is that harm can
go both ways: on one hand, using a traditional ML
lifecycle (using historical data, maximizing
accuracy etc.) to build a loan eligibility system
poses the risk of producing more false negatives;
if the people affected were to eventually resort to
legal means, revealing that the decision made is
only explainable by discrimination on protected
features (like gender or race) and nothing
more, the reputation of the business owner would
be at risk, and customers would become more
distrustful of the AI. On the other hand, though,
some selection has to be made so as not to drive
the company itself into default; moreover, there
could be a situation where an ML prediction may
actually prove correct despite being only
incidentally related to a protected variable
(e.g. a woman not being granted a loan has nothing
to do with her gender), but where the company is
still called upon to explain its decision process.
      </p>
      <p>In conclusion, the problem to be solved here is
how to find the optimal trade-off among
profitability, fairness, and reputational concerns,
while also accounting for current work practices
and stakeholder engagement. For these reasons,
we posit that the ML lifecycle as it exists today is
not cut out to meet these requirements: a new,
different approach is needed, one that aims at
satisfying the priorities of all involved
stakeholders, including those ultimately impacted
by the decision (the customers), thus leading to
better societal outcomes whilst ensuring
transparency on the impact of the decisions on the
profitability of the business.</p>
    </sec>
    <sec id="sec-5">
      <title>3. Next Generation ML Lifecycle: recent approaches</title>
      <p>Recently, several approaches have been proposed
to rethink the ML lifecycle.</p>
      <p>
        The first one is called Data-Centric AI [
        <xref ref-type="bibr" rid="ref27 ref6">6, 27</xref>
        ], an
approach that provides a granular understanding
of how sources of noise, error, and bias in data
impact model performance both in terms of
fairness and accuracy. By creating an intrinsic tie
between data and model results, and operating a
data-model feedback loop, Data-Centric AI
ensures that a long-lasting and sustainable
fairness strategy is achieved. This methodology
brings several stakeholders into the process, creates
accountability amongst data owners, and allows
one to understand specific problems in single data
samples which are impacting the model’s fairness.
Considering the consumer lending problem, the
initial step is to agree on a fairness measure that is
suitable for the application under consideration,
based on the type of harm that ensues (in this
case, allocative); the measure that best reflects
the harms anticipated above for the AI system
under scrutiny is the equalized odds measure [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
Once the appropriate fairness measure has been
identified, the Data-Centric AI approach entails
identifying the individual contribution of each
observation in the dataset to the quantitative
performance of the model with respect to this
measure; the results of this procedure allow
practitioners to select the best strategy for
improving overall fairness (e.g. improving the
distribution of dataset features), as sketched below.
      </p>
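      <p>
        As a minimal illustration of this step, the sketch
below computes the equalized odds difference and a
simple leave-one-out estimate of each observation's
contribution to it; the function names are our own,
and the leave-one-out procedure is a deliberately
naive stand-in for the attribution methods used in
practice:
      </p>
      <preformat>
import numpy as np

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap across groups in TPR and FPR (0 means equalized odds holds)."""
    gaps = []
    for label in (1, 0):  # label 1 yields the TPR gap, label 0 the FPR gap
        rates = []
        for g in np.unique(group):
            mask = np.logical_and(group == g, y_true == label)
            rates.append(y_pred[mask].mean())  # P(prediction = 1 | group, true label)
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def sample_contributions(y_true, y_pred, group):
    """Leave-one-out change in the fairness gap attributable to each observation."""
    base = equalized_odds_diff(y_true, y_pred, group)
    return np.array([
        base - equalized_odds_diff(np.delete(y_true, i),
                                   np.delete(y_pred, i),
                                   np.delete(group, i))
        for i in range(len(y_true))
    ])
      </preformat>
      <p>
        Observations with a large positive contribution are
the ones whose curation (relabeling, reweighting, or
removal) most directly reduces the fairness gap.
      </p>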
      <p>
        The second one is called Z-Inspection [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. The
Z-Inspection process is a versatile tool that can be
used to evaluate and audit AI systems before they
go into production. Its primary purpose is to raise
awareness among relevant stakeholders about the
potential ethical, social, technical, and legal risks
associated with implementing an AI system.
Z-Inspection is inspired by the seven requirements
outlined in the “Framework for Trustworthy AI”
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Z-Inspection brings together two distinct
approaches into a single process. The first is a
holistic approach that aims to consider the entire
sociotechnical system. The second is an analytical
approach that considers each part of the problem
domain in greater detail. The outcome is a
multiperspective view that is capable of assessing,
discussing, and resolving the tensions that arise
during the assessment process through a set of
recommendations.
      </p>
      <p>To illustrate how it operates, let us consider the
example of consumer lending. The first phase of
the Z-Inspection process involves forming an
interdisciplinary team of investigators, including
engineers, ethicists, case owners, and company
practitioners, to define the boundaries of the
assessment. In the second phase, each team
identifies all possible ethical and legal issues and
maps them to the trustworthy AI ethical values
and requirements, such as protected features and
discrimination danger. Finally, in the third phase,
the team addresses ethical tensions and solves
them whenever possible, such as recommending a
specific fairness measure over other ones
[cf. 28]. One of the main advantages of the
Z-Inspection process is that it considers fairness as
an integral part of the assessment process from the
outset. It also allows a multidisciplinary team of
professionals to collaborate and discuss,
leading to a more comprehensive and effective
approach to addressing ethical issues.</p>
    </sec>
    <sec id="sec-7">
      <title>4. Toward a cost-effective fairness-aware ML lifecycle</title>
      <p>The contributions made toward understanding and
standardizing the next generation of ML
lifecycles are highly valuable. However, it is
crucial to consider the cost factor in the
development of these lifecycles. To ensure that an
ML lifecycle is fairness-conscious, it must
incorporate fairness and other ethical
considerations from problem formulation to
deployment. Therefore, optimizing each step of
the lifecycle is essential.</p>
      <p>
        To address these issues, we propose a redesigned
ML lifecycle that places fairness at its
core and also factors in the cost/benefit of every
step of the process. Our approach will draw
inspiration from the FATE paradigm (fairness,
accountability, transparency, explainability [cf.
28]) and incorporate Human-Centered Design and
Ergonomics techniques. An initial hypothesis of
the different phases is the following:
      </p>
      <p>
        Problem Formulation: In the problem
formulation stage, the objective is to define the
business problem and identify the relevant
stakeholders. This includes identifying the key
performance indicators (KPIs) and the
decision-making criteria that will be used to evaluate the
model's performance. At this stage, it is also
essential to identify the fairness concerns and
metrics that will be used to assess fairness.
      </p>
      <p>
        Data Collection: The data collection stage is
crucial for fairness-aware machine learning. The
goal is to collect data that is diverse and
representative of the population being studied
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. This includes taking steps to ensure that the
data collection process is unbiased and does not
discriminate against any group [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. At this stage,
it is also important to identify any potential
sources of bias in the data; a minimal check of
this kind is sketched below.
      </p>
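      <p>
        One deliberately simple way to operationalize this
check is to compare group shares in the collected data
against reference population shares; in the sketch
below both the data and the tolerance threshold are
hypothetical:
      </p>
      <preformat>
import numpy as np

def representation_gaps(group, population_shares, tol=0.05):
    """Flag groups whose share in the data deviates from the reference population."""
    flags = {}
    n = len(group)
    for g, ref_share in population_shares.items():
        data_share = float(np.sum(group == g)) / n
        if abs(data_share - ref_share) > tol:
            flags[g] = (data_share, ref_share)
    return flags

# Hypothetical protected-attribute column and census-style reference shares
group = np.array(["f", "m", "m", "m", "f", "m", "m", "m"])
print(representation_gaps(group, {"f": 0.51, "m": 0.49}))
# {'f': (0.25, 0.51), 'm': (0.75, 0.49)} -- both groups flagged as mis-represented
      </preformat>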
      <p>
        Data Pre-processing: In the data pre-processing
stage, the goal is to clean and transform the data
so that it is suitable for machine learning. This
includes removing outliers, filling in missing data,
and transforming the data into a format that is
suitable for modeling. At this stage, it is also
essential to check for any biases that may have
been introduced during data collection [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Model Training: In the model training stage, the
goal is to build a model that is fair and accurate.
This involves choosing an appropriate algorithm,
tuning hyperparameters, and evaluating the
model's performance on training and validation
data. It is also essential to check for any biases that
may have been introduced during model training.
      </p>
      <p>
        Model Evaluation: In the model evaluation stage,
the goal is to evaluate the model's performance on
test data. This includes measuring accuracy and
fairness using appropriate metrics [cf. 27]. It is
also essential to check for any biases that may
have been introduced during model evaluation; a
sketch of such a combined check follows.
      </p>
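      <p>
        To make this combined check concrete, the sketch
below sweeps a decision threshold over hypothetical
test data and reports accuracy next to the fairness
gap, reusing the equalized_odds_diff helper sketched
in Section 3:
      </p>
      <preformat>
import numpy as np

# Hypothetical held-out test data: risk scores, repayment labels, protected group
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
scores = np.clip(0.5 * y_true + rng.normal(0.3, 0.15, size=n), 0.0, 1.0)

# Each threshold yields a different accuracy/fairness operating point
for t in np.linspace(0.1, 0.9, 9):
    y_pred = (scores > t).astype(int)
    acc = float(np.mean(y_pred == y_true))
    gap = equalized_odds_diff(y_true, y_pred, group)
    print(f"threshold={t:.1f}  accuracy={acc:.2f}  eq-odds-gap={gap:.2f}")
      </preformat>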
      <p>
        Model Deployment: In the model deployment
stage, the goal is to deploy the model into a
production environment. This involves testing the
model in real-world scenarios and monitoring its
performance over time. It is also essential to
continually evaluate the model for fairness and
accuracy.
      </p>
      <p>Model Maintenance: In the model maintenance
stage, the goal is to ensure that the model remains
fair and accurate over time. This includes
monitoring the model's performance, updating the
model when new data becomes available, and
re-evaluating the model for fairness and accuracy.</p>
      <p>At each step of the machine learning lifecycle, a
cost-benefit analysis is performed to optimize
fairness and cost simultaneously, as sketched
below. The cost-benefit analysis considers the
trade-offs between fairness, accuracy, and cost,
and identifies the optimal balance between these
factors.</p>
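      <p>
        A minimal sketch of such an analysis, under the
strong assumption that fairness gain, accuracy change,
and cost can be put on comparable scales (the candidate
interventions and weights below are purely
illustrative):
      </p>
      <preformat>
# Candidate interventions with estimated effects (all numbers hypothetical)
candidates = [
    # (name, fairness gain, accuracy change, cost in person-days)
    ("relabel noisy samples",      0.08,  0.01, 12),
    ("collect more minority data", 0.15,  0.02, 40),
    ("post-process thresholds",    0.10, -0.01,  3),
]

W_FAIR, W_ACC, W_COST = 10.0, 20.0, 0.1  # stakeholder-chosen weights (assumed)

def utility(fair_gain, acc_change, cost):
    """Higher is better: reward fairness and accuracy gains, penalize cost."""
    return W_FAIR * fair_gain + W_ACC * acc_change - W_COST * cost

for name, f, a, c in sorted(candidates, key=lambda x: utility(x[1], x[2], x[3]), reverse=True):
    print(f"{name}: utility={utility(f, a, c):+.2f}")
      </preformat>
      <p>
        The weights encode the priorities negotiated in the
problem formulation phase, which is where this
analysis ties the cost factor back to stakeholder
engagement.
      </p>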
    </sec>
    <sec id="sec-8">
      <title>5. Conclusions</title>
      <p>In this position paper, we have examined the
challenges that arise in current ML lifecycles
regarding fairness and introduced different
strategies to address the issue. Specifically, we
have explored data-centric approaches and
inspection-based perspectives to ensure fairness
in the lifecycle.</p>
      <p>Data-centric approaches involve methods that
prioritize the selection, preparation, and use of
data to mitigate any potential biases in the ML
lifecycle. Inspection-based perspectives, on the
other hand, focus on evaluating the outcomes of
the ML model to identify and correct any
instances of unfairness.</p>
      <p>However, in the adoption of new-generation ML
lifecycles, cost is a critical factor that must be
taken into account. The cost of developing and
implementing a new ML lifecycle can vary
significantly depending on the complexity of the
model, the size and quality of the data, and the
availability of resources.</p>
      <p>Thus, when designing and implementing new ML
lifecycles that prioritize fairness, it is essential to
consider the cost factor to ensure practical and
feasible solutions. This includes assessing the
cost-benefit of different approaches and
optimizing individual steps in the ML lifecycle to
achieve the desired outcomes while minimizing
costs.</p>
      <p>For example, a cost-effective strategy for ensuring
fairness in the ML lifecycle could involve using
readily available datasets and implementing
guided data cleaning strategies to reduce bias.
Additionally, optimizing the algorithm's
performance through regular monitoring and
testing can reduce the need for costly and
time-consuming manual inspections.</p>
      <p>In conclusion, while addressing fairness in ML
lifecycles is crucial, it is equally important to
consider the cost factor in the adoption of
new-generation ML lifecycles. By optimizing the
individual steps and assessing the cost-benefit of
different approaches, we can ensure that fair and
practical solutions are developed and
implemented.</p>
    </sec>
    <sec id="sec-9">
      <title>6. Acknowledgements</title>
      <p>We thank the people at IUSTO, Modulos and
Lawfultech.ai for the examples, discussions and
contributions that made this paper possible.</p>
    </sec>
    <sec id="sec-10">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shapiro</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <article-title>The problem with bias: from allocative to representational harms in machine learning</article-title>
          ,
          <source>in: 9th Annual Conference of the Special Interest Group for Computing, Information and Society</source>
          , SIGCIS '17
          , Philadelphia, PA, USA,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bartlett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Morse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stanton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <article-title>Consumer-lending discrimination in the FinTech era</article-title>
          ,
          <source>Journal of Financial Economics</source>
          (
          <year>2021</year>
          ),
          <volume>143</volume>
          (
          <issue>1</issue>
          ),
          <fpage>30</fpage>
          -
          <lpage>56</lpage>
          . https://doi.org/10.1016/j.jfineco.2021.05.047
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhatore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mohan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.R.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <article-title>Machine learning techniques for credit risk evaluation: a systematic literature review</article-title>
          ,
          <source>JBFT</source>
          <volume>4</volume>
          (
          <year>2020</year>
          ),
          <fpage>111</fpage>
          -
          <lpage>138</lpage>
          . https://doi.org/10.1007/s42786-020-00020-3
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Biswas</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Rajan</surname>
          </string-name>
          ,
          <article-title>Fair preprocessing: towards understanding compositional fairness of data transformers in machine learning pipeline</article-title>
          ,
          <source>in: Proceedings of the 29th ACM ESEC/FSE Symposium on the Foundations of Software Engineering</source>
          , ACM, New York, NY,
          <year>2021</year>
          ,
          <fpage>981</fpage>
          -
          <lpage>993</lpage>
          . https://doi.org/10.1145/3468264.3468536
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Bloomberg.com,
          <source>Artificial Intelligence Market USD 1,581.70 Billion By 2030, Growing At A CAGR of 38.0% - Valuates Reports</source>
          ,
          <year>2022</year>
          . URL: https://www.bloomberg.com/press-releases/2022-06-13/artificial-intelligence-market-usd-1-581-70-billion-by-2030-growing-at-a-cagr-of-38-0-valuates-reports
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <article-title>Why it's time for 'data-centric artificial intelligence'</article-title>
          ,
          <year>2022</year>
          . URL: https://mitsloan.mit.edu/ideas-made-to-matter/why-its-time-data-centric-artificial-intelligence
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.T.</given-names>
            <surname>Cao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé</surname>
          </string-name>
          ,
          <article-title>Toward Gender-Inclusive Coreference Resolution: An Analysis of Gender and Bias Throughout the Machine Learning Lifecycle</article-title>
          ,
          <source>Computational Linguistics</source>
          (
          <year>2021</year>
          ),
          <volume>47</volume>
          (
          <issue>3</issue>
          ),
          <fpage>615</fpage>
          -
          <lpage>661</lpage>
          . https://doi.org/10.1162/coli_a_00413
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Castiglione</surname>
          </string-name>
          , G. Wu,
          <string-name>
            <given-names>C.</given-names>
            <surname>Srinivasa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Prince</surname>
          </string-name>
          , fAux: Testing Individual Fairness via Gradient Alignment, arXiv e-prints (
          <year>2022</year>
          ). arXiv:2210.06288
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Crankshaw</surname>
          </string-name>
          ,
          <source>A Short History of PredictionServing Systems</source>
          ,
          <year>2018</year>
          . URL: https://rise.cs.berkeley.edu/blog/a-short-history-of-prediction-serving-systems/
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Douglas Heaven</surname>
          </string-name>
          ,
          <article-title>Predictive policing algorithms are racist. They need to be dismantled</article-title>
          .
          <source>MIT Technology Review</source>
          ,
          <year>2020</year>
          . URL: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] Future of Life Institute,
          <source>The Artificial Intelligence Act</source>
          , European Union,
          <year>2021</year>
          . URL: https://artificialintelligenceact.eu/
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] Garanteprivacy.it,
          <source>Regolamento UE 2016/679. Arricchito con riferimenti ai Considerando. Aggiornato alle rettifiche pubblicate sulla Gazzetta Ufficiale dell'Unione europea 127 del 23 maggio 2018</source>
          [EU Regulation 2016/679, annotated with references to the Recitals and updated with the corrigenda published in the Official Journal of the European Union 127 of 23 May 2018],
          <year>2016</year>
          . URL: https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/6264597
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Morgenstern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vecchione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
          , Datasheets for Datasets, arXiv e-prints (
          <year>2018</year>
          ). arXiv:1803.09010
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Grote</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Berens</surname>
          </string-name>
          ,
          <article-title>On the ethics of algorithmic decision making in healthcare</article-title>
          ,
          <source>Journal of Medical Ethics</source>
          (
          <year>2020</year>
          ),
          <volume>46</volume>
          (
          <issue>3</issue>
          ),
          <fpage>205</fpage>
          -
          <lpage>211</lpage>
          . https://doi.org/10.1136/medethics-2019-105586
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          , E. Price, and
          <string-name>
            <given-names>N.</given-names>
            <surname>Srebro</surname>
          </string-name>
          ,
          <article-title>Equality of opportunity in supervised learning</article-title>
          ,
          <source>in: Proceedings of the 30th International Conference on Neural Information Processing Systems</source>
          , NIPS'16, Curran Associates Inc., Red Hook, NY, USA,
          <year>2016</year>
          ,
          <fpage>3323</fpage>
          -
          <lpage>3331</lpage>
          . https://doi.org/10.48550/arXiv.1610.02413
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] J.M. Hellerstein, V. Sreekanti, J.E. Gonzalez, J. Dalton, A. Dey, S. Nag, K. Ramachandran, S. Arora, A. Bhattacharyya, S. Das, M. Donsky, G. Fierro, C. She, C. Steinbach, V.R. Subramanian, and E. Sun,
          <article-title>Ground: A Data Context Service</article-title>
          ,
          <source>in: Conference on Innovative Data Systems Research</source>
          , Chaminade, CA, USA,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] High-Level Expert Group on Artificial Intelligence,
          <article-title>Ethics Guidelines for Trustworthy AI</article-title>
          , European Commission,
          <year>2019</year>
          . URL: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Holstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Wortman</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dudik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <article-title>Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . https://doi.org/10.1145/3290605.3300830
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hutchinson</surname>
          </string-name>
          , and M. Mitchell,
          <article-title>50 Years of Test (Un)fairness: Lessons for Machine Learning</article-title>
          ,
          <source>in: Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          , FAT* '19, Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          ,
          <fpage>49</fpage>
          -
          <lpage>58</lpage>
          . https://doi.org/10.1145/3287560.3287600
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hickey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Di Stefano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dhanjal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Stoddart</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasileiou</surname>
          </string-name>
          ,
          <article-title>Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms</article-title>
          . arXiv e-prints (
          <year>2020</year>
          ). https://doi.org/10.48550/arXiv.2010.03986
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.S.A.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>The Landscape and Gaps in Open Source Fairness Toolkits</article-title>
          ,
          <source>in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . https://doi.org/10.1145/3411764.3445261
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.S.A.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Risk Identification Questionnaire for Detecting Unintended Bias in the Machine Learning Development Lifecycle</article-title>
          ,
          <source>in: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          ,
          <fpage>704</fpage>
          -
          <lpage>714</lpage>
          . https://doi.org/10.1145/3461702.3462572
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.A.</given-names>
            <surname>Madaio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Stark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Wortman</given-names>
            <surname>Vaughan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <article-title>Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . https://doi.org/10.1145/3313831.3376445
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaldivar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vasserman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hutchinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Spitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Raji</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <article-title>Model Cards for Model Reporting</article-title>
          ,
          <source>in: Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          , FAT* '19, Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          ,
          <fpage>220</fpage>
          -
          <lpage>229</lpage>
          . https://doi.org/10.1145/3287560.3287596
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          , E. Potash,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          , A. D'Amour, and
          <string-name>
            <given-names>K.</given-names>
            <surname>Lum</surname>
          </string-name>
          ,
          <article-title>Algorithmic Fairness: Choices, Assumptions, and Definitions</article-title>
          ,
          <source>Annual Review of Statistics and Its Application</source>
          (
          <year>2021</year>
          ),
          <volume>8</volume>
          ,
          <fpage>141</fpage>
          -
          <lpage>163</lpage>
          . https://doi.org/10.1146/annurev-statistics-042720-125902
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26] Modulos.ai,
          <source>Exploring AI Fairness in Consumer Lending</source>
          ,
          <year>2022</year>
          . URL: https://www.modulos.ai/resources/exploring-ai-fairness-in-consumer-lending/
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27] Modulos,
          <article-title>Fairness in Credit Risk with Data-Centric AI</article-title>
          , Video,
          <year>2022</year>
          . URL: https://www.youtube.com/watch?v=clNJOy17YXM
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <article-title>User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability</article-title>
          ,
          <source>Journal of Broadcasting &amp; Electronic Media</source>
          (
          <year>2020</year>
          ),
          <volume>64</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          . https://doi.org/10.1080/08838151.2020.1843357
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>M.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Heidari</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Krause</surname>
          </string-name>
          ,
          <article-title>Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning</article-title>
          ,
          <source>in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          , KDD '19,
          Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          ,
          <fpage>2459</fpage>
          -
          <lpage>2468</lpage>
          . https://doi.org/10.1145/3292500.3330664
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>H.</given-names>
            <surname>Suresh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Guttag</surname>
          </string-name>
          ,
          <article-title>A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle</article-title>
          , in: Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO '21,
          Association for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . https://doi.org/10.1145/3465416.3483305
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tambe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cappelli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Yakubovich</surname>
          </string-name>
          ,
          <article-title>Artificial Intelligence in Human Resources Management: Challenges and a Path Forward</article-title>
          ,
          <source>California Management Review</source>
          (
          <year>2019</year>
          ),
          <volume>61</volume>
          (
          <issue>4</issue>
          ),
          <fpage>15</fpage>
          -
          <lpage>42</lpage>
          . https://doi.org/10.1177/0008125619867910
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>S.</given-names>
            <surname>Townson</surname>
          </string-name>
          ,
          <article-title>AI can make bank loans more fair</article-title>
          ,
          <source>Harvard Business Review</source>
          (
          <year>2020</year>
          ). URL: https://hbr.org/2020/11/ai-can-make-bank-loans-more-fair
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Veale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Van Kleek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <article-title>Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making</article-title>
          ,
          <source>in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . https://doi.org/10.1145/3173574.3174014
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>A.</given-names>
            <surname>Woodruff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.E.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rousso-Schindler</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Warshaw</surname>
          </string-name>
          ,
          <article-title>A Qualitative Exploration of Perceptions of Algorithmic Fairness</article-title>
          ,
          <source>in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . https://doi.org/10.1145/3173574.3174230
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Zicari</surname>
          </string-name>
          et al.,
          <article-title>Z-Inspection®: A Process to Assess Trustworthy AI</article-title>
          ,
          <source>IEEE Transactions on Technology and Society</source>
          (
          <year>2021</year>
          ),
          <volume>2</volume>
          (
          <issue>2</issue>
          ),
          <fpage>83</fpage>
          -
          <lpage>97</lpage>
          . doi: 10.1109/TTS.2021.3066209
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>