<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Shifting Paradigms: Value Sensitive Design for Fair AI Recruitment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandre Puttick</string-name>
          <email>alexandre.puttick@bfh.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlotta Rigotti</string-name>
          <email>c.rigotti@law.leidenuniv.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ahmed Abouzeid</string-name>
          <email>ahmed.abouzeid@ntnu.no</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eduard Fosch-Villaronga</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mascha Kurpicz-Briki</string-name>
          <email>mascha.kurpicz@bfh.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pinar Öztürk</string-name>
          <email>pinar@ntnu.no</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Berner Fachhochschule BFH, Technik und Informatik</institution>
          ,
          <addr-line>Quellgasse 21, 2501 Biel</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leiden University</institution>
          ,
          <addr-line>Rapenburg 70, 2311 EZ Leiden</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Norwegian University of Science and Technology</institution>
          ,
          <addr-line>IT-bygget, almen, Sem Saelands vei 9, 7034 Trondheim</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>6</volume>
      <issue>2</issue>
      <fpage>2</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>In this position paper, we advocate for the use of value sensitive design (VSD) as a framework for developing fair AI recruitment tools. As a starting point, we assert that the current paradigm in AI fairness in the hiring context is severely limiting. We then document an ongoing process within the EU-horizon project BIAS, seeking to escape this paradigm by applying VSD to the development of AI applications for candidate selection with diversity and fairness as focal points. In particular, we present case-based reasoning as a case study in the intentional operationalization of stakeholder positions on fairness and detail how such an approach can be further expanded, drawing from the concept of agonistic machine learning. In this endeavor, we hope to contribute to the discourse on the ethical design and use of AI within the labor market and in general.</p>
      </abstract>
      <kwd-group>
        <kwd>AI</kwd>
        <kwd>fairness</kwd>
        <kwd>value sensitive design</kwd>
        <kwd>recruitment</kwd>
        <kwd>diversity bias</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>The last decades have seen a growing trend toward designing and deploying artificial intelligence
(AI) applications for recruitment and selection. However, such tools lack transparency and pose
the risk of algorithmic diversity bias,1 reinforcing harmful stereotypes and power structures,
and, on the individual level, acting to the detriment of dignity, autonomy, and well-being.
Consequently, job candidate profiling is classified as
high risk under the EU AI Act.2</p>
      <p>As a starting point for this position paper, we assert that the current paradigm in AI fairness
in the context of recruitment is severely limiting. Attempts to promote fairness in AI hiring tools
typically take diversity metrics as a starting point and modify the training or computational
aspects of existing ranking/scoring models to improve metric performance. This approach
bypasses the explicit consideration of values and normative assumptions and fails to engage with
the question of whether the technical interventions and metrics are truly aligned with ethical
fairness goals. It operates within the framework of a rigid, deterministic technical solution
whose blueprint implicitly assumes a set of normative stances that can hinder fairness while
precluding the innovation of novel technical approaches. The resulting models remain opaque
to users and impose a predetermined balance of values, stifling moral agency and accountability
and leading to unjustifiable decisions.</p>
      <p>This paper documents an ongoing process within the EU-horizon project BIAS, seeking to
escape this paradigm by applying value sensitive design (VSD) to the development of AI
applications for candidate selection with attention to diversity and fairness. Section 1 introduces the
necessary background aspects of VSD. A description of the application of VSD to fair
AI recruitment tools follows in Section 2, laying the foundation for the proactive technical
approaches described in Section 3.</p>
    </sec>
    <sec id="sec-3">
      <title>1. Value Sensitive Design</title>
      <p>
        Value sensitive design (VSD) was developed by Batya Friedman and Peter Kahn in order to
promote alternative design goals and methods built upon respect for human agency and
responsible computing [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The theory assumes that every technology is imbued with and reproduces
particular human values. There is no such thing as objective, value-neutral technology. As such,
VSD promotes proactive engagement with human values throughout the design process, aiming
to spur technological innovation that is consciously driven by values that aim to improve the
human and planetary condition. As a starting point, human values are anything that a person
or group of people consider highly important. The open-ended understanding of human values
and the resistance to reifying a particular set of morals is seen as a strength by proponents of
VSD; the emphasis on recognizing co-existing, often competing human values and the need to
carefully and intentionally balance them counters the misguided idea that there exists a singular
solution that is best for everyone. As opposed to developing a technology according to the
values of its designers, VSD insists on in-depth investigation of stakeholder values and how
they relate to each other.
      </p>
      <p>
        Values, Normative Assumptions and Technical Methods. In practice, the development
of new technology under the VSD framework involves the identification of the values
important to each group of primary stakeholders (e.g., fairness) and the corresponding normative
assumptions/stances about what those values mean (e.g., “Everyone should be treated equally,
regardless of their background,” or “People facing greater structural inequality should receive
more resources to ensure equitable outcomes.”). Finally, an investigation of technical methods
that operationalize these normative stances (e.g., the requirement to meet certain fairness metric
thresholds) is conducted, with careful consideration of the balancing and/or reconciliation of
competing values. All of this should be considered within the sociotechnical context in which the
technology is situated, the relationships it mediates, and the social effects its use could have
over time.
      </p>
      <p>
        Tri-Partite Methodology. VSD recommends a methodology consisting of conceptual,
empirical and technical investigations. At the conceptual level, researchers are expected to consider
the specific values and normative assumptions in which a technology is situated, clarifying
fundamental issues raised by the technology and design process. This level focuses on theoretical
considerations of values, potential conflicts, and trade-offs. During empirical investigations,
researchers position the design within the relevant social context, seeking, e.g., to identify
important stakeholders and understand their values. One gathers data from real-world
stakeholders to understand their experiences, needs, and concerns. Technical investigations aim to
identify specific technological approaches suited to supporting certain values, with attention
to potential detriment to others. This includes assessing the capabilities and limitations of
technical solutions. Friedman highlights two subtypes of technical investigation. The first,
retrospective analysis, focuses on “how existing or historical technological processes and
underlying mechanisms support or hinder human values” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The second subtype, proactive design,
concerns the design of systems that support the values and normative stances identified during
conceptual and empirical investigations.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2. Fairness in AI Recruitment</title>
      <sec id="sec-4-1">
        <title>Values and Direct Stakeholders</title>
        <p>
          The primary values of interest are diversity, fairness and the selection of the most capable job
candidate. We concentrate on two sub-types of fairness: 1) Procedural, concerning how the
decision was made; 2) Substantive, concerning the outcome of the decision. VSD defines
stakeholders as those who are or will be significantly implicated by the technology [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The
primary direct stakeholder groups identified within the BIAS project are AI developers/researchers,
HR professionals, workers/job seekers and human rights advocates/regulators. Since fairness and
diversity are central values within the project, emphasis is placed on the representation of
underprivileged identities within each stakeholder group.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Tri-Partite Investigation of Fairness</title>
        <sec id="sec-4-2-1">
          <title>Conceptual Investigation</title>
          <p>
            Defining Fairness. In the broad social and EU legal context, substantive fairness in relation
to diversity may be viewed in terms of group fairness with respect to hiring outcomes, i.e.,
the fair distribution of jobs amongst demographic groups [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ]. Procedural fairness emphasizes
a well-grounded, transparent decision-making process in which discrimination on the basis of
protected attributes is restricted. Based on a literature review and empirical research, we have
identified the following stakeholder perspectives:
Job applicants: Substantive fairness is perceived in terms of hiring decisions that reflect their
knowledge, skills, and efforts [
            <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7 ref8">4, 5, 6, 7, 8</xref>
            ]. Applicants are also concerned with procedural
fairness, e.g., in terms of 1) job relatedness: the selection process should only assess the personal
characteristics that are necessary for the job; 2) consistency: each applicant should go through
the same process; and 3) opportunity to perform: the applicant can demonstrate their knowledge
and skills during the hiring process [
            <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref4 ref9">9, 4, 10, 11, 12, 13, 14, 15, 16, 17</xref>
            ].
          </p>
          <p>
            HR practitioner/company: The decision-making process should be aimed toward selecting the
most capable candidate for the job [
            <xref ref-type="bibr" rid="ref18 ref19 ref20">18, 19, 20</xref>
            ]. At the team level, diversity is desirable but must
be balanced against the company’s community/cultural structures [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ].
          </p>
        </sec>
        <sec id="sec-4-2-2">
          <title>Empirical Investigation</title>
          <p>One of our main methodologies for empirical investigation consists of co-creation workshops
carried out in all nine partner countries. These workshops focus on understanding the views of
different stakeholders, as well as their desires, needs and opinions regarding what constitutes a
fair and useful recruitment tool, making use of mock tools3 to stimulate discussion. Agency
and transparency in relation to both users and job applicants, as well as technical robustness,
were highlighted as key aspects of a fair and trustworthy decision-making process.4 The
outcomes of the decision-making process (algorithmic, human or hybrid) are not fair if they are
not justifiable. Participants brought up situations exemplifying unfairness, such as candidate
scores being artificially raised by invisibly pasting job ads into their applications, or rejections
based on incorrectly parsed data. A disconnect between how outliers are perceived by statistical
models and the desirability of candidates with unique profiles also arose as a potential source of
unfairness. There was additional concern over how the omission of particular types of candidates
by a deterministic model becomes systemic, whereas multiple different evaluators may compensate
for each other’s particular biases.</p>
        </sec>
        <sec id="sec-4-2-3">
          <title>Technical Investigation - Retrospective Analysis</title>
          <p>Encoded Values and Hindrances to Fairness. Compared to human decision-making,
existing tools claim to save time and money for employers while simultaneously making the
recruitment process more objective and accurate. VSD rejects the notion that technology can
ever be objective; terms like objectivity and accuracy mask normative assumptions behind
neutral-sounding terminology, and the repeated discovery of encoded bias in machine learning
models has made it clear that such technologies are not objective. This is often explained via a
garbage in, garbage out understanding of bias in AI systems, but it is important to note that
value-encoding goes beyond training data. For example, ostensibly neutral metrics such as loss
and accuracy scores implicitly reinforce the values underpinning the status quo, while predictive
modeling treats unclassifiable outliers as data points to be ignored.</p>
          <p>The fundamental task (using a pool of candidate data to compute the best candidates) assumes
that job suitability is a one-dimensional trait, largely based on a pre-defined notion of what
constitutes a high-achieving individual. It is associated with objectivity because suitability is
usually evaluated based on educational background, work experience and other “objective”
criteria. But this ignores structural inequalities that can turn apparently objective features
into grounds for proxy/indirect discrimination. By collapsing each candidate to a small subset of
features to be evaluated by a fixed algorithm, job suitability scores can work to the detriment of
creating a diverse employee pool with complementary skills.
(Footnote 3: These are simulated LLM-driven AI tools that allow users to specify and weight various criteria, provide natural
language justifications for candidate rankings, and flag potential unfair biases.
Footnote 4: User agency over the criteria under which candidates are evaluated, the ability to flag system errors or potentially
harmful behaviors, an interactive process that allows users to explore candidate data from multiple quantitative
and qualitative angles, and the capacity for the tool to draw attention to the users’ own unconscious biases.)</p>
          <p>
            Fairness Methods. The technical research community has predominantly focused on
operationalizing group fairness [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ]. The most widely adopted fairness metrics are thus derived
from demographic distributions in algorithmic output. As described at length in [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ], these
metrics are (very rough) proxies for often implicit normative stances. Good performance on
fairness metrics is no guarantee of fairness, and technical methods to optimize fairness metrics
do not explicitly engage with underlying normative stances and are hence not necessarily fairer.
Furthermore, recent work demonstrates that no established mathematical notion of fairness
suficiently captures stakeholder notions of fair hiring practice [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ]. Fairness metrics are thus
valuable in demonstrating model unfairness or evaluating fairness interventions that are made in
line with ethical values, but should not be the targets of blind optimization nor utilized as proofs of
fairness.
          </p>
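          <p>To make concrete why such metrics are only rough proxies, a minimal sketch of one widely used group-fairness measure (a demographic parity ratio over hypothetical screening output) is given below; a good score here says nothing about whether the underlying normative stance is actually satisfied.</p>

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of each demographic group receiving a positive decision
    (e.g. being shortlisted) in the algorithmic output."""
    pos, tot = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        tot[g] += 1
        pos[g] += int(d)
    return {g: pos[g] / tot[g] for g in tot}

def demographic_parity_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate:
    1.0 means equal rates; values near 0 indicate strong disparity."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening output for eight candidates from two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(demographic_parity_ratio(decisions, groups), 2))  # 0.25 / 0.75 -> 0.33
```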
          <p>
            The Fairness/Accuracy Trade-Off. Research shows that there are unavoidable trade-offs in
attempting to simultaneously optimize performance and fairness metrics [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ]. Through the lens
of VSD, this can be viewed as a numerical manifestation of value tensions and could be reframed
as a social justice/status quo trade-off or a diversity/job suitability trade-off. However, it should
be noted that VSD warns against framing the balance of values in terms of trade-offs; doing
so predisposes the designer to seek approaches in which promoting one value will diminish
another, rather than seeking ways to promote both, e.g., by changing the evaluation criteria
for job suitability or scoring candidates in a way that is less entangled with the labels present in
historical hiring data.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>3. Proactive Design</title>
      <sec id="sec-5-1">
        <title>Case-Based Reasoning (CBR) and Fairness</title>
        <p>Case-Based Reasoning (CBR) forms a foundational AI approach for our proposed Decision
Support System (DSS) in recruitment. This system assists HR professionals during the candidate
screening process and produces one of three potential outcomes for a candidate applying for
a specific job opening: “Shortlist,” “Longlist,” or “Negative.” In a CBR engine, the case-base
consists of a database of past candidate data along with the decision that was reached on each
candidate’s application. The system compares new candidates to the case-base, retrieves past
similar candidates, and a decision is reached by aggregating those made in the past. The features
stored in the case-base and the similarity metric are two key points where the designer has direct
influence over the operationalization of fairness principles, including the integration of domain
knowledge; the model’s computations are grounded in the decision-making processes of HR
professionals. The CBR workflow is depicted in Figure 1.</p>
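        <p>The retrieve-and-aggregate loop described above can be sketched in a few lines of Python. The case-base entries, the toy similarity function, and the retrieval size below are hypothetical placeholders, not the project’s actual model (its similarity metric is given in Eq. 1).</p>

```python
# Hypothetical case-base: past candidates' feature vectors and recorded decisions.
CASE_BASE = [
    {"features": [0.9, 0.8], "decision": "Shortlist"},
    {"features": [0.8, 0.7], "decision": "Shortlist"},
    {"features": [0.3, 0.2], "decision": "Negative"},
    {"features": [0.5, 0.5], "decision": "Longlist"},
]

def similarity(x, y):
    """Toy similarity: inverse of (1 + mean absolute feature difference)."""
    dist = sum(abs(a - b) for a, b in zip(x, y)) / len(x)
    return 1.0 / (1.0 + dist)

def retrieve(case_base, new_features, k=3):
    """Return the k past cases most similar to the new candidate."""
    ranked = sorted(case_base,
                    key=lambda c: similarity(c["features"], new_features),
                    reverse=True)
    return ranked[:k]

def decide(case_base, new_features, k=3):
    """Similarity-weighted vote over the three possible outcomes."""
    votes = {"Shortlist": 0.0, "Longlist": 0.0, "Negative": 0.0}
    for case in retrieve(case_base, new_features, k):
        votes[case["decision"]] += similarity(case["features"], new_features)
    return max(votes, key=votes.get)

print(decide(CASE_BASE, [0.85, 0.75]))  # Shortlist
```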
        <p>We emphasize that the development of the CBR engine is fully situated within the intended
deployment context; the data and design insights were gathered from our industry partner,
and the model is bespoke for their own needs and internal processes. At the same time,
the model framework, principles and parameters can be readily adapted to new industry
contexts. We assert that this is preferable over the development of a generic tool targeting the
broadest possible context; one-size-fits-all approaches impose natural limits to accountability,
transparency, justified decision-making processes, user agency and the use of domain knowledge.</p>
        <p>Data. The CBR system depends on careful feature engineering to determine precisely the
information on which the model bases its decisions. To effectively capture domain-specific
insights, our approach combined workshops and surveys with HR professionals from our industry
partner. This mixed methodology collects both quantitative and qualitative observations about
the candidate data and the operational context of the recruiting company. The domain insight
we gather represents HR experts’ background knowledge about the job types, attributes, and the
relevant range of values for those attributes5 for the given sector and company. Another key
aspect of our system is capturing the relative importance of each attribute in the decision-making
process. The importance of an attribute is context-sensitive and can vary across different job
types. To ensure the validity of our findings, it was crucial to gather responses from multiple
HR personnel employed by our industry partner. This approach helps mitigate potential biases,
as different evaluators may interpret applicant data in distinct ways. Figure 2 presents the
general attribute importance extracted from the survey results.</p>
        <p>Model. The importance measured for each feature determines a baseline for its relative
weight in computing similarity scores, which can be further adjusted according to domain. This
is represented by the equation

sim_global(x, y) = Σ_{i=1}^{n} w_i · sim_local,i(x_i, y_i),    (1)

where x and y are the feature vectors of two candidates, and w_i and sim_local,i respectively
denote the weight and job-specific similarity metric attributed to the i-th feature. For each new
candidate, the most similar candidates from the case-base are retrieved by computing similarities
using the extracted, engineered features and relative weights. A decision is then rendered by
aggregating the decisions made on the retrieved cases, weighted by similarity.
(Footnote 5: E.g., Bachelor’s or Master’s for Educational Level, as opposed to Some higher education and No higher education.)</p>
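        <p>Eq. 1 translates directly into code. The per-feature local metrics and importance weights below are hypothetical stand-ins for the values elicited from HR professionals, used only to illustrate the structure of the computation.</p>

```python
def global_similarity(x, y, weights, local_sims):
    """Eq. (1): weighted sum of per-feature, job-specific local similarities."""
    return sum(w * sim(a, b) for w, sim, a, b in zip(weights, local_sims, x, y))

def edu_sim(a, b):
    """Hypothetical local metric: exact match on education level."""
    return 1.0 if a == b else 0.0

def exp_sim(a, b):
    """Hypothetical local metric: decays with the gap in years of experience."""
    return 1.0 / (1.0 + abs(a - b))

weights = [0.6, 0.4]       # hypothetical importance weights from the HR surveys
cand_a = ("Master", 5)     # (education level, years of experience)
cand_b = ("Master", 3)
print(global_similarity(cand_a, cand_b, weights, [edu_sim, exp_sim]))  # 0.6 + 0.4/3 ≈ 0.733
```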
        <sec id="sec-5-1-1">
          <title>Evaluation.</title>
          <p>A key element of procedural fairness is that every candidate is subjected to
the same process. A weakness of the CBR engine is that its reliability depends on having
sufficiently many similar candidates within the case-base. This is essentially a version of the
outlier problem that plagues all statistical models. We thus define the decision certainty as a
metric for procedural/individual fairness; it is inversely proportional to the average distance
between a new case and the retrieved cases, as computed using the similarity metric in Eq.
1. As opposed to machine learning, one can use certainty to immediately determine whether the candidate in
question is an outlier, so that their application may be considered with further scrutiny rather
than systematically rejected, and the case-base can be immediately updated. Whereas a machine
learning model may require thousands of such updates to perform reliably on future, similar
candidates, a CBR engine only requires a number proportional to the number of cases retrieved
for decision-making (≈ 10).</p>
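          <p>One possible formulation of the certainty metric is sketched below. The conversion of similarity to distance as 1 − similarity, the bounded form 1/(1 + average distance), and the flagging threshold are all assumptions made for illustration, not the project’s exact definition.</p>

```python
def certainty(similarities):
    """Decision certainty: inverse of (1 + average distance) to the retrieved
    cases, with distance taken as 1 - similarity; higher means more reliable."""
    avg_dist = sum(1.0 - s for s in similarities) / len(similarities)
    return 1.0 / (1.0 + avg_dist)

def flag_outlier(similarities, threshold=0.6):
    """Route low-certainty candidates to manual scrutiny instead of an
    automated decision, and mark the case-base for updating."""
    return certainty(similarities) < threshold

well_covered = [0.95, 0.9, 0.88]  # many close cases retrieved from the case-base
sparse = [0.35, 0.2, 0.1]         # no similar past candidates: an outlier
print(flag_outlier(well_covered))  # False
print(flag_outlier(sparse))        # True
```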
          <p>Operationalizing Fairness. The setup we describe attempts to operationalize several fairness
notions established during our investigations:
• Omitting any direct information about protected attributes prevents direct discrimination
and limits the possibility of proxy discrimination compared to machine-learning-based
methods using raw text data.
• The selection of features deemed highly relevant in the candidate selection process
and weighted according to user specification promotes job relatedness, user agency, and
selection of the most qualified candidate in a well-grounded and explainable decision-making
process.
• The decision to sort candidates into rough categories instead of rankings acknowledges
that job suitability is not one-dimensional.
• The certainty metric provides a proxy for robustness and reliability, aspects of procedural
fairness, as well as the capability to flag outliers who may otherwise be treated unfairly.
• The ability to rapidly update the case-base, and customizability of extracted features and
similarity scores further support technical robustness and post-deployment adaptivity,
acting as guardrails against the hegemony of static algorithms.</p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>Beyond CBR</title>
        <p>
          The CBR engine represents a design framework and prototype developed with explicit attention
to stakeholder perspectives on fairness, but we acknowledge its shortcomings; the engine’s
output is still dependent on historical hiring decisions that may be unfair, while features
rated highly by HR professionals such as University Name, District, Foreign Language and
Employment Status could serve as proxies for protected attributes and be used to reinforce
unfair biases in the hiring system rather than mitigate them. We thus conclude this position
paper with a summary of further research directions in proactive design that can complement
CBR and further integrate stakeholder fairness principles.
(Large) Language Models. Deep-learning-based language models extract complex features from raw text data,
achieving unparalleled performance and completing tasks that would be intractable with simpler,
interpretable models such as the CBR engine. The language comprehension capabilities of
large language models (LLMs) such as ChatGPT and DeepSeek further expand the frontier of
possibilities. Such models can offer qualitative analyses that are not possible using numerical
methods and could further promote fairness. However, what these models gain in capability is
paid for with opaque processes that hinder transparency, agency, and justifiability. Moreover, a
large body of research demonstrates that such models encode harmful stereotypes that can lead
to negative outcomes when integrated into AI recruitment tools (e.g., [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]).
Bias Detection and Mitigation. In parallel to the CBR engine, the second main line of
technical research in the BIAS Project consists of a framework for and collection of bias detection
and mitigation techniques for various stages along the AI recruitment pipeline.
• Any recruitment tool that claims not to consider protected traits in the decision-making
process should be able to demonstrate that those traits cannot be inferred from training
or input data. We train a series of protected attribute classifier models for benchmark
testing: In the absence of proxies for protected attributes, such classifiers should not perform
better than random chance.
• We are developing a large repository of bias detection and mitigation tools for word
embeddings and language models, focusing on EU languages in the occupational context.
These are intended to help developers make a concentrated effort to mitigate harmful
stereotypes encoded in AI language models used in the recruitment context.
• A suite of tools for flagging potential grounds for unfair bias in candidate applications is
also being developed. On the candidate side, these tools could suggest rewrites to help the
candidate avoid prejudice, while, on the user side, they can either flag for consideration
or mask such information from the decision-making process.
        </p>
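        <p>The benchmark in the first bullet can be sketched as a chance-level test: train a trivial classifier to predict a protected attribute from candidate features, and compare its accuracy to the accuracy obtained on permuted labels. The nearest-centroid classifier, the toy data, and the permutation scheme below are simplifying assumptions for illustration, not the project’s benchmark models.</p>

```python
import random

def chance_level_audit(features, protected, trials=200):
    """Compare observed protected-attribute prediction accuracy to the mean
    accuracy over shuffled (chance-level) labels. Observed accuracy well
    above chance suggests the features contain proxies for the attribute."""
    def centroid_accuracy(feats, labels):
        # Fit per-group centroids, then predict each point's group.
        centroids = {}
        for g in sorted(set(labels)):
            members = [f for f, l in zip(feats, labels) if l == g]
            centroids[g] = [sum(col) / len(members) for col in zip(*members)]
        def predict(f):
            return min(centroids,
                       key=lambda g: sum((a - b) ** 2 for a, b in zip(f, centroids[g])))
        return sum(predict(f) == l for f, l in zip(feats, labels)) / len(labels)

    observed = centroid_accuracy(features, protected)
    rng = random.Random(0)
    chance_scores = []
    for _ in range(trials):
        shuffled = list(protected)
        rng.shuffle(shuffled)
        chance_scores.append(centroid_accuracy(features, shuffled))
    return observed, sum(chance_scores) / trials

# Hypothetical data in which feature 0 is a strong proxy for the attribute.
feats = [[0.9, 0.1], [0.8, 0.3], [0.85, 0.2], [0.1, 0.4], [0.2, 0.5], [0.15, 0.3]]
attrs = ["a", "a", "a", "b", "b", "b"]
observed, chance = chance_level_audit(feats, attrs)
print(observed > chance)  # True: the attribute is inferable, i.e. a proxy exists
```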
        <p>
          Justifying Decisions. Following [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], we understand justifications in the context of
decision-making AI to mean “not merely explaining the logic and the reasoning behind [automated
decisions], but also explaining why it operates in a legally acceptable (correct, lawful, fair) way.”
This is in contrast to explanations in the sense of explainable AI (XAI), which seek to shed light
on the internal decision-making process of the model. Justifications are often not only more
feasible, but more useful and desirable; they can serve as accountability measures for AI-aided
decision-making and address, e.g., the right to an individual explanation or contestation of
algorithmic decisions. Using LLMs, we have experimented with the generation of justifications
that cite elements of job postings and cover letters to support the decision rendered on a specific
applicant, while also referring to provisions in EU anti-discrimination law to justify decisions
made in the interest of substantive equality/positive discrimination. These justifications have
been utilized in mock tools to gather stakeholder feedback.
        </p>
        <sec id="sec-5-2-1">
          <title>Agonistic Machine Learning</title>
          <p>
            The fairness perspectives described in this work can be complementary or adversarial, and VSD
asserts that balancing these values must be conducted with care and intention. In the context
of AI tools, a candidate’s data stands as a proxy for the candidate themselves, and ranking
algorithms are typically calibrated to a particular, implicit balance of values, processing
candidate data and returning a deterministic output. Kate Crawford [
            <xref ref-type="bibr" rid="ref27">27</xref>
            ] and Mireille Hildebrandt
[
            <xref ref-type="bibr" rid="ref28">28</xref>
            ] find fault in this framing and propose an alternative framework called agonistic machine
learning. Agonistic machine learning is built upon the notion of the incomputable self, that
“any computation of our interactions can be performed in multiple ways–leading to a plurality
of potential identities” [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ]. There is no single correct way to represent a candidate by data,
to process that data, nor to present the result of that processing—no best, most fair ranking.
The term agonistic refers to collective decision-making processes involving choices between
conflicting options–not necessarily by achieving rational consensus, but through a struggle
between adversaries.
          </p>
          <p>
            Agonistic Recruitment Tools. As an ideal, we imagine a multi-model, multi-human
decision-making process in which the aim of an AI recruitment tool is not simply to generate a single
ranked shortlist of candidates, but rather to facilitate a deliberative process by presenting
candidates according to multiple criteria and various value-balances. A CBR engine, bias-mitigated
language models and qualitative LLM components can all be combined in a multi-model
agonistic ecosystem, which can be further expanded to include fairness methods such as fair ranking
and counterfactual data augmentation. Models should incorporate randomness or aim to identify
outliers as means to avoid unfair systematic exclusion and place value on uniqueness. Such an
ecosystem extends the notion of ensemble models, in which the output and capabilities of
multiple models are aggregated to balance the strengths and weaknesses of individual components,
e.g., combining simpler models (CBR) with black-box neural networks to enable a fine-tuned
balance between interpretability and performance. By explicitly establishing numerical proxies
for different normative stances, mathematical constructs such as the Pareto front can be used
to select from a set of aggregating schemes that minimize value trade-ofs. Moreover, there
are existing guidelines for exploring the space of possible models according to various value
balances [
            <xref ref-type="bibr" rid="ref29">29</xref>
            ].
          </p>
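          <p>
            The Pareto-front idea above can be made concrete with a minimal sketch. All
scheme names and proxy scores below are hypothetical illustrations (not values from the
BIAS project): each candidate aggregation scheme is scored by numerical proxies for two
normative stances, and only non-dominated schemes are kept, since any scheme off the
front could be swapped for one on the front without worsening any value proxy.
          </p>
          <preformat>
```python
# A minimal sketch of Pareto-front selection over aggregation schemes.
# All scheme names and proxy scores are hypothetical illustrations.
# Higher is better for every proxy.

from typing import Dict, List

# Numerical proxies for two normative stances: a merit-focused proxy
# and a demographic-parity proxy, one entry per aggregation scheme.
SCHEMES: Dict[str, Dict[str, float]] = {
    "cbr_only":        {"merit": 0.82, "parity": 0.55},
    "llm_only":        {"merit": 0.88, "parity": 0.40},
    "cbr_llm_blend":   {"merit": 0.85, "parity": 0.70},
    "fair_ranking":    {"merit": 0.78, "parity": 0.90},
    "random_tiebreak": {"merit": 0.60, "parity": 0.65},
}


def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
    """True if a is at least as good as b on every proxy and strictly
    better on at least one, i.e. choosing b over a wastes value."""
    at_least_as_good = all(a[k] >= b[k] for k in a)
    strictly_better = any(a[k] > b[k] for k in a)
    return at_least_as_good and strictly_better


def pareto_front(schemes: Dict[str, Dict[str, float]]) -> List[str]:
    """Names of schemes not dominated by any other scheme: the set of
    value balances worth presenting to human decision-makers."""
    return [
        name
        for name, scores in schemes.items()
        if not any(
            dominates(other, scores)
            for other_name, other in schemes.items()
            if other_name != name
        )
    ]


print(sorted(pareto_front(SCHEMES)))
# prints ['cbr_llm_blend', 'fair_ranking', 'llm_only']
```
          </preformat>
          <p>
            In this toy run the dominated schemes drop out and three distinct value
balances survive on the front—the kind of plurality an agonistic tool would present
for deliberation rather than collapsing into a single ranking.
          </p>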
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>
        This work represents a reversal of a status quo in which fairness metrics act as proxies for
undefined normative stances and existing models are modified to optimize them. Deep engagement
with stakeholder values should precede technical methods, which can then be built from the
ground up to operationalize and balance those values. To summarize our contributions:
• An explicit set of values and stakeholders is determined and the tri-partite investigation
of stakeholder normative stances and existing technical methods is described.
• The case-based reasoning engine is presented in the context of pro-active design, with
explicit attention towards the operationalization of stakeholder fairness perspectives.
• The CBR engine is situated within a larger technical research and development
environment that engages with fairness aspects that are not fully encompassed within the CBR
framework.
• We draw from the notion of agonistic machine learning as a means to combine and balance
the values underlying individual models and evaluation methods while further promoting
moral agency and fully engaging with the fact that candidates cannot be reduced to a
particular set of numerical features or a single score or deterministic automated decision.
VSD emphasizes progress, not perfection; the ideas described in this article aim to stimulate
further discussion and research.6 Designers bear a moral responsibility for the values embedded
within the technologies they develop. The role of AI in society is rapidly expanding, while
ethical, regulatory and social perspectives on AI, DEI initiatives and social justice as a whole are
in a state of flux and turmoil. We are at a critical point in history where profound engagement
with the interplay between human values and evolving sociotechnical contexts is of crucial
importance.7
      </p>
      <p>
        6 An EU Horizon sister project, FINDHR, has recently published parallel work detailing their design of fair
recruitment systems based on VSD [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], further supporting the position presented here. The topic of VSD was discussed
between members of our respective projects, but work was conducted independently.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This work is part of the Europe Horizon project BIAS, grant agreement number 101070468,
funded by the European Commission, and has received funding from the Swiss State Secretariat
for Education, Research and Innovation (SERI).</p>
      <p>7 Recent political developments demonizing DEI efforts appear to have already influenced ChatGPT. An inquiry
with no prior context, dated 12.2.2025, asked, “What is the definition of diversity bias?” and received the response,
“Diversity bias refers to a type of bias that occurs when efforts to promote diversity lead to unintended discrimination,
favoritism, or misrepresentation. It can manifest in several ways, such as:…Tokenism – When organizations include
individuals from diverse backgrounds for appearance’s sake rather than fostering genuine inclusion. Reverse
Discrimination – When attempts to promote diversity result in bias against traditionally dominant groups. Diversity bias can
occur in hiring, media representation, education, and decision-making processes…” The ripple effects that polarized
politics and the political leanings of AI tech giants could have on AI tools in general–let alone in recruitment–are
both harrowing and difficult to quantify.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. H.</given-names>
            <surname>Kahn</surname>
          </string-name>
          ,
          <article-title>Human agency and responsible computing: Implications for computer system design</article-title>
          ,
          <source>Journal of Systems and Software</source>
          <volume>17</volume>
          (
          <year>1992</year>
          )
          <fpage>7</fpage>
          -
          <lpage>14</lpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/016412129290075U. doi:10.1016/0164-1212(92)90075-U.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hendry</surname>
          </string-name>
          ,
          <article-title>Value sensitive design: shaping technology with moral imagination</article-title>
          , The MIT Press, Cambridge, Massachusetts,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <article-title>The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law</article-title>
          (
          <year>2022</year>
          ). URL: https://arxiv.org/abs/2205.01166. doi:10.48550/ARXIV.2205.01166, arXiv Version Number: 1.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Gilliland</surname>
          </string-name>
          ,
          <article-title>The Perceived Fairness of Selection Systems: An Organizational Justice Perspective</article-title>
          ,
          <source>The Academy of Management Review</source>
          <volume>18</volume>
          (
          <year>1993</year>
          )
          <fpage>694</fpage>
          . URL: http://www.jstor.org/stable/258595?origin=crossref. doi:10.2307/258595.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Thorsteinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <article-title>The Effect of Selection Ratio on Perceptions of the Fairness of a Selection Test Battery</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>5</volume>
          (
          <year>1997</year>
          )
          <fpage>159</fpage>
          -
          <lpage>168</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/1468-2389.00056. doi:10.1111/1468-2389.00056.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Zibarras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Patterson</surname>
          </string-name>
          ,
          <article-title>The Role of Job Relatedness and Self-efficacy in Applicant Perceptions of Fairness in a High-stakes Selection Setting: Selection Fairness Field Study</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>23</volume>
          (
          <year>2015</year>
          )
          <fpage>332</fpage>
          -
          <lpage>344</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/ijsa.12118. doi:10.1111/ijsa.12118.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schinkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Van Vianen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Van Dierendonck</surname>
          </string-name>
          ,
          <article-title>Selection Fairness and Outcomes: A field study of interactive effects on applicant reactions</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>21</volume>
          (
          <year>2013</year>
          )
          <fpage>22</fpage>
          -
          <lpage>31</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/ijsa.12014. doi:10.1111/ijsa.12014.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Köchling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Wehner</surname>
          </string-name>
          ,
          <article-title>Better explaining the benefits why AI? Analyzing the impact of explaining the benefits of AI‐supported selection on applicant responses</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>31</volume>
          (
          <year>2023</year>
          )
          <fpage>45</fpage>
          -
          <lpage>62</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/ijsa.12412. doi:10.1111/ijsa.12412.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. E. M.</given-names>
            <surname>Van Vianen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Taris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Scholten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schinkel</surname>
          </string-name>
          ,
          <article-title>Perceived Fairness in Personnel Selection: Determinants and Outcomes in Different Stages of the Assessment Procedure</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>12</volume>
          (
          <year>2004</year>
          )
          <fpage>149</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Gilliland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Groth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Dew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Polly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Langdon</surname>
          </string-name>
          ,
          <article-title>Improving Applicants' Reactions to Rejection Letters: An Application of Fairness Theory</article-title>
          ,
          <source>Personnel Psychology</source>
          <volume>54</volume>
          (
          <year>2001</year>
          )
          <fpage>669</fpage>
          -
          <lpage>703</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/j.1744-6570.2001.tb00227.x. doi:10.1111/j.1744-6570.2001.tb00227.x.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Gilliland</surname>
          </string-name>
          ,
          <article-title>Procedural Justice in Personnel Selection: International and Cross-Cultural Perspectives</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>9</volume>
          (
          <year>2001</year>
          )
          <fpage>124</fpage>
          -
          <lpage>137</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/1468-2389.00169. doi:10.1111/1468-2389.00169.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Truxillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Bauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Sanchez</surname>
          </string-name>
          ,
          <article-title>Multiple Dimensions of Procedural Justice: Longitudinal Effects on Selection System Fairness and Test-Taking Self-Efficacy</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>9</volume>
          (
          <year>2001</year>
          )
          <fpage>336</fpage>
          -
          <lpage>349</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/1468-2389.00185. doi:10.1111/1468-2389.00185.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Van Den Bos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vermunt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A. M.</given-names>
            <surname>Wilke</surname>
          </string-name>
          ,
          <article-title>Procedural and distributive justice: What is fair depends more on what comes first than on what comes next</article-title>
          .,
          <source>Journal of Personality and Social Psychology</source>
          <volume>72</volume>
          (
          <year>1997</year>
          )
          <fpage>95</fpage>
          -
          <lpage>104</lpage>
          . URL: http://doi.apa.org/getdoi.cfm?doi=10.1037/0022-3514.72.1.95. doi:10.1037/0022-3514.72.1.95.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Truxillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Gilliland</surname>
          </string-name>
          ,
          <article-title>The Importance of Organizational Justice in Personnel Selection: Defining When Selection Fairness Really Matters</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>12</volume>
          (
          <year>2004</year>
          )
          <fpage>39</fpage>
          -
          <lpage>53</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/j.0965-075X.2004.00262.x. doi:10.1111/j.0965-075X.2004.00262.x.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>U.</given-names>
            <surname>Konradt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Warszta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ellwart</surname>
          </string-name>
          ,
          <article-title>Fairness Perceptions in Web-based Selection: Impact on applicants' pursuit intentions, recommendation intentions, and intentions to reapply: Fairness in Web-based Selection</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>21</volume>
          (
          <year>2013</year>
          )
          <fpage>155</fpage>
          -
          <lpage>169</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/ijsa.12026. doi:10.1111/ijsa.12026.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Furnham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chamorro-Premuzic</surname>
          </string-name>
          ,
          <article-title>Consensual Beliefs about the Fairness and Accuracy of Selection Methods at University</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>18</volume>
          (
          <year>2010</year>
          )
          <fpage>417</fpage>
          -
          <lpage>424</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/j.1468-2389.2010.00523.x. doi:10.1111/j.1468-2389.2010.00523.x.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mirowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mesnet</surname>
          </string-name>
          ,
          <article-title>Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews</article-title>
          ,
          <source>Human Resource Management Journal</source>
          <volume>32</volume>
          (
          <year>2022</year>
          )
          <fpage>364</fpage>
          -
          <lpage>383</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/1748-8583.12393. doi:10.1111/1748-8583.12393.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Q. M.</given-names>
            <surname>Roberson</surname>
          </string-name>
          (Ed.),
          <source>The Oxford handbook of diversity and work, Oxford library of psychology</source>
          , Oxford University Press, New York,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T. E.</given-names>
            <surname>Landon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Arvey</surname>
          </string-name>
          ,
          <article-title>Ratings of Test Fairness by Human Resource Professionals</article-title>
          ,
          <source>International Journal of Selection and Assessment</source>
          <volume>15</volume>
          (
          <year>2007</year>
          )
          <fpage>185</fpage>
          -
          <lpage>196</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/10.1111/j.1468-2389.2007.00380.x. doi:10.1111/j.1468-2389.2007.00380.x.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Alder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gilbert</surname>
          </string-name>
          ,
          <article-title>Achieving Ethics and Fairness in Hiring: Going Beyond the Law</article-title>
          ,
          <source>Journal of Business Ethics</source>
          <volume>68</volume>
          (
          <year>2006</year>
          )
          <fpage>449</fpage>
          -
          <lpage>464</lpage>
          . URL: http://link.springer.com/10.1007/s10551-006-9039-z. doi:10.1007/s10551-006-9039-z.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Koivunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Olsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Olshannikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lindberg</surname>
          </string-name>
          ,
          <article-title>Understanding Decision-Making in Recruitment: Opportunities and Challenges for Information Technology</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3361123. doi:10.1145/3361123.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Weerts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Xenidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tarissan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Olsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pechenizkiy</surname>
          </string-name>
          ,
          <article-title>Algorithmic unfairness through the lens of EU non-discrimination law: Or why the law is not a decision tree</article-title>
          ,
          <source>arXiv preprint arXiv:2305.13938</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Liem</surname>
          </string-name>
          ,
          <article-title>“It's the most fair thing to do but it doesn't make any sense”: Perceptions of mathematical fairness notions by hiring professionals</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>8</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Tristan</surname>
          </string-name>
          , et al.,
          <article-title>Unlocking fairness: a trade-off revisited</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Veldanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Grob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thakur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pearce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Karri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <article-title>Investigating hiring bias in large language models</article-title>
          , in:
          <source>R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation Models</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G.</given-names>
            <surname>Malgieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Pasquale</surname>
          </string-name>
          ,
          <article-title>From transparency to justification: Toward ex ante accountability for AI</article-title>
          ,
          <source>Brooklyn Law School</source>
          , Legal Studies Paper (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
          ,
          <article-title>Can an algorithm be agonistic? ten scenes from life in calculated publics</article-title>
          ,
          <source>Science, Technology, &amp; Human Values</source>
          <volume>41</volume>
          (
          <year>2016</year>
          )
          <fpage>77</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hildebrandt</surname>
          </string-name>
          ,
          <article-title>Privacy as protection of the incomputable self: From agnostic to agonistic machine learning</article-title>
          ,
          <source>Theoretical Inquiries in Law</source>
          <volume>20</volume>
          (
          <year>2019</year>
          )
          <fpage>83</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>J.</given-names>
            <surname>Simson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pfisterer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kern</surname>
          </string-name>
          ,
          <article-title>Everything, everywhere all in one evaluation: Using multiverse analysis to evaluate the influence of model design decisions on algorithmic fairness</article-title>
          ,
          <source>arXiv preprint arXiv:2308.16681</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>C.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <article-title>Developing a fair online recruitment framework based on job-seekers' fairness concerns</article-title>
          ,
          <source>arXiv preprint arXiv:2501.14110</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>