<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Algorithmic imaginary of AI for recruitment: perceptions and experiences of AI use from HR practitioners</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Silvia Ecclesia</string-name>
          <email>silvia.ecclesia@ntnu.no</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Norwegian University of Science and Technology</institution>
          ,
          <addr-line>Edvard Bulls veg 1, Trondheim, 7491</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>The sensationalistic narratives surrounding Artificial Intelligence have a significant influence on how people make sense of and use the technology. These narratives often create unrealistic expectations about AI's capabilities that affect how people use it [1]. Adopting the concept of algorithmic imaginary, which refers to the way in which people perceive and experience algorithms in their interaction with them [2], I investigate how HR practitioners perceive and experience AI for hiring. With AI being increasingly used in recruitment processes, a better understanding of HR practitioners' interaction with the technology is important when considering matters of bias and fairness. This short paper is based on 8 semi-structured interviews with HR practitioners and recruiters in Italy with experience in using AI for their work, and on the observation of a selection process at an Italian head-hunting company using its own AI-powered system. From these data emerged how HR practitioners' perception of algorithms as neutral shapes an understanding of bias as something that can be overcome through training of the system. This view, however, reproduces a simplistic idea of bias which does not account for its intersectional and complex entanglement with power structures. Excessive trust in AI's neutrality should be replaced with more critical engagement with AI outcomes.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Algorithmic imaginary</kwd>
        <kwd>HR</kwd>
        <kwd>Algorithmic bias</kwd>
        <kwd>AI fairness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial Intelligence has a strong link to science fiction, where it is used as a narrative device to create
highly evocative and polarized narratives. These narratives have been shown to influence people’s
understanding of the technology [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and practices in using it [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Sensationalistic fictional narratives
of AI, however, are not technological blueprints and often create unrealistic expectations for what AI
can do [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Consequently, we can expect there to exist a gap between perceived and actual utility of AI.
The concept of algorithmic imaginary is useful in this regard to describe how people make sense of
and experience algorithms in their everyday lives through their own perceptions and beliefs of how
algorithms work [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The algorithmic imaginary shapes how people use algorithms, and in turn, it also
shapes the algorithm through use, influencing its workings. These influences on AI use can also affect
ethics and fairness, for example by prompting users to place excessive trust in the technology.
      </p>
      <p>
        In this paper, I will use the case of algorithms used for recruitment in human resources (HR) to
explore how practitioners’ algorithmic imaginary impacts AI fairness in hiring. In the last few years, the
human resources sector has seen an increasing offer of AI-powered tools that can rank or evaluate
CVs and show recommended candidates. On one side, these tools have been found to raise concerns
regarding bias, power imbalances, and fairness of the selection process [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. On the other side, they
are marketed as debiasing tools that can support practitioners in making fairer hiring decisions [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
This ambiguity will be explored in this paper by looking at how the algorithmic imaginary influences
practitioners’ leaning towards one belief or the other, thus impacting their use of AI and their understanding
of AI fairness for recruitment.
      </p>
      <p>This paper is based on 8 interviews with HR practitioners using algorithms for recruitment, and
around 12 hours of observation with a head-hunting company using an AI-powered system for search
and selection. Through analysis of interview transcripts and observation notes, in this paper, I aim
to present the main ideas behind the algorithmic imaginary shared by HR practitioners and how
these might impact fairness in recruitment. First, considering the influence of science fiction on the
perception of AI, I will present existing literature on the role of AI imaginaries and the algorithmic
imaginary in shaping people’s use of AI. Secondly, I will present the specificity of the case of AI for
recruitment focusing on the discussion about bias and fairness in AI when used for this task. Then,
after explaining the method, I will present the algorithmic imaginary of recruiters that I identified in
the data. Lastly, I will reflect on this imaginary which is strongly influenced by popular narratives and
marketing narratives but also presents some interesting ambiguities that emerged when looking at
people’s use of algorithms. Through this analysis, I aim to propose some avenues for future research
and attention in the field of AI fairness for recruitment.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The Myth of Artificial Intelligence</title>
      <p>
        Thinking machines have been a classic element of science fiction since the beginning of the 20th century
[
        <xref ref-type="bibr" rid="ref10 ref3">3, 10</xref>
        ]. The first representation of a thinking robot in a movie was Maria the robot in Fritz Lang’s 1927 film
Metropolis. Following that, the science fiction genre surged, and with it representations of
intelligent machines, including classic movies like Terminator, 2001: A Space Odyssey, and Ex-Machina
which are being brought up still today when talking about AI. The role of science fiction in informing
the public’s understanding of AI has been fundamental considering the complexity of the technology
which would be difficult to understand otherwise [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Nevertheless, these sensationalistic narratives
are not to be considered representations of what the future could become. More often, AI is used as
a metaphor to address other issues in our society [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For example, it can be a metaphor for gender
oppression, like in Ex-Machina, or for class oppression, like in Metropolis. While these are powerful imaginaries,
the effect of using AI for these extreme future stories is the creation of unrealistic expectations about the
capabilities of the technology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For example, the many stories about artificial intelligence longing to
become human and have human feelings have already led many people to believe that current chatbots,
like OpenAI’s ChatGPT, show a similar desire [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        How people make sense of and perceive the workings of AIs can be understood as an algorithmic
imaginary [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. While I will not consider only algorithms in this paper, the concept of algorithmic
imaginary is useful for capturing the interplay between people’s beliefs about a technology’s workings
and their use of it. Since algorithms are one of the most common AI applications we encounter in our
everyday lives and an application where we can directly experience the AI adapting to our behaviour,
they are particularly prone to be interpreted and easily create beliefs around their functioning [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ].
Algorithmic imaginary indicates “the way in which people imagine, perceive and experience algorithms
and what these imaginations make possible” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The algorithmic imaginary becomes evident in
situations in which people become aware of and encounter the algorithm [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This imaginary shapes
how people interact with and use algorithms, but it also shapes the algorithm in return, subtly changing
its functioning. For example, people can limit the information they share on social media to protect
themselves [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] or make explicit choices about what to post, comment, or like [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. These practices show
that the perception of how algorithms work shapes users’ behaviour, which in turn influences the algorithm
itself, for example in the content it shows to each user [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this paper, the relationship between
algorithmic imaginary and practices will be explored in the case of AI for recruitment.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. AI and recruitment</title>
      <p>AI is being increasingly used in human resources (HR) for selection and recruitment. LinkedIn,
which makes use of an algorithmic recommendation system, has become an indispensable
platform for recruiters in any sector, meaning that recruitment has employed some form of digital
automation for a very long time [15]. Today, an increasing number of AI tools are being proposed on
the market to perform several different tasks within the recruitment process. One of the most common
uses pertains to the screening, ranking, or summarizing of candidates’ CVs or cover letters, providing an
evaluation based on previous hiring decisions, the job requirements, or simply making a summary for
ease of manual screening [16]. In different ways, these AI applications propose an analysis of candidates
to recruiters which aims at supporting them in the initial screening phases and making recruitment
faster.</p>
      <p>
        The ethical implications of using these tools are a matter of great discussion. On one side, many
companies creating these systems adopt as one of their selling points the idea that AI decision-making
will reduce human bias [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. These claims often rest on a deterministic understanding of technologies
and fairness, where AI’s reliance on so-called neutral data is believed to lead to a neutral decision.
According to the understanding of AI as neutral, characteristics like gender or race can be simply
switched off, allowing AI to evaluate candidates only based on merits [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. On the other side, however,
anonymization of candidates’ personal information has been found not to be enough to make hiring
processes free from discrimination [17]. Issues of algorithmic bias are difficult to spot as AI operates as
a black box, without giving an explanation for its evaluations. Thus, in parallel with marketing efforts
selling AI as less biased, the use of automated systems for ranking CVs has raised many ethical concerns
regarding bias, discrimination, and fairness of the selection process [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Algorithms are not neutral
and can reproduce or even amplify discrimination and inequality by making discrimination systematic
and ingrained in the system. Consequently, many scholars call for critical engagement with AI systems
for recruitment from the HR practitioners adopting them [
        <xref ref-type="bibr" rid="ref18 ref8">18, 8</xref>
        ].
      </p>
      <p>In short, perceptions of AI translate into use and can thus have an impact on fairness when AI is
being used for decision-making. Mainly, the influence of fiction on people’s understanding of AI can
prompt people to place excessive trust in the technology and discard their own agency in interpreting,
modifying, or questioning AI results. Placing excessive trust in artificial intelligence and not questioning
its outcome could especially obscure issues of bias and discrimination, which require critical engagement
with AI results. Facebook’s algorithm has given us many examples of the negative consequences for society
of not being aware of algorithm workings, e.g. not being aware of the partial picture the Facebook
news feed shows us can lead to the “bubble effect” and influence political views [19]. Similarly negative
consequences have already emerged in the use of AI for hiring [20], which is why critical engagement
is fundamental to ensure fairness. In this paper, I will investigate how HR practitioners experience and
perceive algorithms in their everyday lives and reflect on the consequences this has on fairness in AI
for recruitment.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Method</title>
      <p>This paper is based on semi-structured interviews and observations with recruiters and HR practitioners
in Italy. This data collection was done as part of the activities for the European project BIAS, investigating
bias in the labour market and empowering the HR and AI communities in mitigating it. Participants
were first identified through BIAS events and workshops and then recruited through snowball sampling.
The site for observations was found through an online search. In total, I have conducted 26 interviews
mainly with HR practitioners and recruiters in Italy. From these 26 interviews, I selected a sub-set of
8 interviews which I deemed most relevant for this study as the interviewees had direct experience
of AI systems for screening or ranking CVs. Interviews were conducted both online and in-person
and lasted approximately one hour. The interview questionnaire (appendix A) was divided into six
parts: work experience and educational background, standard selection process with or without AI,
understanding of fairness of the selection process, use of AI for recruitment, perception of AI fairness,
and expectations about the future of AI.</p>
      <p>To complement this data, I conducted a series of observations during the span of two months for a
total of around 12 hours with Tech Hire, a head-hunting company using AI for CV screening. Within
Tech Hire, I followed one specific selection process for a sales manager in the cybersecurity sector
observing primarily Alberto, the main recruiter, and holding regular check-ins with him where he
updated me on the progress of the candidate search.</p>
      <p>
        The interviews were conducted in Italian; thus, they have been transcribed and then translated. The
observations resulted in a series of notes which have also been translated. The interviews and notes
from the observations have been coded in NVivo through thematic analysis. During the first round of
coding, I identified many quotes relating to perceptions of AI. For example, seeing AI as an assistant, a
tool, or a threat to their job. Within the code “perception”, however, I noticed a difference between the
perceptions of interviewees who had never used AI and those who had. The perceptions of interviewees
who did use AI relied on their experience of use and were often presented as certainties. These quotes
seemed close to the experiences registered by Bucher [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] in perception of the Facebook algorithm. Thus,
in a second round of coding I further refined the code “perception” and differentiated the perceptions of
those who had never used AI for recruitment from those who had, calling the latter code “algorithmic
imaginary”. In the remainder of this paper, I will first introduce the HR algorithmic imaginary and
present it through two examples of moments in which the algorithm made its presence felt by the user.
Lastly, I will reflect on the implications for fairness in recruitment processes using AI systems.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Findings - The algorithmic imaginary of AI for recruitment</title>
      <sec id="sec-5-1">
        <title>5.1. Insecurity about AI</title>
        <p>A common theme across interviews is the general insecurity HR practitioners communicate about their
knowledge of what AI is and how it works. On two occasions the interviewees asked the interviewer
directly for confirmation that the tools they were describing were actually AI, which shows the difficulty
in understanding this highly complex technology. In addition, the term AI is often used as a general
term to indicate the dominant computational techniques [21] creating further confusion. During the
observations as well, the recruiter mentioned that while the company’s co-founder (who programmed the
platform) knows how everything works, the recruiter does not have the same overview and technical
expertise, and there are features he is not sure how they work. This insecurity regarding how the
technology actually works is further exacerbated once people experience so-called AI hallucinations,
which refer to AI output being erroneous or even fictional [22]. Experiencing these hallucinations
further creates a sense of mystery around the technology and reinforces the feeling of lack of transparency.</p>
        <p>In one case this insecurity translated into a reflection about the system’s fairness and a resistance to
AI use. Fiona, who tested an AI system for CV screening said:
“And yet there are continuous challenges [to AI implementation]. Because how can we tell
if this system has taken into account all the characteristics I want and has not discriminated
against someone? Based on what is it showing only male and not female candidates for
example? So there are still many doubts about the effectiveness and also the fairness of
this system.”</p>
         <p>Here Fiona reflects on how the lack of transparency of algorithms leads her to not trust the system,
considering not only fairness but also effectiveness, comparing fair hiring decisions with good hiring
decisions. On the other hand, interviewees who were currently using AI systems for recruitment fought
this insecurity through continuous oversight of the system, which they felt they couldn’t trust. For
example, Monica, a recruiter using AI for candidate screening, mentioned that recruiters in her company still
check all of the candidates marked red by the AI as they know the system can make mistakes. She then
reflects on how sometimes it is clear if the AI made a good ranking but other times it is more ambiguous
and they need to do a manual screening of the CV to make sure it has been properly evaluated.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. AI is based on data</title>
        <p>The most recurring theme across interviews referred to the perception of AI relying on data and
probability. Maria, HR junior specialist, described it as being “based on science”, like a match-making
platform proposing the best possible candidate. As a consequence, interviewees understand that
the algorithm works best when given objective and numerical data, which it can easily process.
Weighing the pros and cons of using AI, HR consultant Valeria says:
“For objective variables, it can probably be useful, in the sense that if I link some KPIs, or
some indicators, to more objective indicators… i.e. how many projects you have managed,
what level of satisfaction you have concerning customer service. [...] Or if you are in
production, how many rubber pads have you produced, in what timeframe, in what ways.
When there are objective variables artificial intelligence can do well.”</p>
        <p>Here Valeria reflects on the ways in which AI can be useful during CV screening. She draws a
distinction between parts of the recruitment process that rely on objective data, usually measuring hard
skills, and those that rely on subjective data that need to be evaluated by a human. Nevertheless, this
distinction is not applied in practice, as recruiters evaluate hard and soft skills at the same time. Having
to incorporate the use of AI with the full process of CV screening thus means that they have to turn
their requirements into quantifiable data that can be understood by the algorithm. Nadia for example
mentioned that:
“the advantage of using artificial intelligence rather than a person is that AI works a lot
based on data. To work a lot with data the person must also know how to classify them
and give rules that generate clearer results.”</p>
        <p>In this quote, Nadia emphasizes the crucial role of users inputting data into the system to actually get
good results from it. While there are few examples in the interviews and observations of how recruiters
translate requirements into objective data, the ones using AI mention having to learn how to use the
system properly and adapt their use to the algorithm. Nadia, for example, complained about having
to “be patient” and every day refine the way in which she used the system to screen candidates as the
results never fully met her expectations.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. AI needs to be trained</title>
         <p>Starting from the perception of AI relying on data, interviewees moved on to explaining how the algorithm
needs to be trained and guided. The common narrative across the eight interviews is that, for now, AI
needs to be continuously checked by humans, because of the insecurity it elicits, but with continuous
training and feedback loops the system is improving or will improve over time. For example, Matteo
said:
“Today the algorithm needs to be trained. […] The supplier is training the algorithm. Two
years of training have now passed, and it is getting better and better.”</p>
        <p>Similarly, Tech Hire employees mentioned training AI as an investment for the future. They take
time now to train and give feedback to the system to reap the benefits of it later.</p>
         <p>This training was sometimes interpreted also as a customization or personalization, trying to convey
their own screening “style” to the system. For example, Denise, reflecting on the possibility of using AI
for CV screening, says:
“For example, when I look at CVs I check if people have worked during university or if
they have studied abroad and so on and so forth. I don’t know if I can ask the software
to look for [this]. […] The day I decide to use it, I would like to be able to customize it as
much as possible.”</p>
        <p>Understanding training as a personalization further enhances the need for control over the AI system,
which is expected to not just become a good recruiter but a good recruiter for their company, aware of
their context and company identity and values.</p>
         <p>Imagining the algorithm as a blank slate into which data are inputted, however, leads to perceiving AI
mistakes as a consequence of human mistakes in training or in using the technology, either due to bias
in training the AI or for lack of knowledge in using it. The problem of bias is seen as inherently human
rather than technological. AI is then considered not biased but, if ever, trained on biased data generated
by people. Thus, recruiters are wary of the results produced by AI but still hold the ambiguous belief
that by training it and making sure it is fed good data it will become better, and thus neutral.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Encounters with the algorithm</title>
         <p>The HR algorithmic imaginary is highly ambiguous and even contradictory in some regards, perceiving
AI as objective while also not fully trusting it. Experiences of encounters with algorithms can help us
understand how people made sense of this ambiguity in their use of AI and how different perceptions
emerged according to how the algorithm made itself known. At times, the interaction between the input
given by the user and the AI output based on it was misinterpreted, showing an unrealistic expectation
for what AI should do.</p>
        <p>Matteo and Monica, while talking about their use of their AI system for candidate screening and
ranking, experienced the algorithm’s output as a way for themselves to check their own input and the
job description.</p>
        <p>“We are the first ones to check, not so much if the algorithm runs well, but if in setting
the filters of the job requirements we have missed some important ones. Sometimes we
realize that the candidate Mario Rossi ended up in the red CVs and we wonder why and we
realize that we have not included that type of degree or those years of experience. Maybe
the manager tells you “Mario Rossi is a good profile for me”, so we go and check his CV
because, for example, the manager could have had a recommendation from someone. But
Mario Rossi is in the reds, why? Because you [manager] told me you wanted up to five
years of experience and he has ten. Let’s include Mario Rossi but also let’s expand the
research up to ten years of experience so that in addition to him other candidates are also
included.”</p>
        <p>Their experience shows a very grounded understanding of AI and its objectivity, which in turn
highlights people’s bias in setting job requirements. In this case, AI is perceived as a neutral tool, and
this perception leads them to question their own biases and rethink the recruitment process.</p>
         <p>A different experience comes from the observations with Tech Hire. Alberto often described himself
and his colleagues as “pedantic”, always checking the AI ranking and making sure there are no mistakes.
He then mentions some examples of AI mistakes which I reported in my notes:
“For example, we saw the CV of a suitable candidate, but he lives in Rome. The AI evaluated
it 8/10 and set him at a high priority because the job description says that Milan/Turin/Rome
are all acceptable locations, but in reality the priority is Milan. Alberto identifies this as an
AI mistake. He evaluates the candidate 8 but sets him at a lower priority as he will first
contact the ones living in Milan. A similar example had happened earlier in the day. A
candidate had been flagged red (not meeting the must-have requirements) by the system
because she indicated a level 7 of English and the minimum required was 8. Alberto,
however, says that between 7 and 8 there is not much difference especially since it is a
self-assessment, and the candidate matches all the other requirements. He again identifies
this as an AI mistake and marks the candidate green.”</p>
         <p>In this situation, the recruiter explained the differences between the AI screening and the human
screening as AI mistakes, conveying an unrealistic expectation of what AI could do, since the system
followed the job requirements. These types of information, such as Milan being a priority location
and the language requirement not being so strict, are contextual information depending on the specific
search. These could be communicated to the AI system but would require more work in the initial
setup phase. The perception, however, is that through continuous training, and what interviewees
called “personalization”, this kind of contextual knowledge and HR expertise can be replicated by AI.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion - Implications for fairness</title>
      <p>To summarize, the HR practitioners’ algorithmic imaginary relied mostly on the view of AI being based
on data. Data, understood as objective and numeric data, as opposed to subjective requirements, are
at the core of the algorithmic imaginary. The way in which algorithms process and use these data,
however, is still occasionally unclear, leading to a sense of insecurity and mistrust towards the algorithm
and a need to continuously check its output. While checking the algorithm’s output is sometimes
perceived as a burden, like in the case of Nadia mentioned before, it is also perceived as an investment,
like in the case of Tech Hire. Recruiters check the algorithm with the intention of training or guiding it
through feedback, improvements, or personalization to make it better. The algorithmic imaginary is
that of a technology which can eventually become neutral, and thus fair, but that for now needs to be
monitored.</p>
      <p>
        The presented algorithmic imaginary shows a clear influence from external narratives. First, it
borrows from public policies and popular media narratives about the inevitability of AI [23, 24]. HR
practitioners’ encounters with algorithms today suggest a need for continuous oversight, but do not
shake their belief that AI will be better in the future, even better than people. Secondly, their algorithmic
imaginary is strongly influenced by companies and marketing strategies presenting AI as neutral and
unbiased. This idea is upheld by interviewees as well, who adopt this view of AI as being objective, and
therefore, less biased than people. Since AI is based on data and probability, many assume it naturally
leads to better decisions, thus, mistakes or biases in AI are perceived as solvable through additional
training. However, previous research has already challenged this narrative and highlighted the issue it
could lead to in relation to fairness and bias [
        <xref ref-type="bibr" rid="ref8 ref9">9, 8</xref>
        ]. The understanding of AI as neutral essentializes bias,
overlooking the complex situated nature of bias and fairness in recruitment.
      </p>
      <p>
        Despite the importance of external imaginaries in influencing recruiters, I identified a tension between
perceptions of AI today and future expectations about AI which coexist within the same imaginary. On
one side, the policy and media narratives and the sensationalistic science fiction depictions of AI support
the hope for AI to become better and continue to improve. This expectation stems from depictions of
AIs as humans, too realistic to be true [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], prompting HR practitioners to imagine a future in which AI
will be able to perfectly replicate human consciousness. On the other side, everyday encounters with
algorithms do not meet this expectation, leading people to not trust algorithms (yet) and continue to
check their output. Trust in AI is, thus, a future promise rather than a present possibility. The recruiters’
algorithmic imaginary cannot be understood as pertaining only to the present or only to the future.
The experience of present AI capabilities and future expectations co-exist in the HR practitioners’
algorithmic imaginary and equally influence their use, for example by giving a double purpose to the
act of algorithmic oversight, which serves to both check and train the system. While the imaginary shapes
present use, it also shapes expectations about the future of AI and how current practices will impact it;
in return, these expectations influence current use.
      </p>
      <p>
The ambiguity imbued in the algorithmic imaginary and the amount of trust that can be put into
algorithms find certainty and stability in the adjacent data imaginary. Data, within the understanding
of AI, are not problematized. While interviewees recognize that data can be biased, bias in data is
perceived as a flaw that can be easily fixed by removing markers like gender, race, sexuality or others.
“In this sense, AI-powered hiring tools evoke a specific sociotechnical imaginary centred on the concept
of meritocracy” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], suggesting that erasing personal markers will lead to decisions based only on merit,
and thus to neutral and fairer outcomes. With data imaginary I refer to the perceptions and expectations
about the neutrality of data: data are perceived as something that inherently reflects reality and the
truth and that could not possibly lead to non-neutral outcomes. Data, however, hold power in the way
they are collected: how, by whom, and from whom [25]. Reducing complex processes, like evaluating
a job candidate, into small bits and pieces of data does not mean achieving objectivity. The framing
of data as neutral overlooks the fact that AI models are trained on datasets that reflect one of the
possible realities experienced by people, usually that of the dominant group, but not all. Science fiction also
plays a role in hiding the process of feeding data to AI, which is often absent or glossed over in popular
narratives, and the importance of diversity within data [26]. The imaginary of data as neutral and
consequently of AI as neutral hides the structural and systemic issues that could be embedded in AI
design and implementation.
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>Artificial Intelligence has long been a staple of science fiction movies which propose sensationalistic
narratives about intelligent machines. While these science fictions often have the aim of inviting
reflection on other societal issues, one of their effects is that of creating false expectations about AI
capabilities. These expectations form the algorithmic imaginary, which informs use and perception.
Through interviews and observations with recruiters and HR practitioners in Italy, I examine the
algorithmic imaginary of AI in recruitment and how it could impact AI fairness. From the interviews
and observation, I identified an algorithmic imaginary relying on the understanding of AI as something
that works with data. There is high awareness of the fact that the algorithm needs to be trained, which
interviewees think will make it better over time. While this algorithmic imaginary gives people agency
to experiment with input, it also conveys the idea of algorithmic bias as a barrier to achieving otherwise
neutral AI decisions about hiring. My findings show how the underlying assumptions about AI are highly
influenced by external sources: science fiction’s unrealistic depictions of AI and companies’ marketing
of AI as intrinsically neutral and unbiased. Despite these influences, however, recruiters’ algorithmic
imaginary presents an internal ambiguity between the current experience of AI as flawed and opaque
and the future expectation of human-like, neutral AIs. The algorithmic imaginary thus relates to both
the present and future of AI and recruiters perceive their role as trainers of AI as a temporary step
for AI development. Within this ambiguous algorithmic imaginary, I also recognize a strong influence
from the adjacent data imaginary that presents data as trustworthy and neutral. However, the intrinsic
situatedness and complexity of data, and the way data can reflect unequal power structures, are overlooked.
The way in which the data imaginary is inherited and incorporated into the algorithmic imaginary and
its impact on understandings of debiasing could be an interesting avenue for further research.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work received financial support by the European Union’s Horizon Europe Research and Innovation
program as part of the project “BIAS: Mitigating Diversity Biases of AI in the Labor Market” (grant
agreement No. 101070468).</p>
      <p>A. Semi-structured interview guide
1. What is your background?
2. What is your current position?
3. Could you explain to me step by step a typical recruitment process at your company?
4. Thinking about the work context and personnel selection activities in particular, what is your
interpretation of the concept of fairness and equality?
5. Are measures implemented at your company to mitigate discrimination in the workplace and in
the selection phases? Do you think they work?
6. Do you use or have you ever used an AI system for any phase of the recruitment process?
7. If yes, could you explain to me how it works?
8. Do you know how the data on which the AI system is based is collected?
9. Does the AI system play a role in the final hiring decision?
10. What is your opinion on the introduction of AI in the workplace?
11. Thinking about the role of HR, what tasks do you think AI can replace and what tasks can only
be done by humans?
12. What do you think is the impact of AI on fairness and equality in recruitment practices?
13. How do you think AI can be best exploited in personnel selection and management activities
(especially considering fairness)?
14. What are your concerns related to the introduction of AI in the workplace? And your hopes?
15. What do you think the role of AI could be in 20 years?
16. Would you like to make some final comments or reflections?</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] I. Hermann, Beware of fictional AI narratives, Nature Machine Intelligence 2 (2020) 654. URL: https://www.proquest.com/docview/2621045548/abstract/E519E44FBEB34D1APQ/1. doi:10.1038/s42256-020-00256-0.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] T. Bucher, The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms, Information, Communication &amp; Society 20 (2017) 30-44. URL: https://doi.org/10.1080/1369118X.2016.1154086. doi:10.1080/1369118X.2016.1154086.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] R. Musa Giuliano, Echoes of myth and magic in the language of Artificial Intelligence, AI &amp; SOCIETY 35 (2020) 1009-1024. URL: http://link.springer.com/10.1007/s00146-020-00966-4. doi:10.1007/s00146-020-00966-4.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] B. A. Bechky, Evaluative Spillovers from Technological Change: The Effects of “DNA Envy” on Occupational Practices in Forensic Science, Administrative Science Quarterly 65 (2020) 606-643. URL: https://doi.org/10.1177/0001839219855329. doi:10.1177/0001839219855329.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Y. Strengers, S. Pink, L. Nicholls, Smart energy futures and social practice imaginaries: Forecasting scenarios for pet care in Australian homes, Energy Research &amp; Social Science 48 (2019) 108-115. URL: https://www.sciencedirect.com/science/article/pii/S221462961830464X. doi:10.1016/j.erss.2018.09.015.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] C. Rigotti, E. Fosch-Villaronga, Fairness, AI &amp; recruitment, Computer Law &amp; Security Review 53 (2024) 105966. URL: https://www.sciencedirect.com/science/article/pii/S0267364924000335. doi:10.1016/j.clsr.2024.105966.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] M. Mori, S. Sassetti, V. Cavaliere, M. Bonti, A systematic literature review on artificial intelligence in recruiting and selection: a matter of ethics, Personnel Review (2024). URL: https://www.emerald.com/insight/content/doi/10.1108/PR-03-2023-0257/full/html. doi:10.1108/PR-03-2023-0257.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] E. Drage, K. Mackereth, Does AI Debias Recruitment? Race, Gender, and AI's “Eradication of Difference”, Philosophy &amp; Technology 35 (2022) 89. URL: https://doi.org/10.1007/s13347-022-00543-1. doi:10.1007/s13347-022-00543-1.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] P. Seppälä, M. Małecka, AI and discriminative decisions in recruitment: Challenging the core assumptions, Big Data &amp; Society 11 (2024) 20539517241235872. URL: https://journals.sagepub.com/doi/10.1177/20539517241235872. doi:10.1177/20539517241235872.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] S. Cave, K. Dihal, Hopes and fears for intelligent machines in fiction and reality, Nature Machine Intelligence 1 (2019) 74-78. URL: https://www.nature.com/articles/s42256-019-0020-9. doi:10.1038/s42256-019-0020-9.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] A. E. Al Lily, A. F. Ismail, F. M. Abunaser, F. Al-Lami, A. K. A. Abdullatif, ChatGPT and the rise of semi-humans, Humanities and Social Sciences Communications 10 (2023) 1-12. URL: https://www.nature.com/articles/s41599-023-02154-3. doi:10.1057/s41599-023-02154-3.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] M. Eslami, A. Rickman, K. Vaccaro, A. Aleyasen, A. Vuong, K. Karahalios, K. Hamilton, C. Sandvig, ”I always assumed that I wasn't really that close to [her]”: Reasoning about Invisible Algorithms in News Feeds, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, Association for Computing Machinery, New York, NY, USA, 2015, pp. 153-162. URL: https://dl.acm.org/doi/10.1145/2702123.2702556. doi:10.1145/2702123.2702556.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] E. Rader, R. Gray, Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, Association for Computing Machinery, New York, NY, USA, 2015, pp. 173-182. URL: https://dl.acm.org/doi/10.1145/2702123.2702174. doi:10.1145/2702123.2702174.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] M. Büchi, E. Fosch-Villaronga, C. Lutz, A. Tamò-Larrieux, S. Velidi, Making sense of algorithmic profiling: user perceptions on Facebook, Information, Communication &amp; Society 26 (2023) 809-825. URL: https://doi.org/10.1080/1369118X.2021.1989011. doi:10.1080/1369118X.2021.1989011.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>