<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Generated content in the wild</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Iris S. Bore</string-name>
          <email>iris.bore@nav.no</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robindra Prabhu</string-name>
          <email>robindra.prabhu@nav.no</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacob Sjødin</string-name>
          <email>jacob.sjodin@nav.no</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Norwegian Labour and Welfare Administration (Nav)</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>000</volume>
      <fpage>9</fpage>
      <lpage>0009</lpage>
      <abstract>
        <p>This work discusses ongoing efforts to explore the discrimination risks of a generative component added to a recruitment tool developed and made publicly available by the Norwegian Labour and Welfare Administration. We examine potential discrimination triggers, propose a method to identify risks of representation skew and non-inclusive language, and highlight governance shortcomings that complicate future design of the service in ways that could mitigate these risks. The aim of this contribution is to showcase some of the practical challenges faced when evaluating discrimination risk from generative models embedded in services in public administration.</p>
      </abstract>
      <kwd-group>
        <kwd>Large Language Models</kwd>
        <kwd>Recruitment</kwd>
        <kwd>Job advertisements</kwd>
        <kwd>Generated content</kwd>
        <kwd>Fairness</kwd>
        <kwd>Discrimination</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>1.1. The case</title>
        <p>
          As generative AI models advance and become more accessible, they are likely to integrate into digital
applications, systems, and processes in recruitment [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], including those used and developed by public
agencies. At the same time, studies have shown generative models to exhibit biases [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], which may
have adverse downstream consequences. Such developments raise concerns about the responsible and
trustworthy use of the technology, not least with respect to issues of fairness and discrimination. While
the literature on discrimination risks from algorithmic systems that attribute scores or categories to
individuals is comparatively well-developed, less is known about the risks from systems producing
text and other media. In the former, discrimination often arises from variations in the distribution of
scores or error rates between groups. In the latter, the risks are subtler and more nuanced, influenced
by factors such as content, word choice, tone, and context. This complicates the practical assessment of
discrimination risk in services employing generative components, not least in public agencies serving
vulnerable populations.
        </p>
        <p>
          This study conceptualises, describes and discusses the practical assessment of discrimination risk in an
assistive recruitment tool developed and employed by the Norwegian Labour and Welfare Administration
(Nav). Nav is a central gateway to a range of public benefits and social services in Norway, such as
unemployment benefits, sick leave, work assessment allowance, and pensions. Its mission is to ensure social
and financial security, support the transition to work and activity, and promote an inclusive society,
inclusive working life, and a well-functioning labour market [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>In accordance with this mandate, Nav develops and hosts the free online platform arbeidsplassen.no,
where jobseekers and employers can connect. With the aim of lowering barriers for job seekers and employers
to connect, the platform hosts a service called Superrask søknad (”Super quick application” or ”Short cut
application”), where traditional CVs and cover letters are replaced by a simple match between the requested
qualifications and attributes emphasised in the job ad and those the applicants report they possess. This
requires the advertiser to clearly specify the qualifications, skills and traits they wish for. Experience has
shown that this may slip, hence a generative AI component was added, whereby the advertiser uploading a
new job ad is presented with five suggested skills and traits to include, based on the draft ad provided (see
Appendix D for examples of generated suggestions of relevant skills and traits from the service). The
advertiser is then free to modify the text to include all, some or none of the provided suggestions.</p>
        <p>The following analysis is exclusively limited to this latter component, namely the language
model-based ’suggestion generator’, hereafter referred to as SRS. Our concern is how the evaluation of
discrimination risk in SRS may be conceptualised and probed in practice.</p>
        <p>This discussion paper is organised as follows: in Section 2 we unpack potential discrimination triggers
in SRS, in Section 3 we put forward a proposal to technically probe and evaluate this risk, and in Section
4 we discuss the apparent weaknesses in regulatory protections and the lack of guiding principles for
fair and equitable design of services with an associated risk of discrimination from generated content.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Potential triggers of harm in SRS</title>
      <p>SRS forwards text suggestions from a language model to the user, based on the user’s drafted job ad.
Users are at liberty to integrate these suggestions into the final job ad before posting it on Nav’s platform.
Here we explore possible discrimination risks from AI-generated text and assess their likelihood and
relevance in the context of SRS.</p>
      <p>
        Preference statements that explicitly express preferences related to protected categories (e.g., sex,
race, ethnicity) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] pose a high risk of discrimination in job ads: ”Hair salon seeks female co-worker”,
”Agency seeks young and energetic co-worker”. SRS suggests relevant qualifications, skills and traits,
not full sentences. Even if terms like ”female” or ”young” were suggested, the author is ultimately
responsible for the content of the job ad. However, even suggesting potentially discriminatory content
is arguably problematic, especially for a public service. To mitigate this risk, SRS is configured to avoid
such outputs (see Appendix B).
      </p>
      <p>Denigrating statements that are inherently offensive, degrading, or humiliating to individuals or
groups have a high potential for discrimination, such as: ”Women are unfit for work in the fire-service.”</p>
      <p>Although language models can generate such statements, they are unlikely to appear in SRS.
SRS suggestions are not full sentences, but rather stand-alone or compound nouns and adjectives describing
skills and traits related to a job. While such suggestions may inadvertently reinforce stereotypes on a
broader level, they are less likely to result in offensive or degrading language in individual job ads.</p>
      <p>Non-inclusive statements reflect attitudes, values and biases through word choice, tone, and
phrasing. When these evoke negative associations in the reader of the ad, the text can be perceived
as non-inclusive: ”Seeking enthusiastic co-worker for a young and dynamic start-up”. While language
models in general can arguably be employed to generate both less and more inclusive text, SRS is limited
to generating qualifications, skills and traits. The rest of the text is authored by the advertiser. The risk
of the generated content affecting the inclusiveness of the job ad directly is therefore considered small.</p>
      <p>
        Still, words like ”competitive” and ”leader” may be linked to male stereotypes, while ”support” and
”interpersonal” may be associated with female ones [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Such ’gender-coded’ language can imply gender
preferences and unintentionally discourage potential applicants. Given the prevalence of such language
in existing job ads (and hence, in all likelihood, in the generative model’s training data), the possibility that SRS could exacerbate this risk by generating ”gender-coded”
suggestions cannot be overlooked.
      </p>
      <p>Skewed aggregate representation: While the above indicates that severe harms are unlikely
to result from SRS suggestions in isolated job ads, harms may manifest as tendencies to neglect or
underrepresent qualifications across a range of similar ads. In generative models, these tendencies may
appear as variations in word choice, phrasing, or tone between groups, leading to a skew in generated
qualifications and attributes. In SRS, this could manifest by, e.g., favouring the generation of certain
attributes over others, in ways that are not visible in isolated job ads. Amongst the potential triggers
of harm in SRS discussed herein, we consider ”skewed representation” the most pertinent due to its
likelihood and the challenge of detection. In the following we discuss a proposed method to detect
its occurrence.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Probing for representation skew in SRS</title>
      <p>SRS utilises a language model to suggest qualifications, skills and attributes based on the drafted ad.
In order to assess whether the generated suggestions exhibit skewed representation, we generate
suggestions from many similar ads and compare the resulting distribution of suggestions against an
appropriate reference distribution. A representation skew will then appear as a deviation between the
two distributions.</p>
      <p>We see two possible references against which the distributions of SRS can be evaluated.</p>
      <p>Historical parity: A correspondence between previous distributions and new distributions. SRS
should ideally not worsen the status quo. Any deviation from historical parity should be towards desired
norms, for instance towards a more gender-neutral appeal.</p>
      <p>Normative parity: SRS should propose qualifications and attributes that are professionally relevant.
The Norwegian labour market is characterized by a high degree of standardization, where qualification
requirements and relevant skills and attributes are often defined by industry norms. The suggestions from
SRS should align with these established norms.</p>
      <sec id="sec-3-1">
        <title>3.1. Proposed method</title>
        <p>
          We propose to detect representation skew with respect to these references by comparing embeddings of the
output of SRS and the reference. The embeddings are retrieved in a shared space, using an appropriate
language model. We note that model architecture and training data will influence the assessment of
semantic similarity [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. These variations are out of scope for this discussion. Here we simply
employ the model NbAiLab/nb-sbert-base (available at https://huggingface.co/NbAiLab/nb-sbert-base)
from the National Library of Norway for illustration. The method is further detailed in Appendix A.
        </p>
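        <p>The comparison step can be sketched as follows. In the real pipeline the embedding matrices would come from a sentence encoder (e.g. SentenceTransformer("NbAiLab/nb-sbert-base").encode(texts)); here we use toy vectors so the logic is self-contained, and the function name is ours.</p>

```python
import numpy as np

def cosine_similarity_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of A (n x d) and B (m x d)."""
    A_norm = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_norm = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A_norm @ B_norm.T

# Toy example: two generated suggestions vs. two reference traits.
gen = np.array([[1.0, 0.0], [0.0, 1.0]])   # stand-in for SRS embeddings
ref = np.array([[1.0, 0.0], [1.0, 1.0]])   # stand-in for reference embeddings
sims = cosine_similarity_matrix(gen, ref)  # shape (2, 2)
```

        <p>Large entries in the resulting matrix suggest congruence between a generated suggestion and a reference trait; uniformly small entries may indicate a skew.</p>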
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Simulating job ad drafts</title>
        <p>The drafts users submit to SRS are not saved, and are hence not available as input for our analysis. To
circumvent this problem, we generate proxy drafts using the GPT-4 language model (see Appendix C for
prompt instructions). The model is instructed to limit itself to short “ad titles”. Examples of the resulting,
fictional job ads are given in Appendix D.</p>
        <p>As indicated in these examples, the simulated ads are quite homogeneous. One possible advantage is
that the generated suggestions are not overly reliant on the specific details of the individual generated
ad. A disadvantage is that the true ad drafts submitted by the users of the SRS service are likely to differ
from our generated ads.</p>
        <p>In order to produce a corpus of SRS-generated suggestions for each profession, the simulated job
ads are fed into SRS in a simulated environment (rather than using the actual service, we simulate the
response using GPT-4 and the prompt in Appendix B). Examples of the generated suggestions are given in
Appendix D.</p>
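        <p>The draft-simulation step amounts to filling the Appendix C template per profession and sending it to an LLM. The call to GPT-4 itself depends on the client used, which the source does not specify, so only the prompt construction is shown concretely here; the function name is ours.</p>

```python
# Prompt template as given in Appendix C; "{}" is replaced by a profession.
AD_PROMPT_TEMPLATE = (
    "Create a simple job ad for the profession {}. "
    "Example: 'Restaurant seeks pizza delivery person immediately', "
    "'Hair salon seeks substitute'. Max 20 words."
)

def build_ad_prompt(profession: str) -> str:
    """Construct the proxy-draft prompt for one profession."""
    return AD_PROMPT_TEMPLATE.format(profession)

prompt = build_ad_prompt("electrician")
# The prompt would then be sent to GPT-4 via the chosen API client.
```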
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Reference extraction</title>
        <p>
          Because we do not have access to historical data from SRS, we compare the suggestions with a normative
reference. Utdanning.no is a national website for education and career information operated by the
Norwegian Directorate for Higher Education and Skills [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The website contains information on more
than 600 different professions, including key characteristics, as shown in Appendix E. As Utdanning.no
provides public, quality-assured job descriptions for a variety of professions in the Norwegian labour
market, we consider the characteristics listed in the job descriptions to be a suitable ’normative’ reference.
It is therefore reasonable to expect SRS to be aligned with these skills and traits in the suggestions it
generates.
        </p>
        <p>In order to extract the relevant skills and traits corresponding to various professions from
Utdanning.no, a combination of web scraping techniques and LLM-based text extraction was employed.</p>
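        <p>A minimal sketch of the scraping half is given below, using only the standard library. The markup of the real Utdanning.no pages (and the LLM-based extraction step the paper mentions) will differ; the list-item structure and the sample snippet here are assumptions for illustration.</p>

```python
from html.parser import HTMLParser

class SkillListParser(HTMLParser):
    """Collect the text of <li> items from a (hypothetical) profession page."""

    def __init__(self):
        super().__init__()
        self.in_li = False
        self.skills = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_li = False

    def handle_data(self, data):
        if self.in_li and data.strip():
            self.skills.append(data.strip())

# Stand-in for a fetched page fragment listing skills for one profession.
sample = "<ul><li>gode kommunikasjonsevner</li><li>selvstendig</li></ul>"
parser = SkillListParser()
parser.feed(sample)
```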
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Analysis of representation skew in SRS</title>
        <p>The representational skew in our analysis is the statistical deviation between the distribution of
suggestions generated by SRS for a given profession and the distribution for the same profession
extracted from the normative reference Utdanning.no.</p>
        <p>Differences in wording are not necessarily indicative of a deviation: empathy and compassion and
sensitivity and care may both be valid ways of expressing virtues of the nursing profession. Moreover,
in a job ad, it may be more natural to choose, and hence for SRS to suggest, the wording enjoys teaching
in place of the more formal pedagogically inclined. To address this challenge, we compare the semantic
similarity between the SRS-generated and normative distributions using the corresponding embedding
vectors.</p>
        <p>The reference distribution drawn from Utdanning.no does not specify a relative weighting for the
various skills and traits associated with a profession. For simplicity, we assume all reference skills and
traits are of equal importance, and consider only the extent to which a similarity overlap is observed in
the SRS-generated skills and traits. Where multiple generated suggestions show overlap, we select the
maximal overlap. (It is worth noting that this method does not capture the relative frequency with which
suggestions are produced: even if the overlap is large, it is possible the generated suggestion only appears
in a few instances.)</p>
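        <p>On precomputed embeddings, the maximal-overlap statistic reduces to a row-wise maximum of the cosine-similarity matrix; the toy vectors and the function name below are ours.</p>

```python
import numpy as np

def max_overlap(ref_emb: np.ndarray, gen_emb: np.ndarray) -> np.ndarray:
    """For each reference trait, the best cosine similarity achieved by any
    SRS-generated suggestion. Rows are (unnormalised) embedding vectors."""
    ref_n = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    gen_n = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    sims = ref_n @ gen_n.T          # (n_ref, n_gen) cosine similarities
    return sims.max(axis=1)         # maximal overlap per reference trait

# Toy check: one reference trait matched exactly, one not matched at all.
ref = np.array([[1.0, 0.0], [0.0, 1.0]])
gen = np.array([[1.0, 0.0]])
per_trait = max_overlap(ref, gen)
avg_overlap = per_trait.mean()      # the per-profession average in Figure 1
```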
        <p>The average maximal overlap between the reference distribution and the SRS-generated distribution
for a selection of professions is shown in Figure 1. This indicates that the reference skills and traits
are not equally well reflected in the SRS-generated suggestions across all professions, but also that the
variations around the average overlap are mostly moderate. In the examples in Figure 1, SRS appears to
be most aligned with Utdanning.no in its suggestions for ads for kindergarten assistants.</p>
        <p>
          Figure 2 shows the maximum overlap between reference and generated suggestions for individual
reference attributes within professions. For some professions, e.g. kindergarten assistant, SRS suggestions
cover reference attributes well. Other professions, e.g. electrician, display poorer coverage. In some
cases, such deviations are to be expected: it may not be natural to include attributes such as ”technical
interest” or ”independent” in job ads for electricians or nurses, because such traits may be implicitly
expected of the candidates. Nevertheless, Figure 2 indicates that SRS-generated suggestions may deviate
more for some professions and less for others. We may further explore how this deviation pans out
in professions with a known high proportion of women or high proportion of men. Using data from
Utdanning.no on the gender proportion of various professions in the Norwegian labour market [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], we
can compare the overlap in female and male dominated professions.
        </p>
        <p>Comparing the mean overlap across the female-dominated professions with the mean overlap across
the male-dominated professions, we may be able to unpack interesting deviations across professions.
Part of our ongoing work is a thorough comparison between all professions on Utdanning.no to see
whether SRS exhibits different representation skews for male- and female-dominated professions.</p>
        <p>Another discrimination risk that can be probed is the inclusion of non-inclusive statements in
SRS suggestions. We propose that similar studies can be performed against a reference of gendered words
to evaluate these risks.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>The case study explored herein constitutes a comparatively simple use of generative models: an assistive
service that aims to empower the author of a job ad with suggestions of relevant qualifications, skills
and traits to include. While the use of generative models in this context appears benign, we find that it
can trigger risks with adverse consequences. A skewed representation of skills, traits and attributes in
job ads and the use of non-inclusive language can, over time, contribute to reinforcing stereotypes and
weakening a culture of diversity, thereby making it more difficult for underprivileged groups to gain
access to new work arenas and areas of society. Such harms will not be apparent in individual job ads;
they only manifest in aggregate, and even then they can be challenging to identify.</p>
      <p>Left unchecked, SRS can exacerbate this tendency, but it can equally be used as a tool to counter
historical ills and increase inclusion and equity. The latter hinges on the existence of frameworks and
methods of testing and auditing in ways that can guide the design of such services toward increased
inclusivity. As shown in Section 3, SRS lends itself to technical scrutiny in ways that offer public
administration new opportunities to spotlight areas where the services they provide are at odds with
legal and societal norms. If methods to unpack discrimination risk in generated content exist, it is
reasonable to expect public administration to make use of them as part of broader ”product testing”.
But what is an acceptable skew in a service like SRS and what is not? To shape the design of the
service, developers in public administration will naturally turn to regulation for guidance. It remains
unclear, however, if, and to what extent, existing regulation provides the requisite protection against the
discrimination risks associated with SRS, or norms to guide its design. The EADA is a case in point:</p>
      <p>
        As long as the generated suggestions are limited to job-relevant qualifications, skills and traits, it is
less obvious how SRS-generated suggestions can trigger cases of direct discrimination. The suggestion
kind and caring for the position of childcare worker may not be perceived as gender-neutral or have
the same appeal for men and women; still, the attribute is arguably relevant and not a requirement
that directly excludes applicants of either sex. However, even if a clear link between legally protected
attributes and generated suggestions is hard to establish, it is conceivable that a skew in generated
skills and traits will favour some groups over others. As discussed in Section 2, the use of non-inclusive and
”coded” language can subtly convey a perceived preference or discourage applications from groups,
whether intended or not. If the disfavoured group is protected, such cases might be seen as indirect
discrimination. The challenge will likely again be to point to a specific disadvantage for a legally
recognised harm in non-discrimination law. As elaborated in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the concept of non-inclusive language
is not straightforwardly aligned with established regulatory notions of discrimination. It is therefore
not clear what protections the notion of indirect discrimination offers. The absence of clear social and
legal norms in this area not only casts the protections afforded into doubt, but also makes it challenging
to articulate guiding design principles for services like SRS.
      </p>
      <p>
        According to the EADA [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], all public authorities have a duty to work actively, in a targeted manner, and
systematically to promote equality and prevent discrimination in all their activities, including countering
stereotyping. They must also describe actionable steps taken to put equality and non-discrimination
principles, procedures and standards into practice. Insofar as this includes the design, implementation
and deployment of services, it is reasonable to assume that the duty applies to the design of the service
SRS.
      </p>
      <p>Beyond testing, such methods can be leveraged to promote more ex ante accountability in the domain
of equality and non-discrimination. Public agencies like Nav can be asked to show adherence to
non-discrimination norms in their digital services before they launch, shifting some of the burden of proof
from citizen to public administration.</p>
      <p>
        The inherent limitations of both the Norwegian and European equality and anti-discrimination laws
in addressing the discrimination challenges posed by ’traditional AI’ are increasingly well illuminated
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. How the law holds up against the discrimination challenges from generated content is
less studied, but early studies [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] do perhaps indicate a need for a rethink to make it more adept.
A reassessment of the EADA should consider both how existing legislation falls short of providing the
requisite protections, and how the new tools that come with the technology can be leveraged in support
of those protections.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>We would like to thank the development team behind ”Superrask søknad” and colleagues at The Norwegian
Labour and Welfare Administration (Nav) for discussion and comments.</p>
      <sec id="sec-5-1">
        <title>EADA The Norwegian Equality and Anti-discrimination Act</title>
      </sec>
      <sec id="sec-5-2">
        <title>Nav The Norwegian Labour and Welfare Administration</title>
      </sec>
      <sec id="sec-5-3">
        <title>SRS Superrask søknad - a service on Nav’s platform arbeidsplassen.no</title>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Microsoft Copilot and Writefull to check
grammar and spelling. Further, the authors used Microsoft Copilot for text translation, paraphrasing
and rewording, and citation management. After using these services, the authors reviewed and
edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-7">
      <title>A. Method for assessing representation skew</title>
      <p>The following steps detail the method employed to evaluate representation skew:
1. For profession P, retrieve n drafts of job advertisements for this profession.
2. Feed these advertisements into SRS and retrieve a vector with (five) suggestions for each. Keep
unique occurrences and place these in an SRS vector S.
3. Create a reference vector R with characteristics for P.
4. Retrieve embedding vectors e(S), e(R) for S, R in a common space E.
5. Measure pairwise cosine similarity between e(S) and e(R). Large similarity suggests congruence
between SRS and the reference, while dissimilarity may indicate a skew.</p>
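      <p>The steps above can be wired together as follows. The encoder is stubbed with toy vectors so the pipeline is self-contained; in practice it would be a sentence encoder such as NbAiLab/nb-sbert-base, and all function names here are ours.</p>

```python
import numpy as np

def embed(texts, encoder):
    """Step 4: map texts into a common embedding space E via some encoder.
    In practice: SentenceTransformer("NbAiLab/nb-sbert-base").encode(texts)."""
    return np.array([encoder(t) for t in texts])

def skew_probe(srs_suggestions, reference_traits, encoder):
    """Steps 2-5: unique SRS suggestions vs. reference traits, cosine matrix."""
    S = sorted(set(srs_suggestions))            # step 2: keep unique occurrences
    R = list(reference_traits)                  # step 3: reference vector
    eS, eR = embed(S, encoder), embed(R, encoder)
    eS = eS / np.linalg.norm(eS, axis=1, keepdims=True)
    eR = eR / np.linalg.norm(eR, axis=1, keepdims=True)
    return eR @ eS.T                            # step 5: pairwise cosine

# Toy encoder: a fixed 2-d vector per word, just to exercise the pipeline.
toy = {"fleksibel": [1.0, 0.0], "selvstendig": [0.0, 1.0]}
sims = skew_probe(["fleksibel", "fleksibel"], ["selvstendig"], lambda t: toy[t])
```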
    </sec>
    <sec id="sec-8">
      <title>B. SRS-prompt</title>
      <p>The following prompt was used to instruct and constrain the output from SRS:
You are an expert tasked with suggesting relevant job qualifications and requirements based
on a job advertisement and profession. Each qualification or requirement must be no longer
than 5 words. The qualifications and requirements should be in the original language of the
job advertisement. Return the qualifications and requirements as a json. Follow this format:
“suggestions”:[“Example1”, “Example2”]. Do not include discriminatory qualifications such
as age, gender, ethnic background and similar in the response. Do not include any personal
identifiable information such as name, address, phone number in the response. Reply with
a “suggestions”:[] if the user inputs something different than a job advertisement.</p>
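      <p>A response in this format can be parsed defensively, as sketched below. The “suggestions” key and the 5-word cap come from the prompt above; the fallback behaviour and the surrounding JSON object shape are our assumptions, since models do not always honour format instructions.</p>

```python
import json

def parse_srs_response(raw: str) -> list:
    """Parse the JSON the SRS prompt asks for; return [] on malformed output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    suggestions = data.get("suggestions", []) if isinstance(data, dict) else []
    # Enforce the prompt's 5-word cap as a post-hoc sanity filter.
    return [s for s in suggestions if isinstance(s, str) and len(s.split()) <= 5]

ok = parse_srs_response('{"suggestions": ["Technical interest", "Flexible"]}')
bad = parse_srs_response("not json at all")
```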
    </sec>
    <sec id="sec-9">
      <title>C. Generated ads prompt</title>
      <p>The following prompt was used to generate proxy job advertisement drafts:
Create a simple job ad for the profession {}. Example: ‘Restaurant seeks pizza delivery
person immediately’, ‘Hair salon seeks substitute’. Max 20 words.
where “{}” is replaced by a profession.</p>
    </sec>
    <sec id="sec-10">
      <title>D. Simulated job ads</title>
      <p>Simulated job ad: ”Well-established electrical company seeks experienced electrician for a full-time
position immediately.” Generated suggestions: [”Technical interest”, ”Flexible”, ”Physically fit”,
”Driver’s license”, ”Ability to multitask”]</p>
      <p>Simulated job ad: ”Medical center seeks dedicated and compassionate nurse for immediate hire.”
Generated suggestions: [”Registered nurse”, ”Valid nursing license”, ”Clinical experience”, ”Strong
communication skills”, ”Ability to multitask”]</p>
    </sec>
    <sec id="sec-11">
      <title>E. Reference extraction from Utdanning.no</title>
      <p>Text on utdanning.no: ”As a stockbroker, your personal qualities count in addition to your education in
economics. You should have strong communication skills, be independent, and have a genuine interest in
stocks and the stock market. Integrity is also very important because clients need to trust you in order to
follow the advice you give them. The stockbroker profession requires you to work well under time pressure.”</p>
      <p>Extracted skills and traits: strong communication skills, independent, genuinely interested in stocks
and the stock market, have integrity, work well under time pressure.</p>
      <p>Text on utdanning.no: ”To be a farmer, you must be able to work independently, handle unforeseen
events, and cope with irregular working hours. You need practical skills, an interest in and knowledge of
animals and plants, technical insight, and an interest in economics and farm management. You must be
able to plan and lead the work on the farm. You must also be able to instruct others, such as a relief worker
or other employees, in the safe execution of various tasks.”</p>
      <p>Extracted skills and traits: independence, ability to handle unforeseen events, ability to cope with
irregular working hours, practical skills, knowledge of animals and plants.</p>
      <p>Text on utdanning.no: ”As a police officer, you must be open, courageous, and decisive. You must be
able to analyze situations, show integrity, and work well with others.”</p>
      <p>Extracted skills and traits: open, courageous, decisive, analytical, integrity, cooperative.</p>
    </sec>
    <sec id="sec-12">
      <title>F. Translation of reference skills from Utdanning.no</title>
      <p>kindergarten assistant (no: barnehageassistent): ’evne til å jobbe med barn’ (’ability to work with
children’), ’gode samarbeidsevner’ (’good teamwork skills’), ’gode kommunikasjonsevner’ (’good
communication skills’), ’glad i barn’ (’fondness for children’), ’fleksibel’ (’flexible’)</p>
      <p>electrician (no: elektriker): ’teknisk interesse’ (’technical interest’), ’teknisk innsikt’ (’technical
insight’), ’god fysisk form’ (’physically fit’), ’gode kommunikasjonsevner’ (’good communication skills’)</p>
      <p>security guard/watchman (no: vekter): ’ansvarsbevisst’ (’responsible’), ’selvstendig’ (’independent’),
’evner til å håndtere konflikter’ (’conflict management’), ’empatisk’ (’empathetic’), ’god vurderingsevne’
(’good judgment’), ’god hørsel’ (’good hearing’), ’serviceinnstilt’ (’service-minded’), ’initiativrik’
(’proactive’), ’godt syn’ (’good vision’), ’gode kommunikasjonsevner’ (’good communication skills’)</p>
      <p>nurse (no: sykepleier): ’tørre å ta ansvar’ (’dare to take responsibility’), ’selvstendig’ (’independent’),
’evne til å sette seg inn i pasientens situasjon’ (”ability to understand the patient’s situation”), ’snarrådig’
(’quick-witted’), ’kunne ta ledelsen i kritiske situasjoner’ (’ability to lead in critical situations’), ’evne til å
holde seg oppdatert på forskning’ (’ability to stay updated on research’), ’omgjengelig’ (’sociable’)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baranowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Graus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Saldivar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zuiderveen Borgesius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <article-title>Fairness and bias in algorithmic hiring: A multidisciplinary survey</article-title>
          ,
          <source>ACM Trans. Intell. Syst. Technol</source>
          .
          <volume>16</volume>
          (
          <year>2025</year>
          ). URL: https://doi.org/10.1145/3696457. doi:10.1145/3696457.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Conia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <article-title>Biases in large language models: Origins, inventory, and discussion</article-title>
          ,
          <source>J. Data and Information Quality</source>
          <volume>15</volume>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.1145/3597307. doi:10.1145/3597307.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Bias in large language models: Origin, evaluation, and mitigation</article-title>
          , arXiv.org (
          <year>2024</year>
          ). doi:10.48550/arXiv.2411.10915.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Nav</surname>
          </string-name>
          ,
          <article-title>Nav's social tasks and services</article-title>
          , Website,
          <year>2025</year>
          . URL: https://www.nav.no/hva-er-nav/en#navs-social-tasks-and-services.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Lovdata</surname>
          </string-name>
          ,
          <article-title>Act relating to equality and a prohibition against discrimination (equality and antidiscrimination act), section 6. prohibition against discrimination</article-title>
          ,
          <source>Website</source>
          ,
          <year>2017</year>
          . URL: https://lovdata.no/NLE/lov/2017-06-16-51/§6.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>BBC</surname>
          </string-name>
          ,
          <article-title>Why do some job adverts put women off applying?</article-title>
          ,
          <source>Website</source>
          ,
          <year>2018</year>
          . URL: https://www.bbc.com/news/business-44399028.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Montgomery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. K.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <article-title>Large language models portray socially subordinate groups as more homogeneous, consistent with a bias observed in humans</article-title>
          ,
          <source>in: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency</source>
          , FAccT '24, Association for Computing Machinery, New York, NY, USA,
          <year>2024</year>
          , p.
          <fpage>1321</fpage>
          -
          <lpage>1340</lpage>
          . URL: https://doi.org/10.1145/3630106.3658975. doi:10.1145/3630106.3658975.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Norwegian Directorate for Higher Education and Skills</surname>
          </string-name>
          ,
          <article-title>Likestilling i arbeidslivet [Equality in working life]</article-title>
          ,
          <source>Website</source>
          ,
          <year>2025</year>
          . URL: https://utdanning.no/likestilling.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Borgesius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <article-title>Generative discrimination: What happens when generative AI exhibits bias, and what can be done about it</article-title>
          ,
          <year>2024</year>
          . doi:10.2139/ssrn.4877398.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Lovdata</surname>
          </string-name>
          ,
          <article-title>Act relating to equality and a prohibition against discrimination (equality and antidiscrimination act), section 24. activity duty of public authorities and duty to issue a statement</article-title>
          ,
          <source>Website</source>
          ,
          <year>2017</year>
          . URL: https://lovdata.no/NLE/lov/2017-06-16-51/§24.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Strand</surname>
          </string-name>
          ,
          <article-title>Algoritmer, kunstig intelligens og diskriminering: En analyse av likestillings- og diskrimineringslovens muligheter og begrensninger [Algorithms, artificial intelligence and discrimination: An analysis of the possibilities and limitations of the Equality and Anti-Discrimination Act]</article-title>
          ,
          <year>2024</year>
          . URL: https://ldo.no/content/uploads/2024/06/Algoritmer-kunstig-intelligens-og-diskriminering-2024.pdf. ISBN 978-82-8320-032-4 (print), 978-82-8320-033-1 (electronic).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>European Commission, Directorate-General for Justice and Consumers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gerards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Xenidis</surname>
          </string-name>
          ,
          <article-title>Algorithmic discrimination in Europe - Challenges and opportunities for gender equality and non-discrimination law</article-title>
          ,
          <source>Publications Office</source>
          ,
          <year>2021</year>
          . doi:10.2838/544956.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>