<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>misunderstood: How AI Literacy shapes HR Managers' interpretation of User Interfaces in Recruiting Recommender Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yannick Kalff</string-name>
          <email>yannick.kalff@htw-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Katharina Simbeck</string-name>
          <email>katharina.simbeck@htw-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AI Literacy</institution>
          ,
          <addr-line>Explainable AI, Recommender Systems, Human Resource Management, Recruitment, HR Analytics, People Analytics</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>HTW Berlin University of Applied Sciences</institution>
          ,
          <addr-line>Treskowallee 8, 10318 Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>22</fpage>
      <lpage>26</lpage>
      <abstract>
        <p>AI-based recommender systems increasingly influence recruitment decisions. Thus, transparency and responsible adoption in Human Resource Management (HRM) are critical. This study examines how HR managers' AI literacy influences their subjective perception and objective understanding of explainable AI (XAI) elements in recruiting recommender dashboards. In an online experiment, 410 German-based HR managers compared baseline dashboards to versions enriched with three XAI styles: important features, counterfactuals, and model criteria. Our results show that the dashboards used in practice do not explain AI results and even keep AI elements opaque. However, while adding XAI features improves subjective perceptions of helpfulness and trust among users with moderate or high AI literacy, it does not increase their objective understanding. It may even reduce accurate understanding, especially with complex explanations. Only overlays of important features significantly aided the interpretations of high-literacy users. Our findings highlight that the benefits of XAI in recruitment depend on users' AI literacy, emphasizing the need for tailored explanation strategies and targeted literacy training in HRM to ensure fair, transparent, and effective adoption of AI.</p>
      </abstract>
      <kwd-group>
        <kwd>AI Literacy</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Recommender Systems</kwd>
        <kwd>Human Resource Management</kwd>
        <kwd>Recruitment</kwd>
        <kwd>Recruiting</kwd>
        <kwd>HR Analytics</kwd>
        <kwd>People Analytics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial intelligence (AI)-based recommender systems have
become widespread in recruitment [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Recommender
systems are software applications that use artificial intelligence
techniques to analyze data and provide specific suggestions
or predictions to users. AI-based systems typically assist in
discovering promising talents for development, identifying
the most suitable candidates for a job opening, or assigning
the right employees to projects based on their skill sets. In
human resource management (HRM), these tools promise
to accelerate processes, reduce human bias, and ground
decisions in objective data [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Current trends, such as
HR Analytics and People Analytics, integrate AI to offer a
broader promise of analytical rigor, predictive opportunities,
and prescriptive recommendations for informed
decision-making and actions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], alongside technological modes of
control [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In recruiting, AI recommender systems directly
influence decisions about individuals, making them the
subject of regulations, such as the EU AI Act [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and the GDPR
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Moreover, HR systems have faced severe criticism for
fairness issues and biased recommendations [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ].
      </p>
      <p>
        To mitigate potential biases and, equally important, to
make optimal decisions, HR managers must understand
the underlying data models and the mechanisms by which
individual recommendations are generated. Explainable
AI (XAI) techniques aim to make “black-box” models
interpretable when they are opaque—due to complexity or
proprietary constraints [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ].
      </p>
      <p>
        XAI methods offer interpretable, context-specific
explanations for model decisions [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ]. In recruitment, these
explanations can clarify why a candidate’s application is
ranked highly or why specific competencies are flagged
during the CV parsing process. This transparency is essential
for HR managers who often have non‐technical backgrounds
and are responsible for legally and ethically sound decisions
that comply with anti‐discrimination laws. At the same
time, from a human resources management perspective,
their decisions must be economically sensible and
strategically appropriate for the company. A lack of transparency
in AI elements or data, combined with unrecognized
distortions, can lead users to incorrect conclusions. The issue is
amplified by providers and developers, as transparency and
explanations of AI interfaces remain the exception in
practice. The lack of transparency often seems to be a deliberate UI
design decision (three exemplary dashboards can be found
in the appendix Figure 4–6).
      </p>
      <p>
        However, attaching explanation widgets to a recruitment
dashboard does not guarantee impact. For XAI to be
effective, HR managers must decode and critically
evaluate the provided information [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        We contend that AI
literacy—a combination of knowledge, skills, and attitudes
that enables individuals to understand and assess AI systems
[
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]—directly affects both the subjective and objective
effectiveness of XAI. Subjectively, AI literacy influences how
helpful, trustworthy, and accessible explanations appear.
Objectively, it affects the accurate factual understanding that
HR managers demonstrate when they interpret and act on the
information provided by AI dashboards.
      </p>
      <p>
        To investigate this effect, we conducted an experiment
with 410 German-based HR managers, who compared a
baseline AI dashboard with versions enriched by three
explanation styles: important features (a simplified feature
importance approach), counterfactual explanations, and global
model criteria summaries [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ]. Drawing on a genuine
recruitment ranking tool that exists on the market, we
measured participants’ perceived trust, usability, and assessment
quality for each dashboard variant. Further, the
participants assessed statements about the dashboards with
correct or false answer options. We address two guiding
research questions:
      </p>
      <p>RQ1 How do HR managers’ subjective perceptions of a
recruiting recommender system change when adding
explainable AI elements, and does this effect differ
across different levels of AI literacy?</p>
      <p>RQ2 How does HR managers’ objective understanding
of a recruiting recommender system change when
adding explainable AI elements, and does this effect
differ across different levels of AI literacy?</p>
      <p>Our results show that higher AI literacy is associated with
greater perceived usefulness and transparency of
XAI-enhanced dashboards. Paradoxically, higher AI literacy also
corresponds to a lower objective understanding of the user
interfaces. These findings suggest that the benefits of XAI
depend critically on users’ AI literacy levels, though
self-assessed AI literacy may be prone to overconfidence.</p>
      <p>The article is structured as follows: First, we review the
literature on AI literacy and XAI, with a focus on their
intersection in recruitment contexts (2). Next, we present our
methodological approach (3) and empirical findings (4). We
discuss the theoretical and practical implications for
responsibly embedding explainable recommender systems in HRM
that arise from AI literacy and its impact on subjective
perception and objective understanding (5). We conclude with
an outlook on future research topics that can be derived
from our findings (6).</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of Research</title>
      <p>
        The scholarly discourses on AI literacy and explainable
AI (XAI) have so far evolved independently (with few
exceptions [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]). AI literacy research emphasizes the
competencies required to comprehend, evaluate, and interact
with AI‐enabled tools [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Several authors have developed
scales to assess individuals’ abilities to recognize,
understand, apply, and critically or ethically evaluate AI systems
[
        <xref ref-type="bibr" rid="ref18 ref19 ref24">18, 19, 24</xref>
        ]. Empirical studies underscore the need for
contextualized training that embeds domain-relevant
examples and ethical deliberation, arguing that generic digital
skills programs fall short of preparing professionals for
AI-mediated work [
        <xref ref-type="bibr" rid="ref25 ref26">25, 26</xref>
        ]. In HRM, such contextualization is
particularly vital, as recruitment decisions carry significant
strategic, legal, or ethical weight with impact on companies’
success, diversity, equity, and the organization’s reputation.
A high level of AI literacy would include a foundational
understanding of the technical principles underlying AI
systems—such as training procedures, or the critical role
of data quality—and the ability to use AI tools effectively
in appropriate contexts. Proficiency in AI literacy would
further indicate awareness of AI’s limitations and
boundaries, including ethical considerations and potential grey
areas, and the necessity of human oversight. Moreover,
high levels of AI literacy involve the ability to recognize
AI-driven processes, critically assess the outputs generated
by such systems, and accurately identify their capabilities
and limitations.
      </p>
      <p>
        XAI research, by contrast, focuses on designing
algorithms, systems, and user interfaces that render AI
decisions and recommendations transparent and understandable
[
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ]. This approach treats users as accountable agents
who must comprehend, evaluate, and, if necessary, correct
AI outputs [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. Initially, XAI addressed the needs of ML
engineers and developers seeking to understand and debug
complex AI models [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Recent developments in XAI have
extended the audience for explanations and established that
explanations need to account for users’ roles, backgrounds,
and prior technical knowledge [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. Researchers
distinguish between global explanations, which clarify an entire
model’s logic, and local explanations, which justify
individual decisions [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Adequate XAI should also draw on
domain-specific knowledge—for example, highlighting key
CV attributes or motivational letter elements that influenced
an AI recommendation [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ].
      </p>
      <p>Although contextualization and audience adaptation have
been emphasized, little attention has been paid to how users’
AI literacy affects the efficacy of XAI elements.
Explanations are meaningful only if recipients possess the cognitive
and critical frameworks to interpret them: Global
explanations of feature weights presuppose familiarity with model
training and evaluation metrics, whereas local explanations
of ranking positions require understanding how feature
contributions differ across cases. Without critical literacy, HR
managers may overlook XAI elements or succumb to
confirmation bias, disregarding explanations that challenge their
prior assumptions. Similarly, deficient practical AI literacy
can lead users to fail to recognize when they are
interacting with AI, thereby undermining the necessary critical
scrutiny. Consequently, even well‐designed explanations
may fail to foster appropriate trust or may inadvertently
reinforce erroneous mental models.</p>
      <p>
        A significant research gap is the lack of systematic
insight into how non-technical experts, such as HR managers,
perceive XAI subjectively (for example, in terms of
perceived usefulness or trustworthiness), how XAI contributes
to their objective understanding (such as the accurate
interpretation of AI outputs), and how the efectiveness of XAI
varies according to diferent levels of AI literacy among HR
managers. Addressing this gap is crucial for three reasons.
First, without insight into user comprehension,
organizations risk deploying XAI that engenders misplaced trust or
unwarranted skepticism. Second, regulatory frameworks
increasingly mandate transparency, but compliance depends
on decision-makers’ ability to understand the provided
explanations [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. Third, investments in AI systems for HR—
especially recruiting recommender systems—must not only
incorporate explainability features but also ensure that HR
professionals receive the AI literacy training necessary to
operate these tools responsibly and sustainably.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Research design</title>
      <p>
        In an experiment, we queried 427 HR managers in Germany.
After excluding implausible cases, the study retained 410
valid responses. We assessed the HR managers’ AI literacy
using the “scale for the assessment of non-experts’ AI
literacy” (SNAIL) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The scale comprises three dimensions—
technical understanding (TU), critical appraisal (CA), and
practical application (PA)—each measured with ten
Likert-scaled items from which we picked five. We selected the
items from the full 30-item SNAIL scale based on their
relevance to the HR domain (cf. Table 4 for an overview of
items and individual statistics).
      </p>
      <p>The scale demonstrated high reliability, with strong
Cronbach’s α values across all three dimensions (Table 1). The
dimensions exhibited strong collinearity, indicating that
participants who scored low/high on one dimension tended to
do so on the other dimensions as well. For further
analyses, we classified participants—low, medium, and high AI
literacy—by dividing the scale into three equal intervals
(Table 2).</p>
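      <p>For illustration, a minimal Python sketch of this operationalization is shown below. The column names (tu_1 to pa_5), the data file, and the 1 to 6 Likert range are assumptions for the example, not the authors’ analysis code; it computes per-dimension mean scores, Cronbach’s α, and the equal-interval classification into low, medium, and high AI literacy.</p>
      <preformat>
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical item columns tu_1..tu_5, ca_1..ca_5, pa_1..pa_5 in an assumed file
df = pd.read_csv("snail_responses.csv")
dims = {d: [f"{d}_{i}" for i in range(1, 6)] for d in ("tu", "ca", "pa")}

for dim, cols in dims.items():
    print(dim, "alpha =", round(cronbach_alpha(df[cols]), 2))
    df[f"{dim}_score"] = df[cols].mean(axis=1)

# Overall score and classification into three equal intervals over the scale range
scale_min, scale_max = 1, 6  # assumed Likert range; adjust to the instrument used
df["ai_literacy"] = df[[f"{d}_score" for d in dims]].mean(axis=1)
df["literacy_group"] = pd.cut(df["ai_literacy"],
                              bins=np.linspace(scale_min, scale_max, 4),
                              labels=["low", "medium", "high"],
                              include_lowest=True)
print(df["literacy_group"].value_counts())
      </preformat>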
      <p>Figure 1: (a) Baseline UI (BL), (b) feature importance UI, (c) counterfactuals (what-if?), and (d) model criteria (ranking criteria).</p>
      <p>We researched interfaces and dashboards
of existing HR tool vendors that advertise AI functions (for
example, Figures 4–6). Those vendors present their
dashboards as advertisements and use cases, usually on the
companies’ websites. Figure 4 shows an application
recommender system with several active applications, their
grading, skill match, and additional personal information.
On closer examination, the criteria for ranking grade and
skill match, and consequently, the resulting
recommendation, seem ambiguous. For example, it is unclear why the
recommended first-place candidate receives an A grade,
despite having a substantially lower skill match than
subsequent candidates. There is no explanation of how the sorting
was conducted or why the ranking, which initially appears
implausible, could nevertheless be justified. Moreover, it is
uncertain whether the results reflect possible errors in the
underlying AI system.</p>
      <p>Table 1: Overview of the AI literacy index. Each dimension comprises 5 items; Cronbach’s α = 0.92, 0.90, and 0.89; mean (SD) = 2.85 (1.28), 3.29 (1.22), and 3.10 (1.24) (remaining column: 0.79, 0.75, 0.73).</p>
      <p>The dashboards
contained no explicit information or warning about the
results being AI-generated. Furthermore, the proposed
metrics to assess, for example, performance, retention chances,
or churn risks, lack further explanation. If the AI-based
recommender systems in the tools used in practice resemble
the illustrative materials, ambiguities are bound to occur.
AI elements are utilized without further explanation of their
core function, data sources, operations, or results, making
the need for appropriate, targeted, and reliable XAI even
more urgent.</p>
      <p>The experiment focused on a ranking system that
provides a recommendation (ranking) of incoming applications.
From the advertisement material, the AI’s ranking decision
remains in need of explanation. Our experiment addressed
this issue: we provided three different XAI-enhanced
versions of the interface that each contained a different type of
explanation—feature importance to assess the influencing
factors on the results, counterfactuals to assess the decision
boundaries of the model, and general model criteria to
understand the meta-reasoning of the model (Figure 1). To
facilitate understanding among non-technical professionals,
we referred to the XAI elements used in our experiments
using more accessible terms: “Important features” (FI), “What
if?” (CF), and “Ranking Criteria” (MC).</p>
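      <p>To make these explanation types more tangible, the following sketch is a simplified illustration on synthetic data with a generic scikit-learn model, not the vendor’s system or the study materials. It derives global feature importances for a candidate-scoring model and searches for a naive counterfactual, i.e., the smallest single-feature increase that lifts a candidate above an assumed decision threshold.</p>
      <preformat>
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Synthetic candidate features and a suitability score used for ranking
feature_names = ["skill_match", "experience", "education", "test_score"]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 4))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 3] + rng.normal(0, 0.05, 500)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# "Important features": permutation importance of each input for the score
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, value in zip(feature_names, imp.importances_mean):
    print(f"{name}: {value:.3f}")

# "What if?": smallest single-feature increase that pushes one candidate
# above an assumed decision threshold (naive grid search, illustration only)
candidate, threshold = X[0].copy(), 0.7
for j, name in enumerate(feature_names):
    for delta in np.linspace(0.0, 1.0 - candidate[j], 21):
        trial = candidate.copy()
        trial[j] += delta
        if model.predict(trial.reshape(1, -1))[0] >= threshold:
            print(f"Raise {name} by {delta:.2f} to reach the threshold")
            break
    else:
        continue
    break
      </preformat>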
      <p>We measured subjective perception using five Likert-scaled
items with which participants rated the perceived trustworthiness, transparency,
comprehensibility, usefulness, and practical capability to
act on the information provided. The
items constituted a highly reliable indicator for subjective
perception with consistently high Cronbach’s α values for
the baseline dashboard and all XAI-enhanced dashboards
(cf. Table 3).</p>
      <p>Objective understanding was operationalized as the
number of correct responses to five factual statements derived
from the information displayed on the dashboards (for
example, “The person in second place has more suitable skills,”
or “The data foundation is known.”). These questions could
be answered with “yes,” “no,” or “you can’t tell.” The last
option indicated that the information presented on the
dashboard did not support a definitive yes or no answer. This
approach enabled us to award points for correct answers
and thereby assess whether individuals could correctly
interpret AI dashboards. Using this design, we examined how
AI literacy influences objective understanding. By
incorporating XAI elements, we evaluated their effectiveness by
comparing the number of correct answers and drawing
conclusions about XAI’s impact across different levels of AI
literacy.</p>
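      <p>The scoring itself can be expressed in a few lines. The sketch below is only an illustration: the statement keys and the answer key are invented stand-ins for the actual questionnaire, showing how one point is awarded per correctly answered “yes”/“no”/“you can’t tell” statement.</p>
      <preformat>
# Illustrative answer key; items and correct answers are placeholders,
# not the original questionnaire wording or scoring key.
ANSWER_KEY = {
    "second_place_more_suitable_skills": "no",
    "data_foundation_known": "cant_tell",
    "results_are_ai_generated": "cant_tell",
    "top_candidate_has_grade_a": "yes",
    "ranking_criteria_disclosed": "no",
}

def objective_score(responses: dict) -> int:
    """One point per statement answered exactly as in the key (max. 5)."""
    return sum(responses.get(item) == answer for item, answer in ANSWER_KEY.items())

example = {"second_place_more_suitable_skills": "no", "data_foundation_known": "yes"}
print(objective_score(example))  # -> 1
      </preformat>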
      <p>We randomly assigned participants to three groups. All
participants first evaluated the baseline user interface
without explanations. Subsequently, each group was randomly
assigned to assess a second interface with a specific
explanation type. This randomization was implemented to prevent
selection bias and systematic differences between groups.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Findings</title>
      <sec id="sec-4-1">
        <title>4.1. Subjective perceptions of the interfaces</title>
        <p>The dashboards were first evaluated based on subjective
perception by HR managers (Figure 2a). To compare the
subjective perception with and without XAI elements for
each literacy group, we conducted paired-sample Wilcoxon
tests, because the results of the Likert scales for baseline
and XAI-enhanced interfaces were non-normally distributed
(Shapiro-Wilk test: baseline W = 0.972, p &lt; .001; XAI
W = 0.974, p &lt; .001).</p>
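        <p>The test procedure reported here can be reproduced with SciPy; the sketch below uses made-up paired ratings purely to show the sequence of a Shapiro-Wilk normality check followed by a paired-samples Wilcoxon signed-rank test on baseline versus XAI-enhanced ratings.</p>
        <preformat>
import numpy as np
from scipy import stats

# Hypothetical per-participant mean ratings for the same dashboard
# without (baseline) and with an XAI element (paired observations)
baseline = np.array([3.2, 2.8, 3.0, 3.6, 2.4, 3.8, 3.0, 2.6])
with_xai = np.array([3.4, 3.0, 3.2, 3.6, 2.8, 4.0, 3.4, 2.8])

# Normality check motivating the non-parametric paired test
w_bl, p_bl = stats.shapiro(baseline)
w_xai, p_xai = stats.shapiro(with_xai)
print(f"Shapiro-Wilk: baseline W={w_bl:.3f} (p={p_bl:.3f}), XAI W={w_xai:.3f} (p={p_xai:.3f})")

# Paired-samples Wilcoxon signed-rank test on the rating differences
statistic, p_value = stats.wilcoxon(with_xai, baseline)
print(f"Wilcoxon signed-rank: W={statistic:.1f}, p={p_value:.4f}")
        </preformat>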
        <p>Adding any of the three XAI components to the interface
yielded a consistent, though modest, upward shift in users’
subjective perception. Mean ratings for the “important
features” interface rose from roughly 3.10 without XAI to 3.31
with XAI, for “counterfactuals” from 2.81 to 3.10, and for
“model criteria” from 2.85 to 3.11. Paired‐samples Wilcoxon
tests confirmed that all three increases were statistically
robust (p = 4.8 × 10<sup>−5</sup>, p = 1.1 × 10<sup>−4</sup>, and p = 1.4 × 10<sup>−5</sup>,
respectively), indicating that the addition of explanatory
information produced a small-to-moderate positive effect
on perceived dashboard quality across the board.</p>
        <p>When we segmented participants by AI literacy (low,
medium, high), however, the effect of XAI explanations was
concentrated among users with at least moderate scores
on the SNAIL scale (Figure 2b-d). Low literacy users saw
slight mean increases of 0.2–0.3 points in all three interfaces,
but none of these changes reached significance (influencing
factors p = .09; counterfactuals p = .14; model criteria
p = .75). Medium literacy users exhibited clear gains across
every condition: the influencing‐factors interface increased
by about 0.3 points (p = .0055), counterfactuals by 0.4 points
(p = .0026), and model criteria by 0.5 points (p = 4.4 × 10<sup>−5</sup>).
High-literacy users displayed a comparable pattern, with all
three effects reaching significance (influencing factors, p =
.012; counterfactuals, p = .042; model criteria, p = .017),
and mean improvements of roughly 0.3–0.4 points.</p>
        <p>Taken together, these results suggest that explanatory
interfaces—in the form of influencing factors,
counterfactuals, or explicit model criteria—systematically elevate users’
subjective perceptions of a user interface, if the recipient’s
AI literacy is at least moderate. Crucially, the benefit is
most pronounced among individuals who already possess
medium or high levels of AI literacy, whereas novices derive
less measurable perceived usefulness. XAI elements do not
compensate for low AI literacy and do not raise the group’s
subjective perception of the user interface’s perceived
quality. This suggests that users require prior knowledge to
derive subjective improvements from any explanation—and
that explanation types for low AI literacy levels must be
constructed differently.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Objective understanding of the interfaces</title>
        <p>In the second part of the experiment, we examined how
the three XAI elements influenced participants’ objective
understanding of dashboard outputs. We again stratified
these results by AI literacy (see Figure 3). To compare
performance with and without each XAI overlay within each
literacy group, we conducted paired-sample Wilcoxon tests,
because the scoring for baseline and each XAI-enhanced
dashboard was non-normally distributed (Shapiro-Wilk test:
baseline W = 0.910, p &lt; .001; XAI W = 0.904, p &lt; .001).</p>
        <p>The XAI element that shows influencing factors on AI
results performed best overall, since it was the only one that
could increase the scores relative to the baseline dashboards. When
participants were presented with enriched interfaces that
displayed relevant features for the assessment scores, low‐
and medium‐literacy users exhibited negligible changes in
their interpretation scores (low: mean 2.70 vs. 2.48, p = .41;
medium: mean 2.75 vs. 2.95, p = .33), indicating that
additionally represented features neither aided nor hindered
their objective understanding. By contrast, high literacy
users showed a significant improvement, with mean values
rising from 1.98 to 3.35 (p = 2.2 × 10<sup>−7</sup>). This large, highly
significant effect suggests that only those with profound
knowledge of AI were able to translate feature‐importance
annotations into more accurate data interpretations.</p>
        <p>The results are astonishing and indicate that the
(randomized) group of high AI literacy might have been subjected to
a systematic bias: the group performed worst in comparison
to all other groups, especially to other randomized groups
of high AI literacy, where equal scores would have been
expected. The randomization of the experimental groups was
successful with respect to the control variable AI literacy
(Kruskal-Wallis test p = .73). However, significant
differences were observed between the experimental groups in
the baseline measurement of the dependent variables for
subjective perception (p = .03), but non-significant differences
for objective understanding (p = .54). Such discrepancies
may arise despite randomization due to random variation.</p>
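        <p>Such a randomization check is compact in SciPy; the sketch below uses invented AI literacy scores for the three experimental groups solely to illustrate a Kruskal-Wallis test on the control variable.</p>
        <preformat>
from scipy import stats

# Hypothetical AI literacy scores per experimental group (feature importance,
# counterfactuals, model criteria); a non-significant result indicates that
# randomization balanced the control variable across groups.
group_fi = [3.1, 2.8, 3.4, 2.9, 3.6, 3.0, 2.7]
group_cf = [3.0, 3.3, 2.7, 3.2, 3.5, 2.9, 3.1]
group_mc = [2.8, 3.1, 3.0, 3.4, 2.6, 3.2, 2.9]

h_stat, p_value = stats.kruskal(group_fi, group_cf, group_mc)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.3f}")
        </preformat>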
        <p>Introducing counterfactuals as “if‐then” instances led
to mixed outcomes. Low literacy participants performed
worse when counterfactuals were present (mean 2.93 vs.
2.36, p = .031), a small but statistically significant decline.
Medium literacy users showed a non-significant downward
trend (mean 2.59 vs. 2.32, p = .11), and high literacy users
remained essentially unchanged (mean 2.29 vs. 2.55, p = .18).
These results suggest that novices may find counterfactual
information distracting or confusing, while more
experienced individuals neither consistently benefit nor suffer.</p>
        <p>Finally, overlaying explicit model criteria as information
on what the model deems essential for the presented
recommendations and decisions proved to be counterproductive
across the board. Low literacy users’ interpretation scores
fell from a mean of 2.73 to 2.17 (p = .03), medium literacy
from 2.68 to 2.20 (p = .004), and high literacy from 2.48 to
2.04 (p = .007). All three decreases were statistically
significant and of moderate effect size, indicating that detailed
model criteria overwhelmed users regardless of their AI
background, leading to poorer objective understanding.</p>
        <p>Taken together, these findings demonstrate that XAI
elements do not uniformly enhance users’ comprehension of
dashboard information. While influencing factors can
significantly boost accurate interpretations for technically savvy
users, counterfactuals and explicit model criteria may
impair or fail to improve objective understanding—particularly
among those with limited AI literacy.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>
        Our study provides novel empirical evidence on how AI
literacy shapes HR managers’ interpretations of AI-based
recruitment recommender systems. Addressing our first
research question, we found that XAI elements can increase
the quality of AI interfaces, depending on users’ AI
literacy levels. However, enhanced subjective perceptions of
informativeness, trustworthiness, and interpretability were
statistically significant only for participants with at least
moderate AI literacy. This suggests that users must possess
foundational conceptual and critical skills to experience
benefits from explainability elements in recruitment interfaces
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Low-literacy users gained little subjective value from
XAI, in line with research that emphasizes the need for
contextualized, domain-specific AI literacy interventions, and
potentially confusing and misleading complexity [
        <xref ref-type="bibr" rid="ref23 ref25">23, 25</xref>
        ].
      </p>
      <p>
        Turning to our second research question, we observed
a paradox: XAI explanations did not uniformly improve,
and in some cases, impaired objective understanding. Only
the “important features (feature importance)” element
improved performance, but primarily for high literacy users.
Counterfactual and model-criteria explanations either had
no effect or reduced objective understanding, particularly
among participants with low and medium literacy. Notably,
high literacy users—while more likely to benefit subjectively
from explanations—exhibited lower absolute understanding,
which could be a sign of overconfidence in their AI literacy.
This finding complicates earlier claims about the
universal efficacy of XAI in fostering trust and better decisions
[
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], and supports concerns about information overload
and cognitive miscalibration of non-technical users [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        Our results align with and extend the literature on the
intersection of AI literacy and XAI. The state of research has
emphasized tailoring explanations to users’ backgrounds
and roles [
        <xref ref-type="bibr" rid="ref29 ref30">30, 29</xref>
        ], but empirical investigations of
explanation efficacy across literacy gradients in HR remain sparse.
Our results indicate that designers should carefully tailor
explanation formats to the target audience’s expertise level,
avoiding overly complex explanations or too granular
details. Our data suggest that XAI elements, when
implemented in HR dashboards, may reinforce subjective
confidence without reliably supporting accurate and responsible
decision-making. This risk is amplified by regulatory
demands for explainability and transparency in high-stakes
contexts, such as those imposed by the EU AI Act and
GDPR [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. These findings underline the importance of a
nuanced approach to deploying AI tools in sensitive
contexts, one that carefully balances explanation design with
the end user’s AI knowledge. First, our baseline interface
shows no clues to AI origins, data quality, or assumptions,
leaving users to judge recommendations without deeper
insights into underlying models or AI design decisions.
Second, while XAI annotations enhance perceived information
quality, they do not automatically translate into improved
decision-making based on the recommender system. By
contrasting subjective perceptions with objective
understanding across literacy levels, we uncover the complex
interplay between user expertise and explanation efficacy.
      </p>
      <p>
        Upon further examination, we discovered that the type of
explanation is crucial. Only additional important features—
inspired by feature importance models without considering
effect strengths—yielded an overall improvement in
objective understanding performance. In contrast,
counterfactual explanations and model‐criteria summaries hindered
users’ evaluations. Counterfactuals—“What would have to
change for the AI’s decision to differ?”—might impose
substantial cognitive demands because they frame reasoning
through a negated, hypothetical scenario rather than
presenting a direct rationale. This result challenges prior claims
about the universal effectiveness of counterfactuals [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ].
Likewise, model-criteria explanations—which merely
translate the AI’s decision rules into human‐like justification—
systematically failed, regardless of participants’ literacy.
      </p>
      <p>
        Finally, our data reveal an overconfidence effect among
high-literacy users: their strong self-assurance contradicts
their performance in accurately demonstrating objective
understanding. This suggests that relying on self-reported AI
literacy may obscure critical performance gaps; future research
should incorporate objective measures of AI knowledge.
This mirrors work in metacognition and digital literacy,
suggesting that self-assessment may not be a reliable proxy for
AI literacy, i.e., proper comprehension, critical capacity, or
practical application [
        <xref ref-type="bibr" rid="ref23 ref24">24, 23</xref>
        ]. For HRM practice, this implies
that both AI literacy training and explanation design must
be empirically validated for effectiveness.
      </p>
      <p>We conclude that XAI is not a one-size-fits-all remedy
in high-risk settings like HR. Explanations must be carefully
tailored to users’ expertise and meet the specific demands
of their domain. However, providing several explanations
at once to satisfy different AI literacy needs could raise the
complexity of HR interfaces even further. Our study
challenges the assumption that XAI elements are universally
beneficial in HR recommender systems. Their impact is
conditional on users’ AI literacy, the type of explanation, and
the context of use. Effective, responsible deployment of XAI
in HR, therefore, requires an integrated approach: robust,
context-sensitive literacy training, careful user-centered
explanation design, and ongoing evaluation of both subjective
perception and objective understanding outcomes.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>This research showed that HR managers’ AI literacy
fundamentally affects the effectiveness of explainable AI elements
in recruiting recommender systems. While XAI features
enhance perceived transparency and trust among more literate
users, they do not guarantee improved objective
understanding, sometimes even undermining it. These findings call
for a more nuanced, empirically grounded approach to the
design and implementation of XAI in HRM. Specifically,
we find that XAI is not a universal remedy to make AI, its
functions, and results accessible to non-technical
professionals. Explanation strategies must be tailored to users’
actual literacy and cognitive needs. AI literacy is critical and
investments in AI for HR should be coupled with targeted,
domain-specific literacy initiatives that inform and teach
on specific tools and utilities and their mechanisms, like
recommender systems. Finally, the design of explanations
should address diverse interpretative needs with flexible
formats to serve users with heterogeneous backgrounds, while
not overwhelming or misleading them, or giving them a
false sense of understanding.</p>
      <p>Future research should address several open questions:
How can AI literacy interventions best be integrated into
HR training programs, and what pedagogical approaches
are most effective for non-technical professionals? What
hybrid or adaptive explanation strategies (for example, layered
explanations, user-driven customization) can accommodate
different literacy levels without increasing interface
complexity? How do explanation effects evolve with repeated
exposure, feedback, or organizational learning? What are
the organizational and regulatory implications of
overconfidence or miscalibration in AI-literate HR professionals?
By pursuing these avenues, we can move toward HR
recommender systems that are not only technically robust and
legally compliant but also meaningfully transparent, fair,
and supportive of human expertise. All in all, these results
and the future outlook on open research emphasize that
applied AI—in any domain—needs explanations.
Explainability is a transdisciplinary process that involves technical
interfaces, pedagogical and psychological learning
capabilities, and social, vocational, or professional standards of
specific groups, like HR managers.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The research was part of the project “TRANKI – Standards
for transparent AI”, funded by the Hans Böckler Foundation
(Grant no.: 2022-797-2).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used
OpenAI’s GPT-4.1 and o4-mini, as well as Grammarly, to
check grammar and spelling and to enhance the writing
style. After using these tools/services, the authors reviewed
and edited the content as needed and take full responsibility
for the publication’s content.</p>
    </sec>
    <sec id="sec-9">
      <title>A. Additional tables of indices statistics</title>
      <p>SNAIL items selected for the AI literacy index (cf. Table 4):</p>
      <list list-type="bullet">
        <list-item><p>explain why data privacy must be considered when developing and using artificial intelligence applications.</p></list-item>
        <list-item><p>identify ethical issues surrounding artificial intelligence.</p></list-item>
        <list-item><p>name weaknesses of artificial intelligence.</p></list-item>
        <list-item><p>describe potential legal problems that may arise when using artificial intelligence.</p></list-item>
        <list-item><p>explain why data plays an important role in the development and application of artificial intelligence.</p></list-item>
        <list-item><p>give examples from my daily life (personal or professional) where I might be in contact with artificial intelligence.</p></list-item>
        <list-item><p>tell if the technologies I use are supported by artificial intelligence.</p></list-item>
        <list-item><p>assess if a problem in my field can and should be solved with artificial intelligence methods.</p></list-item>
        <list-item><p>name applications in which AI-assisted natural language processing/understanding is used.</p></list-item>
        <list-item><p>critically evaluate the implications of artificial intelligence applications in at least one subject area.</p></list-item>
      </list>
    </sec>
    <sec id="sec-10">
      <title>B. Screenshots of AI Dashboards</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , M. Yu,
          <article-title>AI in Human Resource Management: Literature Review and Research Implications, Journal of the Knowledge Economy (</article-title>
          <year>2024</year>
          ). doi:
          <volume>10</volume>
          .1007/s13132-023-01631-z.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Budhwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Kazmi</surname>
          </string-name>
          ,
          <source>Artificial intelligence (AI</source>
          )
          <article-title>-assisted HRM: Towards an extended strategic framework</article-title>
          ,
          <source>Human Resource Management Review</source>
          <volume>33</volume>
          (
          <year>2023</year>
          )
          <article-title>100940</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.hrmr.
          <year>2022</year>
          .
          <volume>100940</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Drage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mackereth</surname>
          </string-name>
          ,
          <string-name>
            <surname>Does AI Debias</surname>
          </string-name>
          <article-title>Recruitment? Race, Gender, and AI's ”Eradication of Difference”</article-title>
          ,
          <source>Philosophy &amp; Technology</source>
          <volume>35</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          . doi:
          <volume>10</volume>
          .1007/s13347-022-00543-1.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Kelan</surname>
          </string-name>
          ,
          <article-title>Algorithmic inclusion: Shaping the predictive algorithms of artificial intelligence in hiring</article-title>
          ,
          <source>Human Resource Management Journal</source>
          <volume>34</volume>
          (
          <year>2024</year>
          )
          <fpage>694</fpage>
          -
          <lpage>707</lpage>
          . doi:
          <volume>10</volume>
          .1111/
          <fpage>1748</fpage>
          -
          <lpage>8583</lpage>
          .
          <fpage>12511</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Charlwood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Guenole</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. Marler,</surname>
          </string-name>
          <article-title>HR analytics: An emerging field finding its place in the world alongside simmering ethical challenges</article-title>
          ,
          <source>Human Resource Management Journal</source>
          <volume>34</volume>
          (
          <year>2024</year>
          )
          <fpage>326</fpage>
          -
          <lpage>336</lpage>
          . doi:
          <volume>10</volume>
          .1111/
          <fpage>1748</fpage>
          -
          <lpage>8583</lpage>
          .
          <fpage>12435</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Klöpper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Köhne</surname>
          </string-name>
          ,
          <article-title>Shifting Structures - a systematic Literature Review on People Analytics and the Future of Work</article-title>
          , ECIS 2023 Research Papers (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>European</given-names>
            <surname>Commission</surname>
          </string-name>
          ,
          <source>Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts</source>
          ,
          <source>Technical Report</source>
          <year>2021</year>
          /0106 (COD), Brussels,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>European</given-names>
            <surname>Parliament</surname>
          </string-name>
          ,
          <source>General Data Protection Regulation: GDPR</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <article-title>Exploring Gender Bias and Algorithm Transparency: Ethical Considerations of AI in HRM</article-title>
          ,
          <source>Journal of Theory and Practice of Management Science</source>
          <volume>4</volume>
          (
          <year>2024</year>
          )
          <fpage>36</fpage>
          -
          <lpage>43</lpage>
          . doi:
          <volume>10</volume>
          .53469/jtpms.
          <year>2024</year>
          .
          <volume>04</volume>
          (
          <issue>03</issue>
          ).
          <fpage>06</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baranowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Graus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Saldivar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zuiderveen Borgesius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <article-title>Fairness and Bias in Algorithmic Hiring: A Multidisciplinary Survey</article-title>
          ,
          <source>ACM Transactions on Intelligent Systems and Technology</source>
          (
          <year>2024</year>
          ). doi:
          <volume>10</volume>
          .1145/ 3696457.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K.</given-names>
            <surname>Simbeck</surname>
          </string-name>
          ,
          <source>HR analytics and Ethics</source>
          ,
          <source>IBM Journal of Research and Development</source>
          <volume>63</volume>
          (
          <year>2019</year>
          ) 9:
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          :
          <fpage>12</fpage>
          . doi:
          <volume>10</volume>
          . 1147/JRD.
          <year>2019</year>
          .
          <volume>2915067</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Köchling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Riazy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Wehner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Simbeck</surname>
          </string-name>
          , Highly Accurate,
          <article-title>But Still Discriminatory: A Fairness Evaluation of Algorithmic Video Analysis in the Recruitment Context</article-title>
          ,
          <source>Business &amp; Information Systems Engineering</source>
          <volume>63</volume>
          (
          <year>2021</year>
          )
          <fpage>39</fpage>
          -
          <lpage>54</lpage>
          . doi:
          <volume>10</volume>
          .1007/ s12599-020-00673-w.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Molnar</surname>
          </string-name>
          ,
          <source>Interpretable Machine Learning: A Guide for Making Black Box Models Explainable</source>
          , 2 ed.,
          <string-name>
            <surname>Selfpublishing</surname>
          </string-name>
          , Munich,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>U.</given-names>
            <surname>Bhatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Weller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Taly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Puri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M. F.</given-names>
            <surname>Moura</surname>
          </string-name>
          ,
          <string-name>
            <surname>P. Eckersley,</surname>
          </string-name>
          <article-title>Explainable machine learning in deployment</article-title>
          , in: M.
          <string-name>
            <surname>Hildebrandt</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</source>
          , Association for Computing Machinery, New York,
          <year>2020</year>
          , pp.
          <fpage>648</fpage>
          -
          <lpage>657</lpage>
          . doi:
          <volume>10</volume>
          .1145/3351095.3375624.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Speith</surname>
          </string-name>
          ,
          <article-title>A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods</article-title>
          , in: Association for Computing Machinery (Ed.),
          <source>FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          , Association for Computing Machinery, New York,
          <year>2022</year>
          , pp.
          <fpage>2239</fpage>
          -
          <lpage>2250</lpage>
          . doi:
          <volume>10</volume>
          .1145/3531146.3534639.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Khalili</surname>
          </string-name>
          ,
          <article-title>Against the opacity, and for a qualitative understanding, of artificially intelligent technologies</article-title>
          ,
          <source>AI and Ethics</source>
          (
          <year>2023</year>
          ).
          <source>doi:10.1007/ s43681-023-00332-2.</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Hinz</surname>
          </string-name>
          ,
          <source>Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing, Information Systems Research</source>
          (
          <year>2023</year>
          ). doi:
          <volume>10</volume>
          .1287/isre.
          <year>2023</year>
          .
          <volume>1199</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>M. C. Laupichler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Aster</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Haverkamp</surname>
          </string-name>
          , T. Raupach,
          <article-title>Development of the “Scale for the assessment of non-experts' AI literacy” - An exploratory factor analysis</article-title>
          ,
          <source>Computers in Human Behavior Reports</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <article-title>100338</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.chbr.
          <year>2023</year>
          .
          <volume>100338</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Carolus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Straka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Latoschik</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Wienrich, MAILS - Meta AI literacy scale: Development and testing of an AI literacy Questionnaire based on well-founded Competency Models and psychological Change-</article-title>
          and
          <string-name>
            <surname>Meta-Competencies</surname>
          </string-name>
          ,
          <source>Computers in Human Behavior: Artificial Humans</source>
          <volume>1</volume>
          (
          <year>2023</year>
          )
          <article-title>100014</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.chbah.
          <year>2023</year>
          .
          <volume>100014</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A Survey of Methods for Explaining Black Box Models</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          . doi:
          <volume>10</volume>
          .1145/3236009.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bodria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Naretto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rinzivillo</surname>
          </string-name>
          ,
          <article-title>Benchmarking and survey of explanation methods for black box models</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          <volume>37</volume>
          (
          <year>2023</year>
          )
          <fpage>1719</fpage>
          -
          <lpage>1778</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10618-023-00933-9.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bhat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <article-title>Designing Interactive Explainable AI Tools for Algorithmic Literacy and Transparency</article-title>
          , in: Designing Interactive Systems Conference, ACM, Copenhagen Denmark,
          <year>2024</year>
          , pp.
          <fpage>939</fpage>
          -
          <lpage>957</lpage>
          . doi:
          <volume>10</volume>
          .1145/3643834.3660722.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lintner</surname>
          </string-name>
          ,
          <article-title>A systematic review of AI literacy scales</article-title>
          ,
          <source>NPJ science of learning 9</source>
          (
          <year>2024</year>
          )
          <article-title>50</article-title>
          . doi:
          <volume>10</volume>
          .1038/ s41539-024-00264-4.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>D. T. K. Ng</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <string-name>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. K. L. Leung</surname>
          </string-name>
          ,
          <string-name>
            <surname>T. K. F. Chiu</surname>
            ,
            <given-names>S. K. W.</given-names>
          </string-name>
          <string-name>
            <surname>Chu</surname>
          </string-name>
          ,
          <article-title>Design and validation of the AI literacy questionnaire: The afective, behavioural, cognitive and ethical approach</article-title>
          ,
          <source>British Journal of Educational Technology</source>
          <volume>55</volume>
          (
          <year>2024</year>
          )
          <fpage>1082</fpage>
          -
          <lpage>1104</lpage>
          . doi:
          <volume>10</volume>
          .1111/bjet.13411.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Benlian,
          <article-title>AI Literacy for the top management: An upper echelons perspective on corporate AI orientation and implementation ability</article-title>
          ,
          <source>Electronic Markets</source>
          <volume>34</volume>
          (
          <year>2024</year>
          )
          <article-title>24</article-title>
          . doi:
          <volume>10</volume>
          .1007/ s12525-024-00707-1.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bassellier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Benbasat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. H.</given-names>
            <surname>Reich</surname>
          </string-name>
          ,
          <source>The Influence of Business Managers' IT Competence on Championing IT, Information Systems Research</source>
          <volume>14</volume>
          (
          <year>2003</year>
          )
          <fpage>317</fpage>
          -
          <lpage>336</lpage>
          . doi:
          <volume>10</volume>
          .1287/isre.14.4.317.24899.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>M.-A. Clinciu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Hastie</surname>
          </string-name>
          ,
          <article-title>A Survey of Explainable AI Terminology</article-title>
          ,
          <source>Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI</source>
          <year>2019</year>
          )
          <article-title>(</article-title>
          <year>2019</year>
          )
          <fpage>8</fpage>
          -
          <lpage>13</lpage>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>W19</fpage>
          -8403.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stefik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          , G.-
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <source>XAI-Explainable artificial intelligence</source>
          ,
          <source>Science robotics 4</source>
          (
          <year>2019</year>
          ). doi:
          <volume>10</volume>
          .1126/scirobotics. aay7120.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Joel-Edgar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Dey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kharlamov</surname>
          </string-name>
          ,
          <article-title>Embedding transparency in artificial intelligence machine learning models: Managerial implications on predicting and explaining employee turnover</article-title>
          ,
          <source>The International Journal of Human Resource Management</source>
          <volume>34</volume>
          (
          <year>2023</year>
          )
          <fpage>2732</fpage>
          -
          <lpage>2764</lpage>
          . doi:
          <volume>10</volume>
          . 1080/09585192.
          <year>2022</year>
          .
          <volume>2066981</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Malandri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mercorio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mezzanzanica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nobani</surname>
          </string-name>
          ,
          <article-title>A survey on XAI and natural language explanations</article-title>
          ,
          <source>Information Processing &amp; Management</source>
          <volume>60</volume>
          (
          <year>2023</year>
          )
          <article-title>103111</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.ipm.
          <year>2022</year>
          .
          <volume>103111</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saranti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Molnar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Biecek</surname>
          </string-name>
          , W. Samek,
          <string-name>
            <surname>Explainable AI</surname>
          </string-name>
          Methods -
          <article-title>A Brief Overview</article-title>
          , in: A.
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Goebel</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Fong</surname>
          </string-name>
          , T. Moon,
          <string-name>
            <surname>K.- R. Müller</surname>
          </string-name>
          , W. Samek (Eds.), xxAI - Beyond
          <string-name>
            <surname>Explainable</surname>
            <given-names>AI</given-names>
          </string-name>
          : International Workshop, Held in
          <source>Conjunction with ICML 2020</source>
          , Springer, Cham,
          <year>2022</year>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>38</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>031</fpage>
          -04083-
          <issue>2</issue>
          _
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kühl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Meske</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nitsche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lobana</surname>
          </string-name>
          ,
          <article-title>Investigating the Role of Explainability and AI Literacy in User Compliance</article-title>
          ,
          <source>SSRN Electronic Journal</source>
          (
          <year>2023</year>
          ). doi:
          <volume>10</volume>
          .2139/ssrn.4558966.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mazzine Barbosa de Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goethals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Brughmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Martens</surname>
          </string-name>
          ,
          <article-title>Unveiling the Potential of Counterfactuals Explanations in Employability</article-title>
          ,
          <source>Technical Report</source>
          , arXiv,
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2305.10069.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <surname>M. C. Laupichler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Aster</surname>
            ,
            <given-names>J.-O.</given-names>
          </string-name>
          <string-name>
            <surname>Perschewski</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Schleiss</surname>
          </string-name>
          ,
          <string-name>
            <surname>Evaluating AI</surname>
          </string-name>
          <article-title>Courses: A Valid and Reliable Instrument for Assessing ArtificialIntelligence Learning through Comparative SelfAssessment</article-title>
          ,
          <source>Education Sciences</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>978</fpage>
          . doi1:
          <fpage>0</fpage>
          . 3390/educsci13100978.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>