<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Supporting the Design of Phishing Education, Training and Awareness interventions: an LLM-based approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Francesco Greco</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Desolda</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Viganò</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>King's College London</institution>
          ,
          <addr-line>London</addr-line>
          ,
          <country country="UK">U.K.</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bari “A. Moro”</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Phishing remains one of the most effective cyber threats in our digital world, affecting millions of organizations. Phishing education, training, and awareness programs are used to address employees' lack of knowledge about phishing attacks. However, despite being very expensive, these interventions are not always effective, mainly due to the lack of customization of training materials based on the employees' needs and profiles. In fact, creating customized training content for each employee and each context would require a huge effort from security practitioners and educators, thus increasing costs even more. The proposal we present in this paper is to use Large Language Models to automate some steps in the design process of training content tailored to the specific user profile.</p>
      </abstract>
      <kwd-group>
        <kwd>phishing education</kwd>
        <kwd>large language models</kwd>
        <kwd>warnings</kwd>
        <kwd>training</kwd>
        <kwd>simulated campaigns</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Phishing is currently one of the most significant cyber-threats, causing substantial losses
for companies each year on a global scale [25]. Despite the technological solutions that exist
to mitigate phishing attacks [
        <xref ref-type="bibr" rid="ref13 ref2">2, 29</xref>
        ], criminals are able to succeed due to the exploitation of
vulnerabilities that originate from various human factors [16]. Among the primary human
factors that can increase a user’s susceptibility to phishing, Lack of Knowledge and Lack of
Resources are of particular importance. The former refers to users missing specific
knowledge and experience to correctly deal with phishing attacks [18], while the latter
refers to the lack of educative resources that can effectively help users recognize phishing
attacks [16]. Although users are often considered the “weakest link” in the
cybersecurity of an organization [
        <xref ref-type="bibr" rid="ref25">41</xref>
        ], improving their awareness level can lead to making
the employees one of the most valuable defensive assets of an organization [
        <xref ref-type="bibr" rid="ref20">36</xref>
        ].
      </p>
      <p>
        Consequently, companies invest considerable resources [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] in increasing employee
awareness and educating them with Phishing Education, Training, and Awareness (PETA)
programs [
        <xref ref-type="bibr" rid="ref24">40</xref>
        ]. However, despite the general agreement among researchers on the
usefulness of anti-phishing training [26], its effectiveness can vary considerably [
        <xref ref-type="bibr" rid="ref20">36</xref>
        ]. This
may be attributed to human factors such as age, gender, technical expertise, and personal
traits [26]. To address the ineffectiveness of phishing training and education due to the
employees’ individual differences, a viable solution would be to provide them with
customized training material [
        <xref ref-type="bibr" rid="ref15">17, 31, 50</xref>
        ]. Furthermore, training material should be highly
engaging and interesting for employees [17, 50], while also being easy and fast to consume.
This is because users can generally dedicate only limited time to security aspects during their work
hours [
        <xref ref-type="bibr" rid="ref19">35</xref>
        ]. Improving the relevance and quality of training material can indeed result in
more effective phishing training programs [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This is likely to have the additional benefit
of reducing the likelihood of users circumventing the educational process (e.g., by ignoring
the training material altogether, or attempting to cheat in assessment quizzes).
      </p>
      <p>
        However, the design and implementation of PETA material is not a trivial process and
requires significant effort and human resources [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Typically, simulated phishing
campaigns are employed to administer embedded training material [
        <xref ref-type="bibr" rid="ref17 ref20 ref9">9, 33, 36, 51</xref>
        ]. Although
training campaigns are one of the most commonly used approaches, they are very expensive
to conduct, and may even result in an increase in phishing susceptibility of some employees
[
        <xref ref-type="bibr" rid="ref20">36</xref>
        ] or in a reduction in their click rate on legitimate links, potentially affecting their
productivity [
        <xref ref-type="bibr" rid="ref26">42</xref>
        ]. In light of these problems, we deem it necessary to find a solution to
address the lack of effective and affordable PETA resources.
      </p>
      <p>This study presents ongoing research aimed at supporting the lightweight creation of
effective PETA resources. The effectiveness of the resources will be achieved by tailoring
each resource to the specific user, considering their profile created using ad hoc
vulnerability assessment techniques (e.g., questionnaires). In this way, each user will be
exposed to short, targeted, and relevant resources that they are more likely to accept than
the traditional long and generic alternatives used today. The creation of these resources will
also be facilitated by the use of LLMs, which, taking into account the user profile and the
type of resource to which the user should be exposed (e.g., podcast, document, alert, etc.),
will produce PETA resources tailored to the users, covering only their weaknesses, without
exposing them to aspects in which they are already confident.</p>
      <p>This work establishes a first basis for a significant contribution to the broader Italian
national project DAMOCLES (Detection And Mitigation Of Cyber attacks that expLoit human
vulnerabilitiES), which aims to develop a framework for the Italian Public Administration
to assess human factors in cyber incidents and mitigate their impact through security
awareness and customized user training. This latter point can indeed be addressed by using
technologies like LLMs to support the creation of training material tailored to the individual
weaknesses assessed.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <sec id="sec-2-1">
        <title>2.1. Phishing Education, Training, and Awareness (PETA)</title>
        <p>
          The term “PETA” (Phishing Education, Training, and Awareness) was recently
introduced by Sarker et al. [
          <xref ref-type="bibr" rid="ref24">40</xref>
          ] to refer to Security Education, Training, and Awareness (SETA)
interventions [23] in the domain of phishing. Katsikas et al. [28] define awareness, training,
and education as distinct concepts that together accomplish learning, starting with
awareness and culminating in education. However, these terms are often used
interchangeably in the literature [23]. Therefore, PETA is a broad concept that encompasses
any type of intervention designed to enhance users’ awareness and skills, including formal
learning (courses, seminars, etc.), simulated phishing campaigns, quizzes, serious games,
and anti-phishing warnings.
        </p>
        <p>Simulated phishing campaigns aim to mimic real-world phishing attacks. They are
typically part of a dedicated security awareness training program. These campaigns
generate simulated phishing emails that closely resemble actual phishing attempts.
Employees receive these emails to test their vigilance and response. Generally, embedded
training is employed in conjunction with simulated phishing campaigns to present
employees with landing pages that include educational material immediately after they
click on a fake phishing link.</p>
        <p>
          Anti-phishing warnings can constitute a valid intervention for increasing phishing
awareness of users. These tools are employed to alert users about potential threats by
blocking access to malicious websites [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] or emails [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and are typically found in browsers
(e.g., Google Chrome) or in email clients (e.g., Gmail).
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Limitations of current PETA interventions</title>
        <p>
          A review of the literature conducted by Sarker et al. [
          <xref ref-type="bibr" rid="ref24">40</xref>
          ] reveals a number of issues with
current PETA material. These include challenges in designing, implementing, and evaluating
PETA interventions.
        </p>
        <p>
          One prominent challenge is the lack of customization of training content [46]. This can
lead to employees often being disengaged and uninterested in the training material [
          <xref ref-type="bibr" rid="ref5">5, 50</xref>
          ].
Additionally, poorly customized training content can result in being repetitive [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], culturally
biased [17], or time-consuming [
          <xref ref-type="bibr" rid="ref18 ref9">9, 34</xref>
          ] for employees. A noteworthy example is
Anti-Phishing Phil [43], a serious game for phishing education in which the user is asked to
examine URLs to determine if they are associated with malicious or legitimate websites. One
of the primary issues with the game was that a significant number of the presented websites
were associated with American companies and thus unfamiliar to users outside of the
United States. This resulted in some participants in the original study experiencing difficulty
in determining if some URLs were legitimate or not. It is similarly important for training
material to include different attack scenarios, in order to improve the users’ ability to detect
a wider spectrum of phishing attacks [
          <xref ref-type="bibr" rid="ref16">32, 50</xref>
          ]. Therefore, training material that addresses
only, e.g., how to spot phishing URLs will not adequately teach users to defend against more
advanced techniques such as spear phishing or persuasion cues [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          Another critical issue is related to the warnings employed for the protection of users
from phishing attacks. These warnings are generally ill-positioned [
          <xref ref-type="bibr" rid="ref18 ref9">9, 34, 52</xref>
          ], passive (i.e.,
do not block the user interaction flow) [19], and lead to users becoming habituated [
          <xref ref-type="bibr" rid="ref1 ref14 ref4">1, 4,
30</xref>
          ]. Moreover, the content of warnings usually lacks explanations [
          <xref ref-type="bibr" rid="ref3 ref6">3, 6</xref>
          ], which can result in
users not trusting the system and being less motivated to adopt safe behaviors [
          <xref ref-type="bibr" rid="ref6 ref8">6, 8, 48</xref>
          ].
        </p>
        <p>
          Finally, PETA programs tend to be highly expensive in terms of both economic and
human resources [
          <xref ref-type="bibr" rid="ref7">7, 46</xref>
          ]. Furthermore, the deployment of embedded training requires a
significant amount of manual human effort for the production of fake phishing emails, their
evaluation, the management of related tickets, the setting up of firewall rules, and so forth
[
          <xref ref-type="bibr" rid="ref3 ref7">3, 7</xref>
          ]. This can result in the training material, such as anti-phishing recommendations, being
outdated or incomplete [
          <xref ref-type="bibr" rid="ref22">38</xref>
          ]; it is instead crucial to include recent cyber-attacks and
detailed information about how attackers operate and the types of tactics they use [45].
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. LLMs and education</title>
        <p>
          Large Language Models (LLMs) are gaining traction in the field of education because of their
ability to provide tailored feedback and suggestions, saving teachers time and effort in
creating personalized materials [27]. There are some attempts in the
literature to use LLMs in fields as diverse as physics education [53] and medical education
[
          <xref ref-type="bibr" rid="ref23">39</xref>
          ]. Common use cases involve support to educators in assessing and grading written tests,
providing feedback to students, and generating educational content [54].
        </p>
        <p>The use of LLMs in education is not without challenges [49]. There are many valid
criticisms of these tools; one of the main problems is that they generate responses by
predicting the most likely next word, without any grasp of the underlying semantics; their
stochastic nature means the models may “hallucinate”, producing seemingly confident
responses that are not factual, in part due to incomplete or biased training data.</p>
        <p>
          Nonetheless, LLMs undeniably perform well on many human tasks [24]. For example,
medical students use these tools to explain complicated medical concepts in simple terms,
generate self-study questions, and create preliminary diagnoses and possible treatment
plans [
          <xref ref-type="bibr" rid="ref24">40</xref>
          ].
        </p>
        <p>Therefore, although the issue of hallucinations remains challenging to address, the
proposal presented in this paper may still prove viable. Moreover, hallucinations can be
limited by designing prompts that follow the established guidelines and best practices [14,
44]. For example, approaches like “chain-of-thought” have been shown to help LLMs produce
more grounded outputs [13].</p>
        <p>The current limitations and future prospects of LLMs in education will constitute a
valuable source of discussion during the workshop.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. An LLM-based approach to mitigate challenges in PETA</title>
      <p>To address some of the key challenges in producing quality PETA material, i.e., high costs,
lack of customization, and ineffective warnings, we propose an approach that leverages LLMs
to help security practitioners and educators automate some tasks in the design process.</p>
      <p>
        Automating the creation of training materials has already been indicated as being
potentially beneficial to IT security teams across several dimensions, including reducing
deployment and maintenance efforts, and the amount of human hours required, ultimately
reducing costs for organizations [
        <xref ref-type="bibr" rid="ref24">40</xref>
        ]. Automation can also facilitate the delivery of tailored,
recurrent, and relevant training interventions, reducing the costs associated with manually
customizing training content [
        <xref ref-type="bibr" rid="ref16">32</xref>
        ].
      </p>
      <sec id="sec-3-1">
        <title>3.1. Addressing the Lack of customization in PETA content</title>
      </sec>
      <sec id="sec-3-2">
        <title>3.1.1. Customized Simulated Phishing Campaigns</title>
        <p>
          In order to create customized training content, it is first necessary to take into account the
different psychological and demographic factors of the employees. Recently, in the context
of the DAMOCLES Italian project, we proposed an approach to systematically assess the
individual vulnerabilities of employees in an organization [20]. This approach
has the goal of investigating, in the context of a simulated phishing campaign, the interaction
between attack techniques (persuasion principles [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and emotional triggers) and user
personality traits, to determine which characteristics of phishing emails maximize the
effectiveness of the attacks for specific employees. Therefore, once the employee’s profile
has been gathered in terms of the Big 5 model [
          <xref ref-type="bibr" rid="ref21">37</xref>
          ] (e.g., collected by administering the NEO
Five-Factor-Inventory-3 [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]), tailored phishing emails can be crafted and delivered to test
their susceptibility under challenging conditions. The difficulty of the emails is also an
important factor to consider in a phishing campaign: for example, the level of challenge
could start low, presenting users with easy-to-detect phishing emails at first and testing
them with very difficult emails towards the end of the campaign [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
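The profiling and difficulty-ramping steps above can be sketched in Python. This is a minimal illustration, not part of the DAMOCLES implementation: the field names and the linear difficulty schedule are our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EmployeeProfile:
    """Hypothetical record gathered before a simulated campaign."""
    name: str
    role: str
    pronouns: str
    # Big Five trait scores (e.g., from the NEO-FFI-3), normalized to [0, 1].
    big5: dict = field(default_factory=dict)

def difficulty_schedule(n_emails: int) -> list[float]:
    """Ramp email difficulty from easy (0.0) to very hard (1.0) over a campaign,
    as suggested in [12]. A linear ramp is an illustrative choice."""
    if n_emails == 1:
        return [0.0]
    return [i / (n_emails - 1) for i in range(n_emails)]
```

A five-email campaign would thus present difficulties 0.0, 0.25, 0.5, 0.75, 1.0 in chronological order.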
        <p>The role of the employee within the organization also plays a critical role in the design
of tailored phishing emails. For example, it would be an obvious red flag for a CEO to receive
a phishing email sent by another “CEO” of their own company; such an attack would be
easily recognizable, even if the social engineering techniques most effective for their profile
were used. Another factor that could be considered is the set of websites that employees
visit most commonly, in order to generate phishing URLs that resemble domains that are
relevant to them. These could be collected either automatically, by analyzing which
websites the employee most frequently visits during their work hours, or by asking them
to report these websites through a questionnaire. Moreover, the URLs on the organization’s
internal Domain Name System (DNS) server can be used to include domains that resemble
the legitimate ones used by the company [26]. Finally, the employee’s demographic
information, such as name and gender, is a valuable source of information for creating
spear-phishing emails that are more relevant, e.g., that do not contain generic greetings
and address the recipient by name.</p>
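The generation of lookalike phishing URLs from relevant legitimate domains can be sketched as follows. This is a deliberately naive illustration in the style of the ‘paypal-refund-claim.com’ example used later in the paper; the keyword list is an assumption, and real campaigns would draw on richer techniques (typosquatting, homograph substitutions, subdomain spoofing).

```python
def lookalike_candidates(domain: str,
                         keywords=("login", "secure", "refund-claim")) -> list[str]:
    """Produce simple hyphen-keyword lookalike URLs for a legitimate domain."""
    base = domain.rsplit(".", 1)[0]  # 'paypal.com' -> 'paypal'
    return [f"https://www.{base}-{kw}.com" for kw in keywords]
```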
        <p>LLMs can be used to automate the process of writing convincing emails which can
include different topics and/or different persuasion principles (e.g., see [21, 22]). It is worth
noting that commercially available LLMs like ChatGPT cannot be directly used to generate
phishing emails, as this is not considered an ethical activity even for white-hat purposes.
Therefore, tools with fewer ethical restrictions, such as WormGPT, might be considered for generating phishing
emails.</p>
        <p>Once we have all the information about a specific employee, we can generate a phishing
email by filling out the following prompt and feeding it to the LLM:
“Pretend to be a security practitioner at [organization name] who is planning a simulated
phishing campaign. You must create a fake phishing email in HTML format that is tailored to
an employee of the organization. The email must be addressed to [employee name] (use
[employee pronouns]), who is [employee role] in the organization. The email must be about
[topic]; use Cialdini’s persuasion principle of [persuasion principle] and include sentences that
leverage [emotional trigger]. You must use realistic fake [organization nationality] names for the
sender. Create a phishing link URL to include in the email that mimics one of the following
legitimate links: [URLs list]; for example, a fake URL for the website ‘paypal.com’ could be
‘https://www.paypal-refund-claim.com’”.</p>
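Filling the bracketed slots of the prompt above from an employee record is mechanical and easy to automate. The following Python sketch (our own illustration; the slot names mirror the prompt, and the template is abridged) shows one way to do it:

```python
from string import Template

# Abridged version of the paper's prompt; '$'-placeholders stand in for the
# bracketed slots. The full prompt text would be used in practice.
PHISHING_PROMPT = Template(
    "Pretend to be a security practitioner at $organization who is planning a "
    "simulated phishing campaign. You must create a fake phishing email in HTML "
    "format that is tailored to an employee of the organization. The email must "
    "be addressed to $employee_name (use $pronouns), who is $role in the "
    "organization. The email must be about $topic; use Cialdini's persuasion "
    "principle of $principle and include sentences that leverage $trigger."
)

def build_prompt(**slots) -> str:
    """Fill every slot; Template.substitute raises KeyError if one is missing."""
    return PHISHING_PROMPT.substitute(**slots)
```

The resulting string would then be sent to the LLM of choice.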
        <p>
          The persuasion principle refers to Cialdini’s theory of persuasion [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and can be one of
the following: authority, scarcity, reciprocation, social proof, liking, or consistency. The
emotional trigger can refer to one among the most leveraged emotions used in phishing
attacks: curiosity, fear, greed, anger, joy, confusion, or empathy. The choice of which
persuasion principle and emotional trigger to use is dictated by the employee’s personality
traits. The topic can vary to generate different emails covering different plausible scenarios
such as “request of password reset”, “account blocked”, “free giveaway”, “payment request”,
etc. It is worth noting that in this example the prompt is considering exactly one persuasion
principle and one emotional trigger at once, for simplicity; however, more complex attacks
may also include a combination of two or more persuasion principles (e.g., authority and
scarcity) and/or emotional triggers to create more effective phishing emails. An example of
the usage of this prompt is presented in Appendix A.
        </p>
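The selection of a persuasion principle from personality traits could be sketched as a simple lookup over the dominant trait. The specific trait-to-principle pairings below are purely illustrative placeholders: which principle works best for which profile is precisely the research question the DAMOCLES campaign is meant to answer, and nothing here is a finding.

```python
# Hypothetical mapping; the pairings are NOT empirical results.
PRINCIPLE_FOR_TRAIT = {
    "agreeableness": "liking",
    "conscientiousness": "authority",
    "neuroticism": "scarcity",
    "extraversion": "social proof",
    "openness": "reciprocation",
}

def pick_principle(big5_scores: dict[str, float]) -> str:
    """Pick the principle associated with the employee's highest-scoring trait."""
    dominant = max(big5_scores, key=big5_scores.get)
    return PRINCIPLE_FOR_TRAIT.get(dominant, "authority")
```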
        <p>To investigate the output of the model, it is possible to ask the LLM to explain how the
persuasion principles and emotional triggers are addressed by individual sentences or
words in the generated text; we can also ask the LLM to report the legitimate URL that was
mimicked with the phishing URL. To do this, we can extend the previous prompt with the
following text:
“For each persuasion principle and emotional trigger used in the email, produce an
explanation that points out the pieces of text used to employ them. Finally, report the
legitimate URL that is being mimicked in the email.
Output format: [EMAIL] ----- [EXPLANATION]”.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.1.2. Customized Embedded Training</title>
        <p>In addition to customizing the emails in the simulated phishing campaign, the educational
content for conducting the embedded training must be generated and personalized taking
into account both the type of techniques used in the phishing attacks and the information
about the employee [26]. For example, the employee who falls victim to a simulated
phishing email must be presented with a customized landing page that:</p>
        <list list-type="bullet">
          <list-item>
            <p>Debriefs them about the simulated attack, addressing them by their name.</p>
          </list-item>
          <list-item>
            <p>Explains the social engineering techniques used in the email (e.g., authority principle and urgency) and gives some tips on how to avoid them.</p>
          </list-item>
          <list-item>
            <p>Reports the fake phishing URL they clicked on and presents it next to the legitimate one, highlighting the phishing cues that the victim should have noticed, such as the top-level domain being misplaced, or URL spoofing.</p>
          </list-item>
        </list>
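The side-by-side URL comparison on the landing page can be sketched with a few lines of Python. This is a minimal illustration using only the standard library; it surfaces one cue (differing hostnames) and a full implementation would highlight misplaced top-level domains, homographs, and other spoofing tricks.

```python
from urllib.parse import urlparse

def compare_urls(phishing_url: str, legit_url: str) -> dict:
    """Extract both hostnames and flag whether they differ,
    one cue the victim should have noticed."""
    p_host = urlparse(phishing_url).hostname or ""
    l_host = urlparse(legit_url).hostname or ""
    return {
        "phishing_host": p_host,
        "legit_host": l_host,
        "hosts_differ": p_host != l_host,
    }
```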
        <p>The educational level for generating tailored training material must also be considered
in the generation of the educational content: for example, users who are more familiar with
technical aspects of IT security might benefit from explanations that include technical
jargon and/or more details; therefore, the explanation could, e.g., include terms such as
“domain”, “homograph attack”, etc., which might be more relevant to them. This allows us
to generate explanations that are more appropriate for the person receiving them [47].</p>
        <p>A possible prompt to generate such embedded training content would be:
“Pretend to be a cybersecurity educator who teaches employees how to recognize a phishing
email. [organization] organized a simulated phishing campaign to test its employees'
susceptibility to phishing. Specifically, [employee name] clicked on a phishing link in one of the
fake emails and was redirected to a landing page containing training information. The email
used [social engineering principles] principles; moreover, the phishing link was [phishing
URL], which mimicked the legit link [mimicked URL].
Create a short explanation webpage that:
1) debriefs them about the simulated attack
2) explains the techniques that were used and some tips on how to avoid them
3) reports both URLs (phishing and mimicked), highlighting the URL spoofing techniques that
were used.
Consider that the employee is a [employee role] with [expertise level] knowledge of
cybersecurity, and tailor the explanation accordingly.”</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.2. LLMs for improving anti-phishing warnings</title>
        <p>
          In order to constitute a valuable phishing awareness resource for users, warning dialogs
must include content that is relevant and educational to the user. Determining a priori
whether the warning content is helpful to the user is not an easy task, as it is heavily affected
by the user’s knowledge of phishing, security, and IT. Moreover, employees usually have
limited time to dedicate to reading warnings, as cybersecurity is usually a secondary task
for them [
          <xref ref-type="bibr" rid="ref19">35</xref>
          ].
        </p>
        <p>
          Therefore, if we want explanations in warning messages to be considered, they must be
designed to be readable, understandable, and alerting [15]. In addition, warning messages
should vary so that users do not easily become habituated to seeing the same warning under
different risk circumstances [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Since generating high-quality warning messages is an
onerous task, it is simply not feasible to manually create diverse content that also adapts to
each user’s knowledge level.
        </p>
        <p>A possible solution to this problem is to use LLMs to automatically generate warning
dialogs that include explanations that (i) are dynamically generated (thus helping to avoid
habituation), (ii) address the specific phishing threat (thus potentially improving
decision-making), and (iii) adapt to the user’s knowledge (thus being relevant and understandable).</p>
        <p>The explanation in a warning dialog should follow the best practices established in the
literature of warning design and have a standard structure such as those proposed in [15].
Specifically, it should be formed by three parts: 1) a description of the phishing feature that
is being explained, 2) an explanation of the hazard of phishing, and 3) the potential
consequences of a successful attack. Hereafter, we propose a possible prompt to generate
such comprehensive warning dialogs:
“Construct a brief explanation message (max 50 words) directed to [employee expertise level]
that will follow this structure:
1. description of the most relevant phishing feature
2. explanation of the hazard
3. consequences of a successful phishing attack
For example, a message that explains that a URL in the email (PHISHING_URL) is imitating
another legitimate one (SAFE_URL), would be:
‘The target URL (PHISHING_URL) is an imitation of the original one, (SAFE_URL). This site
might be intended to take you to a different place. You might be disclosing private
information.’”</p>
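Since LLM output does not always respect length constraints, a simple post-generation check on the 50-word budget could be applied before a generated warning is shown to the user. The sketch below is our own illustration of such a guardrail, using the example message from the prompt above:

```python
def within_word_budget(message: str, max_words: int = 50) -> bool:
    """Check that a generated warning respects the word budget from the prompt."""
    return len(message.split()) <= max_words

# Example message from the prompt above (31 words, so it passes the check).
example = ("The target URL (PHISHING_URL) is an imitation of the original one, "
           "(SAFE_URL). This site might be intended to take you to a different "
           "place. You might be disclosing private information.")
```

Warnings failing the check could simply be regenerated.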
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and future work</title>
      <p>
        While phishing remains a critical problem, user education, training, and awareness can
mitigate the success of these attacks and reduce an organization’s susceptibility. In fact,
addressing human vulnerabilities can make employees an essential line of defense against
phishing attacks [
        <xref ref-type="bibr" rid="ref25">41</xref>
        ]. Since PETA interventions are often very expensive for organizations,
automation can help reduce the burden and produce high-quality training materials more
easily. In this paper, we have proposed LLMs as a tool to reduce manual effort and produce
highly customizable training material that can be tailored to user needs.
      </p>
      <p>LLMs are a relatively new technology that, despite their impressive performance on
various human tasks [24], still have clear limitations, such as suffering from hallucinations,
and thus need to be carefully supervised by human experts. However, we envision a future
in which human-AI collaboration is fundamental to support complex tasks that traditionally
belong to humans. Therefore, this approach may still be valuable for security practitioners
and educators to produce effective PETA interventions much more efficiently.</p>
      <p>In future work, we plan to implement the approach presented in this work and to include
it as a possible mitigation strategy against phishing attacks for public administration within
the DAMOCLES Italian project. The effectiveness of the proposed approach will be evaluated
by iteratively assessing an organization’s susceptibility to phishing over time. Specifically,
the effectiveness of an LLM-powered simulated phishing campaign will be evaluated by
measuring the employees’ click rate at time zero (e.g., if it is too low, the phishing emails are
probably too easy to detect); on the other hand, the effectiveness of the educational content
will be measured by monitoring the click-rate over time for the exposed users. Finally, it
will be of paramount importance to collect the feedback of employees who are exposed to
LLM-generated training content both during the design phase and once the system is
deployed.</p>
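The click-rate monitoring described above can be sketched as a per-wave metric over the campaign. This is a minimal illustration with invented figures; the evaluation design in DAMOCLES may track additional signals (reporting rate, time-to-click, etc.).

```python
def click_rate(clicked: int, delivered: int) -> float:
    """Fraction of delivered simulated phishing emails whose link was clicked."""
    return clicked / delivered if delivered else 0.0

def campaign_trend(waves: list[tuple[int, int]]) -> list[float]:
    """waves: (clicked, delivered) per wave, in chronological order.
    A downward trend suggests the embedded training is taking effect."""
    return [round(click_rate(c, d), 2) for c, d in waves]
```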
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work has been supported by the Italian Ministry of University and Research (MUR) and
by the European Union - NextGenerationEU, under grant PRIN 2022 PNRR "DAMOCLES:
Detection And Mitigation Of Cyber attacks that exploit human vuLnerabilitiES" (Grant
P2022FXP5B).</p>
      <p>This work is partially supported by the co-funding of the European Union - Next Generation
EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 - Partnerships extended to
universities, research centres, companies and research D.D. MUR n. 341 del 5.03.2022 - Next
Generation EU (PE0000014 – “Security and Rights In the CyberSpace – SERICS” - CUP:
H93C22000620001).</p>
      <p>The research of Francesco Greco is funded by a PhD fellowship within the framework of the
Italian “D.M. n. 352, April 9, 2022”, under the National Recovery and Resilience Plan,
Mission 4, Component 2, Investment 3.3 - PhD Project “Investigating XAI techniques to help
user defend from phishing attacks”, co-supported by “Auriga S.p.A.” (CUP
H91I22000410007).</p>
      <sec id="sec-5-1">
        <title>References</title>
        <p>[13] DAIR.AI. Chain-of-Thought Prompting - Prompt Engineering Guide. Url: https://www.promptingguide.ai/techniques/cot Last Access 7 May 2024.</p>
        <p>[14] DAIR.AI. Prompt Engineering Guide. Url: https://www.promptingguide.ai/ Last Access 24 Jan. 2024.</p>
        <p>[15] Desolda, G., Aneke, J., Ardito, C., Lanzilotti, R. and Costabile, M.F., Explanations in warning dialogs to help users defend against phishing attacks, International Journal of Human-Computer Studies (2023) pp. 20.</p>
        <p>[16] Desolda, G., Ferro, L.S., Marrella, A., Catarci, T. and Costabile, M.F., Human Factors in Phishing Attacks: A Systematic Literature Review, ACM Computing Surveys (2021) pp. 35.</p>
        <p>[17] Dixon, M., Arachchilage, N.A.G. and Nicholson, J., Engaging Users with Educational Games: The Case of Phishing. Conference on Human Factors in Computing Systems. Glasgow, Scotland, UK, 2019, pp. 1-6. 10.1145/3290607.3313026</p>
        <p>[18] Dupont, G., The Dirty Dozen Errors in Maintenance. Proceedings of the 11th Meeting on Human Factors In Aviation Maintenance and Inspection. 1997.</p>
        <p>[19] Egelman, S., Cranor, L.F. and Hong, J., You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings. SIGCHI Conference on Human Factors in Computing Systems. 2008, pp. 1065-1074. 10.1145/1357054.1357219</p>
        <p>[20] Greco, F., Buono, P., Desiato, D., Desolda, G., Lanzilotti, R. and Ragone, G., Unlocking the Potential of Simulated Phishing Campaigns: Measuring the Impact of Interaction among Different Human Factors. DAMOCLES: Detection And Mitigation Of Cyber attacks that exploit human vuLnerabilitiES workshop, co-located with AVI '24. Arenzano (Genoa), Italy, 2024, pp. 10.</p>
        <p>[21] Greco, F., Desolda, G., Esposito, A. and Carelli, A., David versus Goliath: Can Machine Learning Detect LLM-Generated Text? A Case Study in the Detection of Phishing Emails. The Italian Conference on CyberSecurity. Salerno, Italy, 2024.</p>
        <p>[22] Hazell, J., Spear Phishing With Large Language Models, 2023. Url: https://arxiv.org/abs/2305.06972</p>
        <p>[23] Hu, S., Hsu, C. and Zhou, Z., Security Education, Training, and Awareness Programs: Literature Review, JCIS 62 (2022) pp. 752-764. doi:10.1080/08874417.2021.1913671.</p>
        <p>[24] HuggingFace. Open LLM Leaderboard, 2024. Url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard Last Access 13 Feb. 2024.</p>
        <p>[25] IBM. Security X-Force Threat Intelligence Index, 2023.</p>
        <p>[26] Jampen, D., Gür, G., Sutter, T. and Tellenbach, B., Don’t click: towards an effective anti-phishing training. A comparative literature review, Hum. Cent. Comput. Inf. Sci. 10 (2020) pp. 33. doi:10.1186/s13673-020-00237-7.</p>
        <p>[27] Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J. and Kasneci, G., ChatGPT for good? On opportunities and challenges of large language models for education, Learning and Individual Differences 103 (2023) pp. 102274. doi:10.1016/j.lindif.2023.102274.</p>
        <p>[28] Katsikas, S.K., Health care management and information systems security: awareness, training or education?, Int. J. Med. Inform. 60 (2000) pp. 129-135. doi:10.1016/S1386-5056(00)00112-X.</p>
        <p>[43] Sheng, S., Magnien, B., Kumaraguru, P., Acquisti, A., Cranor, L.F., Hong, J. and Nunge, E., Anti-Phishing Phil: the design and evaluation of a game that teaches people not to fall for phish. Symposium on Usable Privacy and Security. Pittsburgh, Pennsylvania, USA, 2007, pp. 88-99. 10.1145/1280680.1280692</p>
        <p>[44] Shieh, J., Best practices for prompt engineering with OpenAI API. OpenAI.</p>
        <p>[45] TerranovaSecurity. Phishing Benchmark Global Report, 2021. Url: https://www.terranovasecurity.com/resources/guides/gone-phishing-report2021 Last Access 21 Apr 2024.</p>
        <p>[46] Tessian. Phishing Awareness Training: How Effective is Security Training? In Advanced Email Threats, 2022.</p>
        <p>[47] Viganò, L. and Magazzeni, D., Explainable Security. IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence. 2018, pp. 7. arXiv:1807.04178</p>
        <p>[48] Vilone, G. and Longo, L., Notions of explainability and evaluation approaches for explainable artificial intelligence, Inf. Fusion 76 (2021) pp. 89-106. doi:10.1016/j.inffus.2021.05.009.</p>
        <p>[49] Wang, S., Xu, T., Li, H., Zhang, C., Liang, J., Tang, J., Yu, P.S. and Wen, Q., Large Language Models for Education: A Survey and Outlook, 2024. Url: https://arxiv.org/abs/2403.18105</p>
        <p>[50] Wen, Z.A., Lin, Z., Chen, R. and Andersen, E., What.Hack: Engaging Anti-Phishing Training Through a Role-playing Phishing Simulation Game. Conference on Human Factors in Computing Systems. Glasgow, Scotland, UK, 2019, pp. 1-12. 10.1145/3290605.3300338</p>
        <p>[51] Wright, R.T., Jensen, M.L., Thatcher, J.B., Dinger, M. and Marett, K., Research Note: Influence Techniques in Phishing Attacks: An Examination of Vulnerability and Resistance, Inf. Syst. 25 (2014) pp. 385-400.</p>
        <p>[52] Wu, M., Miller, R.C. and Garfinkel, S.L., Do Security Toolbars Actually Prevent Phishing Attacks? SIGCHI Conference on Human Factors in Computing Systems. 2006, pp. 601-610. 10.1145/1124772.1124863</p>
        <p>[53] Wulff, P., Physics language and language use in physics - What do we know and how AI might enhance language-related research and instruction, Eur. J. Phys. 45 (2024) pp. 023001. doi:10.1088/1361-6404/ad0f9c.</p>
        <p>[54] Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y. and Gašević, D., Practical and ethical challenges of large language models in education: A systematic scoping review, BJET 55 (2024) pp. 90-112. doi:10.1111/bjet.13370.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>A. Appendix: An example of an email created using an LLM</title>
      <p>The email was generated in Italian, since ChatGPT recognized that the University of Bari
is an Italian institution. We subsequently asked ChatGPT to translate it into English for our
English-speaking readers. The translated email follows:</p>
      <sec id="sec-6-1">
        <title>Subject: Urgent: Reset Your Account Password</title>
        <p>Dear Dr. Francesco Greco,</p>
        <p>I hope this email finds you well. I am Martina Rossi from the Department of Computer Technology at the University of Bari.</p>
        <p>We are conducting an important maintenance operation on the university's computer systems, and we need your prompt assistance. As part of this process, we have identified a potential security risk related to your login credentials.</p>
        <p>To ensure the utmost security of your data and the university's systems, we kindly ask you to proceed with resetting your account password.</p>
        <p>Please click on the following link to complete the password reset process: [https://www.uniba-idp-cineca-passwordreset.com]</p>
        <p>Please note that this operation is essential to ensure the security of your personal data and the university's data. We kindly request you to act promptly.</p>
        <p>Thank you for your cooperation and understanding in this matter.</p>
        <p>Best regards,</p>
        <p>Martina Rossi</p>
        <p>Department of Computer Technology</p>
        <p>University of Bari</p>
        <p>Note: This email has been sent automatically. Please do not reply directly.</p>
      </sec>
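      <p>For illustration, a profile-driven prompt producing an email like the one above could be assembled as follows. The function name, profile fields, and prompt wording are our own sketch, not the exact prompt submitted to ChatGPT:</p>

```python
def build_phishing_prompt(profile):
    """Assemble an LLM prompt for a *simulated* phishing email used in a
    sanctioned awareness exercise. The profile schema is illustrative."""
    return (
        "You are assisting a sanctioned phishing-awareness training exercise.\n"
        f"Write a simulated phishing email addressed to {profile['name']}, "
        f"a {profile['role']} at {profile['institution']}, "
        f"written in {profile['language']}.\n"
        "Use an urgent password-reset pretext and insert only a clearly "
        "fictitious placeholder link."
    )

prompt = build_phishing_prompt({
    "name": "Francesco Greco",
    "role": "researcher",
    "institution": "University of Bari",
    "language": "Italian",
})
# `prompt` would then be sent to the chosen LLM via its chat interface.
```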
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Akhawe</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Felt</surname>
            ,
            <given-names>A.P.</given-names>
          </string-name>
          ,
          <article-title>Alice in warningland: a large-scale field study of browser security warning effectiveness</article-title>
          . USENIX conference on Security. Washington, D.C.,
          <year>2013</year>
          , pp.
          <fpage>257</fpage>
          -
          <lpage>272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Almomani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>B.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Atawneh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meulenberg</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Almomani</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <article-title>A Survey of Phishing Email Filtering Techniques</article-title>
          ,
          <source>IEEE Commun. Surv. Tutor</source>
          .
          <volume>15</volume>
          (
          <issue>2013</issue>
          ) pp.
          <fpage>2070</fpage>
          -
          <lpage>2090</lpage>
          . doi:
          <volume>10</volume>
          .1109/SURV.
          <year>2013</year>
          .
          <volume>030713</volume>
          .00020.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Althobaiti</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Vaniea</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <article-title>I Don't Need an Expert! Making URL Phishing Features Human Comprehensible</article-title>
          .
          <source>Conference on Human Factors in Computing Systems. Yokohama, Japan</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
          <fpage>10</fpage>
          .1145/3411764.3445574
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>B.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirwan</surname>
            ,
            <given-names>C.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jenkins</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eargle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Vance</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>How Polymorphic Warnings Reduce Habituation in the Brain: Insights from an fMRI Study</article-title>
          .
          <source>ACM Conference on Human Factors in Computing Systems</source>
          . Seoul, Republic of Korea,
          <year>2015</year>
          , pp.
          <fpage>2883</fpage>
          -
          <lpage>2892</lpage>
          .
          <fpage>10</fpage>
          .1145/2702123.2702322
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Arachchilage</surname>
            ,
            <given-names>N.A.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Love</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Beznosov</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <article-title>Phishing threat avoidance behaviour: An empirical investigation</article-title>
          ,
          <source>Comput. Hum. Behav</source>
          .
          <volume>60</volume>
          (
          <issue>2016</issue>
          ) pp.
          <fpage>185</fpage>
          -
          <lpage>197</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.chb.
          <year>2016</year>
          .
          <volume>02</volume>
          .065.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Bravo-Lillo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Downs</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Komanduri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sleeper</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Improving Computer Security Dialogs. International Conference on Human-Computer Interaction. Berlin, Heidelberg,
          <year>2011</year>
          , pp.
          <fpage>18</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Brunken</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buckmann</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hielscher</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sasse</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <article-title>"To Do This Properly, You Need More Resources": The Hidden Costs of Introducing Simulated Phishing Campaigns</article-title>
          .
          <source>32nd USENIX Security Symposium</source>
          <year>2023</year>
          , pp.
          <fpage>4105</fpage>
          -
          <lpage>4122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Buono</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Desolda</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Piccinno</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Let warnings interrupt the interaction and explain: designing and evaluating phishing email warnings</article-title>
          .
          <source>CHI Conference on Human Factors in Computing Systems. Hamburg Germany</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
          <fpage>10</fpage>
          .1145/3469886
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Caputo</surname>
            ,
            <given-names>D.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfleeger</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freeman</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>M.E.</given-names>
          </string-name>
          ,
          <article-title>Going Spear Phishing: Exploring Embedded Training and Awareness</article-title>
          ,
          <source>S&amp;P</source>
          12 (
          <year>2014</year>
          ) pp.
          <fpage>28</fpage>
          -
          <lpage>38</lpage>
          . doi:
          <volume>10</volume>
          .1109/MSP.
          <year>2013</year>
          .
          <volume>106</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Cialdini</surname>
            ,
            <given-names>R.B.</given-names>
          </string-name>
          ,
          <article-title>Influence: The Psychology of Persuasion</article-title>
          . Revised ed., Harper Collins.
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Costa</surname>
            ,
            <given-names>P.T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>McCrae</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          ,
          <article-title>Four ways five factors are basic</article-title>
          ,
          <source>Pers. Individ. Dif</source>
          .
          <volume>13</volume>
          (
          <issue>1992</issue>
          ) pp.
          <fpage>653</fpage>
          -
          <lpage>665</lpage>
          . doi:
          <volume>10</volume>
          .1016/
          <fpage>0191</fpage>
          -
          <lpage>8869</lpage>
          (
          <issue>92</issue>
          )
          <fpage>90236</fpage>
          -
          <lpage>I</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>CybSafe</surname>
          </string-name>
          .
          <article-title>The ultimate people-centric guide to simulated phishing</article-title>
          , 2023. Url: https://www.cybsafe.com/value/simulated-phishing/ Last Access 21 Apr
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Khonji</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iraqi</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Phishing Detection: A Literature Survey</article-title>
          ,
          <source>IEEE Commun. Surv. Tutor</source>
          .
          <volume>15</volume>
          (
          <issue>2013</issue>
          ) pp.
          <fpage>2091</fpage>
          -
          <lpage>2121</lpage>
          . doi:
          <volume>10</volume>
          .1109/SURV.
          <year>2013</year>
          .
          <volume>032213</volume>
          .00009.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wogalter</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <article-title>Habituation, Dishabituation, and Recovery Effects in Visual Warnings</article-title>
          ,
          <volume>53</volume>
          (
          <year>2009</year>
          ) pp.
          <fpage>1612</fpage>
          -
          <lpage>1616</lpage>
          . doi:
          <volume>10</volume>
          .1177/154193120905302015.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Kirlappos</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sasse</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <article-title>Security Education against Phishing: A Modest Proposal for a Major Rethink</article-title>
          ,
          <source>S&amp;P</source>
          10 (
          <year>2012</year>
          ) pp.
          <fpage>24</fpage>
          -
          <lpage>32</lpage>
          . doi:
          <volume>10</volume>
          .1109/MSP.
          <year>2011</year>
          .
          <volume>179</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [32]
          <string-name>
            <surname>KnowBe4</surname>
          </string-name>
          .
          <article-title>Whitepaper: Building an Effective and Comprehensive Security Awareness Program Url: https://info.knowbe4.com/wp-building-effective-comprehensive-sat Last Access 21 Apr 2024</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Kumaraguru</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranshaw</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acquisti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blair</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Pham</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <article-title>School of phish: a real-world evaluation of anti-phishing training</article-title>
          .
          <source>Symposium on Usable Privacy and Security</source>
          . Mountain View, California, USA,
          <year>2009</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
          <fpage>10</fpage>
          .1145/1572532.1572536
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [34]
          <string-name>
            <surname>Kumaraguru</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rhee</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acquisti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Nunge</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <article-title>Protecting people from phishing: the design and evaluation of an embedded training email system</article-title>
          .
          <source>SIGCHI Conference on Human Factors in Computing Systems</source>
          . San Jose, California, USA,
          <year>2007</year>
          , pp.
          <fpage>905</fpage>
          -
          <lpage>914</lpage>
          .
          <fpage>10</fpage>
          .1145/1240624.1240760
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Kumaraguru</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheng</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acquisti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Teaching Johnny not to fall for phish</article-title>
          ,
          <source>Trans. Internet Technol</source>
          .
          <volume>10</volume>
          (
          <issue>2010</issue>
          ) pp.
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          . doi:
          <volume>10</volume>
          .1145/1754393.1754396.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [36]
          <string-name>
            <surname>Lain</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kostiainen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Čapkun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <article-title>Phishing in Organizations: Findings from a Large-Scale and Long-Term Study</article-title>
          ,
          <source>in: Proceedings of the 2022 IEEE Symposium on Security and Privacy, SP '22</source>
          , pp.
          <fpage>842</fpage>
          -
          <lpage>859</lpage>
          . doi:
          <volume>10</volume>
          .1109/SP46214.
          <year>2022</year>
          .
          <volume>9833766</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [37]
          <string-name>
            <surname>McCrae</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          and Costa, P.T.J.,
          <article-title>The five-factor theory of personality</article-title>
          .
          <source>Handbook of personality: Theory and research</source>
          , The Guildford Press, New York, NY, US.
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [38]
          <string-name>
            <surname>Mossano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vaniea</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aldag</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Düzgün</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Volkamer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <article-title>Analysis of publicly available anti-phishing webpages: contradicting information, lack of concrete advice and very narrow attack vector</article-title>
          ,
          <source>in: Proceedings of the IEEE European Symposium on Security and Privacy Workshops</source>
          ,
          <source>EuroS&amp;PW '20</source>
          , IEEE, pp.
          <fpage>130</fpage>
          -
          <lpage>139</lpage>
          . doi:
          <volume>10</volume>
          .1109/EuroSPW51379.
          <year>2020</year>
          .
          <volume>00026</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [39]
          <string-name>
            <surname>Pandey</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Mishra</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <article-title>Large Language Models in Medical Education and Quality Concerns</article-title>
          ,
          <source>JQHE</source>
          <volume>6</volume>
          (
          <year>2023</year>
          ) pp.
          <fpage>3</fpage>
          . doi:10.23880/jqhe-16000319.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [40]
          <string-name>
            <surname>Sarker</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jayatilaka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haggag</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Babar</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <article-title>A Multi-vocal Literature Review on challenges and critical success factors of phishing education, training and awareness</article-title>
          ,
          <source>JSS</source>
          <volume>208</volume>
          (
          <year>2024</year>
          ) pp.
          <fpage>111899</fpage>
          . doi:10.1016/j.jss.2023.111899.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Sasse</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brostoff</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Weirich</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <article-title>Transforming the 'Weakest Link' - a Human/Computer Interaction Approach to Usable and Effective Security</article-title>
          ,
          <source>BT Technology Journal</source>
          <volume>19</volume>
          (
          <year>2001</year>
          ) pp.
          <fpage>122</fpage>
          -
          <lpage>131</lpage>
          . doi:10.1023/A:1011902718709
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [42]
          <string-name>
            <surname>Sheng</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holbrook</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumaraguru</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Downs</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Who falls for phish? a demographic analysis of phishing susceptibility and effectiveness of interventions</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          , Atlanta, Georgia, USA,
          <year>2010</year>
          , pp.
          <fpage>373</fpage>
          -
          <lpage>382</lpage>
          . doi:10.1145/1753326.1753383
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>