<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploiting Human Vulnerabilities: A Practical Analysis of Social Engineering Attacks and Countermeasures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ardi Benusi</string-name>
          <email>ardi.benusi@fshn.edu.al</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Geri Selgjekaj</string-name>
          <email>geri.selgjekaj@fshnstudent.info</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Tirana, Faculty of Natural Sciences</institution>, <addr-line>Bulevardi Zogu I, Tiranë 1001</addr-line>, <country country="AL">Albania</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Tirana, Faculty of Natural Sciences</institution>, <addr-line>Bulevardi Zogu I, Tiranë 1001</addr-line>, <country country="AL">Albania</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Social engineering attacks exploit human vulnerabilities rather than technical flaws, making them one of the most effective methods for breaching security systems. This study provides a practical analysis of social engineering attacks, examining the psychological manipulation techniques used by attackers and their real-world implications. Through case studies and hands-on experimentation, we identify common attack vectors such as phishing, pretexting, and baiting, assessing their success rates and impact. Furthermore, we evaluate existing countermeasures, including awareness training, behavioral interventions, and technical defenses, to determine their effectiveness in mitigating these threats. The findings highlight the urgent need for a multidisciplinary approach that combines cybersecurity measures with human-centered awareness strategies. This research aims to contribute to the development of more resilient defense mechanisms against social engineering attacks.</p>
      </abstract>
      <kwd-group>
        <kwd>Cybersecurity</kwd>
        <kwd>Social Engineering</kwd>
        <kwd>Phishing</kwd>
        <kwd>Deep Fake</kwd>
        <kwd>AI in Cybersecurity</kwd>
        <kwd>Identity Management</kwd>
        <kwd>Data Privacy</kwd>
        <kwd>Security</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Social engineering involves manipulating people using various tactics to achieve specific outcomes.
By exploiting human psychology, social engineers aim to persuade their targets to act in ways they
might not have under normal circumstances. Traditional social engineering techniques like phishing,
pretexting, and baiting are well known and extensively documented. However, the rise of deepfake
technology, a form of AI-generated synthetic media, has added a new layer of complexity to these
attacks. By producing highly realistic audio, video, and text, deepfakes allow attackers to mimic
trusted individuals or craft deceptive scenarios with remarkable authenticity.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>To effectively carry out social engineering, several core principles are often employed. While the specific principles and their labels may differ depending on the source, some of the most frequently cited include:</p>
      <p>• Authority is based on the tendency of individuals to comply with figures perceived as having power or control. Social engineers exploit this principle by posing as authoritative figures, such as managers, government representatives, or other roles that command respect or influence within a given context. By assuming such identities, they manipulate targets into following their directives.</p>
      <p>Intimidation operates by instilling fear or applying pressure to coerce an individual into
complying with a specific demand. The target, feeling threatened or overwhelmed, is more likely to
act in accordance with the social engineer's wishes to avoid perceived negative consequences. This
tactic exploits the natural human response to perceived danger or authority.</p>
      <p>• Consensus-based attacks exploit the human tendency to conform to group behavior, leveraging the desire to follow what others are doing. In such attacks, the attacker might claim that everyone in a team or department has already performed a specific action, like clicking a link. This principle, sometimes referred to as "social proof," relies on the psychological inclination to align with the perceived actions or opinions of the majority.</p>
      <p>• Scarcity is employed in social engineering by creating the perception that a resource,
opportunity, or item is in limited supply, thereby increasing its perceived value. For example, a social
engineer might claim that an offer is available only for a short time or that there are only a few items
left, pressuring the target to act impulsively.</p>
      <p>• Familiarity-based attacks exploit the natural human tendency to trust or feel positively
toward individuals or organizations we recognize or have an affinity for. A social engineer might
impersonate someone you know, a trusted colleague, to increase the likelihood of compliance. By
leveraging pre-existing feelings of goodwill or recognition, the attacker makes their request or action
seem more legitimate and less suspicious.</p>
      <p>• Trust-based techniques rely on establishing a personal or emotional connection with the
target to impose a sense of reliability and confidence. Unlike familiarity, which relies on pre-existing
recognition or comfort, trust is actively cultivated by the social engineer through rapport-building,
empathy, or shared interests. By creating this bond, the manipulator increases the likelihood that the
target will comply with their requests, as they perceive the interaction as genuine and trustworthy.</p>
      <p>• Urgency is a tactic used in social engineering to create a sense of immediate pressure,
compelling the target to act quickly without thorough consideration. By presenting a situation as
time-sensitive or critical, such as a limited-time offer, an impending deadline, or a perceived
emergency, the manipulator exploits the target's instinct to respond swiftly, often bypassing rational
decision-making. This approach increases the likelihood of compliance by inducing stress or fear of
missing out.</p>
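      <p>The principles above suggest a simple, illustrative heuristic for spotting messages that lean on several of them at once. The sketch below is not from this study; the keyword lists and the example message are invented for demonstration only.</p>

```python
# Illustrative heuristic: flag messages that invoke the persuasion
# principles described above. Keyword lists are invented examples.
CUES = {
    "urgency": ["immediately", "right now", "deadline", "expires", "asap"],
    "scarcity": ["only a few left", "limited time", "last chance"],
    "authority": ["ceo", "director", "compliance", "it department"],
    "consensus": ["everyone has", "your colleagues already", "the whole team"],
}

def principle_hits(message: str) -> dict:
    """Return which persuasion principles a message appears to invoke."""
    text = message.lower()
    return {
        principle: [kw for kw in keywords if kw in text]
        for principle, keywords in CUES.items()
        if any(kw in text for kw in keywords)
    }

msg = "The CEO needs the transfer immediately, everyone has already approved."
hits = principle_hits(msg)
print(hits)  # authority, urgency and consensus cues all fire
```

      <p>A real awareness tool would combine such cues with sender verification; a single matched keyword is obviously not proof of an attack.</p>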
    </sec>
    <sec id="sec-3">
      <title>3. Deepfake Technology</title>
      <p>DeepFakes leverage generative adversarial networks (GANs) to produce highly realistic synthetic
content. While they first gained attention in entertainment, they have since been misused for harmful
activities such as disinformation, fraud, and social engineering. By replicating voices, facial features,
and writing styles, deepfakes pose a serious threat to human trust and security.</p>
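      <p>The adversarial setup behind GANs can be summarized by the standard minimax objective, in which a generator G and a discriminator D are trained against each other (this is the common formulation from the GAN literature, not notation introduced in this paper):</p>

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

      <p>The discriminator learns to separate real samples x from generated ones G(z), while the generator learns to fool it; at convergence the synthetic output becomes statistically hard to distinguish from real media.</p>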
      <sec id="sec-3-1">
        <title>3.1. Deepfake and Social Engineering Attacks</title>
        <p>The use of this technology allows attackers to impersonate executives, government officials,
or even family members with alarming accuracy. For instance, cybercriminals can generate
a deepfake audio clip of a "CEO" instructing an employee to transfer funds or reveal sensitive
data. This form of social engineering exploits trust, making it difficult to detect fraud.
DeepFakes have the power to create false events or statements, leading to confusion and a
breakdown of trust. For example, a digitally altered video showing a political leader making
provocative comments could spark social turmoil.</p>
        <p>DeepFakes can exploit emotional vulnerabilities by creating fake distress calls or messages
from loved ones, pressuring victims to act impulsively to protect their loved ones.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Real-world incidents</title>
        <p>1. CEO Fraud: A UK-based energy firm lost more than $200,000 after attackers used deepfake audio to impersonate the CEO, a real-world incident in which an AI-generated voice was used to steal hundreds of thousands of dollars.
2. Political Disinformation: A deepfake video of a political candidate went viral during an election campaign, influencing public opinion.
3. A deepfake video of the Ukrainian President was circulated, falsely showing him surrendering to enemy forces.
4. Deepfake videos of a CEO have been used in cryptocurrency scams, showing him "promoting" fake investment opportunities.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Social Engineering Exploitation by DeepFake</title>
      <sec id="sec-4-1">
        <title>4.1. Authority</title>
        <p>Many people tend to follow requests from perceived authority figures. Deepfakes create realistic
impersonations of leaders or managers. The psychological effect on humans is greatly amplified by
the accuracy of impersonation.</p>
        <p>Deepfakes exploit a fundamental aspect of human psychology: our tendency to trust and obey
authority figures. This phenomenon, rooted in social conditioning and cognitive biases, makes people
more susceptible to manipulation when they believe they are interacting with a legitimate leader,
manager, or other trusted individuals.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Urgency and Fear</title>
        <p>Deepfakes can create scenarios that evoke urgency or fear, such as a fake emergency requiring immediate action. They can be used to manipulate emotions by fabricating emergencies that pressure victims into acting quickly. These scams exploit human psychology in the face of fear or urgency: under pressure, people are less likely to verify details and more likely to comply.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Trust and Familiarity</title>
        <p>Deepfakes exploit trust by mimicking familiar voices or faces, making it difficult for victims to question authenticity. When people hear a familiar voice or see a convincing video of someone they know, whether it’s a CEO, a government official, or a family member, they instinctively assume it’s real. This illusion of authenticity lowers skepticism and makes victims more likely to comply with requests, whether it’s transferring money, sharing confidential information, or following misleading instructions.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Simulation with the DeepFaceLive open-source application</title>
      <p>To show the potential of image swapping in live-streaming applications, a simulation was carried out using deepfake tools. The program uses machine learning models for tasks like face detection in the frame, facial landmark detection, and face replacement. These models require intensive computation, which a graphics accelerator can handle tens of times faster than an ordinary processor.</p>
      <p>The architecture is based on the Model-View-Controller pattern. The camera source generates frames and sends them to the next module for processing, preserving the final frames-per-second rate between slower and faster modules. The final output module renders the stream to the screen with some delay, which is needed to synchronize the sound.</p>
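      <p>The pipeline above can be sketched as a producer and consumer joined by a bounded queue: when the processing stage is slower than the camera, stale frames are dropped rather than stalling the source. This is a minimal illustration of that rate-reconciliation idea, not code from the actual tool; the sleep durations simulate stage speeds.</p>

```python
import queue
import threading
import time

# Camera -> processing -> output sketch. A bounded queue between stages
# drops stale frames, so a slow processing stage never stalls the camera.
frames = queue.Queue(maxsize=1)
results = []

def camera(n_frames: int):
    for i in range(n_frames):
        try:
            frames.put_nowait(i)       # publish the newest frame
        except queue.Full:
            pass                       # processor is busy: drop the frame
        time.sleep(0.001)              # fast source for the demo
    frames.put(None)                   # end-of-stream marker

def processor():
    while True:
        frame = frames.get()
        if frame is None:
            break
        time.sleep(0.005)              # simulate slower face-swap work
        results.append(frame)          # "render" the processed frame

t1 = threading.Thread(target=camera, args=(50,))
t2 = threading.Thread(target=processor)
t1.start(); t2.start()
t1.join(); t2.join()
print(f"processed {len(results)} of 50 frames")  # fewer than 50: drops occurred
```

      <p>Real implementations additionally buffer the audio path, which is why the output carries the small delay mentioned above.</p>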
      <p>A source camera supporting at least 5 frames per second (FPS) is used, with a resolution of 1920 x 1080. The source frame is shown in Figure 2, where the real image is blurred to protect the identity of the person behind the camera.</p>
      <p>To preserve the quality of the streaming and thus create a more credible picture, a graphics card from the GeForce RTX 3060 family is used. The face aligner runs at 180 FPS to align the face before swapping. Face alignment has been in use for quite some time, with growing concerns about its misuse for fraud.</p>
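      <p>Conceptually, the alignment step rotates and scales the face crop so the eye landmarks sit on a horizontal line at a fixed distance, the canonical pose face-swap models expect. The following numpy sketch (landmark coordinates are made up for illustration) computes the two parameters of that similarity transform:</p>

```python
import numpy as np

def eye_alignment(left_eye, right_eye, target_dist=64.0):
    """Return angle (degrees) and scale that normalise the eye pair."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))   # rotation needed to level the eyes
    scale = target_dist / np.hypot(dx, dy)   # scale to the canonical eye gap
    return angle, scale

# Example: eye positions detected in a source frame (invented values).
angle, scale = eye_alignment(left_eye=(100.0, 120.0), right_eye=(160.0, 100.0))
print(round(angle, 1), round(scale, 3))  # → -18.4 1.012
```

      <p>A production aligner applies the resulting rotation and scaling as a warp over the whole crop, typically using all detected landmarks rather than the eye pair alone.</p>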
      <p>After the face alignment, a face swap as in Figure 4 is performed using a picture of a movie actress, utilizing the InsightFace model for facial recognition and manipulation. With this application, anyone can change faces in images or videos, creating engaging content. If fun were the only use people made of the swapping process, the security issues would be a thing of the past. As described in the real-world incidents above, face swapping has been exploited by black-hat hackers and hacktivists for financial or political gain.</p>
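      <p>The final paste-back of the generated face into the target frame can be pictured as a masked alpha blend. The numpy sketch below is a simplification: real pipelines derive the blending mask from facial landmarks, whereas here it is just a hard rectangle standing in for the face region.</p>

```python
import numpy as np

def paste_back(frame: np.ndarray, face: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Alpha-blend `face` into `frame` using `mask` with values in [0, 1]."""
    mask3 = mask[..., None]  # broadcast the mask over the RGB channels
    return (mask3 * face + (1.0 - mask3) * frame).astype(frame.dtype)

frame = np.zeros((4, 4, 3), dtype=np.uint8)      # dark target frame
face = np.full((4, 4, 3), 200, dtype=np.uint8)   # bright swapped face
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                             # face region, zero border
out = paste_back(frame, face, mask)
print(out[2, 2], out[0, 0])  # centre takes the face, border keeps the frame
```

      <p>Feathering the mask edges (values between 0 and 1) is what hides the seam between the swapped face and the original skin.</p>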
      <p>The next step is integrating the swapped image into the streaming output. Bilinear interpolation with RCT color transfer is used, with the frame adjuster running at 227 FPS, to produce the swapped video shown in Figure 5. The stream output can easily be uploaded or shared on a local or online web server.</p>
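      <p>The RCT mode mentioned above is a Reinhard-style color transfer: each channel of the swapped face is shifted so its mean and standard deviation match the surrounding frame. A minimal sketch follows; real implementations usually work in Lab color space, while plain RGB is used here to keep the example short.</p>

```python
import numpy as np

def rct_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match per-channel mean and std of `source` to `target` (Reinhard-style)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):  # per-channel statistics match
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std() + 1e-8
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
face = rng.integers(0, 120, (8, 8, 3)).astype(np.uint8)     # dark source face
scene = rng.integers(100, 255, (8, 8, 3)).astype(np.uint8)  # bright target frame
matched = rct_transfer(face, scene)
```

      <p>After the transfer, the pasted face inherits the lighting statistics of the scene, which is what makes the composite look natural on a live stream.</p>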
      <p>Audio for public individuals can easily be found on the internet. To train the system, at most 20 minutes of medium-quality audio material from the target person may be used to match the video with the audio.</p>
    </sec>
    <sec id="sec-6">
      <title>5.1. Possible misuse</title>
      <p>Using faked or compromised accounts, the content may be injected so that it appears legitimate. People following their favorite actress may accept any false content as true and possibly fall victim to scams or phishing attacks.</p>
      <p>The model does not need to be trained: there are public face models that can swap any face without training. However, if a particular face is to be used for swapping a celebrity, it takes about one day to train the model using an RTX GPU. That involves gathering four to five thousand samples of the source face with varying lighting, facial expressions, head and eye direction, and distance from the camera. Filtering this set down to no more than two thousand samples is enough.</p>
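      <p>One plausible criterion for the filtering step just described is to discard blurry captures and keep only sharp samples. The sketch below scores sharpness with the variance of a Laplacian response, a common focus measure; the threshold and the toy images are invented, not taken from the actual workflow.</p>

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly to edges and fine detail.
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian of a grayscale image (higher = sharper)."""
    h, w = gray.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):                # valid-mode convolution by shifting
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(resp.var())

rng = np.random.default_rng(1)
sharp = rng.integers(0, 255, (32, 32)).astype(np.float64)  # high detail
blurry = np.full((32, 32), 128.0)                          # flat, no detail
keep = [name for name, img in [("sharp", sharp), ("blurry", blurry)]
        if sharpness(img) > 100.0]
print(keep)  # only the detailed sample survives the filter
```

      <p>Similar scores over pose and lighting diversity would let the four-to-five-thousand raw captures be reduced to the two thousand best training samples.</p>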
      <p>Spreading political or religious content using deepfakes may influence a lot of people, which can lead to misinformation, fraud, and harm to individuals.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Protection</title>
      <p>Social engineering combined with phishing creates a powerful attack if used wisely by threat actors. Because psychological methods involve contact between individuals, attackers use a series of techniques to gain trust, such as:</p>
      <p>• Not asking frequent and long questions, but short ones, to collect basic information from several users while maintaining trust.</p>
      <p>• Keeping requests and questions reasonable. For example, a question like "Is the director authorized?" is more reasonable than "Can you check the director's printer to see if he printed the board meeting documents?"</p>
      <p>• Flirting to facilitate the process of gathering information.</p>
      <p>• Collecting as much information as possible without arousing suspicion.</p>
      <sec id="sec-7-1">
        <title>6.1. Training and Awareness</title>
        <p>Educating individuals about these technologies and their potential misuse is critical. Training programs should focus on recognizing warning signs and verifying suspicious requests, emails, phone calls, video calls, or messages. Information security agencies should frequently publish video training on their social network channels on how to protect against social engineering attacks, especially new emerging threats. Cybersecurity hygiene must be extended and made reachable to wider groups of people who are not necessarily IT professionals.</p>
      </sec>
      <sec id="sec-7-2">
        <title>6.2. Solution using technology</title>
        <p>Technology can help in addressing security issues. AI detection tools can analyze media for signs of manipulation. Protecting user accounts from compromise with multi-factor authentication can prevent unauthorized and misused access. Cloud and AI companies should advocate responsible AI usage and implement policies to prevent the misuse of their tools. A deepfake-detection tool depends on artifacts in the final picture, such as a flickering face, an abruptly clipping face mask, or irregular colors. In one case, deepware.ai did detect fake pictures, but in a second case, where a good-quality picture was used, the program did not detect any fake. Training a model with a higher face resolution gives a streaming output of better quality and can make it undetectable by deepfake detectors. We cannot depend solely on technology, as adversaries can also exploit it.</p>
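        <p>A toy version of one heuristic mentioned above: a swapped face that flickers produces abnormally large frame-to-frame differences in the face region, while genuine video changes smoothly. The frame data and threshold below are invented for illustration; production detectors combine many such signals with learned models.</p>

```python
import numpy as np

def flicker_score(face_frames: np.ndarray) -> float:
    """Mean absolute difference between consecutive face crops."""
    diffs = np.abs(np.diff(face_frames.astype(np.float64), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(2)
base = rng.integers(0, 255, (16, 16)).astype(np.float64)
smooth = np.stack([base + t for t in range(10)])   # gradual, natural drift
flicker = np.stack([base, base + 80] * 5)          # face mask popping in and out
print(flicker_score(smooth), flicker_score(flicker))  # → 1.0 80.0
```

        <p>As noted above, a well-trained high-resolution swap suppresses exactly these artifacts, which is why detection alone cannot be relied upon.</p>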
      </sec>
      <sec id="sec-7-3">
        <title>6.3. Regulatory Frameworks</title>
        <p>Government agencies and organizations must establish stronger policies to address the ethical and legal implications of deepfake technology.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Conclusion</title>
      <p>Deepfake technology marks a major advancement in social engineering attacks, taking advantage of human weaknesses with remarkable accuracy. As these threats grow more advanced, addressing them requires a comprehensive strategy that integrates awareness, technological solutions, and regulatory measures. By examining the connection between AI and human psychology, we can create stronger defenses against this evolving challenge. Security awareness combined with technology can, as with other social engineering attacks, prevent these attacks from happening.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-10">
      <title>References</title>
      <p>[4] Deepware. (2023). Deepware Scanner: AI-powered deepfake detection [Computer software]. URL: https://deepware.ai/scanner</p>
      <p>[5] European Parliamentary Research Service. (2020). The ethics of artificial intelligence: Issues and initiatives (PE 634.452). European Parliament. URL: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf</p>
      <p>[6] Heavybit. (2023, September 28). AI, data privacy, and security in IAM [Blog post]. Heavybit Library. URL: https://www.heavybit.com/library/article/ai-data-privacy-security-iam</p>
      <p>[7] iPerov. (2022). DeepFaceLive: Real-time face swap for PC streaming or video calls [Computer software]. GitHub. URL: https://github.com/iperov/DeepFaceLive</p>
      <p>[8] Kim, P. (2018). The hacker playbook 3: Practical guide to penetration testing. Secure Planet LLC.</p>
      <p>[9] Maras, M. H., &amp; Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and deepfakes. International Journal of Evidence &amp; Proof, 23(4), 255–273. URL: https://doi.org/10.1177/1365712718807226</p>
      <p>[10] Mirsky, Y., &amp; Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41. URL: https://doi.org/10.1145/3425780</p>
      <p>[11] Whitman, M. E., &amp; Mattord, H. J. (2021). Principles of information security (6th ed.). Cengage Learning.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Albanian Cyber Academy, 4th Edition. (2020). URL: https://aksk.gov.al/en/albanian-cyberacademy-4th-edition/</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Chapple, M., Seidl, D., &amp; Greenley, R. (2023). CompTIA Security+ study guide with over 500 practice test questions: Exam SY0-701 (9th ed.). Sybex.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Chesney, R., &amp; Citron, D. K. (2018, July 14). Deep fakes: A looming challenge for privacy, democracy, and national security. 107 California Law Review 1753 (2019), U of Texas Law, Public Law Research Paper No. 692, U of Maryland Legal Studies Research Paper No. 2018-21. Available at SSRN: https://ssrn.com/abstract=3213954 or http://dx.doi.org/10.2139/ssrn.3213954</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>