<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>DISARM: Twitter Case Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniele Granata</string-name>
          <email>daniele.granata@uniparthenope.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Nardone</string-name>
          <email>roberto.nardone@uniparthenope.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Naples "Parthenope", Isola C4, Centro Direzionale</institution>
          ,
          <addr-line>80143 Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Social networks have become a major source of information, but their popularity brings significant challenges, including disinformation and security threats. This paper presents an architecture that leverages the DISARM framework: a real-time analytics system capable of integrating data from multiple sources, detecting anomalies, and assessing risks. The proposal combines advanced analytical techniques with a dedicated Policy Engine to promptly monitor, detect, and report suspicious activities, assessing their potential risks. The validation of the proposal was conducted through a case study focused on Twitter, which was chosen for its extensive usage and susceptibility to disinformation campaigns. Using different datasets, we demonstrate that the architecture effectively detects anomalies such as bot-driven activities, promotional campaigns, and suspicious behaviors in real time.</p>
      </abstract>
      <kwd-group>
        <kwd>Disinformation detection</kwd>
        <kwd>Real-time analytics</kwd>
        <kwd>Twitter analysis</kwd>
        <kwd>SIEM</kwd>
        <kwd>DISARM framework</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>CEUR Workshop Proceedings, ISSN 1613-0073.</p>
      <p>In this work, we embed the DISARM framework into the proposed real-time monitoring architecture, mapping its tactics and countermeasures to concrete detection rules and automated responses within a Security Information and Event Management (SIEM) system and a Security Orchestration, Automation and Response (SOAR) platform. This allows us to detect suspicious patterns, such as rapid message amplification or coordinated low-profile account activity, and to trigger counteractions consistent with DISARM's response definitions. Moreover, we have conducted different experiments in different scenarios, using Twitter as the social media platform for analysis.</p>
      <p>The rest of the paper is organized as follows. Section 2 presents the state-of-the-art contributions, describing various datasets used for detecting disinformation and fake news. Section 3 introduces the proposed architecture based on DISARM, detailing the detection rules and corresponding responses. Section 4 explores real-world scenarios using different datasets and, finally, Section 5 summarizes the conclusions and outlines future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Disinformation detection systems are part of a broader effort to ensure security and integrity in various domains. Research in securing smart grids and health information sharing provides insights into securing digital ecosystems, which could be adapted to safeguard social media platforms against malicious content and disinformation campaigns [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Social media offers easy access and rapid dissemination of news, but it also facilitates the spread of fake news and disinformation, which can have harmful societal impacts. Detecting disinformation is challenging due to its misleading nature and the need to analyze social engagement data, which is often large, unstructured, and noisy. Some authors [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] reviewed detection methods, challenges, and future research directions to improve fake news identification on social media. In the literature, there are some datasets enumerating (in an anonymised way) data from social networks (e.g., Twitter, Facebook) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The same authors propose a dataset (FakeNewsNet, https://github.com/KaiDMML/FakeNewsNet) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which includes an overview of social media, news, and spatiotemporal information. This dataset helps analysts study how and when fake news evolves on the Web using events and timestamps.
      </p>
      <p>
        Other examples involve fake news incidents, like FakeNewsIndia [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which comprises 4,803 records reported by six fact-checking websites in India from June 2016 to December 2019. It includes associated data such as 5,031 tweets and 866 YouTube videos linked to these incidents. The dataset enables impact evaluation on Twitter and YouTube using engagement-based metrics, with machine learning models predicting content popularity more effectively on YouTube than on Twitter. Another relevant dataset is BuzzFace [9], a comprehensive collection of Facebook data refined into four categories: mostly true, mostly false, a mixture of true and false, and no factual content. It incorporates Facebook comments, reactions, and content accessible via the platform's Graph API, along with additional features such as article body text, images, links, and plugin comments from Facebook and Disqus. Embedded tweets included in the dataset provide opportunities for expansion across other social media platforms. With over 1.6 million text items, it is significantly larger than other datasets.
      </p>
      <p>Most researchers are focusing on detecting fake news and disinformation threats using machine learning approaches and sentiment analysis. As an example, Khalil et al. [10] provide a detailed analysis of a social media-related dataset and, therefore, of the associated detection model. The authors underline how the text representation impacts the accuracy of deep learning models and how hand-crafted features are important for obtaining accurate results.</p>
      <p>Another complementary direction is leveraging Security Information and Event Management (SIEM) systems to monitor disinformation campaigns. Various platforms and technologies have been developed to secure digital systems from potential vulnerabilities. For example, effective SIEMs have been designed with the support of Digital Twins of the system [11]. While SIEM is traditionally used for security threats, its capability to analyze logs and network traffic and detect anomalies could be extended to track patterns of coordinated disinformation. Existing studies exploring this intersection are limited, but this area holds potential for future research and practical applications. To address this gap, we propose an architecture that integrates multiple information sources, detects anomalies (e.g., fake news), and implements efficient response mechanisms. The following section provides a detailed explanation of this proposed architecture.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Architecture</title>
      <p>Our architecture is designed to detect disinformation-related anomalies on social networks in real time by directly leveraging the DISARM framework. While DISARM was initially proposed as a structured vocabulary for describing disinformation incidents, we extend its use to guide both detection and response mechanisms within our system. By leveraging the structured approach of DISARM, the architecture effectively identifies irregular patterns and potential security threats through different analytical techniques. The resulting framework seamlessly integrates with existing data streams, enabling continuous monitoring and real-time analysis of incoming information.</p>
      <p>
        As illustrated in Figure 1, data is sourced from social network APIs (e.g., YouTube API, Twitter). A custom script collects structured information from these APIs, such as posts, replies, and video metadata, and publishes the social media data to Kafka. As stated above, the DISARM framework [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is divided into a Red and a Blue framework. The Red framework collects the techniques, tactics, and procedures used to create and spread disinformation, while the Blue framework focuses on security countermeasures and response policies. In this case, the Red framework is used to collect the relevant techniques needed to identify related threats and, therefore, the Security Information and Event Management (SIEM) rules that mitigate these threats. OpenSearch consumes the messages stored in Kafka, checks which rules fire on the collected data, and indexes the resulting events. On the other hand, the Security Orchestration, Automation and Response (SOAR) component stores the related response policies based on the Blue DISARM framework and suggests automated responses triggered by security events. This mechanism is based on the fact that DISARM is the common knowledge base between SIEM rules and security responses. Accordingly, when a specific security event is verified, the SOAR platform (i.e., Shuffle) can apply the appropriate countermeasures to mitigate the threats. In this part, security operators check the reports produced by the SIEM and SOAR components. The highest abstraction level is the Security Incident Response Planning (SIRP), which collects the incidents from security operators (even belonging to different infrastructures) and sends feedback back to security operators, considering different data sources. Figure 2 summarizes the overall process.
      </p>
      <p>It is necessary to underline that the proposed architecture is structured into two layers. The upper layer handles the API component, SIEM, SOAR, and SIRP in a technology-agnostic way, while the lower layer focuses on specific technologies like DISARM, OpenSearch, and Shuffle. The next subsection will explain the rules identified through the DISARM framework, covering both detection mechanisms in SIEM and security responses in SOAR.</p>
      <sec id="sec-3-1">
        <title>3.1. DISARM-based detection rules</title>
        <p>The creation of rules for anomaly detection is based on the DISARM framework. This process, integrated within the Policy Engine component, involves: i) analyzing DISARM tactics for specific objectives (e.g., detecting disinformation sources on social networks); ii) identifying the different threats (i.e., threat modeling [12]) affecting the platform with respect to disinformation; iii) correlating each threat with a DISARM tactic; iv) identifying SIEM rules that mitigate the related threats. Table 1 lists part of the resulting rules.</p>
        <p>For example, the Identification of Trending Topics or Hashtags (DISARM ID T0080.003) is a tactic that can lead to security issues such as the rapid spread of suspicious content. To detect this phenomenon, we can define a threshold for identifying anomalies in sharing behavior. Formally, let S(c, Δt) represent the number of shares of content c within a time interval of Δt minutes. An anomaly is flagged if:</p>
        <p>S(c, Δt) &gt; τ</p>
        <p>where τ is the predefined sharing threshold. In other words, if a content c is shared more than τ times within Δt minutes, it is considered anomalous behavior that may indicate the rapid dissemination of suspicious content.</p>
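As an illustration of how this condition can be checked outside a SIEM, the following Python sketch flags content whose share count within any sliding Δt-minute window exceeds τ. The function name and input layout are our own; the default thresholds follow the example values used in the text (100 shares in 10 minutes).

```python
from collections import defaultdict
from datetime import timedelta


def flag_rapid_spread(shares, tau=100, delta_minutes=10):
    """Flag content for which S(c, Δt) > τ, i.e. more than `tau` shares
    fall inside some `delta_minutes`-minute window.

    `shares` is a list of (content_id, timestamp) pairs (illustrative layout).
    """
    window = timedelta(minutes=delta_minutes)
    by_content = defaultdict(list)
    for content_id, ts in shares:
        by_content[content_id].append(ts)

    anomalies = set()
    for content_id, stamps in by_content.items():
        stamps.sort()
        lo = 0
        for hi, ts in enumerate(stamps):
            # shrink the window from the left so stamps[lo..hi] spans <= Δt
            while ts - stamps[lo] > window:
                lo += 1
            if hi - lo + 1 > tau:  # S(c, Δt) > τ
                anomalies.add(content_id)
                break
    return anomalies
```

In a deployment, the same logic would be expressed as a SIEM correlation rule rather than a standalone script.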
        <p>This approach clearly defines the detection criteria and facilitates the implementation of automated monitoring algorithms to promptly identify potential security threats. Another example relates to low-profile users. Let a low-profile user be defined as a user u with a follower count F_u lower than τ_F, where τ_F is a predefined threshold for the number of followers (e.g., τ_F = 100). Additionally, let the engagement rate of the user be E_u, the number of interactions per hour. An alert should be triggered when the following condition is met:</p>
        <p>F_u &lt; τ_F and E_u &gt; τ_E</p>
        <p>where τ_E is a predefined threshold for engagement (e.g., τ_E = 100).</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption><p>DISARM tactics, associated threats, and their implementation as SIEM detection rules.</p></caption>
          <table>
            <thead>
              <tr><th>DISARM Tactic (ID)</th><th>Associated Threat</th><th>Implementation in SIEM</th></tr>
            </thead>
            <tbody>
              <tr><td>Identify Trending Topics/Hashtags (T0080.003)</td><td>Rapid spread of suspicious content</td><td>Set thresholds for rapid spread (e.g., content shared more than 100 times in 10 minutes).</td></tr>
              <tr><td>Create Anonymous Accounts (T0090.001)</td><td>Abnormal engagement from low-profile users</td><td>Configure alerts when low-profile users (e.g., fewer than 100 followers) receive high engagement (e.g., more than 100 interactions per hour).</td></tr>
              <tr><td>Identify Existing Conspiracy Narratives/Suspicions (T0081.005)</td><td>Presence of suspicious content (keywords, hashtags, links)</td><td>Create a list of known fake news keywords and trigger alerts when they are detected in posts.</td></tr>
              <tr><td>Reuse Existing Content (T0084)</td><td>Frequent content modifications</td><td>Trigger alerts when a post is modified multiple times (e.g., more than 3 times in 24 hours).</td></tr>
              <tr><td>Generate Information Pollution (T0019)</td><td>Flooding of irrelevant or misleading data</td><td>Monitor for high volumes of posts with repeated irrelevant keywords and set alerts for excessive similar content (e.g., more than 50 posts with identical patterns in 1 hour).</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Note that threshold values can be refined through behavioral analysis of the platform, but fixed values are used here for illustration.</p>
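The low-profile condition above reduces to a simple predicate. A minimal sketch, with the illustrative thresholds τ_F = τ_E = 100 from the text (the function name is our own):

```python
def low_profile_alert(followers, interactions_per_hour, tau_f=100, tau_e=100):
    """Alert when a low-profile user (F_u < tau_f) shows abnormally
    high engagement (E_u > tau_e)."""
    return followers < tau_f and interactions_per_hour > tau_e
```

For instance, `low_profile_alert(50, 150)` fires, while the same engagement from an account with 500 followers does not.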
      </sec>
      <sec id="sec-3-2">
        <title>3.2. DISARM-based related responses</title>
        <p>Based on the alerts detected by the SIEM system, a corresponding set of related responses has been compiled using information from the DISARM framework. Table 2 maps the identified threats to their associated DISARM IDs and provides the appropriate countermeasures, along with their corresponding DISARM countermeasure IDs, to mitigate or respond to these threats effectively.</p>
        <table-wrap id="tab2">
          <label>Table 2</label>
          <caption><p>Mapping of detected threats to DISARM countermeasures.</p></caption>
          <table>
            <thead>
              <tr><th>Threat</th><th>Threat DISARM ID</th><th>Countermeasure DISARM ID</th><th>Countermeasure Specification</th></tr>
            </thead>
            <tbody>
              <tr><td>Rapid spread of suspicious content</td><td>T0080.003</td><td>C00126</td><td>Send the triggered alert for rapid spread of content to security operators.</td></tr>
              <tr><td>Abnormal engagement from low-profile users</td><td>T0090.001</td><td>C00070</td><td>Block access to disinformation resources.</td></tr>
              <tr><td>Presence of suspicious content (keywords, hashtags, links)</td><td>T0081.005</td><td>C00126</td><td>Use automated systems to detect and flag suspicious keywords and phrases, and send an alert to security operators.</td></tr>
              <tr><td>Frequent content modifications</td><td>T0084</td><td>C00074</td><td>Monitor and track the frequency of content modifications to detect disinformation attempts.</td></tr>
              <tr><td>Flooding of irrelevant or misleading data</td><td>T0019</td><td>C00074</td><td>Identify and delete or rate-limit identical content.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>As evidence, a way to respond to the detection of rapid spread of suspicious content is to send the triggered alert to security operators, who can monitor the presence of threats and, accordingly, block malicious behaviors. Similarly, identifying abnormal engagement from low-profile users helps detect suspicious activities. For example, when accounts with fewer than 100 followers receive significant interactions, such as over 100 in an hour, the system intervenes by blocking access to disinformation resources. This approach disrupts potential coordinated or automated efforts, minimizing the impact of orchestrated attacks. Another example relates to the flooding of irrelevant or misleading data, which undermines the reliability of information on a platform. The system identifies instances of repetitive posts, such as 50 or more identical messages shared within an hour, and takes steps to delete redundant content or impose rate limits. This ensures that the platform remains focused on credible information, preventing legitimate content from being overshadowed by disinformation or spam. It is important to note that, for brevity, only a selection of threats and their corresponding countermeasures are presented here; the complete list is available upon request.</p>
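The threat-to-countermeasure mapping of Table 2 can be encoded as a small lookup that a SOAR playbook (e.g., a Shuffle workflow) could consult when an alert fires. The dictionary mirrors Table 2; the fallback label is our own illustration:

```python
# Threat technique ID (DISARM Red) -> countermeasure ID (DISARM Blue), per Table 2.
RESPONSES = {
    "T0080.003": "C00126",  # rapid spread of suspicious content -> alert operators
    "T0090.001": "C00070",  # abnormal low-profile engagement -> block access
    "T0081.005": "C00126",  # suspicious keywords/hashtags/links -> flag and alert
    "T0084":     "C00074",  # frequent content modifications -> monitor and track
    "T0019":     "C00074",  # flooding of irrelevant data -> delete or rate-limit
}


def respond(threat_id):
    """Return the countermeasure ID for a detected threat technique,
    escalating unknown techniques to a human operator (our convention)."""
    return RESPONSES.get(threat_id, "escalate-to-operator")
```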
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Twitter Case Study</title>
      <p>To evaluate the effectiveness of the rules and implement the anomaly detection workflow within a specific architecture, we simulated the social network data using different datasets. The usage of a dataset simplifies data collection and aligns with the article's focus: proposing rules and response policies to combat disinformation. It is worth noticing that the same results can be obtained by scraping data using a common API provided by a social network platform. To do this, we conducted two experiments using datasets with different characteristics (as discussed in Section 2):
• Tweet-Level Analysis: using a dataset of raw tweets, requiring more complex processing.
• High-Level Analysis: using aggregated data from social network APIs, which simplifies implementation.</p>
      <p>Each approach has advantages and trade-offs, as discussed in the following paragraphs. It is important to note that all social media data has been anonymised by the dataset authors.</p>
      <sec id="sec-4-1">
        <title>4.1. Experiment 1: Tweet-Level Analysis (Raw Data)</title>
        <p>The Twitter News Dataset [13] contains 5,234 news events collected from Twitter in 2014. The table below summarizes the dataset:</p>
        <table-wrap id="tab3">
          <label>Table 3</label>
          <caption><p>Files composing the Twitter News Dataset.</p></caption>
          <table>
            <thead>
              <tr><th>File Name</th><th>Description</th></tr>
            </thead>
            <tbody>
              <tr><td>events.csv</td><td>Contains details about events: event ID (numeric identifier of the event, 1 to 5,234); date (event date in YYYY-MM-DD format); total keywords (number of keywords associated with the event); total tweets (number of tweets related to the event); keywords (keywords for the event, separated by semicolons).</td></tr>
              <tr><td>tweets.csv</td><td>Contains tweet IDs and their corresponding event IDs: tweet ID (numeric identifier of the tweet, usable via Twitter's REST API); event ID (identifier linking the tweet to its event).</td></tr>
              <tr><td>cluster_labels.txt</td><td>Provides cluster labels for events, ranging from 0 to 19.</td></tr>
              <tr><td>time_resolutions.txt</td><td>Provides temporal resolutions for events, expressed in minutes.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>This dataset is useful for analyzing news events and understanding the dynamics of dissemination on Twitter. For each profile, the dataset provides valuable information, such as the number of followers and tweets. Some rules (described above in Table 1) have been implemented in OpenSearch to show the applicability of our approach.</p>
        <sec id="sec-4-1-3">
          <title>4.1.1. Rule1: Rapid spread of suspicious content</title>
          <p>As an example, considering the chosen dataset, we implemented Rule 1, which detects whether any tweet has been shared at least 10 times within a 30-minute window.</p>
          <p>It is important to note that this rule, for simplicity, uses the predefined time window provided by
the dataset as input. However, in a production system, the analysis would be performed in real-time,
dynamically determining the best time windows for detection as well as the optimal thresholds.</p>
          <table-wrap id="tab4">
            <label>Table 4</label>
            <caption><p>Tweets flagged by Rule 1 (at least 10 repetitions within a 30-minute window).</p></caption>
            <table>
              <thead>
                <tr><th>Date</th><th>Author</th><th>Tweet</th><th>Repetitions</th></tr>
              </thead>
              <tbody>
                <tr><td>2009-05-01 21:30:00</td><td>Wolverine811</td><td>You guys HAVE to go to this site! - UNLIMITED FREE RINGTONES!!</td><td>13</td></tr>
                <tr><td>2009-05-03 00:30:00</td><td>007wisdom</td><td>"All that we are is the result of what we have thought" Buddah ... so think positive fabulous twitterverse</td><td>10</td></tr>
                <tr><td>2009-05-03 17:30:00</td><td>TheOrigin953</td><td>you guys are going to LOVE me! DVD QUALITY of wolverine streaming online! no need to download or pay to watch http://tinyurl.com/cp5yhr</td><td>10</td></tr>
                <tr><td>2009-05-03 20:30:00</td><td>bplusgrl445</td><td>Hey Twitter'ers! Im new to this but ive seen my friends do it Plz Follow me n I'll follow u back</td><td>13</td></tr>
                <tr><td>2009-05-09 22:00:00</td><td>ukdjgrl210</td><td>FREE UNLIMITED RINGTONES!!! - http://tinyurl.com/freeringring - USA ONLY Awesome 4 iphone</td><td>13</td></tr>
                <tr><td>2009-05-16 20:30:00</td><td>JennE669</td><td>TWitter!!! Finally joined Follow me i'll follow u :d</td><td>13</td></tr>
                <tr><td>2009-05-18 04:00:00</td><td>lnBpun</td><td>just a really really boring day</td><td>16</td></tr>
                <tr><td>2009-05-21 23:30:00</td><td>wowlew</td><td>isPlayer Has Died! Sorry</td><td>10</td></tr>
                <tr><td>2009-05-22 02:30:00</td><td>wowlew</td><td>isPlayer Has Died! Sorry</td><td>10</td></tr>
              </tbody>
            </table>
          </table-wrap>
          <p>As shown in Table 4, some tweets exhibited typical spam characteristics, including promotional language, urgency, and an external link. This pattern suggests automated posting or a coordinated campaign aimed at mass distribution.</p>
          <p>We also implemented a response strategy (as described in Table 2). In this case, we developed a trigger in OpenSearch that uses Shuffle to schedule email alerts for the social network security administrator.</p>
        </sec>
        <sec id="sec-4-1-7">
          <title>4.1.2. Rule2: Abnormal engagement from low-profile users</title>
          <p>Another example relates to Rule 2, which uses OpenSearch to identify users with few followers (e.g., fewer than 100) who exhibit high interaction activity on the social media platform within a short period.</p>
          <p>The rule cannot be fully implemented, since the dataset does not include additional information such as follower counts or user details. In this case, we can only identify the most active users based on 30-minute time slots. Without follower counts, we do not have enough data to distinguish between influencers and potentially malicious activity. Another issue is that evaluating user engagement requires analyzing interactions: not just tweets but also replies, likes, and retweets. The dataset does not provide the data necessary to trigger the rule.</p>
          <p>For the second experiment, we used the Twitter News Dataset 2020 [14], which provides pre-aggregated data (e.g., as the Twitter API offers). Unlike the raw dataset, these APIs provide: i) engagement metrics (likes, shares, comments); ii) influence scores (e.g., number of verified interactions); iii) user metadata (account creation date, verification status). This pre-aggregation minimizes the need for custom data processing, enabling direct use of the API statistics available in the dataset.</p>
          <p>This dataset is useful for different research purposes, particularly in understanding user engagement, sentiment analysis, and content trends on Twitter. Researchers can leverage tweet content to identify prevalent topics, assess public sentiment, and explore user interactions through retweets, replies, and likes.</p>
          <p>Additionally, the temporal aspect of this dataset allows for studies of how tweets evolve over time and how conversations spread within particular time intervals. The inclusion of tweet URLs enables the retrieval of original posts, which is valuable for further context or multimedia content analysis.</p>
          <p>This dataset is especially beneficial for analyzing social media behavior, misinformation detection, sentiment analysis, and trend forecasting. By examining tweet text and engagement features, researchers can gain insights into how information spreads across platforms, how user profiles interact with different content, and how digital discourse unfolds in real time.</p>
          <table-wrap id="tab5">
            <label>Table 5</label>
            <caption><p>Attributes of the Twitter News Dataset 2020.</p></caption>
            <table>
              <thead>
                <tr><th>Attribute</th><th>Description</th></tr>
              </thead>
              <tbody>
                <tr><td>tweet id</td><td>Unique identifier for each tweet. Useful for referencing and retrieval.</td></tr>
                <tr><td>tweet url</td><td>Direct link to the tweet, enabling further verification and context.</td></tr>
                <tr><td>content</td><td>Full text of the tweet.</td></tr>
                <tr><td>retweet count</td><td>Number of retweets, indicating tweet popularity and potential virality.</td></tr>
                <tr><td>reply count</td><td>Number of replies to the tweet, reflecting user engagement and interaction.</td></tr>
                <tr><td>like count</td><td>Number of likes, showing user approval and tweet popularity.</td></tr>
                <tr><td>created at</td><td>Timestamp of when the tweet was posted.</td></tr>
              </tbody>
            </table>
          </table-wrap>
          <p>As illustrated in the dataset, analyzing the frequency of retweets within a specified time frame is simplified by the retweet_count and created_at attributes. This allows for an efficient investigation of trends in content virality and engagement.</p>
        </sec>
        <sec id="sec-4-1-9">
          <title>4.2.1. Rule1: Rapid spread of suspicious content</title>
          <p>To verify Rule 1, we applied a query to identify users who post the highest number of tweets within a
short time frame (e.g., 30 minutes). In this case, we do not detect duplicates (as in the previous scenario)
because the dataset contains only unique tweets, with no repetitions.</p>
          <p>As a result, Table 6 presents the most active users in each 30-minute interval, sorted by the total number of tweets they published.</p>
          <p>The data analysis reveals that some users post tweets at a very high frequency, with some exceeding 80 tweets in 30 minutes. Users such as ZeetAli3, techinjektion, and kumamonz_masa were among the most frequent posters in certain time frames. These results could indicate malicious behaviours such as: i) high tweet frequency (a large number of tweets in a short period); ii) repetitive content (indicating bots or advertising campaigns); iii) coordinated activity (multiple accounts tweeting similar content simultaneously).</p>
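Such a per-interval ranking maps naturally onto OpenSearch's standard aggregation DSL. A minimal sketch of the request body, assuming the index stores the timestamp in a `created_at` date field and the handle in an `author.keyword` field (both field names are our assumption):

```python
# Request body for OpenSearch's _search endpoint: bucket tweets into
# 30-minute intervals, then rank the most active authors in each bucket.
query = {
    "size": 0,  # we only need aggregation buckets, not the tweets themselves
    "aggs": {
        "per_interval": {
            "date_histogram": {"field": "created_at", "fixed_interval": "30m"},
            "aggs": {
                # top posters per bucket, ordered by document (tweet) count
                "top_authors": {"terms": {"field": "author.keyword", "size": 3}}
            },
        }
    },
}
```

The body would be submitted via the OpenSearch search API, e.g. with the opensearch-py client's `search(index=..., body=query)`.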
        </sec>
        <sec id="sec-4-1-10">
          <title>4.2.2. Rule2: Abnormal engagement from low-profile users</title>
          <p>To verify Rule 2, we defined the number of interactions of a user u as:</p>
          <p>E_u = RT_u + L_u + RP_u + Q_u (1)</p>
          <p>where RT_u, L_u, RP_u, and Q_u denote the retweets, likes, replies, and quotes received by user u in the interval.</p>
          <p>This rule analyzes user engagement in 30-minute intervals, focusing on users with fewer than 100 followers. It ranks the top two users per interval based on total engagement, calculated as the sum of retweets, likes, replies, and quotes. Only users with at least one engagement are included. The rule also retrieves the selected users' follower counts and basic profile information. By applying this rule in OpenSearch, we have identified users who, despite having fewer followers, have high engagement levels, grouped in 30-minute intervals. Results are shown in Table 7.</p>
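The rule can be sketched in plain Python over the dataset attributes. Field names mirror Table 5; the follower map, function name, and 30-minute bucketing are our own scaffolding, and the quote term of Equation (1) would be added to the sum in the same way when available:

```python
from collections import defaultdict


def top_low_profile(tweets, followers, tau_f=100, top_n=2):
    """Rank low-profile users (fewer than tau_f followers) by total
    engagement E_u per 30-minute interval, keeping the top_n per interval."""
    engagement = defaultdict(int)
    for t in tweets:
        # truncate created_at to the enclosing 30-minute slot
        ts = t["created_at"]
        slot = ts.replace(minute=(ts.minute // 30) * 30, second=0, microsecond=0)
        # E_u: sum of the engagement counters available in the dataset
        e = t["retweet_count"] + t["like_count"] + t["reply_count"]
        engagement[(slot, t["author"])] += e

    per_slot = defaultdict(list)
    for (slot, author), e in engagement.items():
        # keep only low-profile users with at least one interaction
        if followers.get(author, 0) < tau_f and e > 0:
            per_slot[slot].append((e, author))
    return {slot: [author for _, author in sorted(ranked, reverse=True)[:top_n]]
            for slot, ranked in per_slot.items()}
```

In the deployed system, the equivalent grouping and ranking is performed by an OpenSearch aggregation rather than by application code.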
        </sec>
        <table-wrap id="tab7">
          <label>Table 7</label>
          <caption><p>Top two low-profile users by engagement per 30-minute interval.</p></caption>
          <table>
            <thead>
              <tr><th>Date &amp; Time</th><th>Usernames</th></tr>
            </thead>
            <tbody>
              <tr><td>Aug 13, 2022, 09:30</td><td>MaboTofusauce, AnoshhaKhan</td></tr>
              <tr><td>Aug 13, 2022, 10:00</td><td>Already_Taken_9, elecjazz</td></tr>
              <tr><td>Aug 13, 2022, 10:30</td><td>tomo_21148, memezon5</td></tr>
              <tr><td>Aug 13, 2022, 11:00</td><td>SinghSandhu_, specialcash1376</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>This rule helps to spot lesser-known users who receive a lot of engagement despite having a small following. This could mean:
• Viral Content from Small Accounts – A post might have struck a chord with a wide audience, getting shared well beyond the user's usual reach.
• Manipulation or Spam – The engagement could be artificial, driven by bots or coordinated efforts to inflate interactions.
• Algorithmic or Organic Boost – The platform might have pushed the content to more people, or it naturally gained traction through interactions.</p>
        <p>Understanding these cases can help distinguish between organic growth, platform dynamics, and potential manipulation. To respond to this threat, blocking access to the resources is suggested (in Table 2). In this case, we can send an alert to security administrators and suggest blocking.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Works</title>
      <p>The spread of disinformation presents a significant challenge for society, especially within social networks. Distinguishing between accurate and misleading information has become increasingly difficult. Despite existing policy frameworks like ABCDE and DISARM, there remains a gap in operational tools for practical implementation. This paper proposed an architecture based on the DISARM framework, specifically designed to monitor and counter disinformation in real time. By integrating detection mechanisms with security response policies, our approach offers an automated and scalable solution for identifying and mitigating disinformation campaigns. Through experiments using Twitter datasets, we demonstrated the effectiveness of our detection rules and response strategies in real-world scenarios, showing that our method can be applied to active social media environments. The results confirm the importance of an integrated solution combining social media monitoring, automated detection, and timely responses. Future work will focus on broadening the scope of evaluation to include diverse types of data and platforms, ensuring that the proposed approach remains adaptable and robust across various social networks.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work has been partially supported by project SERICS (PE00000014) - Spoke 2 “Misinformation
and Fakes” (CUP D43C22003050001) under the MUR National Recovery and Resilience Plan, which is
funded by the European Union - NextGenerationEU.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors used ChatGPT as a tool for grammar checking and proofreading of the manuscript. The content, ideas, and arguments presented in the paper were developed solely by the authors, who assume full responsibility for the final text.</p>
      <p>[9] G. Santia, J. Williams, BuzzFace: A news veracity dataset with Facebook user commentary and egos, Proceedings of the International AAAI Conference on Web and Social Media 12 (2018) 531–540. URL: https://ojs.aaai.org/index.php/ICWSM/article/view/14985. doi:10.1609/icwsm.v12i1.14985.
[10] M. Khalil, M. Azzeh, Fake news detection models using the largest social media ground-truth dataset (TruthSeeker), International Journal of Speech Technology 27 (2024) 389–404.
[11] L. Coppolino, R. Nardone, A. Petruolo, L. Romano, A. Souvent, Exploiting digital twin technology for cybersecurity monitoring in smart grids, in: Proceedings of the 18th International Conference on Availability, Reliability and Security, 2023, pp. 1–10.
[12] D. Granata, M. Rak, Systematic analysis of automated threat modelling techniques: Comparison of open-source tools, Software Quality Journal 32 (2024) 125–161.
[13] J. Kalyanam, M. Quezada, B. Poblete, G. Lanckriet, Prediction and characterization of high-activity events in social media triggered by real-world news, PLoS ONE 11 (2016) e0166694.
[14] DataGuy, G. Amoako, twitter-news, 2022. URL: https://www.kaggle.com/dsv/4086173. doi:10.34740/KAGGLE/DSV/4086173.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pamment</surname>
          </string-name>
          ,
          <article-title>The EU's Role in Fighting Disinformation: Crafting A Disinformation Framework</article-title>
          ,
          <source>Carnegie Endowment for International Peace</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Terp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Breuer</surname>
          </string-name>
          ,
          <article-title>Disarm: a framework for analysis of disinformation campaigns</article-title>
          ,
          <source>in: 2022 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:10.1109/CogSIMA54611.2022.9830669.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Coppolino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nardone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Petruolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <article-title>Increasing the cybersecurity of smart grids by prosumer monitoring</article-title>
          ,
          <source>IEEE Transactions on Industrial Informatics</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lax</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nardone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <article-title>Enabling secure health information sharing among healthcare organizations by public blockchain</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          <volume>83</volume>
          (
          <year>2024</year>
          )
          <fpage>64795</fpage>
          -
          <lpage>64811</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sliva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Fake news detection on social media: A data mining perspective</article-title>
          ,
          <source>ACM SIGKDD Explorations Newsletter</source>
          <volume>19</volume>
          (
          <year>2017</year>
          )
          <fpage>22</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kalyanam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Quezada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lanckriet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Poblete</surname>
          </string-name>
          ,
          <article-title>Early prediction and characterization of high-impact world events using social media</article-title>
          ,
          <source>arXiv preprint arXiv:1511.01830</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mahudeswaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media</article-title>
          ,
          <source>Big Data</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>171</fpage>
          -
          <lpage>188</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhawan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bhalla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Arora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaushal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kumaraguru</surname>
          </string-name>
          ,
          <article-title>Fakenewsindia: A benchmark dataset of fake news incidents in india, collection methodology and impact assessment in social media</article-title>
          ,
          <source>Computer Communications</source>
          <volume>185</volume>
          (
          <year>2022</year>
          )
          <fpage>130</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] G. Santia, J. Williams, Buzzface: A news veracity dataset with Facebook user commentary and egos, Proceedings of the International AAAI Conference on Web and Social Media 12 (2018) 531-540. URL: https://ojs.aaai.org/index.php/ICWSM/article/view/14985. doi:10.1609/icwsm.v12i1.14985.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] M. Khalil, M. Azzeh, Fake news detection models using the largest social media ground-truth dataset (TruthSeeker), International Journal of Speech Technology 27 (2024) 389-404.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] L. Coppolino, R. Nardone, A. Petruolo, L. Romano, A. Souvent, Exploiting digital twin technology for cybersecurity monitoring in smart grids, in: Proceedings of the 18th International Conference on Availability, Reliability and Security, 2023, pp. 1-10.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] D. Granata, M. Rak, Systematic analysis of automated threat modelling techniques: Comparison of open-source tools, Software Quality Journal 32 (2024) 125-161.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] J. Kalyanam, M. Quezada, B. Poblete, G. Lanckriet, Prediction and characterization of high-activity events in social media triggered by real-world news, PLoS ONE 11 (2016) e0166694.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] DataGuy, G. Amoako, twitter-news, 2022. URL: https://www.kaggle.com/dsv/4086173. doi:10.34740/KAGGLE/DSV/4086173.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>