<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Simulating Online Behaviours and Threat Patterns for Training against Influence Operations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ulysse Oliveri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexandre Dey</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guillaume Gadek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff1">
          <institution>IRISA, Inria, Univ Rennes</institution>
          ,
          <addr-line>Rennes</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <institution>Airbus Defence and Space Cyber Programmes</institution>
          ,
          <addr-line>Rennes</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff3">
          <institution>Airbus Defence and Space</institution>
          ,
          <addr-line>Elancourt</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>C&amp;ESAR'25: Computer &amp; Electronics Security Application Rendezvous</institution>
          ,
          <addr-line>Nov. 19-20, 2025, Rennes</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>202</volume>
      <fpage>5</fpage>
      <lpage>12</lpage>
      <abstract>
        <p>Social media platforms have enabled large-scale influence campaigns, designed by threat actors to manipulate public opinion. These campaigns use coordinated accounts to spread fake information or amplify existing information (e.g., disinformation, astroturfing), swaying opinions and paralysing decision-making. To mitigate these impacts, non-governmental and governmental entities train in simulated informational environments emulating social network platforms and their exchanges. During the trainings, the animation team must implement specific informational Tactics, Techniques, and Procedures (TTPs) to achieve customized educational objectives. The simulation of TTPs requires credible social networks, which must notably contain diverse user types (bots, trolls, casual users, influencers, etc.) and recreate social interactions to generate both normal and malicious behaviours. This paper introduces a framework designed to generate personalized social network graphs for training sessions, tailored specifically to the needs of the trainers. The framework allows the modelling of referenced influence operations in order to reproduce specific attacks, such as astroturfing or corrupted influencers, thereby increasing the training credibility and pedagogical impact while capitalising on existing knowledge. We illustrate the coherence of these simulations through two case studies, which reproduce astroturfing attacks and corrupted-influencer tactics. We show that our simulation of these tactics coherently reproduces the documented attacks, and we assess the results through topology metrics and information diffusion metrics.</p>
      </abstract>
      <kwd-group>
        <kwd>Training</kwd>
        <kwd>Information campaigns</kwd>
        <kwd>Adversarial Simulation</kwd>
        <kwd>Social Networks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The democratisation of social media platforms during the 21st century has enabled large-scale influence
campaigns, designed by threat actors to manipulate public opinion. These campaigns use coordinated
accounts to spread fake information or to greatly amplify information operations (e.g., disinformation
campaigns, astroturfing), in order to sow chaos and paralyse decision-making.</p>
      <p>Attackers have been continuously improving their methods to take advantage of online social
networks, greatly boosting their effectiveness and impact. In response, defenders have organised their
Tactics, Techniques, and Procedures (TTPs) through frameworks, such as the DISARM framework 1.</p>
      <p>Moreover, in an effort to mitigate and detect these campaigns, entities such as journalists (e.g.,
fact-checking services), brand monitoring services, company security teams, and government agencies
attend training sessions to stay ahead and effectively combat influence operations.</p>
      <p>In this context, an animation team (trainers) creates a scenario that depicts the educational goals of
the training and decides which targeted informational strategies (i.e., TTPs) need to be implemented
in a controlled setting. These TTPs are to be detected and mitigated by the player team (trainees).</p>
      <p>To effectively simulate these techniques, it is essential to replicate a credible social network ecosystem.
The ecosystem should encompass a variety of user types, such as bots, casual users, influencers, and trolls.
Furthermore, the simulation should also accurately model the mechanisms of information diffusion, in
order to shape how the diverse topics discussed on a platform interact with each other, and to allow
topics to become viral. This modelling is crucial to provide an understanding of the mechanisms
exploited by the red team (attackers) for their attacks.</p>
      <p>In this paper, we describe a framework that aims to simulate documented attacks such as astroturfing,
botnet attacks, or butterfly attacks 2, increasing the realism and credibility of training sessions. The
framework generates credible social network graphs along with a customizable information diffusion model. The
social graph is created taking into account user parameters such as the network composition (the distribution
between user types), the density of the network (how strongly the users are connected), and an interaction probability
matrix (how often a type of user interacts with another). We summarize our contributions as follows:
• We use the DISARM Matrix to identify useful Tactics, Techniques and Procedures (TTPs) for training
sessions, enhancing their educational impact and providing feedback on the topological effects and
the changes in information diffusion caused by the TTPs.
• To simulate these attacks, our framework generates a customizable social graph from end-user inputs
such as user account types, network density, and community linkages. It then simulates information
diffusion on the generated graph, modelling interactions between regular users and malicious accounts
on micro-blogging platforms like X and Mastodon.
• We validated our approach by simulating two TTPs and measured their impact on the social network
graph from both topological and information diffusion perspectives. Furthermore, we also verified
the impact of the attacks on the trending hashtags panel of a self-hosted Mastodon instance, to verify
the coherence of our simulation with a real social network recommendation system.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Social Graph Generation</title>
        <p>
          Reproducing credible social network dynamics is essential to provide users with an adequate training
platform. However, creating these graphs is a non-trivial task. Social graphs are composed of various
types of users exhibiting diverse online behaviours. The users differ in how they connect with each
other [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], in their online activities such as post frequency and reaction frequency [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], and their temporal
interaction patterns (i.e., the hours or days at which they interact in the social network) [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The reasons
for interaction also vary significantly [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Bots, for instance, may repeatedly send identical messages to
automatically promote a product or flood a network [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Trolls might target others on specific subjects to
undermine their opponents [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Meanwhile, influencers might aim to market products to their followers.
        </p>
        <p>
          In observed social graphs, the distribution of user connections (i.e., the number of followers) follows
a power-law function [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This characteristic implies that a small number of users are highly connected,
while the majority have relatively few connections.
        </p>
        <p>This phenomenon, often referred to as the "long tail" distribution, means that a few nodes (users)
dominate the network’s connectivity, while most nodes have limited connectivity. This power-law
distribution is a fundamental property of most real-world networks, including online social networks:
it significantly shapes their structure and dynamics.</p>
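        <p>As a minimal sketch of such a distribution, follower counts can be drawn from a bounded power law by inverse-transform sampling. The exponent, bounds, and function names below are illustrative assumptions, not parameters from the cited works:</p>

```python
import random

def sample_follower_counts(n_users, alpha=2.5, k_min=1, k_max=10_000, seed=42):
    """Sample follower counts from a bounded power law P(k) ~ k^-alpha
    using inverse-transform sampling (continuous approximation, rounded)."""
    rng = random.Random(seed)
    a = 1.0 - alpha  # exponent appearing in the integrated (cumulative) form
    counts = []
    for _ in range(n_users):
        u = rng.random()
        # invert the CDF of the truncated continuous power law
        k = ((k_max**a - k_min**a) * u + k_min**a) ** (1.0 / a)
        counts.append(int(round(k)))
    return counts

counts = sample_follower_counts(10_000)
# most users end up with very few followers, while a handful of hubs
# concentrate the connectivity ("long tail")
```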
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Information Diffusion in Social Graphs</title>
        <p>Information diffusion in social graphs describes the process by which information spreads through a
network of interconnected users, driven by user behaviours and network topology.</p>
        <p>
          On this task, the literature is usually organised around two approaches: predictive and explanatory
models [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>
          Predictive models aim at predicting the future state of the network after a spark of information appears
in the graph. For example, predicting the weight of a piece of information, after a specific user has shared
it, is useful for monitoring the network [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Foundational works include the Independent Cascade Model
[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], which tries to predict the propagation probability of a piece of information in the graph, or the
topic-aware model [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] that relates the probability of the propagation to the topic being spread.
        </p>
        <p>
          The explanatory branch includes works that try to mimic nature itself, with epidemic models
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] such as the SIR (Susceptible-Infected-Recovered) model [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], with the analogy of illness being
information, and forest fire models [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], which mimic the process of information diffusion akin to how
fire spreads through a forest. In this analogy, the fire represents information, and the trees symbolise
the users. Information spreads sequentially from a user to their neighbours (or followers), transitioning
from the tree state to burning (active in the graph) to burnt (refusing to share information).
        </p>
        <p>When a user reaches a defined probability threshold for information spreading, they transition to
the state of burning, i.e., the user becomes active. Conversely, if the threshold is not met, the user adopts
the state of burnt, indicating that they do not further propagate the information.</p>
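        <p>The burning/burnt dynamic just described can be sketched on a follower graph as follows. The sharing probability and the toy network are our own illustrative assumptions:</p>

```python
import random

def forest_fire_spread(followers, seeds, p_share=0.3, seed=0):
    """Basic forest-fire diffusion: 'burning' users expose their followers,
    who either share the item (become burning) or refuse (become burnt).
    followers: dict mapping user id -> list of follower ids."""
    rng = random.Random(seed)
    frontier = list(seeds)
    shared = set(seeds)   # users who propagated the information
    burnt = set()         # users who refused; never reconsidered
    while frontier:
        user = frontier.pop(0)
        for f in followers.get(user, []):
            if f in shared or f in burnt:
                continue  # this follower already decided
            if rng.random() < p_share:
                shared.add(f)       # catches fire: exposes its own followers
                frontier.append(f)
            else:
                burnt.add(f)        # refuses: never shares this item
    return shared

# star network: user 0 has nine followers, follower 1 relays to user 10
net = {0: [1, 2, 3, 4, 5, 6, 7, 8, 9], 1: [10]}
everyone = forest_fire_spread(net, seeds=[0], p_share=1.0)     # all 11 users share
nobody_else = forest_fire_spread(net, seeds=[0], p_share=0.0)  # only the seed
```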
        <p>Modelling information diffusion makes it possible to emulate informational events, such as new
topics emerging in the simulated world, and to reproduce influence operation techniques during the
training sessions.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Influence Operations Techniques</title>
        <p>
          Attackers have developed various operating procedures to enhance their efficiency and impact by
leveraging online social networks. Defenders model these operating procedures through Tactics,
Techniques, and Procedures (TTPs). Diverse frameworks organise these TTPs; they differ in
which threat actors they model [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], what level of information they provide [
          <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
          ], or which themes
they focus on (e.g., cognitive techniques) [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. In this context, we focus on the DISARM framework, which
has the right level of granularity over the procedures that need to be reproduced in a training session.
        </p>
        <p>
          Using the framework, we group the relevant tactics as follows:
• Massive Content Creation: This involves coordinated inauthentic behaviours3, astroturfing [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ],
and disinformation campaigns [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
• Exploitation of Recommendation Systems: Attackers increase their reach by promoting divisive
content and employing emotional triggers to manipulate user engagement [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
• False Authority: Fake accounts claiming to be subject experts (e.g., epidemiologists during COVID-19)
to manipulate opinion or cause confusion4, and corrupted influencers.
        </p>
        <p>
          Among these classes, tactics such as the massive use of inauthentic accounts often exhibit specific
temporal activity [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], with accounts engaging at suspicious times compared to genuine behaviour. Moreover,
these accounts are used in a structured manner, showing coordination and planning by an attacker.
        </p>
        <p>Disinformation operations significantly alter the structure and dynamics of the social graph, including
distorting patterns of influence, artificially inflating the visibility of certain narratives, and disrupting
the organic flow of information. These changes may allow defenders to focus their means on important
attacks, making metrics that analyse the network in real time a necessity.</p>
        <p>
          As an example, some metrics measure the topological density of certain communities, detecting
abnormal "co-follow, co-retweet, and co-favourite" networks in short time spans5. These abnormalities
are detected using metrics such as the betweenness centrality of the accounts, their degree centrality,
and the exploitation of community detection and qualification [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], which can be used to detect
signs of coordination between these accounts.
        </p>
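        <p>As a toy illustration of such coordination signals, one can count account pairs that repeatedly engage with the same posts within a short time window. The function name, window size, and example events below are our own assumptions, not taken from the cited works:</p>

```python
from collections import defaultdict
from itertools import combinations

def co_retweet_pairs(retweets, window=60):
    """Count, per account pair, how often both retweeted the same post
    within `window` seconds -- a weak signal of coordination.
    retweets: list of (account, post_id, timestamp_seconds)."""
    by_post = defaultdict(list)
    for account, post_id, ts in retweets:
        by_post[post_id].append((ts, account))
    pair_counts = defaultdict(int)
    for events in by_post.values():
        events.sort()  # chronological order within each post
        for (t1, a1), (t2, a2) in combinations(events, 2):
            if a1 != a2 and abs(t2 - t1) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return dict(pair_counts)

events = [("bot1", "p1", 0), ("bot2", "p1", 10), ("user", "p1", 4000),
          ("bot1", "p2", 100), ("bot2", "p2", 130)]
suspicious = co_retweet_pairs(events)
# bot1 and bot2 co-retweeted twice within the window; "user" is not flagged
```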
        <p>
          Threat actors also coordinate in time, interacting in the social network within short time frames [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
Measuring this gives defenders additional clues about coordination and/or automation. From the attacker's
point of view, this coordination is needed to manipulate the platform's recommendation system, which
increases the visibility of "influential" content [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] (i.e., here, with high numbers of favourites and retweets
in a short time span). This aims to amplify the threat actor’s narrative exposure to their target audiences.
        </p>
        <p>Measuring this exposure is important: given the limited resources of defenders, spending large
amounts of time and effort to mitigate failed or low-impact attacks may not be worthwhile. To assess
the full impact of the attacks within social networks, a combination of these metrics is required.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Contribution</title>
      <p>This paper relies on a controllable social network generation framework, taking as input parameters
such as the distribution of user account types, the intra-community density, and how connected the
communities are to each other. The framework also features an information diffusion model, integrating key variables
representing the user experience on social networks. Finally, we illustrate the framework's usefulness
by highlighting the possibility (1) of reusing prior documented attacks and their TTPs to capitalise on
existing knowledge and (2) of enhancing the immersiveness and credibility of training sessions.</p>
      <sec id="sec-3-1">
        <title>3.1. Influence Operations Tactics to Simulate</title>
        <p>Our research focuses on tactics that not only alter the topological structure of social graphs but also
manipulate the information space. By leveraging the DISARM framework, two critical tactics have been
identified that are pivotal to reproduce with our framework:
• Establish Social Assets: This tactic involves the deployment of coordinated inauthentic behaviours,
including the creation of bots, trolls, or malicious communities. These entities are strategically used
to infiltrate existing networks through methods such as massive engagement or butterfly attacks.
• Establish Legitimacy: This involves the fabrication of fake experts or opinion leaders to exert
influence over communities. Additionally, it encompasses techniques such as corrupting influencers within
genuine communities, and astroturfing to create a false consensus, thereby swaying public opinion.</p>
        <p>
          These attacks fundamentally alter the social graph’s topology by creating new, often deceptive,
relations or connections between communities. Moreover, the behaviour of the accounts involved in
these coordinated attacks deviates significantly from that of the normal accounts [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], highlighting a
gap in the conventional information dissemination literature.
        </p>
        <p>
          Our framework aims to bridge this gap by integrating insights from documented attacker behaviours
(how they connect, at what time they interact) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. By recreating specific attacks used in informational
campaigns, we provide the community with a robust tool for capitalising on structured knowledge about
attackers’ TTPs.
        </p>
        <p>This approach not only enhances the credibility of the training sessions but also equips defenders
with a deeper understanding of the potential threats. In order to reproduce these techniques, tactics, and
procedures in a dynamic social graph, the framework relies on a graph generation module that produces
specific user-type configurations (trolls, bots, influencers, etc.) controlled by the trainer. These distributions
aim at producing tailored user communities for the specific setups used in the training.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Social Graph Generation</title>
        <sec id="sec-3-2-1">
          <title>3.2.1. Community Generation</title>
          <p>In platforms such as X (formerly Twitter), the connection between users (nodes of the graph) is defined
by the "follow" relationship. This relationship may or may not be reciprocal, significantly impacting
the dissemination of information within the network.</p>
          <p>In fact, users tend to see more content shared by the accounts they follow, shaping their information
exposure and interaction patterns. Furthermore, interactions on X include posting (where a node emits
a text post, initiating a thread), replying (contributing to the thread), retweeting (sharing the post or
the reply with followers), and favouriting (indicating support for or approval of a post or a reply).</p>
          <p>
            In observed social graphs, the distribution of user connections (i.e., the number of followers) follows
a power-law function [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], which means that a small portion of the users (also called the influencers)
concentrate the main proportion of the followers.
          </p>
          <p>
            In this context, our framework first defines power-law function parameters for each type of user
identified across the literature [
            <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 3, 4, 2</xref>
            ]. These types, each with a different level of impact on the graph, include:
• Influencers (luminaries, celebrities, experts, opinion leaders, trendsetters, bloggers, potential
influencers).
• Casual users or Consumers.
• Trolls (Left-troll, Right-troll, Fearmongers, News Feed, Hashtag Gamer).
• Bots (Spam Bots, Content Bots, Engagement Bots).
          </p>
          <p>Once these power-laws are defined, the framework proceeds as follows:
1. Follow probabilities: A probability matrix indexed by source user type and target user type is created.
For example, casual users have a low probability of following bot accounts, and a high chance
of following an influencer.
2. Follower assignation: Each node iteratively follows others with the corresponding edge probability;
for each node, the number of edges to sample is drawn from a power-law distribution.
3. Alignment with the end-user density parameter: Random edges are pruned to match the user-specified
community density, defined as the ratio of existing edges to all possible edges in a complete graph.
4. Topic assignation: A topic distribution - a set of pre-defined topics, each associated with a scale
from 0 to 1 indicating a user's interest in that topic - is assigned to all nodes. While the topics remain
consistent across all nodes, the level of interest in each topic is set up per community,
with small variations between nodes belonging to the same community.</p>
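          <p>These four generation steps can be sketched as follows. This is a simplified toy version: the type shares, the Pareto exponent, and the noise scale are illustrative assumptions, not the framework's actual parameters:</p>

```python
import random

USER_TYPES = ["influencer", "casual", "troll", "bot"]

def generate_community(n, follow_prob, density, topics, seed=1):
    rng = random.Random(seed)
    # assign a type to each node (the shares are an assumption)
    types = [rng.choices(USER_TYPES, weights=[0.05, 0.8, 0.1, 0.05])[0]
             for _ in range(n)]
    edges = set()
    # steps 1-2: follow according to the type-pair probability matrix,
    # with per-node follow counts drawn from a power law
    for src in range(n):
        k = min(n - 1, int(rng.paretovariate(2.0)))
        for dst in rng.sample([j for j in range(n) if j != src], k):
            if rng.random() < follow_prob[types[src]][types[dst]]:
                edges.add((src, dst))
    # step 3: prune random edges down to the requested density
    max_edges = n * (n - 1)
    while len(edges) / max_edges > density:
        edges.remove(rng.choice(sorted(edges)))
    # step 4: per-node topic interests = community profile + small noise
    profile = {t: rng.random() for t in topics}
    interests = [{t: min(1.0, max(0.0, v + rng.gauss(0, 0.05)))
                  for t, v in profile.items()} for _ in range(n)]
    return types, edges, interests

P = {s: {t: 0.5 for t in USER_TYPES} for s in USER_TYPES}
P["casual"]["bot"] = 0.05          # casuals rarely follow bots
P["casual"]["influencer"] = 0.9    # but often follow influencers
types, edges, interests = generate_community(50, P, 0.05, ["politics", "sport"])
```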
          <p>The system applies this algorithm to generate multiple sets of nodes sharing the same topics, called
here communities (as seen in Figure 2a). Malicious communities and accounts, such as the ones used
in astroturfing or butterfly attacks, are created using the same algorithm.</p>
          <p>An illustration of this community generation module can be seen in Figure 2a where the framework
produces 3 communities (0, 1, 2) of 100 nodes each with a diverse user distribution (see Figure 1).</p>
          <p>In real social networks, users have the ability to engage with new topics that may not be of interest
to their community. In addition, to reproduce the tactics used by attackers (e.g., astroturfing), it is crucial
to develop methods to connect separate communities within the network.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Bridges between communities</title>
          <p>Understanding which topics circulate amongst diverse communities, how communities interact, and who the
main contributors are is essential to detect influence operations.</p>
          <p>In our system, users can define the inter-community density between different communities. This
parameter dictates the proportion of connections or edges that exist between two distinct communities,
with respect to the maximum number of connections possible in a complete graph.</p>
          <p>To attribute these edges, the system adheres to the same rules that were used during the generation
of individual communities. These rules involve assigning relationships based on user types, as detailed
in the community generation process (see Section 3.2.1). Following this approach, users from different
communities become interconnected. This linkage facilitates the diffusion of information across
the entire parametrised graph, enabling a dynamic and interactive network structure.</p>
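          <p>A minimal sketch of this bridging step, reusing a type-pair follow probability matrix (the function and parameter names are ours, not the framework's):</p>

```python
import random

def add_bridges(comm_a, comm_b, inter_density, follow_prob, types, seed=7):
    """Create up to inter_density * |A| * |B| directed follow edges from
    community A to community B, filtered by per-type follow probabilities.
    May return fewer edges when the probabilities reject candidates."""
    rng = random.Random(seed)
    candidates = [(a, b) for a in comm_a for b in comm_b]
    rng.shuffle(candidates)
    target = int(inter_density * len(candidates))
    bridges = set()
    for a, b in candidates:
        if len(bridges) >= target:
            break
        if rng.random() < follow_prob[types[a]][types[b]]:
            bridges.add((a, b))
    return bridges

types = {u: "casual" for u in range(10)}
P = {"casual": {"casual": 1.0}}
bridges = add_bridges(range(5), range(5, 10), 0.2, P, types)
# 20% of the 25 possible cross-community edges: 5 bridges
```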
          <p>(a) Example of generated communities of users, coloured by user type. (b) Same communities as in
Subfigure 2a: the end-user parametrises communities 0, 1, and 2 to connect with specific inter-density
parameters, triggering the creation of bridges between communities and enabling accounts to interact
over other communities' topics of interest.</p>
          <p>In our example, the end-user configures an inter-community matrix that establishes connections
between communities, using one to five percent of the total possible edges from one community to
another (see Figure 2b). This allows diverse communities to influence each other through the accounts
they follow, mimicking real social networks.</p>
          <p>To effectively model a dynamic social network, one needs to reproduce a coherent information diffusion
model. This model captures the temporal evolution of interactions and the spread of information among
individuals, accurately reflecting real-world behaviours and network dynamics. By incorporating factors
such as the frequency of interactions, the influence of key individuals, and the varying strengths of
relationships, the model can simulate how information propagates through the network over time. Additionally,
it should account for external influences and changing network structures to provide a comprehensive
understanding of the underlying mechanisms driving information diffusion in social networks.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Information Diffusion</title>
        <p>In this subsection, we instantiate an information diffusion model, taking into account all previously
presented variables (user account types, follower network, topic distribution), and new ones such as the
circadian pattern (the rhythm of activity during a day, taking into account sleep hours and work hours).</p>
        <sec id="sec-3-3-1">
          <title>3.3.1. Algorithm variables</title>
          <p>
            Our information diffusion model draws inspiration from the forest-fire adaptation [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] principle,
integrating circadian behavioural patterns, topic dynamics, and user control into the simulation. Unlike
the original model, where an account becomes "burnt" if it does not share information, our proposal
introduces a probability of being burnt based on the number of times a node refuses to share information
on a topic. In addition to the forest-fire model, this paper refines the interaction probability
between an emitter and receivers (followers) by incorporating several key variables, separated into two
categories (i.e., the probability of a post (1), and the probability of an interaction with a given post (2)):
• Behaviour Significance (1) (2): This describes how likely a particular type of user is to engage or
interact within the graph. It highlights the interaction tendencies of different user types.
• Circadian Rhythm (1) (2): This indicates the current time within the simulation, reflecting how
time affects user interactions and behaviours. It is used to simulate whether the user is connected or not.
• Interest in the Topic (1) (2): This measures the value or relevance of the target user's topic within
the source user's distribution, indicating how interested the source user is in that topic. For root users,
the topic interest is sampled from the user distribution.
• Target Behaviour Significance (2): This refers to the type of user that a given user is interacting
with. For root users, this value is not present, as they are the originators of interactions.
• Virality Parameter over the Topic (1) (2): This is a user-specific parameter that controls how
much a topic influences the social graph. It allows end-users to adjust the impact of certain topics
on interactions, to recreate attackers' TTPs.
• Behaviour Similarity with Other Repliers (2): If all other repliers are of the same type as the
receiver, this similarity boosts the probability of interaction.
• Topic Similarity with Other Repliers (2): If the receiver notices that other repliers have similar
interests in a topic, it increases the probability of interaction.
          </p>
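        <p>As an illustration, the category (1) variables can be combined multiplicatively into a post probability. The circadian curve, the per-type behaviour weights, and the combination rule below are our own simplifications, not the paper's exact formula:</p>

```python
import math

# per-type base activity levels (illustrative values)
BEHAVIOUR = {"influencer": 0.9, "casual": 0.3, "troll": 0.7, "bot": 0.95}

def circadian_activity(hour):
    """Toy circadian curve: low at night, peaking in the late afternoon."""
    return max(0.05, 0.5 + 0.45 * math.sin((hour - 10) * math.pi / 12))

def post_probability(user_type, hour, topic_interest, virality):
    """Probability that a logged-in user posts on a topic: behaviour
    significance x circadian rhythm x topic interest x virality, clamped to 1."""
    return min(1.0, BEHAVIOUR[user_type] * circadian_activity(hour)
               * topic_interest * virality)

p_night = post_probability("casual", 3, 0.8, 1.0)
p_evening = post_probability("casual", 19, 0.8, 1.0)
# the same user is far more likely to post in the evening than at 3 a.m.
```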
          <p>Moreover, this paper incorporates various interaction types commonly seen in real social networks,
such as posts, replies, retweets, and favourites. These interactions are assigned probabilities that are
specifically tailored to each user type.</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>3.3.2. Algorithm description</title>
          <p>Initialisation The algorithm begins with the end-user specifying a simulation period (e.g., 3 days
from August the 6th), which is iterated over hour by hour during the simulation. When
the end-user wants to model a specific TTP, it is possible to start it at a specific time, allowing users
to measure the state of the simulation before and after the attack.</p>
          <p>For our simulation, the framework processes the chosen topics in parallel, which is essential to model the
impact of cross-topic exposure. For each topic, our algorithm starts by selecting k influential nodes, based on:
• Topology: the degree centrality and betweenness centrality of the node;
• Interest in the topic: the system weights the influence by the node's interest in the topic.
These accounts become active (burning in the forest-fire model), and are then allowed to proceed with
content creation.</p>
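          <p>A toy version of this seed selection, using degree centrality weighted by topic interest (betweenness centrality, also used by the framework, is omitted here for brevity; names and data are ours):</p>

```python
def select_seeds(followers, interest, topic, k=3):
    """Pick the k most 'influential' nodes for a topic: degree centrality
    (here, follower count) weighted by the node's interest in the topic."""
    degree = {u: len(fs) for u, fs in followers.items()}
    score = {u: degree.get(u, 0) * interest[u].get(topic, 0.0) for u in interest}
    return sorted(score, key=score.get, reverse=True)[:k]

net = {0: [1, 2, 3, 4], 1: [2], 2: [0, 1, 3]}
interest = {0: {"sport": 0.1}, 1: {"sport": 0.9},
            2: {"sport": 0.8}, 3: {"sport": 0.5}}
seeds = select_seeds(net, interest, "sport", k=2)
# node 2 (3 followers x 0.8) outranks node 0 (4 followers x 0.1)
```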
          <p>Post Creation For each step of the simulation, the active nodes on the given topic are selected. Then,
for each node, the module sequentially calculates a probability of
1. Being logged in (based on the circadian rhythm), allowing the user to post and see new content.
2. If the user is logged in, creating a post (based on the variables in Subsection 3.3.1).</p>
          <p>If a post is created, a maximum breadth and maximum depth for the future conversation are
sampled from the poster's user type (the same distribution as the one used to sample the number of follows). To clarify,
for influential nodes (such as opinion experts, luminaries, etc.), the future conversations will be deeper
(longer chains of replies) and broader (a greater number of replies to each reply).
Engagement Creation For this part, we categorise as engagement each of the interactions belonging
to the set {favourite, retweet, reply}.</p>
          <p>After a user creates a post, the module greedily samples from the entire network
to favour interaction diversity. However, for computational reasons, the system removes 98% of the
sampled nodes that do not follow the initial poster. This allows users who do not directly follow
another user to interact with their content, similar to what happens on real social networks. In this initial
step, the burnt nodes are also removed.</p>
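          <p>A minimal sketch of this candidate sampling, under the assumptions named in the comments:</p>

```python
import random

def candidate_engagers(all_nodes, poster_followers, burnt, rng, keep_ratio=0.02):
    """Sample engagement candidates from the entire network: burnt nodes are
    removed, followers of the poster are always kept, and 98% of the
    non-followers are discarded for computational reasons."""
    kept = []
    for node in all_nodes:
        if node in burnt:
            continue
        if node in poster_followers or rng.random() < keep_ratio:
            kept.append(node)
    return kept

rng = random.Random(1)
nodes = [f"u{i}" for i in range(1000)]
poster_followers = set(nodes[:50])   # direct followers of the poster
burnt = set(nodes[50:60])            # nodes already active on this topic
cands = candidate_engagers(nodes, poster_followers, burnt, rng)
print(len(cands))
```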
          <p>For each of the remaining nodes, the framework calculates the probability of interaction with the
poster. This results in an interaction type (favourite, retweet, reply).</p>
          <p>After replying, the algorithm samples a random probability to continue the conversation recursively
on this message, provided depth and breadth constraints are still satisfied, which enables back-and-forth
exchanges.</p>
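          <p>The recursive continuation can be sketched as follows (parameter values are illustrative):</p>

```python
import random

def grow_conversation(depth, max_depth, max_breadth, continue_prob, rng):
    """Recursively grow a reply tree: each message may receive up to
    `max_breadth` replies, each reply continuing with probability
    `continue_prob` while the depth budget allows it.
    Returns the total number of replies generated."""
    if depth >= max_depth:
        return 0
    replies = 0
    for _ in range(max_breadth):
        if rng.random() < continue_prob:
            replies += 1 + grow_conversation(depth + 1, max_depth,
                                             max_breadth, continue_prob, rng)
    return replies

rng = random.Random(7)
print(grow_conversation(0, max_depth=4, max_breadth=3, continue_prob=0.5, rng=rng))
```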
          <p>Finally, each new interaction in the graph (post, reply, favourite, retweet) updates the user’s topic
distribution, reinforcing their interest in the given topic and lowering their interest in the others. This
variation allows players to track the effect of continuous exposure to topics.</p>
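          <p>A possible form of this update, assuming a simple additive boost followed by renormalisation (the exact update rule is not specified in the text):</p>

```python
def update_interest(interest, topic, boost=0.05):
    """After an interaction on `topic`, reinforce the user's interest in it
    and renormalise so the other topics' shares decrease accordingly.
    The additive-boost rule and its magnitude are assumptions."""
    updated = dict(interest)
    updated[topic] = updated.get(topic, 0.0) + boost
    total = sum(updated.values())
    return {t: v / total for t, v in updated.items()}

interest = {"Elections": 0.4, "Security": 0.3, "Sports": 0.3}
after = update_interest(interest, "Elections")
print(after)
```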
          <p>
            As an illustration, Figure 3 shows that, in a three-day simulation, the number of messages per
conversation follows a power-law distribution akin to what is observed on real social networks such
as X [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]. However, the power law in our simulation is slightly shifted to the right. On X, the
recommendation system "discards", from the user's perspective, messages with zero interactions and
highlights posts with engagement. In our case, the goal is to assess the simulation by deploying it on
a self-hosted Mastodon instance, whose main timeline is not sorted by a recommendation system.
Thus, the goal here is to generate mainly messages with interactions so that our unfiltered Mastodon
timeline (a.k.a. live feed) looks similar to the X timeline.
          </p>
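          <p>A rough way to check such a heavy-tailed size distribution is a log-log fit of the size frequencies; the sketch below runs on synthetic Zipf-like conversation sizes (illustrative data, not the simulation's output):</p>

```python
import math, random
from collections import Counter

def powerlaw_exponent(sizes):
    """Least-squares slope of log(frequency) against log(size); a roughly
    constant negative slope indicates the power law discussed in the text."""
    counts = Counter(sizes)
    xs = [math.log(s) for s in counts]
    ys = [math.log(counts[s]) for s in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic conversation sizes drawn from a Pareto-like law (illustrative only).
rng = random.Random(3)
sizes = [min(int(1 / rng.random() ** (1 / 1.5)), 500) for _ in range(20000)]
print(round(powerlaw_exponent(sizes), 2))
```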
          <p>Network updates Social networks are dynamic: new links between users are created (new
follows) and existing links are removed (unfollows). To mimic this mechanism, the following heuristics are
applied at each step of the simulation for both actions.
• For follow actions, if user a is exposed to at least n ∈ N posts or replies from user b, and user a has
reacted m ∈ N times to them, user a has shown a non-negligible interest in this user. Hence, at
the end of the step, a follow link is created from user a to user b.
• For unfollow actions, if user a has not reacted to the last u ∈ N posts they saw from user b, user a unfollows
user b.</p>
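          <p>The two heuristics above can be sketched as a single end-of-step pass; the thresholds and data structures below are illustrative assumptions:</p>

```python
def update_follow_links(follows, exposure, reactions, no_react_streak,
                        n_seen=5, m_react=2, u_streak=10):
    """One end-of-step pass of the follow/unfollow heuristics.
    exposure[(a, b)]       = posts/replies from b that a has seen
    reactions[(a, b)]      = times a reacted to them
    no_react_streak[(a, b)] = consecutive recent posts from b that a ignored
    The thresholds n_seen, m_react, u_streak are illustrative assumptions."""
    follows = {a: set(fs) for a, fs in follows.items()}
    for (a, b), seen in exposure.items():
        if b not in follows[a] and seen >= n_seen and reactions.get((a, b), 0) >= m_react:
            follows[a].add(b)        # new follow: a has shown interest in b
    for (a, b), streak in no_react_streak.items():
        if b in follows[a] and streak >= u_streak:
            follows[a].discard(b)    # unfollow: interest has faded
    return follows

follows = {"a": set(), "b": {"a"}}
exposure = {("a", "b"): 6}
reactions = {("a", "b"): 3}
streaks = {("b", "a"): 12}
print(update_follow_links(follows, exposure, reactions, streaks))
```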
          <p>In this section, we presented a framework that recreates a tailored social network experience,
creating communities and links between communities, and modelling the diffusion of information.</p>
          <p>To demonstrate the credibility of our simulation framework, this paper presents the simulation of
two distinct Tactics, Techniques, and Procedures (TTPs).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Case Study</title>
      <p>In this section, the described system is assessed with the simulation of two examples of frequent Tactics,
Techniques, and Procedures (TTPs) documented in the literature.</p>
      <p>These TTPs can easily be implemented by an end-user by following six main steps:
Steps to generate communities to simulate a specific TTP using the proposed system
1. Step 1: Choose the specific TTP to implement</p>
      <p>Ex: ttp = "astroturfing"
2. Step 2: Define the malicious community type distribution and the number of accounts to produce.</p>
      <p>Ex: type_distribution={"Bot":0.2, "Troll":0.1 ...}
total_nodes=500
3. Step 3: Define the malicious circadian pattern (probability of being logged in to the platform
for each hour of the day)</p>
      <p>Ex: {"0h":0.4, "1h":0.3,"2h":0.5,"3h":0.3 ...}
4. Step 4: Define the topic distribution and decide which adversarial narrative to push onto
the network.</p>
      <p>Ex: topic_distribution = {"Elections":0.4, "Security":0.2 ...}
narrative = "France should increase military budget ..."
5. Step 5: Define intra- and inter-community densities.</p>
      <p>Ex: Intra-density = 0.3</p>
      <p>Inter-density = {"community_0":{"community_1":0.1 ...}
6. Step 6: Decide the time at which the attackers start to be active</p>
      <p>Ex: ttp_start_time = datetime(year=2025, month=8, day=7, hour=12)</p>
      <p>4.1. T0099.001: "Astroturfing"</p>
      <p>4.1.1. Sources</p>
      <p>
        This type of attack is widely documented and thoroughly analysed in [
        <xref ref-type="bibr" rid="ref17 ref18 ref23">23, 17, 18</xref>
        ]. To recreate this attack, we draw inspiration from these authors' analyses. For example, these papers help us recreate
the interaction patterns of the accounts belonging to an astroturfing campaign [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. In addition, the
proportion of malicious accounts with respect to genuine accounts is given by [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Moreover, these
malicious accounts use special semantics, such as the intensive use of keywords in their messages to
manipulate recommendation systems, as highlighted in Viginum's report (see footnote 6).
      </p>
      <sec id="sec-4-1">
        <title>4.1.2. Technical Instantiation</title>
        <p>
          As presented in the SocialForge data generation system [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], the training setup is initialised with a
scenario containing two narratives and two factions: France, and the attacker, Louraly.
        </p>
        <p>Reflecting these factions, two account communities are created per faction, with the former having
"normal" communities, and the latter having communities belonging to an astroturfing campaign.</p>
        <p>
          The total of generated accounts is 650, where ~20% - 131 accounts - (as observed in [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]) of these
are considered malicious. The communities differ in composition and density (astroturfing
campaigns have highly interconnected accounts). The initial density is set to 0.10 for normal communities,
and 0.30 for astroturfing communities. Obtaining real-world data on these numbers is hard, as mapping
the full landscape of an astroturfing campaign is intractable due to their sheer size.
        </p>
        <p>Regarding the composition of the communities, the astroturfing communities contain a greater proportion
of bots and trolls, as illustrated in Table 1.</p>
        <p>In the training scenario, factions push narratives, defined as strategic ideas to impose on a target
audience. For France, the narrative is "France should increase military budget to reduce the threats
on its vital interests". For the adversarial communities, the narrative is the opposite: "French
people don’t want the French government to increase defence spendings. They should focus on internal
problems and not push towards war".</p>
        <p>6. https://www.sgdsn.gouv.fr/files/files/Publications/20250204_NP_SGDSN_VIGINUM_Rapport_public_Elections_roumanie_risques_france_VFF.pdf</p>
        <p>These factions aim at pushing the narratives at all costs, including linking the provided topics to these
narratives.</p>
        <p>
          In addition to the topology of the communities and the semantics used in their messages, the two kinds of
communities differ in the temporal patterns of their activity on social platforms (i.e., different circadian
cycles). For example, normal communities follow the natural sleep and work rhythm, with reduced
activity during the night and working hours. However, as shown in [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], astroturfing campaigns are
especially active during these hours.
        </p>
        <p>To provide a point of comparison, and to assess the attack’s impact on the simulated social network,
the attackers start their campaign only from the middle of the simulation, after one day and twelve hours.</p>
        <p>The coherence of the simulation is assessed with a set of metrics:
1. Typical social network metrics, such as the distribution of engagement (number of retweets,
favourites, and replies) over time, and topological influence scores.
2. The modularity score before and after the attack, from the Louvain algorithm. This measures how
the communities have merged, showing that the astroturfing communities are meshing with the normal ones.
3. Narrative exposure to the other communities, measuring how widely the astroturfing narrative is heard
by the other communities.</p>
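        <p>As a minimal illustration of the second metric, Newman modularity can be computed directly on a toy partition (the paper obtains the partition with the Louvain algorithm; here it is supplied by hand):</p>

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as adjacency sets.
    Q close to 1 means well-separated communities; Q near 0 means the
    partition is no better than random."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    comm_of = {n: i for i, c in enumerate(communities) for n in c}
    q = 0.0
    for u in adj:
        for v in adj:
            if comm_of[u] != comm_of[v]:
                continue
            a_uv = 1.0 if v in adj[u] else 0.0
            q += a_uv - len(adj[u]) * len(adj[v]) / (2 * m)
    return q / (2 * m)

# Two triangles joined by one bridge: a clearly segmented topology.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
print(round(modularity(adj, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```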
        <p>Furthermore, to validate the simulation system, we retrieve the trending hashtags of a self-hosted
Mastodon7 instance before and after the campaign. This aims to capture whether the campaign
is widespread, and whether the recommendation system reflects the manipulation.</p>
        <p>In fact, as reported in Viginum’s report about Romania’s elections, accounts that are part of an astroturfing
campaign manipulate recommendation systems by posting huge amounts of content sharing common
specificities (e.g., keywords). As such, the astroturfing accounts are provided with a list of ten keywords
that should later appear in the "trending" section of Mastodon.</p>
        <p>4.1.3. Results</p>
        <p>Social network metrics The simulation highlights the differences in circadian activity patterns
between normal communities and astroturfing communities, as shown in Figure 4. The key findings are:
• Normal communities’ activity follows a typical circadian rhythm, peaking during standard waking
hours.
• Astroturfing communities, however, exhibit heightened activity during off-peak hours (e.g., late
at night or early in the morning), outside the usual active periods of normal communities.</p>
        <p>
          These results align with the observations of [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], who noted similar discrepancies in activity timing
between these groups.
        </p>
        <p>Relation dynamism during the simulation During the simulation, the "follow" relationship is
updated based on past interactions with a user. Figure 5 shows that follow/unfollow activity is relatively
stable until the astroturfing attack starts at 12:00 on the second day. After that, there is a drastic increase
in the number of follows, with the astroturfing accounts massively following the normal account communities.</p>
        <p>Automatic metrics such as modularity (given by the Louvain algorithm) also show a drastic decrease,
from 0.60 to 0.20. This decrease means that the communities are less segmented from a topological point
of view and that there is greater proximity between them.</p>
        <p>Narrative exposure One of the metrics an attacker wants to track is how much their adversarial
narrative has reached the target and, in the case of an astroturfing campaign, whether the target is trapped
within an echo chamber around the attackers’ narrative, giving the target a false sense of consensus.</p>
        <p>This exposure enhances the effectiveness of the attack, which aims at causing kinetic impacts (tangible
real-world damage, altered beliefs, eroded public trust, etc.) and grows likelier as the number of
reached users increases. Figure 6 illustrates that during the off-hours, normal communities are flooded
with messages from the astroturfing communities. As shown in the figure, a large percentage of the
interactions are made by or towards astroturfing accounts. During normal hours, these
interactions account for 40% to 60%, which is consistent with a short-time-frame attack goal.</p>
        <p>Moreover, as shown in Table 7a, the simulation influenced the trending hashtags, making the attackers’
message available to the whole platform. In real life, the reach of this manipulation can be as great as
the number of users on the network; possibly millions.</p>
        <p>
          However, simply increasing the total reach does not linearly increase the probability of altering
the target’s behaviour and reaching operational objectives (i.e., causing kinetic impacts). In fact, as reported
in [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], even large-scale attacks can fail if they do not genuinely resonate with, or effectively penetrate,
the target audience in a meaningful way. This paves the way for further research on the
cognitive aspects of this framework.
        </p>
        <p>4.2. T0100.003: "Co-opt Influencers"</p>
        <p>Our framework can be considered a sandbox for TTP simulation; in this section, we propose an overview
of another TTP implementation, Co-opt Influencers. This TTP is deployed to concretise and raise
awareness of the possible impact of already-corrupted influencers within online social networks,
especially on the youth.</p>
        <p>4.2.1. Sources</p>
        <p>This kind of attack is documented in Viginum’s technical reports on the interferences observed during
Romania’s 2025 presidential elections. "Co-opt Influencers", used alongside astroturfing techniques in the
report, consists of using renowned influencers, by corrupting them or using them as useful idiots, to push a
political agenda onto their follower base. This tactic proved particularly effective during Romania’s
elections. Furthermore, it is regularly combined with astroturfing to create “fake experts” or “fake
influencers”, that is, artificially amplified accounts designed to project crafted expertise on specific topics.
By leveraging this perceived authority, these accounts can shape discourse, influence public opinion, and
reinforce particular narratives in a way that appears authentic but is strategically orchestrated.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2.2. Technical Instantiation</title>
        <p>For this part, the focus is on how corrupted influencers can influence entire communities. To do so,
the initial setup presented in Subsection 4.1 is restored, with the astroturfing communities removed
in order to keep only the "normal" communities.</p>
        <p>Among them, 14 existing influencers (~2.6% of the accounts in the network) are randomly
selected (from Luminaries, Celebrities, Opinion Leaders and Experts) to be the corrupted influencers,
reproducing what happened in Romania’s elections (based on Viginum’s technical report). Finally, their
topic interest is set to privilege "Elections", aiming to spread their point of view across the whole graph.
The full topic distribution used at initialisation can be seen in Table 2.</p>
        <p>
          In order to track the reach of the messages pushed by the corrupted influencers, a first step is to
monitor the co-retweet, co-favourite, and reply networks across the simulation, similarly to [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
Furthermore, it is essential to monitor the rate of change in interest levels to assess whether the topic
becomes increasingly relevant within the network. Should interest increase, the topic disseminated by the
designated influencers is likely to acquire additional spreaders, thereby amplifying its overall reach.
        </p>
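        <p>A sketch of the co-retweet network construction (the input format, pairs of user and retweeted message id, is an assumption):</p>

```python
from itertools import combinations
from collections import defaultdict

def co_retweet_network(retweets):
    """Build a co-retweet graph: two accounts are linked, with a weight,
    whenever they retweeted the same original message, as in the
    coordination-monitoring approach the paper follows."""
    by_message = defaultdict(set)
    for user, message_id in retweets:
        by_message[message_id].add(user)
    weights = defaultdict(int)
    for users in by_message.values():
        for u, v in combinations(sorted(users), 2):
            weights[(u, v)] += 1
    return dict(weights)

# Toy input: (user, retweeted message id) pairs.
retweets = [("a", 1), ("b", 1), ("c", 1), ("a", 2), ("b", 2)]
print(co_retweet_network(retweets))
```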
        <p>As with the previous astroturfing TTP, the corrupted influencers become active only after a 36-hour
delay, i.e., at noon on the second day.</p>
        <p>4.2.3. Results</p>
        <p>For this TTP, the goal is to observe how influencers, who are central in the graph, can influence the
entire network towards specific topics. In real life, these influencers can be paid, used as useful idiots,
or persuaded to spread propaganda serving foreign states’ narratives.</p>
        <p>With this limited number of influencers, Table 3 shows that, shortly after the start of the attack, a large
portion of the network starts to interact with them, highlighting their reach capabilities.</p>
        <p>Over the first day, almost all nodes in the other communities interacted at least once with the influencers’
messages. This means that these accounts can drive the chosen topics, make them trend on the
platform, and give them a reach they did not have before. These interactions drive up the interest rate for the
specific chosen topic, which in turn causes additional engagement. As Subfigure 8a shows, the average
interest of the normal accounts grows much faster than the other interest rates after the start of the attack.</p>
        <p>This increase in interest rate mechanically increases the number of interactions on these topics, as
shown in Subfigure 8b. This subfigure also shows that while the number of interactions on the Elections
topic increases greatly after the start of the attack (it mainly stays low before the attack), other
topics are not impacted and stay at the same levels before and after the attack. This shows
that the influencers drive the other communities towards this particular topic, highlighting their impact.</p>
        <p>These results show that a small number of influencers can influence entire communities, as highlighted
in real life by Viginum’s report about the Romanian elections. The cognitive process of narrative
conversion was not modelled. In practice, however, influencers can persuade individuals to adopt their
cause, potentially leading to kinetic decision-making.</p>
        <p>[Figure 8. (a) Rate of interest over topics across time; for the simulation, the activation threshold is fixed at 0.60. (b) Number of interactions (posts, replies, favourites, retweets; log scale) over topics across time.]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations</title>
      <p>We showed in this paper that our simulation is credible for the objective of training against reproduced
TTPs. However, we did not take into account all the social effects within social networks.</p>
      <p>Namely, for the decision to follow and unfollow, we did not introduce polarity over topics. In fact, one
can be strongly interested in a topic while holding the opposite position to others. This two-sided coin can be primordial
for reproducing coherent follow updates (as aligned users tend to follow each other more frequently
than unaligned ones). Similarly, the evolution of topic interest also depends on socioeconomic variables,
as well as individuals’ cultural characteristics.</p>
      <p>Furthermore, we simulated only five topics over a three-day simulation, but real social networks
contain many more parallel topics of discussion. Cross-interest between topics is also much higher in
real life, with topics massively impacting each other. We illustrated this capacity with slight drops of
interest in the second TTP, especially for the topic Immigration.</p>
      <p>Moreover, reproducing credible full-scale attacks is far from trivial, as defenders do not have a full
overview of what the attackers did, and especially of what they intended to do.</p>
      <p>Finally, we did not evaluate the quality of the text generation in this work, even though text is very important
for conveying information in a specific manner. Attackers understand this importance and play on
sentiments such as fear or surprise to elicit reactions from users, maximising the impact of their campaigns.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we presented a system that simulates credible social networks, used to train specialised
analysts against influence operations.</p>
      <p>These analysts require the simulation of specific Tactics, Techniques, and Procedures (TTPs) to
capitalise on existing knowledge and mitigate their impact. To enable this, we presented in
this work a system generating realistic, dynamic, and controllable social graphs, allowing an end-user to
parametrise the TTPs to be reproduced. The system encompasses three distinct modules: initial
community generation, community linking, and information dissemination. The first module
generates user accounts (nodes in a graph) of different types and their follower distributions, according
to end-user parameters. The second module defines how communities are linked,
influencing how the simulation will unfold. Finally, the last module simulates information diffusion
in the graph, adding multiple variables tailored to the specific needs of training against influence operations.</p>
      <p>To illustrate the coherence of the simulation, we have compiled two distinct case studies. The first
one simulated an astroturfing campaign that involved two bot-heavy communities attacking two normal
communities. The second example illustrates and analyses how paid influencers can influence
entire communities, similar to disinformation campaigns during the COVID-19 pandemic. Through our results,
we show that our simulations are coherent with the literature’s observations of reported attacks, and that
the simulation is credible with respect to real platforms. This credibility allows trainers to use this system
to enhance the trainees’ immersion and thus, the training effect.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Mistral in order to perform grammar and spelling
checks. After using these tools/services, the authors reviewed and edited the content as needed and
take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] I. Morteo, To Clarify the Typification of Influencers: A Review of the Literature, 2018.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] M. Mazza, M. Avvenuti, S. Cresci, M. Tesconi, Investigating the difference between trolls, social bots, and humans on Twitter, Computer Communications 196 (2022) 23-36. URL: https://www.sciencedirect.com/science/article/pii/S0140366422003711. doi:10.1016/j.comcom.2022.09.022.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] R. J. Oentaryo, A. Murdopo, P. K. Prasetyo, E.-P. Lim, On Profiling Bots in Social Media, in: Social Informatics: 8th International Conference, SocInfo 2016, Bellevue, WA, USA, November 11-14, 2016, Proceedings, Part I, Springer-Verlag, Berlin, Heidelberg, 2016, pp. 92-109. doi:10.1007/978-3-319-47880-7_6.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] L. G. Mojica, Modeling Trolling in Social Media Conversations, 2016. URL: http://arxiv.org/abs/1612.05310. arXiv:1612.05310 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] D. L. Linvill, P. L. Warren, Troll Factories: Manufacturing Specialized Disinformation on Twitter, Political Communication 37 (2020) 447-467. doi:10.1080/10584609.2020.1718257.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] A.-L. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509-512. doi:10.1126/science.286.5439.509. arXiv:cond-mat/9910332.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] M. Li, X. Wang, S. Zhang, A Survey on Information Diffusion in Online Social Networks: Models and Methods, Information 8 (2017). doi:10.3390/info8040118.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] D. Kempe, J. Kleinberg, É. Tardos, Influential Nodes in a Diffusion Model for Social Networks, in: D. Hutchison, T. Kanade, J. Kittler, et al. (Eds.), Automata, Languages and Programming, volume 3580 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, Berlin, Heidelberg, 2005, pp. 1127-1138. doi:10.1007/11523468_91.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] N. Barbieri, F. Bonchi, G. Manco, Topic-Aware Social Influence Propagation Models, in: 2012 IEEE 12th International Conference on Data Mining, IEEE, Brussels, Belgium, 2012, pp. 81-90. doi:10.1109/ICDM.2012.122.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] C. Liu, Z.-K. Zhang, Information spreading on dynamic social networks, Communications in Nonlinear Science and Numerical Simulation 19 (2014) 896-904. doi:10.1016/j.cnsns.2013.08.028.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <article-title>An application of the theory of probabilities to the study of a priori pathometry</article-title>
          .-
          <string-name>
            <surname>Part</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <source>Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character</source>
          <volume>92</volume>
          (
          <year>1916</year>
          )
          <fpage>204</fpage>
          -
          <lpage>230</lpage>
          . URL: https://royalsocietypublishing.org/doi/10.1098/rspa.1916.0007. doi:10.1098/rspa.1916.0007.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Saini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Panda</surname>
          </string-name>
          ,
          <article-title>Modeling information diffusion in online social networks using a modified forest-fire model</article-title>
          ,
          <source>Journal of Intelligent Information Systems</source>
          <volume>56</volume>
          (
          <year>2021</year>
          )
          <fpage>355</fpage>
          -
          <lpage>377</lpage>
          . URL: https://link.springer.com/10.1007/s10844-020-00623-8. doi:10.1007/s10844-020-00623-8.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nimmo</surname>
          </string-name>
          ,
          <article-title>Anatomy of an Info-War: How Russia's Propaganda Machine Works, and How to Counter It</article-title>
          ,
          <year>2015</year>
          . URL: https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propaganda-machine-works-and-how-to-counter-it/, section: Context.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>François</surname>
          </string-name>
          ,
          <article-title>Actors, Behaviors, Content: A Disinformation ABC</article-title>
          (????).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Blazek</surname>
          </string-name>
          ,
          <article-title>SCOTCH: a framework for rapidly assessing influence operations</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Carley</surname>
          </string-name>
          ,
          <article-title>Social cybersecurity: an emerging science</article-title>
          ,
          <source>Computational and Mathematical Organization Theory</source>
          <volume>26</volume>
          (
          <year>2020</year>
          )
          <fpage>365</fpage>
          -
          <lpage>381</lpage>
          . URL: https://doi.org/10.1007/s10588-020-09322-9. doi:10.1007/s10588-020-09322-9, number: 4.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schoch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Keller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Coordination patterns reveal online political astroturfing across the world</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>4572</fpage>
          . URL: https://www.nature.com/articles/s41598-022-08404-9. doi:10.1038/s41598-022-08404-9, publisher: Nature Publishing Group.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Vargas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Emami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Traynor</surname>
          </string-name>
          ,
          <article-title>On the Detection of Disinformation Campaign Activity with Network Analysis</article-title>
          , in:
          <source>Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop</source>
          , ACM, Virtual Event USA,
          <year>2020</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>146</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3411495.3421363. doi:10.1145/3411495.3421363.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bellogín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cantador</surname>
          </string-name>
          ,
          <article-title>Analysing the Effect of Recommendation Algorithms on the Spread of Misinformation</article-title>
          , in:
          <source>ACM Web Science Conference</source>
          , ACM, Stuttgart Germany,
          <year>2024</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>169</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3614419.3644003. doi:10.1145/3614419.3644003.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gadek</surname>
          </string-name>
          ,
          <article-title>Détection d'opinions, d'acteurs-clés et de communautés thématiques dans les médias sociaux</article-title>
          , Ph.D. thesis, Normandie Université,
          <year>2018</year>
          . URL: https://theses.hal.science/tel-02064171.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21] PROMPT / Narrative Intelligence for Information Integrity / PROMPT,
          <year>2025</year>
          . URL: https://disinfo-prompt.eu/.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>L.</given-names>
            <surname>Manikonda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Beigi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kambhampati</surname>
          </string-name>
          ,
          <article-title>Twitter for Sparking a Movement, Reddit for Sharing the Moment: #metoo through the Lens of Social Media</article-title>
          ,
          <year>2018</year>
          . URL: https://www.researchgate.net/publication/323931993_Twitter_for_Sparking_a_Movement_Reddit_for_Sharing_the_Moment_metoo_through_the_Lens_of_Social_Media. doi:10.48550/arXiv.1803.08022.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Schler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bonchek-Dokow</surname>
          </string-name>
          ,
          <article-title>Profiling Astroturfers on Facebook: A Complete Framework for Labeling, Feature Extraction, and Classification</article-title>
          ,
          <source>Machine Learning and Knowledge Extraction</source>
          <volume>6</volume>
          (
          <year>2024</year>
          )
          <fpage>2183</fpage>
          -
          <lpage>2200</lpage>
          . URL: https://www.mdpi.com/2504-4990/6/4/108. doi:10.3390/make6040108, publisher: Multidisciplinary Digital Publishing Institute.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>U.</given-names>
            <surname>Oliveri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gadek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Costé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lolive</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Delhay-Lorrain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grilheres</surname>
          </string-name>
          ,
          <article-title>SocialForge: Simulating the Social Internet to Provide Realistic Training Against Influence Operations</article-title>
          ,
          in:
          <source>Proceedings of the Annual Meeting of the Association for Computational Linguistics 2025, Industry Track</source>
          , Association for Computational Linguistics, Vienna, Austria,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>G.</given-names>
            <surname>Eady</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Paskhalis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zilinsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bonneau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nagler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Tucker</surname>
          </string-name>
          ,
          <article-title>Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior</article-title>
          ,
          <source>Nature Communications</source>
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>62</fpage>
          . URL: https://www.nature.com/articles/s41467-022-35576-9. doi:10.1038/s41467-022-35576-9, publisher: Nature Publishing Group.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>