<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cyber Intelligence and Social Media Analytics: Current Research Trends and Challenges</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serena Tardelli</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Avvenuti</string-name>
          <email>marco.avvenuti@unipi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guglielmo Cola</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Cresci</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tiziano Fagni</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Margherita Gambini</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Mannocci</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Mazza</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Caterina Senette</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maurizio Tesconi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Pisa</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Information Engineering, University of Pisa</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute of Informatics and Telematics, National Research Council (IIT-CNR)</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Online Social Networks (OSNs) are a rich source of data for Cyber Security and Cyber Intelligence applications, as they can reveal valuable insights into users' behaviors, preferences, and opinions. Analyzing OSN data poses significant challenges, such as dealing with misinformation campaigns, protecting users' privacy, and extracting relevant information from large and heterogeneous datasets. The Cyber Intelligence (CI) unit of the IIT-CNR has been conducting cutting-edge research on these topics, using state-of-the-art techniques from artificial intelligence, machine learning, natural language processing, and computer vision. In this paper, we present some of the main activities of the CI group and the technologies we have developed and applied to various CI areas. In addition, we present our involvement in projects that leverage artificial intelligence technologies for the development and implementation of Cyber Security techniques and systems based on social media and online social networks.</p>
      </abstract>
      <kwd-group>
        <kwd>cyber intelligence</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>machine learning</kwd>
        <kwd>deep learning</kwd>
        <kwd>social media intelligence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The Internet has become a dominant platform for communication, work, and entertainment in the modern world. However, this also exposes a lot of personal and sensitive information to the public, which can be used for surveillance and prevention purposes in various domains of interest (such as terrorism, crime, etc.). Social networks are especially popular among people who use them to interact and share their views on diverse topics. The massive amount of data generated by these interactions, often referred to as “big data”, demands sophisticated processing and analysis techniques to produce intelligence information that can support effective decision-making for specific areas of interest. To create advanced descriptive and predictive models, state-of-the-art machine learning and artificial intelligence (deep learning) techniques are often employed, resulting in highly accurate software solutions that can assist system users.</p>
      <p>The Cyber Intelligence (CI) unit aims to develop innovative solutions for the analysis of social media data, in order to extract useful information for various purposes, such as security, prevention, and moderation. The CI unit applies advanced techniques of machine learning and artificial intelligence, especially deep learning, to process and analyze the big data generated by users on social networks. The unit focuses on several research topics related to social media analysis, including: botnet and fake news detection; content analysis for hate speech detection and extremist account identification; moderation intervention, evaluation, and planning; coordinated behavior and conspiracy theory diffusion analysis; and the development of metrics to monitor the “health” of social ecosystems.</p>
      <sec id="sec-1-1">
        <title>1.1. Objectives</title>
        <p>The research activities of the CI unit are mainly focused on Cyber Intelligence and Social Media Analytics. These interdisciplinary fields combine methods and techniques from computer science, data science, social sciences, and security studies. Our group is interested in exploring various aspects of these fields, such as:</p>
        <list list-type="bullet">
          <list-item><p>Data collection from diverse sources, such as the Web and social media platforms.</p></list-item>
          <list-item><p>Analysis of large datasets (“big data”) to extract useful insights and investigate the dynamics and patterns of online behaviors, such as the interactions and influence among different actors (e.g., individuals, groups, organizations) in various domains (e.g., politics, health, security).</p></list-item>
          <list-item><p>Development of novel algorithms, tools, and models that can describe or predict patterns in the data, using state-of-the-art techniques from machine learning and deep learning.</p></list-item>
          <list-item><p>Understanding the opportunities and challenges of using online data for intelligence purposes, such as situational awareness, threat detection and prevention, decision support, and strategic communication.</p></list-item>
          <list-item><p>Creation of advanced and complex data visualization interfaces.</p></list-item>
        </list>
      </sec>
    </sec>
    <sec id="sec-1b">
      <title>2. Research Activities</title>
      <p>The Cyber Intelligence (CI) research group of the IIT-CNR has been working on Cyber Security for years, focusing on research topics related to Social Media Analytics and Cyber Intelligence. The group has developed and refined skills in data collection from the most popular Social Media platforms, using both crawling techniques through native Web services and scraping techniques when official services are not available to access the data of interest.</p>
      <p>The collected data is stored and analyzed with big data technologies and exploited by applying advanced artificial intelligence techniques, such as Machine Learning and Deep Learning, to create predictive or descriptive models that can support or automate specific tasks. We provide a list of some of the most significant research activities that the CI unit has conducted or is conducting as follows.</p>
      <sec id="sec-1b-1">
        <title>Social sensing for emergency management systems.</title>
        <p>This line of research focuses on leveraging human sensing and AI for emergency management, especially in the aftermath of disasters. The aim is to develop systems that collect social crisis data from sources such as Twitter [1], and use AI tools to enrich them with information about the damage, location, and needs of the affected people. Additional adoption of geoparsing models can help link textual mentions of places to their geographical coordinates [2, 3]. The goal is to design decision support systems that can monitor and manage catastrophic events (natural or man-made) and help authorities in the early stages of the event [4, 5, 6, 7, 8].</p>
      </sec>
      <sec id="sec-1b-2">
        <title>Detection of malicious automated accounts.</title>
        <p>Information or influence operations (IOs) have been frequently carried out on social media to mislead and manipulate users. IOs can take various shapes, target individuals or online groups, and have a variety of goals. In this line of research, we investigate, analyze, and characterize online misbehavior in its many forms, including fake accounts [9, 10], colluding users (e.g., paid trolls) [11], and automation (e.g., social bots) [12, 13, 14, 15]. Using Machine Learning, Deep Learning, and Social Network Analysis techniques, we develop and implement cutting-edge tools and models able to detect these strategies and mitigate the influence of malicious actors who disseminate and amplify harmful information.</p>
      </sec>
      <sec id="sec-1b-3">
        <title>Analysis and detection of information disorder.</title>
        <p>Information disorder is a term that encompasses various forms of misleading, inaccurate, or false information that are intentionally or unintentionally spread online. It can have serious consequences for individuals, communities, and societies, such as undermining trust in democratic institutions, fueling polarization and hate speech, and endangering public health and safety. In this research area, we study information disorder in its many forms, such as misinformation, disinformation [21, 22, 23], fake news [24], malinformation, infodemic [25, 26, 27, 28], or propaganda [29, 30], depending on the source, intent, and impact of the information.</p>
      </sec>
      <sec id="sec-1b-4">
        <title>Analysis and detection of online financial and cryptocurrency discussions.</title>
        <p>Our research investigates the online ecosystem related to cryptocurrencies and financial markets, with a focus on detecting and analyzing manipulation and fraud attempts. We leverage a range of methods and data sources, such as social media, price data, and blockchain transactions, to study, explore, and detect different phenomena, such as: (i) online cryptocurrency manipulation (e.g., pump-and-dump, thefts, etc.) by malicious actors who seek to profit from the volatility and anonymity of the market [31, 10]; and (ii) financial spam to influence the market or scam unsuspecting users, and other fraudulent practices that exploit the popularity of certain companies or topics to promote less important or dubious ones [32, 33, 34, 14]. Our research aims to contribute to the understanding of the challenges, dynamics, and impacts of these phenomena, as well as to develop techniques, tools, and solutions for their detection and prevention.</p>
      </sec>
      <sec id="sec-1b-5">
        <title>Deep fake detection.</title>
        <p>Deep fake is a term that refers to the use of artificial intelligence to create realistic but fake images, videos, text, or audio of people or events. Deep fake technology can be used for legitimate purposes, such as entertainment and education. However, it also poses serious challenges for society, such as undermining trust in information sources, violating privacy and consent, and facilitating misinformation and manipulation. Therefore, it is important to develop models to mitigate and prevent potential abuse of deep fake content. As the CI unit, we study and implement novel strategies to detect deepfake multimedia content, such as images, videos, and texts. Since text generative models are increasing both in number and in their accuracy in resembling human-written text, we investigate the optimal approach (in terms of data availability and training time) to detect texts written by all typologies of generative techniques, either old (e.g., RNNs, Markov chains) or new (e.g., GPT-2, GPT-3, GPT-4, and ChatGPT), with a focus on deepfake texts written for social media [35, 36, 37].</p>
      </sec>
      <sec id="sec-1b-6">
        <title>Online extremist content detection.</title>
        <p>Any online content that promotes or incites violence, hatred, discrimination, or radicalization based on ideological, religious, political, or ethnic grounds can have a negative impact on individuals and societies, as it can foster intolerance, polarization, and radicalization. Removing and preventing online extremist content while respecting human rights and freedom of expression is a complex and multifaceted challenge that needs a collaborative and holistic approach. As the CI unit, we focus on various aspects in this area. For instance, we are interested in studying metrics for the identification of radicalization pathways, extremist users [29], and texts containing violent and hateful language (such as racial, political, etc.) [38, 39]. In addition, we focus on political polarization, examining how users’ political orientation (political leaning) and opinion (stance detection) vary according to the most salient topics in the country’s political agenda [40, 41].</p>
        <p>Conspiracy theories can also be part of online extremist content. Online conspiracy theories are claims that challenge the official or mainstream narratives of events or phenomena. They often involve elaborate plots, hidden agendas, secret societies, or powerful elites. They can have serious consequences for individuals and society (e.g., spreading misinformation, eroding trust, inciting violence, undermining democracy). As such, our research also examines how conspiracy theories spread on social media platforms, focusing on how to detect them and the users who propagate them.</p>
      </sec>
      <sec id="sec-1b-7">
        <title>Content moderation.</title>
        <p>Content moderation is the process of monitoring and regulating online content created and shared by users on social media platforms. Content moderation can help prevent the spread of harmful or illegal content, such as hate speech, violence, misinformation, spam, etc. However, content moderation also poses some challenges and risks, such as infringing on users’ freedom of expression and privacy, as well as exposing moderators to psychological harm. Therefore, there is a need for a set of strategies and practices that aim to reduce the negative impacts of content moderation on both users and moderators. We survey and experiment with multiple strategies (i.e., interventions) to evaluate the effects and effectiveness of moderation interventions on social media platforms (e.g., Reddit, Twitter), both at the platform level and at the individual user level. We analyse user reactions to moderation interventions, focusing on the characteristics that might influence those reactions (e.g., a user’s personality or political leaning), thus providing new knowledge and tools for mitigating widespread issues in online platforms [42, 43, 44].</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Research Projects</title>
      <p>The CI unit has been working on several research projects in the past years, covering different aspects of computational social science, web science, social media analysis, and cyber intelligence. In this section, we briefly describe some of the main ongoing research projects and their objectives and outcomes.</p>
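<p>As a purely illustrative sketch of the geoparsing step mentioned above (the gazetteer, place names, and coordinates below are invented toy values, not the GSP models of [2, 3]), textual place mentions can be matched against a gazetteer and mapped to coordinates:</p>

```python
# Illustrative gazetteer-based geoparsing (toy data and matching only;
# real geoparsers disambiguate multi-word and ambiguous place names).
GAZETTEER = {
    "pisa": (43.7160, 10.4036),
    "florence": (43.7696, 11.2558),
}

def geoparse(text):
    """Return (mention, lat, lon) triples for known place names in text."""
    hits = []
    for token in text.lower().replace(",", " ").split():
        if token in GAZETTEER:
            lat, lon = GAZETTEER[token]
            hits.append((token, lat, lon))
    return hits

print(geoparse("Severe flooding reported near Pisa station"))
# [('pisa', 43.716, 10.4036)]
```

<p>In a decision support pipeline, such coordinates would then be aggregated on a map of the affected area.</p>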
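<p>As a toy illustration of feature-based account analysis (the features and thresholds below are invented for the example and are far simpler than the signals used by published detectors), a naive bot-likeness score might combine posting rate with the friend/follower ratio:</p>

```python
# Toy bot-likeness heuristic (thresholds invented for illustration;
# real detectors use far richer behavioral and network features).
def bot_score(tweets_per_day, followers, friends):
    score = 0.0
    if tweets_per_day > 100:              # implausibly high posting rate
        score += 0.5
    if friends / max(followers, 1) > 10:  # follows many, followed by few
        score += 0.5
    return score

print(bot_score(tweets_per_day=250, followers=3, friends=900))  # 1.0
```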
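<p>As a minimal illustration of the kind of shallow stylometric statistics that can serve as baseline features for machine-generated text detection (these two measures are illustrative only, not the models of [35, 36]):</p>

```python
# Two shallow stylometric statistics sometimes used as baseline features
# when flagging machine-generated text (illustrative only).
def type_token_ratio(text):
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def avg_token_length(text):
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(len(t) for t in tokens) / len(tokens)

# Heavily repetitive text yields a low type-token ratio.
print(type_token_ratio("the the the the"))  # 0.25
```

<p>Production detectors replace such hand-crafted features with learned representations, but simple baselines like these remain useful for sanity checks.</p>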
      <sec id="sec-2-2">
        <title>SERICS (DETERRENCE).</title>
        <p>The DETERRENCE project is part of the SERICS Foundation - Security and Rights in CyberSpace (www.serics.eu/). SERICS is funded under the National Recovery and Resilience Plan, supported by the European Union - NextGenerationEU. The Foundation includes 10 Spokes.</p>
        <p>Our activity is expressed within Spoke 2, Disinformation and Fake News, through the project called DETERRENCE (DEcision supporT SystEm foR cybeR intelligENCE), coordinated by the CI unit.</p>
        <p>The main objective of the DETERRENCE project is to study the Information Disorder phenomenon on social media, with the aim of designing a proof of concept of decision support tools (DSS) to monitor and mitigate its impact, both for individuals and for society in general. Principal project activities will be devoted to: (i) detecting and investigating, through network science methods, coordinated online behaviors, especially in large-scale campaigns, also identifying automated behavior; (ii) developing techniques for detecting the next generation of fake accounts and content, including malicious accounts as well as deepfake texts and multimedia content; and (iii) investigating the dynamics of communities and social networks that can be potentially exposed to cognitive bias, which in turn could cause or amplify noxious phenomena such as gender discrimination, racism, and cyberbullying.</p>
      </sec>
      <sec id="sec-2-3">
        <title>INTERROGATE.</title>
        <p>The INTERROGATE project (artIficial INtelligence Text Enrichment foR impROving biG dATa procEssing) is funded as part of the High Training projects promoted by the Italian Fondo per lo sviluppo e la coesione and the Tuscany Region. The project is based on the premise that, in today’s world, every company has access to an enormous volume of data (Big Data). The traditional analysis of this data is based on the relational model, storing data in tables (structured data). However, today only about 20% of the data available to companies is in the form of structured data, while the remaining 80% is unstructured and usually available as free text. One way to benefit from these huge amounts of textual data is to use text mining techniques to extract value from the data. The aim of the INTERROGATE project, coordinated by the CI unit, is to define a Big Data architecture, based on open-source solutions, that allows complex Text Mining models to be applied to large amounts of data in a scalable way. These models, based on the most advanced AI techniques (deep learning), will enrich textual resources with new structured information, in order to enable novel and powerful search, aggregation, and analysis functionalities.</p>
      </sec>
      <sec id="sec-2-4">
        <title>SoBigData++ Research Infrastructure.</title>
        <p>SoBigData++ is a European project funded under the Horizon 2020 Framework Programme, which is the largest research and innovation program in the history of the European Union. The project’s goal is to create a “Social Mining &amp; Big Data” ecosystem: a research infrastructure (RI) that enables ethical and scientific exploration and application of social data mining to study multiple aspects of social life. SoBigData builds on several established national infrastructures and opens new avenues for research in multiple fields, such as mathematics, AI, and the human, social, and economic sciences. It allows for easy comparison, reuse, and integration of big data, methods, and services, creating an interdisciplinary research community. In this project, CI is mainly involved as leader of the Social Media Observatory, which aims to develop a set of tools to facilitate listening campaigns on social media as well as the interpretation of retrieved data. Developed tools include libraries to ease real-time data collection and the analysis of information diffusion on Twitter [45]. This project has led to the creation of two ongoing spin-off projects: SoBigData PPP and SoBigData.it.</p>
      </sec>
      <sec id="sec-2-5">
        <title>SoBigData PPP.</title>
        <p>The SoBigData RI Preparatory Phase Project is a SoBigData++ spin-off with the goal of moving the RI forward from the simple awareness of ethical and legal challenges in social mining to the development of concrete tools that operationalize ethics with value-sensitive design, incorporating values and norms for privacy protection, fairness, transparency, and pluralism.</p>
      </sec>
      <sec id="sec-2-6">
        <title>SoBigData.it.</title>
        <p>The SoBigData.it project aims at strengthening the technological, scientific, and ethical aspects of the Italian RI for Social Mining and Big Data Analytics. The goal is to enhance interdisciplinary and innovative research on the multiple aspects of social complexity by combining data- and model-driven approaches. The CI unit’s main contribution is to investigate specific societal topics through data science, with a particular emphasis on analyzing Societal Debates and Misinformation across diverse domains, such as politics, health, and finance.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusions</title>
      <p>Cyber and social media intelligence is a vital but difficult domain in the information disorder era. It needs a comprehensive and diverse approach that considers the technical, social, and ethical dimensions of social media data. It also needs constant change and innovation to match the evolving social media environment. This research field has a wide scope and relies mainly on public data that are enhanced with indicators of various aspects, such as coordination, polarization, and propaganda. As such, the main ethical risk in this research is the potential deanonymization of the datasets, which could expose users’ sensitive information. To mitigate this risk, data protection and privacy-preserving techniques must be adopted to ensure that data is used and shared responsibly and ethically.</p>
      <p>The CI unit’s mission is to develop and deliver cutting-edge cyber intelligence tools, solutions, and systems for decision-making and research. By applying network science, artificial intelligence, machine learning, and deep learning, the unit aims to identify and mitigate the next generation of online ecosystem disruption and harmful phenomena.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>We acknowledge the support of project SERICS (PE00000014) under the NRRP MUR program funded by the EU – NGEU; of the European Union – Horizon 2020 Program under the scheme “INFRAIA-01-2018-2019 – Integrating Activities for Advanced Communities”, Grant Agreement n. 871042, “SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics”; and of project DESIRE 2.0 (DissEmination of ScIentific REsults) funded by IIT-CNR.</p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>[1] S. Cresci, S. Minutoli, L. Nizzoli, S. Tardelli, M. Tesconi, Enriching Digital Libraries with Crowdsensed Data: Twitter Monitor and the SoBigData Ecosystem, in: Digital Libraries: Supporting Open Science: 15th Italian Research Conference on Digital Libraries, IRCDL 2019, Pisa, Italy, January 31–February 1, 2019, Proceedings 15, Springer, 2019, pp. 144–158.</p>
      <p>[2] M. Avvenuti, S. Cresci, L. Nizzoli, M. Tesconi, GSP (Geo-Semantic-Parsing): Geoparsing and Geotagging with Machine Learning on Top of Linked Data, in: The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings, Springer, 2018, pp. 17–32.</p>
      <p>[3] L. Nizzoli, M. Avvenuti, M. Tesconi, S. Cresci, Geo-semantic-parsing: AI-powered geoparsing by traversing semantic knowledge graphs, Decision Support Systems 136 (2020) 113346.</p>
      <p>[4] M. Avvenuti, S. Cresci, A. Marchetti, C. Meletti, M. Tesconi, EARS (Earthquake Alert and Report System): A real time decision support system for earthquake crisis management, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2014, pp. 1749–1758.</p>
      <p>[5] M. Avvenuti, S. Cresci, M. N. La Polla, C. Meletti, M. Tesconi, Nowcasting of Earthquake Consequences Using Big Social Data, IEEE Internet Computing (2017) 37–45.</p>
      <p>[6] M. Avvenuti, S. Cresci, F. D. Vigna, M. Tesconi, On the need of opening up crowdsourced emergency management systems, AI &amp; Society 33 (2018) 55–60.</p>
      <p>[7] M. Avvenuti, S. Cresci, F. D. Vigna, T. Fagni, M. Tesconi, CrisMap: a Big Data Crisis Mapping System Based on Damage Detection and Geoparsing, Information Systems Frontiers 20 (2018) 993–1011.</p>
      <p>[8] M. Avvenuti, S. Bellomo, S. Cresci, L. Nizzoli, M. Tesconi, Towards better social crisis data with HERMES: Hybrid sensing for emergency management system, Pervasive and Mobile Computing 67 (2020) 101225.</p>
      <p>[9] S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, M. Tesconi, Fame for Sale: efficient detection of fake Twitter followers, Decision Support Systems 80 (2015) 56–71.</p>
      <p>[10] M. Mazza, G. Cola, M. Tesconi, Ready-to-(ab)use: From fake account trafficking to coordinated inauthentic behavior on Twitter, Online Social Networks and Media 31 (2022) 100224.</p>
      <p>[11] M. Mazza, M. Avvenuti, S. Cresci, M. Tesconi, Investigating the difference between trolls, social bots, and humans on Twitter, Computer Communications 196 (2022) 23–36.</p>
      <p>[12] S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, M. Tesconi, DNA-inspired online behavioral modeling and its application to spambot detection, IEEE Intelligent Systems 31 (2016) 58–64.</p>
      <p>[13] S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, M. Tesconi, Social Fingerprinting: detection of spambot groups through DNA-inspired behavioral modeling, IEEE Transactions on Dependable and Secure Computing 15 (2018) 561–576.</p>
      <p>[14] S. Tardelli, M. Avvenuti, M. Tesconi, S. Cresci, Characterizing Social Bots Spreading Financial Disinformation, in: Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis: 12th International Conference, SCSM 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part I 22, Springer, 2020, pp. 376–392.</p>
      <p>[15] L. Mannocci, S. Cresci, A. Monreale, A. Vakali, M. Tesconi, MulBot: Unsupervised Bot Detection Based on Multivariate Time Series, in: 2022 IEEE International Conference on Big Data (Big Data), IEEE Computer Society, 2022, pp. 1485–1494.</p>
      <p>[16] M. Mazza, S. Cresci, M. Avvenuti, W. Quattrociocchi, M. Tesconi, RTbust: Exploiting temporal patterns for botnet detection on Twitter, in: Proceedings of the 10th ACM Conference on Web Science, 2019, pp. 183–192.</p>
      <p>[17] M. Cinelli, S. Cresci, W. Quattrociocchi, M. Tesconi, P. Zola, Coordinated inauthentic behavior and information spreading on Twitter, Decision Support Systems 160 (2022) 113819.</p>
      <p>[18] L. Nizzoli, S. Tardelli, M. Avvenuti, S. Cresci, M. Tesconi, Coordinated Behavior on Social Media in 2019 UK General Election, in: Proceedings of the International AAAI Conference on Web and Social Media, volume 15, 2021, pp. 443–454.</p>
      <p>[19] M. Mazza, G. Cola, M. Tesconi, Modularity-based approach for tracking communities in dynamic social networks, arXiv preprint arXiv:2302.12759 (2023).</p>
      <p>[20] S. Tardelli, L. Nizzoli, M. Tesconi, M. Conti, P. Nakov, G. D. S. Martino, S. Cresci, Temporal Dynamics of Coordinated Online Behavior: Stability, Archetypes, and Influence, arXiv preprint arXiv:2301.06774 (2023).</p>
      <p>[21] F. Alam, S. Cresci, T. Chakraborty, F. Silvestri, D. Dimitrov, G. D. S. Martino, S. Shaar, H. Firooz, P. Nakov, A survey on multimodal disinformation detection, arXiv preprint arXiv:2103.12541 (2021).</p>
      <p>[22] R. Di Pietro, S. Raponi, M. Caprolu, S. Cresci, Information disorder, New Dimensions of Information Warfare (2021) 7–64.</p>
      <p>[23] R. Di Pietro, S. Raponi, M. Caprolu, S. Cresci, New dimensions of information warfare, Springer, 2021.</p>
      <p>[24] M. Cinelli, S. Cresci, A. Galeazzi, W. Quattrociocchi, M. Tesconi, The limited reach of fake news on Twitter during 2019 European elections, PLoS ONE 15 (2020) e0234689.</p>
      <p>[25] A. Calamusa, S. Tardelli, M. Avvenuti, S. Cresci, I. Federigi, M. Tesconi, M. Verani, A. Carducci, Twitter Monitoring Evidence of COVID-19 Infodemic in Italy, European Journal of Public Health 30 (2020) ckaa165.066.</p>
      <p>[26] E. Ferrara, S. Cresci, L. Luceri, Misinformation, manipulation, and abuse on social media in the era of COVID-19, Journal of Computational Social Science 3 (2020) 271–277.</p>
      <p>[27] P. Zola, G. Cola, A. Martella, M. Tesconi, Italian top actors during the COVID-19 infodemic on Twitter, International Journal of Web Based Communities 18 (2022) 150–172.</p>
      <p>[28] A. F. Al-Qahtani, S. Cresci, The COVID-19 scamdemic: A survey of phishing attacks and their countermeasures during COVID-19, IET Information Security 16 (2022) 324–345.</p>
      <p>[29] L. Nizzoli, M. Avvenuti, S. Cresci, M. Tesconi, Extremist propaganda tweet classification with deep learning in realistic scenarios, in: Proceedings of the 10th ACM Conference on Web Science, 2019, pp. 203–204.</p>
      <p>[30] K. Hristakieva, S. Cresci, G. Da San Martino, M. Conti, P. Nakov, The spread of propaganda by coordinated communities on social media, in: 14th ACM Web Science Conference 2022, 2022, pp. 191–201.</p>
      <p>[31] L. Nizzoli, S. Tardelli, M. Avvenuti, S. Cresci, M. Tesconi, E. Ferrara, Charting the Landscape of Online Cryptocurrency Manipulation, IEEE Access 8 (2020) 113230–113245.</p>
      <p>[32] S. Cresci, F. Lillo, D. Regoli, S. Tardelli, M. Tesconi, $FAKE: Evidence of Spam and Bot Activity in Stock Microblogs on Twitter, in: Proceedings of the International AAAI Conference on Web and Social Media, volume 12, 2018.</p>
      <p>[33] S. Cresci, F. Lillo, D. Regoli, S. Tardelli, M. Tesconi, Cashtag Piggybacking: uncovering spam and bot activity in stock microblogs on Twitter, ACM Transactions on the Web (TWEB) (2019).</p>
      <p>[34] S. Tardelli, M. Avvenuti, M. Tesconi, S. Cresci, Detecting inorganic financial campaigns on Twitter, Information Systems 103 (2022) 101769.</p>
      <p>[35] T. Fagni, F. Falchi, M. Gambini, A. Martella, M. Tesconi, TweepFake: About detecting deepfake tweets, PLoS ONE 16 (2021) e0251415.</p>
      <p>[36] M. Gambini, T. Fagni, F. Falchi, M. Tesconi, On pushing deepfake tweet detection capabilities to the limits, in: 14th ACM Web Science Conference 2022, 2022, pp. 154–163.</p>
      <p>[37] M. Gambini, T. Fagni, C. Senette, M. Tesconi, Tweets2Stance: Users stance detection exploiting zero-shot learning algorithms on tweets, arXiv preprint arXiv:2204.10710 (2022).</p>
      <p>[38] F. Del Vigna, A. Cimino, F. Dell’Orletta, M. Petrocchi, M. Tesconi, Hate me, hate me not: Hate speech detection on Facebook, 2017.</p>
      <p>[39] T. Fagni, L. Nizzoli, M. Petrocchi, M. Tesconi, Six things I hate about you (in Italian) and six classification strategies to more and more effectively find them, in: Proceedings of the Third Italian Conference on Cyber Security, 2019.</p>
      <p>[40] T. Fagni, M. Tesconi, Profiling Twitter users using autogenerated features invariant to data distribution, 2019.</p>
      <p>[41] T. Fagni, S. Cresci, Fine-grained prediction of political leaning on social media with unsupervised deep learning, Journal of Artificial Intelligence Research 73 (2022) 633–672.</p>
      <p>[42] A. Trujillo, S. Cresci, One of many: Assessing user-level effects of moderation interventions on r/The_Donald, arXiv preprint arXiv:2209.08809 (2022).</p>
      <p>[43] S. Cresci, A. Trujillo, T. Fagni, Personalized interventions for online moderation, in: Proceedings of the 33rd ACM Conference on Hypertext and Social Media, 2022, pp. 248–251.</p>
      <p>[44] A. Trujillo, S. Cresci, Make Reddit great again: assessing community effects of moderation interventions on r/The_Donald, Proceedings of the ACM on Human-Computer Interaction 6 (2022) 1–28.</p>
      <p>[45] P. Zola, G. Cola, M. Mazza, M. Tesconi, Interaction strength analysis to model retweet cascade graphs, Applied Sciences 10 (2020) 8394.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>