=Paper=
{{Paper
|id=Vol-3762/513
|storemode=property
|title=Dawn of LLM4Cyber: Current Solutions, Challenges, and New Perspectives in Harnessing LLMs for Cybersecurity
|pdfUrl=https://ceur-ws.org/Vol-3762/513.pdf
|volume=Vol-3762
|authors=Luca Caviglione,Carmela Comito,Erica Coppolillo,Daniela Gallo,Massimo Guarascio,Angelica Liguori,Giuseppe Manco,Marco Minici,Simone Mungari,Francesco Sergio Pisani,Ettore Ritacco,Antonino Rullo,Paolo Zicari,Marco Zuppelli
|dblpUrl=https://dblp.org/rec/conf/ital-ia/CaviglioneCCG0L24
}}
==Dawn of LLM4Cyber: Current Solutions, Challenges, and New Perspectives in Harnessing LLMs for Cybersecurity==
Luca Caviglione1, Carmela Comito2, Erica Coppolillo2,5, Daniela Gallo2,3, Massimo Guarascio2, Angelica Liguori2,*, Giuseppe Manco2, Marco Minici2,6, Simone Mungari2,5,7, Francesco Sergio Pisani2, Ettore Ritacco4, Antonino Rullo2, Paolo Zicari2 and Marco Zuppelli1
1 Institute for Applied Mathematics and Information Technologies, Via de Marini 6, Genova, 16149, Italy
2 Institute for High Performance Computing and Networking, via P. Bucci 8-9/C, Rende, 87036, Italy
3 University of Salento, Piazza Tancredi, 7, Lecce, 73100, Italy
4 University of Udine, Via Palladio, 8, Udine, 33100, Italy
5 University of Calabria, via P. Bucci, Rende, 87036, Italy
6 University of Pisa, via Lungarno Pacinotti, Pisa, 56126, Italy
7 Revelis s.r.l., Viale della Resistenza, Rende, 87036, Italy
Abstract
Large Language Models (LLMs) are now a relevant part of the daily experience of many individuals. For instance, they can be
used to generate text or to support working duties, such as programming tasks. However, LLMs can also lead to a multifaceted
array of security issues. This paper discusses the research activity on LLMs carried out by the ICAR-IMATI group. Specifically,
within the framework of three funded projects, it addresses our ideas on how to understand whether data has been generated
by a human or a machine, track the use of information ingested by models, combat misinformation and disinformation, and
boost cybersecurity via LLM-capable tools.
Keywords
Large Language Models, Watermarking, Cybersecurity, Fake news, Event log analysis
Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
* Corresponding author.
luca.caviglione@ge.imati.cnr.it (L. Caviglione); carmela.comito@icar.cnr.it (C. Comito); erica.coppolillo@icar.cnr.it (E. Coppolillo); daniela.gallo@icar.cnr.it (D. Gallo); massimo.guarascio@icar.cnr.it (M. Guarascio); angelica.liguori@icar.cnr.it (A. Liguori); giuseppe.manco@icar.cnr.it (G. Manco); marco.minici@icar.cnr.it (M. Minici); simone.mungari@icar.cnr.it (S. Mungari); francescosergio.pisani@icar.cnr.it (F. S. Pisani); ettore.ritacco@uniud.it (E. Ritacco); antonino.rullo@icar.cnr.it (A. Rullo); paolo.zicari@icar.cnr.it (P. Zicari); marco.zuppelli@ge.imati.cnr.it (M. Zuppelli)

1. Introduction

Large Language Models (LLMs) make it possible to generate a wide array of contents. For instance, they can be used to create textual documents, pieces of music, as well as source code. A feature very relevant to their success is their ability to mimic human behavior. Unfortunately, this makes LLMs a double-edged sword, since they can be exploited to generate realistic yet malicious content, such as fake news or text supporting misinformation campaigns. At the same time, LLMs have also proven to be effective in supporting various cyber-security duties, for instance, to analyze logs or network traffic [1].

In an attempt to fully understand the potential of LLMs in terms of offensive capabilities, as well as the opportunities that should be seized to advance the security of the Internet, researchers of the Institute for High Performance Computing and Networking - ICAR and of the Institute for Applied Mathematics and Information Technologies - IMATI of the National Research Council of Italy - CNR have intensified their efforts to investigate the pros and cons of LLMs. This research effort is established within the framework of three research projects. The first is funded by the Consortium named "SEcurity and RIghts In the CyberSpace - SERICS", and aims at using LLMs to increase the security posture of networking and computing systems. For instance, an LLM can be used to synthesize behaviors starting from logs of containerized microservices or to generate automatic textual replies to deceive e-mail scammers [2]. The second research action is funded by the project "Watermarking Hazards and novel perspectives in Adversarial Machine learning - WHAM!", and is devoted to quantifying the limits and opportunities of watermarking schemes when applied to AI artifacts. As an example, data can be hidden
to recognize deep fakes, to understand whether a model has been cloned, or to track usages in Machine-Learning-as-a-Service deployments [3]. Even worse, the problem of exploiting unauthorized content during training or in deployment needs to be specifically addressed. The third research action is funded by the project "Limiting MIsinformation spRead in online environments through multi-modal and cross-domain FAKe news detection - MIRFAK", which aims at developing an innovative content verification tool, delivering solutions for news verification on social media and online platforms. Within the project, we aim at exploring the potentials and risks of LLMs associated with misinformation.

In this work, we outline our research agenda on these topics, which is organized along three directions: i) we present mid-term challenges for using LLMs to solve security-related issues; ii) we discuss how watermarks can be applied to LLMs to mitigate attacks aiming at stealing information or disseminating fake news; iii) we showcase the gaps to be filled to make LLMs a real asset for the Internet.

The rest of the paper is structured as follows. Section 2 deals with the problems of understanding whether the output has been generated by an LLM and of tracking its provenance, while Section 3 considers usage violations, such as unauthorized harvesting of data for training models. Section 4 discusses challenges and opportunities relative to the adoption of LLMs in the context of online social platforms and debates. Section 5 discusses the adoption of LLMs in assessing cybersecurity risks related to systems and infrastructures in containerized environments. Lastly, Section 6 concludes the work and portrays some prospected action points.

2. Are the Data Generated?

One of the main goals of our research is to investigate challenges and solutions for protecting the Intellectual Property (IP) of Machine/Deep Learning (ML/DL) models as well as of the datasets used for the training phase [4]. Moreover, we also aim at considering techniques to mark the output produced by ML/DL services, for instance, to understand whether an attacker "cloned" the model through multiple remote invocations. Specifically, we are interested in techniques that allow the cloaking of secret information within the contents we want to protect. In this respect, an emerging research line considers watermarking techniques, i.e., arbitrary pieces of data that are embedded within the item to deliver and that are difficult to recognize without proprietary decryption schemes. Such mechanisms are common with images and multimedia objects [5] and can be used to embed control data within ML/DL models.

Techniques used to prevent unwanted/unfair usages or to enforce IP can also be envisioned for generative models, with a particular focus on large language models. There are essentially two scenarios that are relevant in this respect. The first scenario concerns the opportunity to mark generated text in a way that it can be easily recognized. Watermarking can be employed in this context to embed the watermark within the output of the LLM and, thus, distinguish between the data generated by a human and those produced by a machine. The objective here is to enforce IP protection as well as to claim ownership of the generated data. The second scenario concerns the problem that such generative models can deliver malicious content. To mitigate the potential harm caused by such generated data, it is crucial to develop methods to identify content generated by a machine when a watermark is not embedded. It is worth noting that the generation of malicious content can be either unintentional or intentional. Unintentional generation may happen due to the stochastic nature of such generative models, which causes the phenomenon of hallucinations (i.e., unrealistic or imaginary content). By contrast, intentional generation is typically done by a malicious threat actor, who pushes the generative model to obtain mischievous data. In both cases, the generated data could be of high quality, infusing trust among readers and eventually leading them to fall into error or to forward the content, e.g., through the sharing functionalities of online social networks. Our research in this context aims at developing methods to identify contents generated by a machine through a language model. We are interested both in devising watermarking schemes and in the more general challenge of devising predictive methods for discriminating generated data. Besides, this research activity is aligned with the current requirements enforced by the recently released European AI Act (https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai), which introduces specific transparency obligations to ensure that humans are informed when necessary, to ensure trust, and, in particular, that AI-generated content is identifiable.

The research approaches to this topic are quite recent. To the best of our knowledge, the first LLM watermarking technique for distinguishing human-generated from machine-generated texts was proposed by Kirchenbauer et al. [6]. In text generation, language-based models produce a probability distribution over a vocabulary, i.e., the set of words or word fragments (tokens), used for predicting the most likely next word based on the previous ones. The authors propose to alter such a distribution in order to promote the sampling of specific tokens. The occurrence, within a given statistical significance, of such tokens characterizes the watermark within the text. One of the main limitations of this approach is the
generation of low-quality texts in contexts characterized by relatively deterministic content, such as code snippets or structured text. Lee et al. [7] refine the approach by ensuring that sampling is only focused on high-entropy tokens.
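To make the mechanism concrete, the following minimal sketch illustrates the green-list scheme of Kirchenbauer et al. [6] using toy logits in place of a real LLM; the vocabulary size and the GAMMA/DELTA parameters are illustrative assumptions, not values prescribed by the original work.

```python
# Minimal sketch of the "green list" watermark of Kirchenbauer et al. [6].
# Toy logits stand in for a real LLM; VOCAB_SIZE, GAMMA, and DELTA are
# illustrative assumptions.
import numpy as np

VOCAB_SIZE = 50_000
GAMMA = 0.5   # fraction of the vocabulary placed on the green list
DELTA = 2.0   # bias added to the logits of green tokens

def green_list(prev_token: int) -> np.ndarray:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    perm = np.random.default_rng(prev_token).permutation(VOCAB_SIZE)
    return perm[: int(GAMMA * VOCAB_SIZE)]

def watermarked_sample(logits: np.ndarray, prev_token: int, rng) -> int:
    """Promote green tokens by shifting their logits before sampling."""
    biased = logits.copy()
    biased[green_list(prev_token)] += DELTA
    probs = np.exp(biased - biased.max())
    return int(rng.choice(VOCAB_SIZE, p=probs / probs.sum()))

def detection_z_score(tokens: list[int]) -> float:
    """z-score of the green-token count: unwatermarked text hits the green
    list with probability GAMMA, so a large z-score flags a watermark."""
    hits = sum(tok in set(green_list(prev))
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA))

rng = np.random.default_rng(0)
text = [0]
for _ in range(200):
    logits = rng.normal(size=VOCAB_SIZE)  # stand-in for model logits
    text.append(watermarked_sample(logits, text[-1], rng))
print(f"z-score: {detection_z_score(text):.1f}")  # well above the usual ~4 threshold
```

Detection only requires knowledge of the seeding scheme, not of the model itself, which is what makes the occurrence of green tokens within a given statistical significance a verifiable watermark.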
One of our research objectives is to generalize these approaches to other generative models, such as Diffusion Models or Generative Adversarial Networks (GANs). In addition, the analysis of the distribution of generated data, and its comparison with that of real (non-synthetic) data, can also be exploited for devising predictive models aimed at automatically detecting the reliability and authenticity of data.

3. Have You Stolen My Data?

Membership Inference Attacks (MIAs) [8] aim to predict whether a data sample was included in the training dataset of a machine learning model. These attacks serve to evaluate the privacy vulnerabilities present in machine learning models, such as Neural Networks [9], GANs [10], and Diffusion Models [11]. Formally, the goal of a MIA is to infer whether a given data point x was part of the training dataset D of model M by computing a membership score s(x; M). This score is then thresholded to determine a target sample's membership.

Membership inference attacks exploit the tendency of models to overfit their training data and hence exhibit lower loss values for these elements. A first and widely used attack is the LOSS attack [12], in which samples are classified as training members if their loss values are lower than a fixed threshold (that is, s(x; M) is defined in terms of L(x; M)).
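As a concrete illustration, the sketch below applies the LOSS attack to a causal language model through the Hugging Face transformers API; the choice of gpt2 and the threshold value are illustrative assumptions, since a real attack calibrates the threshold on known non-member data.

```python
# Minimal sketch of the LOSS attack [12] against a causal LLM.
# The model (gpt2) and THRESHOLD are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def membership_score(text: str) -> float:
    """s(x; M) = -L(x; M): the mean per-token negative log-likelihood,
    negated, so that training members tend to obtain higher scores."""
    ids = tok(text, return_tensors="pt").input_ids
    return -model(ids, labels=ids).loss.item()

THRESHOLD = -3.5  # would be calibrated on held-out non-member data
sample = "The quick brown fox jumps over the lazy dog."
print("member" if membership_score(sample) > THRESHOLD else "non-member")
```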
Recent works aim to design and improve MIAs for LLMs. In this case, MIAs consider a target model M which outputs a probability distribution over the next token given a prefix as input, P(x_t | x_0 ... x_{t-1}; M). The goal of the MIA is hence to infer whether the target sample x = x_1 ... x_n of n tokens was part of the training set. Duan et al. [13] consider several membership inference attacks and show that they barely outperform random guessing in most settings, across different LLM sizes and domains. They also argue that MIA is difficult on LLMs for several key reasons. These include the difficulty of handling LLMs pre-trained over billions or trillions of tokens, and the overlap typically exhibited by the underlying token distributions observed in natural language documents, irrespective of their training data membership.

Our research agenda aims at extending and leveraging the current membership inference games by investigating adversarial approaches that force the LLM to generate copyrighted text. In this way, we define a framework that can demonstrate copyright violations and overcome MIA's issues related to large datasets and the intrinsic randomness of LLMs.

4. Fighting Fire with Fire: Generative AI to Promote Online Safety

LLMs are showcasing remarkable abilities in various Natural Language Processing tasks, making them a highly potent and beneficial tool for everyday life. However, alongside their appealing strengths and widespread adoption, a significant concern is arising regarding their potential role in amplifying the generation and dissemination of misinformation and disinformation. Generative AI technology has significantly empowered malicious actors to produce fake content, which can be disseminated across online social networks and lead to detrimental phenomena, e.g., manipulating public discourse, disseminating hate speech, and sharing fake content.

As a remarkable example, in 2016 Microsoft released the Tay chatbot, which triggered controversy by posting inflammatory and offensive tweets via its Twitter account, leading Microsoft to shut down the service within just 16 hours (https://en.wikipedia.org/wiki/Tay_(chatbot)). More recently, other works assessed the role of bots and AI agents in conveying and amplifying online discourse about racism and hate speech [14, 15], drawing further attention to this sensitive topic. Thus, as underscored by [16], the scale, velocity, and accessibility of generative models present compelling challenges for online platforms, potentially inundating them with a massive amount of fraudulent material and unpredictable social consequences. While policy makers are actively engaged in regulating the use of GenAI tools, the efficacy of these measures remains uncertain. In response, our research group is working towards leveraging Generative AI to enhance online safety. Our objective is to reuse the same technology used to contaminate online discussions for a beneficial purpose in a controlled environment. For instance, [17] demonstrated the potential of a GPT2-like model in crafting tailored responses to combat misinformation regarding the COVID-19 pandemic. Despite this first promising result, there are numerous overlooked opportunities for harnessing GenAI tools to aid online safety. One such opportunity involves the development of automated agents capable of serving as "peace-builders" within online discussions. We aim to train a large language model to generate textual content that, once injected within online social media platforms, can help mitigate polarization and disagreement.

This research line is interesting and open to novel and original developments, but it also faces considerable challenges. A trivial remark is to carefully consider the
ethical implications of using GenAI tools for online safety, to ensure responsible use. Second, there are considerable technical challenges regarding the training and/or fine-tuning of these large models, due to scalability concerns. Third, evaluating the effectiveness of GenAI interventions in promoting online safety can be demanding and could require a multi-disciplinary approach involving experts from fields such as psychology and sociology.

Another compelling line in our research agenda is to define the aspects to take into account when analyzing the role of LLMs in this context. We are interested in exploring the role of LLMs in contrasting the phenomenon of false information spreading at different levels: detection, mitigation, intervention, and attribution. Our effort is to improve fake detection models under the constraint of scarcely labeled data, which is a common condition in real scenarios when discovering fakes in new topics and domains. The generative capabilities can be harnessed for exploring innovative augmentation techniques. LLMs can help reduce the learning strategy costs associated with expert interaction (e.g., Active Learning), thereby saving human annotators' time. This can be achieved by effectively integrating LLMs into learning loops at various levels, such as tuple selection and label generation support.
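A minimal sketch of this idea is reported below: a classifier is retrained in an active learning loop where the most uncertain news items are selected, and an LLM, instead of a human annotator, proposes their labels. The llm_annotate stub and the toy data are purely hypothetical placeholders.

```python
# Minimal sketch of LLM-supported active learning for fake news detection.
# llm_annotate is a hypothetical stub standing in for a prompted LLM.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def llm_annotate(text: str) -> int:
    # Hypothetical placeholder: a real system would prompt an LLM here.
    return 0 if "official" in text else 1  # 1 = fake

labeled = [("officials confirm the report", 0), ("miracle cure found", 1)]
pool = ["official statement released", "shocking secret they hide",
        "government publishes official data", "you won't believe this trick"]

for _ in range(2):  # two active learning rounds
    texts = [t for t, _ in labeled] + pool
    X = TfidfVectorizer().fit_transform(texts)
    clf = LogisticRegression().fit(X[: len(labeled)], [y for _, y in labeled])
    # Uncertainty sampling: pick the pool item closest to the decision boundary.
    probs = clf.predict_proba(X[len(labeled):])[:, 1]
    picked = pool.pop(int(np.argmin(np.abs(probs - 0.5))))
    labeled.append((picked, llm_annotate(picked)))  # the LLM replaces the human
print(labeled)
```

The same loop structure accommodates a human expert in place of, or alongside, the LLM, which is precisely the tuple-selection and label-generation support mentioned above.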
5. Boosting Cybersecurity

The last research line focuses on exploring various scenarios where LLMs can bolster cybersecurity operations. The concept involves utilizing AI-based tools to automate the analysis and processing of vast amounts of semi-structured data. This approach aims to evaluate security risks across systems and infrastructures more efficiently. While Machine and Deep Learning techniques have been widely used to discover deviant behaviors in event logs [18, 19, 20], the adoption of LLMs represents a novel and quite unexplored research line. For instance, in a recent work [21], the authors show how LLMs can be leveraged for analyzing huge volumes of information stored in logs.

A specific research objective is to support the automation of threat assessment. The intervention of the "expert" (i.e., the human operator) is still crucial to evaluate whether an anomalous event can be traced back to an actual attack or threat. Nevertheless, we believe that the adoption of LLM-based tools can support and facilitate this task. Thus, our mid-term research goals are twofold; a sketch of the envisioned human-in-the-loop triage is reported after the following list.
• Improving efficiency. To enhance response time to potential threats detected through logs, our strategy involves leveraging Active Learning techniques. These techniques enable human operators to actively participate in the model learning process, creating a human-in-the-loop system. Thus, our approach aims to expedite threat response by integrating human expertise into the learning loop of the model, using post-hoc explanation tools to support the operator in validating the attack and guiding the learning of the model.

• Data enrichment. Another critical aspect involves the potential use of LLMs to enhance the security of Internet-wide infrastructures. Numerous protocols and services rely heavily on textual information, such as URLs or configuration data. LLMs can be exploited to generate test cases, particularly for automating periodic assessments aimed at detecting potential deviations in the security posture of a deployment. For example, recent research showcased LLMs' capability to generate attacks against web destinations, particularly in crafting SQL injections [22].
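The following minimal sketch illustrates the envisioned human-in-the-loop triage under strong simplifying assumptions: an off-the-shelf anomaly detector ranks log events, and only the top-ranked alerts are routed to the operator, whose verdict is emulated by the hypothetical operator_verdict stub.

```python
# Minimal sketch of human-in-the-loop triage of log events.
# featurize and operator_verdict are illustrative stubs.
from sklearn.ensemble import IsolationForest

def featurize(event: dict) -> list:
    # Toy features: bytes transferred and number of failed logins.
    return [event["bytes"], event["failed_logins"]]

def operator_verdict(event: dict) -> bool:
    # Hypothetical stub: in practice a human expert, aided by post-hoc
    # explanations, confirms or dismisses the alert.
    return event["failed_logins"] > 3

events = [{"bytes": 500, "failed_logins": 0}, {"bytes": 480, "failed_logins": 1},
          {"bytes": 520, "failed_logins": 0}, {"bytes": 9_000_000, "failed_logins": 7}]

feats = [featurize(e) for e in events]
scores = IsolationForest(random_state=0).fit(feats).score_samples(feats)
# Lower scores are more anomalous; only the top alerts reach the operator.
queue = sorted(zip(scores, range(len(events))))[:2]
confirmed = [events[i] for _, i in queue if operator_verdict(events[i])]
print(confirmed)
```

In the envisioned pipeline, the confirmed verdicts would be fed back as labels, so that the detector improves while the operator only inspects a small, prioritized queue.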
We also foresee the adoption of LLMs as tools for analysing textual descriptions of system configurations, in order to detect potential risks and vulnerabilities relative to such configurations.

A further relevant application of LLMs is the creation of a new wave of tools to perform fuzz testing, especially for handling network protocols [23]. This is particularly relevant for a twofold reason. First, ubiquitous containerized/virtualized frameworks are progressively migrating to the intrinsically networked microservice paradigm. Second, the emerging plague of malware exploiting information hiding is hard to mitigate, especially since it requires knowing in advance where the attacker will cloak the data [24].

In this perspective, LLMs could be used to discover in advance protocol fields, metadata, header information, or text segments in software that could be abused to conceal arbitrary/malicious content. For the case of networked (micro)services, fuzzers can be used to learn the grammar ruling a protocol starting from RFC documents [25]. These testing tools can hence be guided to explore interactions among containers or to fuzz specific operations, e.g., the setup/teardown of a connection.
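As a toy illustration of this pipeline, the sketch below generates candidate inputs from a hand-written grammar for a made-up textual handshake; in the envisioned approach, the grammar would instead be derived by an LLM from RFC documents, along the lines of [25].

```python
# Minimal sketch of grammar-driven protocol fuzzing. The grammar describes
# a made-up textual handshake and is hard-coded here; an LLM would derive
# it from RFC text in the envisioned pipeline.
import random

GRAMMAR = {
    "<msg>": [["CONNECT ", "<client>", " ", "<keepalive>", "\r\n"],
              ["DISCONNECT", "\r\n"]],
    "<client>": [["dev-", "<num>"], ["<num>", "<num>"]],
    "<keepalive>": [["<num>"], ["-", "<num>"], ["<num>", "<num>", "<num>"]],
    "<num>": [[str(d)] for d in range(10)],
}

def expand(symbol: str, rng: random.Random) -> str:
    if symbol not in GRAMMAR:
        return symbol  # terminal string
    return "".join(expand(s, rng) for s in rng.choice(GRAMMAR[symbol]))

rng = random.Random(1)
for _ in range(5):
    # Each line is a candidate input for the service under test, e.g., to
    # probe the setup/teardown of a connection.
    print(repr(expand("<msg>", rng)))
```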
For the case of information-hiding-capable malware, detection and sanitization are tightly coupled with the abused resource (e.g., digital media vs. network traffic), and the number of features and ambiguities that can be exploited is almost unbounded. Therefore, fuzzers can be built by starting from datasets of pre-existent information-hiding-capable attacks or trained over well-known cloaking patterns [26]. Thus, LLMs can lead to guided fuzzers, which have demonstrated their ability to reveal corner cases or uncommon anomalous templates [23].

A mid-term goal is then to tweak an LLM to evaluate the limits of protocols when containing arbitrary information for implementing a covert communication. The use of LLMs will be particularly efficient for protocols like HTML and MQTT, which are based on large portions of textual information, especially in the header [27]. Moreover, we also plan to investigate whether LLMs can be used to improve the performance of our pre-existing AI/ML mechanisms for the detection of covert communications [28, 29].
[28, 29].
Survey on Large Language Model (LLM) Security
and Privacy: The Good, the Bad, and the Ugly, High-
6. Conclusions Confidence Computing (2024) 100211.
[2] E. Cambiaso, L. Caviglione, Scamming the
LLMs present a spectrum of opportunities and challenges Scammers: Using ChatGPT to Reply Mails for
within the cybersecurity domain. We’ve delved into four Wasting Time and Resources, arXiv preprint
primary research avenues, each addressing distinct prob- arXiv:2303.13521 (2023).
lems and proposing corresponding solutions. These areas [3] X. Zhao, Y.-X. Wang, L. Li, Protecting Language
include: Generation Models via Invisible Watermarking, in:
International Conference on Machine Learning,
• Watermarking and Detection of Generative Con- 2023, pp. 42187–42199.
tent: Developing methods to embed unique iden- [4] L. Caviglione, C. Comito, M. Guarascio, G. Manco,
tifiers into data for tracking and authentication Emerging Challenges and Perspectives in Deep
purposes, alongside techniques for detecting gen- Learning Model Security: A Brief Survey, Systems
erative content to combat potential trustworthi- and Soft Computing 5 (2023) 200050.
ness and security risks. [5] N. Agarwal, A. K. Singh, P. K. Singh, Survey of Ro-
• Membership Inference and Data Provenance: Ad- bust and Imperceptible Watermarking, Multimedia
dressing concerns related to establishing the ori- Tools and Applications 78 (2019) 8603–8633.
gin of training data, crucial for ensuring data in- [6] J. Kirchenbauer, J. Geiping, Y. Wen, J. Katz, I. Miers,
tegrity, privacy. T. Goldstein, A watermark for large language mod-
• Misinformation Mitigation/Intervention: Imple- els, in: ICML, volume 202 of Proceedings of Machine
menting strategies to combat misinformation and Learning Research, 2023, pp. 17061–17084.
ensure online safety, particularly in the context of [7] T. Lee, S. Hong, J. Ahn, I. Hong, H. Lee, S. Yun,
rapidly evolving online information landscapes. J. Shin, G. Kim, Who Wrote this Code? Watermark-
• Log Analysis and Stress Testing in Infrastructure ing for Code Generation, arXiv abs/2305.15060
Protection: Analyzing system logs and subjecting (2023).
infrastructures to stress tests to assess their re- [8] H. Hu, Z. Salcic, L. Sun, G. Dobbie, P. S. Yu, X. Zhang,
silience against cyber threats, essential for main- Membership inference attacks on machine learning:
taining robust security measures. A survey, ACM Comput. Surv. 54 (2022). doi:10.
1145/3523273.
We have devised specific solutions within the context of [9] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis,
three research projects funded by the Italian Ministry of F. Tramer, Membership Inference Attacks From
Research. These solutions aim to address various cyber- First Principles, 2022. arXiv:2112.03570.
security challenges and enhance overall digital security [10] D. Chen, N. Yu, Y. Zhang, M. Fritz, GAN-Leaks:
measures, A Taxonomy of Membership Inference Attacks
against Generative Models, in: Proceedings of the
Acknowledgments 2020 ACM SIGSAC Conference on Computer and
Communications Security, CCS ’20, ACM, 2020.
This work was partially supported by the following [11] J. Dubiński, A. Kowalczuk, S. Pawlak, P. Rokita,
projects: 1) WHAM! - Watermarking Hazards and T. Trzciński, P. Morawiecki, Towards More Realistic
novel perspectives in Adversarial Machine learning Membership Inference Attacks on Large Diffusion
(B53D23013340006); 2) SERICS - SEcurity and RIghts Models, 2023. arXiv:2306.12983.
in the CyberSpace (PE00000014); 3) MIRFAK - Limiting [12] S. Yeom, I. Giacomelli, M. Fredrikson, S. Jha, Privacy
MIsinformation spRead in online environments through Risk in Machine Learning: Analyzing the Connec-
multi-modal and cross-domain FAKe news detection tion to Overfitting, in: 2018 IEEE 31st Computer
(P2022C23K9), funded under the NRRP MUR program Security Foundations Symposium, 2018, pp. 268–
funded by the EU - NGEU. A part of the work was also sup- 282.
ported by: Project RAISE (ECS00000035); MUR on D.M. [13] M. Duan, A. Suri, N. Mireshghallah, S. Min,
[13] M. Duan, A. Suri, N. Mireshghallah, S. Min, W. Shi, L. Zettlemoyer, Y. Tsvetkov, Y. Choi, D. Evans, H. Hajishirzi, Do Membership Inference Attacks Work on Large Language Models?, 2024. arXiv:2402.07841.
[14] J. Uyheng, D. Bellutta, K. Carley, Bots Amplify and Redirect Hate Speech in Online Discourse about Racism during the COVID-19 Pandemic, Social Media + Society 8 (2022). doi:10.1177/20563051221104749.
[15] J. Uyheng, K. M. Carley, Bots and Online Hate during the COVID-19 Pandemic: Case Studies in the United States and the Philippines, Journal of Computational Social Science 3 (2020) 445–468. URL: https://api.semanticscholar.org/CorpusID:224818205.
[16] S. Feuerriegel, R. DiResta, J. A. Goldstein, S. Kumar, P. Lorenz-Spreen, M. Tomz, N. Pröllochs, Research Can Help to Tackle AI-Generated Disinformation, Nature Human Behaviour 7 (2023) 1818–1821.
[17] B. He, M. Ahamad, S. Kumar, Reinforcement Learning-Based Counter-Misinformation Response Generation: A Case Study of COVID-19 Vaccine Misinformation, in: Proceedings of the ACM Web Conference 2023, 2023, pp. 2698–2709.
[18] A. Cuzzocrea, F. Folino, M. Guarascio, L. Pontieri, A Multi-view Learning Approach to the Discovery of Deviant Process Instances, in: On the Move to Meaningful Internet Systems: OTM 2015 Conferences - Confederated International Conferences: CoopIS, ODBASE, and C&TC 2015, volume 9415 of Lecture Notes in Computer Science, Springer, 2015, pp. 146–165.
[19] F. Folino, G. Folino, M. Guarascio, L. Pontieri, Semi-Supervised Discovery of DNN-Based Outcome Predictors from Scarcely-Labeled Process Logs, Business & Information Systems Engineering 64 (2022) 729–749.
[20] F. Folino, G. Folino, M. Guarascio, L. Pontieri, Data- & Compute-efficient Deviance Mining via Active Learning and Fast Ensembles, Journal of Intelligent Information Systems (2024).
[21] Z. Ma, A. R. Chen, D. J. Kim, T.-H. Chen, S. Wang, LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing, in: 2024 IEEE/ACM 46th International Conference on Software Engineering, IEEE Computer Society, 2024.
[22] R. Fang, R. Bindu, A. Gupta, Q. Zhan, D. Kang, LLM Agents can Autonomously Hack Websites, arXiv preprint arXiv:2402.06664 (2024).
[23] S. Mallissery, Y.-S. Wu, Demystify the Fuzzing Methods: A Comprehensive Survey, ACM Computing Surveys 56 (2023) 1–38.
[24] L. Caviglione, W. Mazurczyk, Never Mind the Malware, Here's The Stegomalware, IEEE Security & Privacy 20 (2022) 101–106.
[25] C. S. Xia, M. Paltenghi, J. Le Tian, M. Pradel, L. Zhang, Fuzz4All: Universal Fuzzing with Large Language Models, in: Proceedings of the IEEE/ACM International Conference on Software Engineering (ICSE), 2024.
[26] S. Wendzel, S. Zander, B. Fechner, C. Herdin, Pattern-based Survey and Categorization of Network Covert Channel Techniques, ACM Computing Surveys 47 (2015) 1–26.
[27] T. Schmidbauer, S. Wendzel, SoK: A Survey of Indirect Network-level Covert Channels, in: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security, 2022, pp. 546–560.
[28] N. Cassavia, L. Caviglione, M. Guarascio, A. Liguori, M. Zuppelli, Ensembling Sparse Autoencoders for Network Covert Channel Detection in IoT Ecosystems, in: International Symposium on Methodologies for Intelligent Systems, 2022, pp. 209–218.
[29] N. Cassavia, L. Caviglione, M. Guarascio, A. Liguori, M. Zuppelli, Learning Autoencoder Ensembles for Detecting Malware Hidden Communications in IoT Ecosystems, Journal of Intelligent Information Systems (2023) 1–25.