<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Workshop on Countering Disinformation with Artificial Intelligence (CODAI 2024)</article-title>
        <subtitle>Proceedings of the Workshop co-located with the European Conference on Artificial Intelligence (ECAI 2024)</subtitle>
      </title-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Preface</title>
      <p>This volume contains papers from the 1st Workshop on Countering Disinformation with Artificial
Intelligence (CODAI), held at the European Conference on Artificial Intelligence (ECAI) 2024.
Social media platforms, designed primarily to let users create and share content, have become
integral to modern communication: people use them to connect with friends and family and to
broadcast information to a wider audience. On the one hand, these platforms facilitate discussion
in an open and free environment. On the other, they have given rise to new societal problems,
among which misinformation is especially prevalent. Misinformation is an umbrella term that
covers entities such as fake news, hoaxes, and rumors, to name a few. Strictly speaking,
misinformation refers to the unintentional spread of inauthentic information, whereas
disinformation denotes the spread of inauthentic information with malign intent.</p>
      <p>Researchers initially focused on identifying and characterizing misinformation in text, using both
traditional and advanced NLP techniques. However, with the advancement of these techniques and the
availability of various AI tools, misinformation has become multimodal: for example, an image with
incorrect text embedded on it, or a morphed video with audio. In addition, misinformation has affected
individuals and communities across domains such as medicine, politics, entertainment, and business.
This calls for combining forces across disciplines: to counter misinformation, computer scientists
need to work with domain specialists, and a psychologist's input can be vital for understanding why
misinformation spreads. In short, a holistic view is needed to counter the menace of misinformation
on online social media platforms.</p>
      <p>The goal of this workshop is to bring together researchers from various domains, not only to present
their work but also to provide an ecosystem for discussing ideas that help counter the spread of
misinformation. We received a total of 17 submissions to the main workshop, of which seven were accepted
as oral presentations. Finally, the workshop will feature two distinguished keynote speakers: Paolo Rosso,
Universitat Politècnica de València, and David Camacho, Universidad Politécnica de Madrid, Spain.</p>
    </sec>
    <sec id="sec-2">
      <title>Organizing Committee</title>
      <sec id="sec-2-1">
        <title>Program Chairs</title>
        <sec id="sec-2-1-1">
          <title>Rajesh Sharma, University of Tartu, Estonia; Anselmo Peñas, Universidad Nacional de Educación a Distancia, Spain</title>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>Program Committee</title>
        <sec id="sec-2-2-1">
          <title>Rodrigo Agerri, University of the Basque Country (UPV/EHU); Paolo Rosso, Universidad Politécnica de Valencia (UPV); Arkaitz Zubiaga, Queen Mary University of London; Harith Alani, Open University, London</title>
          <p>Anwitaman Datta, Singapore
Uku Kangur, University of Tartu, Estonia
Shakshi Sharma, Bennett University, India
Johannes Langguth, Simula Research Laboratory, Norway
David Camacho, Universidad Politécnica de Madrid (UPM)
Anselmo Peñas, Universidad Nacional de Educación a Distancia (UNED)
Roberto Centeno, Universidad Nacional de Educación a Distancia (UNED)
Álvaro Rodrigo, Universidad Nacional de Educación a Distancia (UNED)
Rajesh Sharma, University of Tartu, Estonia
Neha Pathak, Indian Institute of Information Technology (IIIT) Delhi
Ahmed Sabir, University of Tartu, Estonia
Giulio Rossetti, CNR, Pisa, Italy
Jan Milan, University of Applied Sciences, Zurich
Rémy Cazabet, Univ. Lyon 1, Lyon, France
Roshni Chakraborty, University of Tartu, Estonia</p>
          <p>Countering disinformation with AI: discriminating conspiracy theories from critical thinking</p>
          <p>Paolo Rosso</p>
          <p>Universitat Politècnica de València
Abstract: The rise of social media has offered a fast and easy way to propagate disinformation
and conspiracy theories. Despite the research attention it has received, disinformation detection remains
an open problem, and users keep sharing texts that contain false statements. In this keynote I will briefly
describe how to go beyond textual information to detect disinformation, taking into account affective and
visual information as well, since these provide important insights into how disinformation spreaders aim to
trigger certain emotions in their readers. I will also describe how psycholinguistic patterns and users' personality
traits may play an important role in discriminating disinformation spreaders from fact checkers. Moreover,
I will comment on some studies on the propagation of conspiracy theories. In the framework of the PAN
Lab at CLEF, we are organising a challenge on oppositional thinking analysis to discriminate between
conspiracy narratives and critical thinking. This distinction between critical and conspiracist narratives is vital
because treating a message as conspiratorial when it is merely oppositional to mainstream views could
set off a psychosocial process that drives into the arms of conspiracy communities those who were simply
critical of controversial topics such as vaccination or climate change. Most of this work was done in
the framework of IBERIFIER, the Iberian media research and fact-checking hub on disinformation funded
by the European Digital Media Observatory, and the research projects XAI-DisInfodemics (eXplainable AI
for disinformation and conspiracy detection during infodemics) and FAKEnHATE-PdC (FAKE news and
HATE speech).</p>
          <p>Bio: Paolo Rosso is Full Professor of Computer Science at the Universitat Politècnica de València, Spain.
His current research interests fall mainly in the area of detecting harmful information in social media,
both fake news and hate speech. He is the principal investigator of two related projects: XAI-DisInfodemics
on eXplainable AI for disinformation and conspiracy detection during infodemics (PLEC2021-007681), and
FAKEnHATE-PdC on FAKE news and HATE speech (PDC2022-133118-I00), both funded by the Spanish
Ministry of Science, Innovation and Universities and by the European Union NextGenerationEU/PRTR. He
has collaborated with the Spanish National Security Department and with the Science and Tech.</p>
          <p>Rethinking the problem of disinformation and Artificial Intelligence: boundaries, threats, and trends</p>
          <p>David Camacho</p>
          <p>Universidad Politécnica de Madrid</p>
          <p>Abstract: Disinformation (and, more generally, misinformation) is spreading everywhere online, causing
problems for individuals, societies, and countries. This unchecked dissemination of falsehoods has
nurtured an environment ripe for the proliferation of rumors, propaganda, and hoaxes, exacting a toll on the
economic, political, and public health realms, among many other aspects of our daily lives. Confronting this
multifaceted adversary demands a united front, drawing upon the collective wisdom and resources of diverse
stakeholders, including individuals, media entities, governmental bodies, technology firms, and scholars.
This keynote endeavours to illuminate the intricate contours of this challenge, delving into popular
computational techniques such as Machine Learning and Graph Computing as a new set of weapons in the
battle against misinformation. Focused primarily on three domains, Natural Language Processing (NLP),
Multimodal Deep Learning (MDL), and Social Network Analysis (SNA), our discourse aims to unveil
the potential of these techniques in discerning truth from falsehood. Within the realm of NLP/MDL and
SNA, particular attention will be devoted to the FacTeR-Check architecture, a novel framework that, through
the use of ensembles and deep learning techniques based on Transformer technology, enables the
identification and tracking of misleading content across the vast expanse of online social networks.</p>
          <p>Bio: David Camacho is Full Professor in the Computer Systems Engineering Department of Universidad
Politécnica de Madrid (UPM). He is the head of the Applied Intelligence and Data Analysis research group
(AIDA: https://aida.etsisi.uam.es), the Director of the PhD program in Computer Science and Technologies
of Smart Cities, and the Director of the Master program in Machine Learning and Big Data at UPM. He has
published more than 300 journal, book, and conference papers (Google Scholar). His research interests
include Machine Learning (Clustering/Deep Learning), Computational Intelligence (Evolutionary
Computation, Swarm Intelligence), Social Network Analysis, and Fake News and Disinformation Analysis. He has
participated in or led more than 60 AI-based R&amp;D projects (national and international: H2020, MSCA
ITN-ETN, DG Justice, ISFP, NRF Korea), applied to real-world problems in areas such as aeronautics, aerospace
engineering, cybercrime/cyber intelligence, social network applications, disinformation countering, and video
games, among others. He has served as Editor-in-Chief of Expert Systems since 2023 and sits on the
editorial boards of several journals, including Information Fusion, Human-centric Computing and Information
Sciences (HCIS), Cognitive Computation, and IEEE Transactions on Emerging Topics in Computational
Intelligence (IEEE TETCI), among others. Contact: David.Camacho@upm.es.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Workshop Program</title>
      <p>Diachronic Political Content Analysis: A Comparative Study of Topics and Sentiments in Echo Chambers
and Beyond
Michele Joshua Maggini, Virginia Morini, Davide Bassi, Giulio Rossetti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1</p>
      <sec id="sec-3-1">
        <title>Factoring in Context for the Automatic Detection of Misrepresentation</title>
        <p>Bruna Paz Schmid, Annette Hautli-Janisz, Steve Oswald . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11</p>
      </sec>
      <sec id="sec-3-2">
        <title>Detecting fake news using Twitter social information</title>
        <p>Jesús M. Fraile-Hernández, Álvaro Rodrigo, Roberto Centeno . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
On the Categorization of Corporate Multimodal Disinformation with Large Language Models
Ana-Maria Bucur, Sónia Gonçalves, Paolo Rosso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Automated Fact-checking based on Large Language Models: An application for the press
Bogdan Andrei Baltes, Yudith Cardinale, Benjamín Arroquia-Cuadros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40</p>
      </sec>
      <sec id="sec-3-3">
        <title>Analysis of Climate Change Misleading Information in TikTok</title>
        <p>Clara Baltasar, Sergio D’Antonio Maceiras, Alejandro Martín, David Camacho . . . . . . . . . . . . . . . . . . . . 54
Are Misinformation Propagation Models Holistic Enough? Identifying Gaps and Needs
Raquel Rodríguez-García, Álvaro Rodrigo, Roberto Centeno . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>