<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute for Complex Networks, Vienna University of Economics and Business</institution>
          ,
          <addr-line>Vienna</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>656</fpage>
      <lpage>667</lpage>
      <abstract>
        <p>Extensive consumption of news and rapid communication flows on the web, especially via online content sharing platforms, leads to economic and societal harm caused by information disorders. These are commonly known as fake news and represent a threat to democratic processes and societies. While artificial intelligence-based models can detect information disorder types, current algorithms are not transparent, explainable, trust-building, and domain-specific enough. Therefore, the aim of this work is to advance the state-of-the-art in terms of information disorder detection and mitigation by combining explainable artificial intelligence, bias detection, and knowledge graphs. Moreover, an additional aim is to improve intent recognition in order to separate misinformation, disinformation, and malinformation better from each other. This distinction serves the purpose to enhance the detection of specific types of information disorders.</p>
      </abstract>
      <kwd-group>
        <kwd>Information disorder</kwd>
        <kwd>detection</kwd>
        <kwd>natural language processing</kwd>
        <kwd>bias</kwd>
        <kwd>knowledge graph</kwd>
        <kwd>explainability</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>Problem Statement and Relevance</title>
        <sec id="sec-1-1-1">
          <p>
            The Global Risks Report 2024 [1] identifies misinformation and disinformation – on which this work places particular emphasis – as the number one global risk (ranked by severity over the short and long term) for the upcoming two years, and as the fifth-ranked risk for the next ten years. Geopolitical events and developments are interconnected with information disorders (cf. [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]) and have led to societal polarization. Advancements in artificial intelligence (AI) (e.g., deep fakes) have only accelerated this.
          </p>
          <p>
            Since 2008, online content sharing platforms (OCSPs) have been on the rise (cf. [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ]). However, this development and the accompanying change in information consumption have also caused problems. Types of information disorder (cf. [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]), as shown in Figure 1, spread rapidly and have caused economic and societal harm. Examples of the latter are the COVID-19 pandemic or infodemic, as the World Health Organization (WHO) alternatively termed it (cf. [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]), the 2020 hacking of the European Medicines Agency (EMA) and the release of manipulated vaccine data (cf. [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]), as well as the 2021 United States Capitol attack (cf. [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]). For voters, it is key to have access to authentic information or facts within a networked society (cf. [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]).
          </p>
          <p>
            In that way, a monitorial citizen (cf. [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]) can arrive at a rational choice, in the sense that an individual follows his or her rational interest when voting (cf. [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]). Information disorders, however, make this task more difficult.
          </p>
          <p>
            Policies and legal regulations have addressed different media content (e.g., text, images, etc.),
OCSPs (e.g., data access), and flows of information (e.g., disinformation task forces) while at the
same time protecting human rights and fundamental rights (cf. [
            <xref ref-type="bibr" rid="ref10 ref11 ref7">7, 10, 11</xref>
            ]). However,
transparency and trust are open issues (cf. [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]), especially when it comes to solution
approaches that should fulfill legal requirements as well as societal expectations (cf. [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]).
While critical thinking and media literacy are indeed crucial skills for establishing societal
resilience against information disorders (cf. [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]), information technology can assist democratic
actors such as citizens with information disorder detectors (cf. [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ]) and provide meaningful
and transparent further context (cf. [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ]) to voters, for example. Additionally, there is a lack of (training) data (cf. [17]). A consequence of this is potentially biased natural language processing (NLP) machine learning (ML) and deep learning (DL) models (cf. [18]). Lastly, information disorders are complex with regard to their phases, specific types, multiple classes, content variability, rapid dissemination, context sensitivity, origins, dynamic nature, and AI-generated instances (cf. [19]).
          </p>
          <p>
            Focus and Contributions. The first focus lies on improving information disorder detection.
For that, different detection algorithms (e.g., deep learning, BERT; rule or lexicon-based; linear
regression, Support Vector Machine; probabilistic, Naïve Bayes; proximity-based, K-nearest
neighbors algorithm – see Figure 3) will be used to evaluate and compare their performance for
the task of detecting information disorder types on five selected datasets (see Table 1) with
respect to mis-, dis-, and malinformation (depicted in Figure 1). A second research aim is the
analysis of potential bias within black box models applying explainable artificial intelligence
(XAI) tools. Examples of the latter include LIME (Local Interpretable Model-agnostic
Explanations) as well as Anchors (cf. [
            <xref ref-type="bibr" rid="ref24">38</xref>
            ]). Further, a knowledge graph (KG) can be applied to enrich the contextual understanding of the choices a model makes. Existing fairness toolkits for bias detection are also relevant for the analysis (cf. [20]). Third, the goal is to investigate to what extent intent to harm can be detected. With respect to transparency, a KG can add useful context to an intent recognition (IR) model. The overarching goal is to increase trust in AI-based detection models and to help research, societal, and business stakeholders cope better with information disorders.
          </p>
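          <p>As a minimal illustration of the probabilistic family named above, the following sketch implements a from-scratch multinomial Naïve Bayes text classifier with Laplace smoothing. The toy texts and labels are illustrative assumptions only; the actual evaluation will compare several algorithm families on the selected datasets.</p>
          <preformat>
```python
import math
from collections import Counter, defaultdict

def train_nb(texts, labels):
    # Collect the words observed per class.
    class_docs = defaultdict(list)
    for text, label in zip(texts, labels):
        class_docs[label].extend(text.lower().split())
    vocab = {w for words in class_docs.values() for w in words}
    priors = {c: labels.count(c) / len(labels) for c in class_docs}
    likelihoods = {}
    for c, words in class_docs.items():
        counts = Counter(words)
        total = len(words) + len(vocab)  # Laplace smoothing denominator
        likelihoods[c] = {w: (counts[w] + 1) / total for w in vocab}
    return priors, likelihoods, vocab

def predict_nb(model, text):
    # Score each class by log prior plus log likelihood of known words.
    priors, likelihoods, vocab = model
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for w in text.lower().split():
            if w in vocab:
                score += math.log(likelihoods[c][w])
        scores[c] = score
    return max(scores, key=scores.get)

# Toy corpus (illustrative assumption, not a dataset from Table 1).
texts = [
    "official health agency releases vaccine trial data",
    "peer reviewed study confirms safety findings",
    "secret cure suppressed share before they delete this",
    "rigged election proof hidden by mainstream media",
]
labels = ["credible", "credible", "disorder", "disorder"]

model = train_nb(texts, labels)
print(predict_nb(model, "study confirms vaccine safety"))  # prints "credible"
```
          </preformat>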
          <p>Paper Structure. The rest of this paper is organized as follows: in Section 2 related work is
presented and discussed. Afterwards, the research objectives, open issues or challenges, main
hypothesis, research questions, methodology, research plan, and data sources are given in
Sections 3 and 4. Finally, Section 5 concludes and identifies future work.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>AI fact-checking models were already available when the COVID-19 pandemic started, but their
training datasets did not specifically cover COVID-19 misinformation, which led to domain
analysis issues and limited the application scope of these detection models (cf. [21]).
Therefore, domain-specific datasets that are designed to include the unique characteristics
of such events had to be created (cf. [21, 22]). Another problem is that false information tends
to spread faster than facts. Typical NLP classification models for misinformation
detection are based on BERT or modified versions of it, and misinformation can be analyzed
by looking into textual elements like sentiment or veracity (cf. [22]). The overall aim is to tackle
it by detecting or validating its content and analyzing its dynamics or management (cf. [23]).
Regarding the analysis of the distribution of false information and the datasets that are used to train
detectors, specific topics or demographics matter (cf. [24]).</p>
      <p>Others like Mensio and Alani [25] analyzed who interacts with misinformation.
Interestingly, false information tends to be more readable, and the more complex it is, the more
people believe it. In this regard, characteristics like sentence and word length are relevant
(cf. [26]). Burel and Alani [27] crafted a reporting tool for dynamic spread and fact-checks.
Detection of false claims is a multidisciplinary challenge, and especially DL as well as XAI
have been researched in computer science. It should be taken into account that combating
information disorders is a task that can be tackled well via human-centric artificial intelligence (AI)
approaches (cf. [28]). When it comes to large language models (LLMs), Leite et al. [29]
emphasized the time-consuming annotation process and the role of credibility signals.</p>
      <p>Although there have been significant advancements in NLP-based text analysis
tools, reputation evaluation, network data analyses, and image-based detection
techniques, a number of open challenges remain with respect to the concept drift and
dynamic streaming nature of (false) information; deepfakes also represent an emerging
problem. Explainable lifelong learning concepts could be a remedy (cf. [30]).</p>
      <p>
        From a symbolic AI point of view, semantic web (SW) concepts like ontologies (e.g.,
OWL) (cf. [
        <xref ref-type="bibr" rid="ref17">31</xref>
]) and KGs have already been used to detect fake news. Tchechmedjiev et al.
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] presented an extensible KG consisting of fact-checked claims that serves to tackle the
lack of corpora containing structured (meta) data. Pan et al. [
        <xref ref-type="bibr" rid="ref18">32</xref>
        ] also used KGs for content-based fake news detection and showed promising results for detecting false content.
Moreover, they found that even incomplete or imprecise KGs are helpful.
      </p>
      <p>
        Kaliyar et al. [
        <xref ref-type="bibr" rid="ref19">33</xref>
        ] have shown in their experiments that BERT-based fake news detection
can reach a high accuracy. Combining such DL models with SW technologies can be done in
various ways. Denaux et al. [
        <xref ref-type="bibr" rid="ref20">34</xref>
        ] introduced a data model and a distributed agents’ architecture for composable
credibility reviews in the context of explainable misinformation
detection. Furthermore, Lovera et al. [
        <xref ref-type="bibr" rid="ref21">35</xref>
        ] took a different approach and worked on another hybrid solution, classifying the
sentiment of short texts by combining KGs and DL technologies. Traceability, and
interpretability via XAI, led to more transparent results; such an approach compensates
for the weaknesses of a black box model in this regard. Their model outperformed
character n-gram solutions.
      </p>
      <p>
        In particular, augmenting original data sources with additional information from semantic
resources is a prominent approach to contextual enrichment. Some KGs are commonly used in that
regard (e.g., ConceptNet or DBpedia) (cf. [18]). KGs can help to identify relevant events for
information disorder detection as certain events are often linked to fake news. Opdahl and
Tessem [
        <xref ref-type="bibr" rid="ref22">36</xref>
        ] looked into ontologies to find journalistic angles. KGs can also support
detecting online toxicity by mitigating possible bias, subjective views, or lacking domain
knowledge of annotators (cf. [
        <xref ref-type="bibr" rid="ref23">37</xref>
        ]). Keeping in mind that black box models should be
explainable, Szczepański et al. [
        <xref ref-type="bibr" rid="ref24">38</xref>
        ] proposed an extension which is based on LIME and
Anchors that enriches a model’s classification with additional reasoning. XAI techniques like
this and KGs for facilitating context-based detection of information disorders like Fane-KG
(cf. [
        <xref ref-type="bibr" rid="ref25">39</xref>
        ]) are on the rise. Fane-KG represents a KG that is designed to be used specifically for
context-based fake news detection. Hani et al. [
        <xref ref-type="bibr" rid="ref25">39</xref>
        ] provide a human as well as machine
processable data repository built on semantic standards (OWL and RDF).
      </p>
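      <p>The perturbation idea behind LIME-style explanations, as discussed above, can be sketched without any library: drop one token at a time and measure how the classifier's score moves. The cue-word scorer below is a stand-in assumption, not a trained detection model, and <monospace>explain</monospace> is an illustrative helper, not the LIME API itself.</p>
      <preformat>
```python
def score(text):
    # Toy "disorder" scorer: fraction of alarmist cue words (illustrative only).
    cues = {"miracle", "secret", "rigged", "suppressed"}
    words = text.lower().split()
    return sum(w in cues for w in words) / max(len(words), 1)

def explain(text):
    # LIME-style local explanation: drop each word once and record the
    # change in the score; positive contribution means the word pushes
    # the text toward the "disorder" class.
    words = text.split()
    base = score(text)
    contributions = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        contributions[w] = base - score(perturbed)
    return sorted(contributions.items(), key=lambda kv: -kv[1])

ranked = explain("secret memo claims miracle cure was suppressed")
print(ranked[0])  # the top-ranked word is one of the cue words
```
      </preformat>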
      <p>
        Generating a KG for news articles, which can be then used for detecting or checking false
information, is challenging. Generally speaking, KG construction consists of three phases:
the acquisition, refinement, and evolution of knowledge (cf. [
        <xref ref-type="bibr" rid="ref26">40</xref>
        ]). KGs created in this way can then be used to improve the detection of information disorders. Mayank et al. [
        <xref ref-type="bibr" rid="ref27">41</xref>
        ] worked on a combination of an NLP and a tensor decomposition model. They encoded
news content and embedded KG entities; these entities and their variety could improve
their detector. A research trend toward combining ML and SW technologies such as KGs is
identifiable (cf. [
        <xref ref-type="bibr" rid="ref28">42</xref>
        ]).
      </p>
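      <p>To make the acquisition and querying of such a KG concrete, a minimal sketch follows: a KG stored as a plain set of subject-relation-object triples that a detector could query for supporting facts. All entities and triples below are illustrative assumptions, not a production construction pipeline.</p>
      <preformat>
```python
# Tiny triple-store KG (illustrative triples, not a real knowledge base).
kg = {
    ("EMA", "is_a", "medicines_agency"),
    ("EMA", "located_in", "EU"),
    ("COVID-19_vaccine", "approved_by", "EMA"),
}

def facts_about(entity):
    # Acquisition step: collect all triples mentioning an entity,
    # usable as contextual enrichment for a detection model.
    return {t for t in kg if entity in (t[0], t[2])}

def supports(subject, relation, obj):
    # Check whether a claimed triple is backed by the KG.
    return (subject, relation, obj) in kg

print(supports("COVID-19_vaccine", "approved_by", "EMA"))  # True
print(len(facts_about("EMA")))  # 3
```
      </preformat>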
      <p>
        Lastly, regarding intent recognition (IR), there are already pretrained BERT-based
models that achieve promising and precise results (cf. [
        <xref ref-type="bibr" rid="ref29">43</xref>
        ]). In addition, domain-specific
KGs can improve the intent classification task (cf. [
        <xref ref-type="bibr" rid="ref30">44</xref>
        ]). Zhou et al. [
        <xref ref-type="bibr" rid="ref31">45</xref>
        ] assessed the intent behind shared fake news, proposed an influence graph, and
modeled the intent of an individual when spreading fake news. Furthermore, a centralized
KG with checked articles
could help with queries and add contextual knowledge for fact-checking tasks (cf. [
        <xref ref-type="bibr" rid="ref32">46</xref>
        ]).
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Hypothesis and Research Questions</title>
      <p>
        Starting with the challenges outlined in Section 2, AI techniques (i) used to identify
information disorders are not transparent enough and (ii) lack cross-domain applicability. In
addition, (iii) AI causes issues when used to produce deep fakes (cf. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]). Researchers classify fake news
mostly based on intent (e.g., to harm) and content (e.g., text or images). Also, (iv) it is a
challenge to label fake news since there are many terms (cf. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]). According to Farhangian
et al. [47], it may be useful for information disorder detection research to look
into multiple feature representations, perspectives, dynamic ensemble models, and
cross-dataset evaluation (tackling concept drift). DL and ML can identify domain-specific
information disorders, but there is a lack of trust in black box models (cf. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]).
Furthermore, (v) missing data for detection models is a major issue (cf. [17]) and (vi) data
validity (e.g., incomplete data) and how humans use AI systems are problems that can
increase bias in algorithms (cf. [18]). Combining symbolic and sub-symbolic AI has the
potential to address these weaknesses of current AI systems. Common approaches to mitigate
bias are altering the data AI systems are trained on, changing how a model learns, or using
a holdout set (not part of training) to adapt the outcome. However, there is still not enough
research on identifying bias (cf. [18]). Additionally, combining an NLP DL approach with a
KG, using structured and unstructured data, can outperform non-hybrid models (cf. [48]).
      </p>
      <p>
        Regarding XAI, Yuan et al. [28] concluded that explainable detection of false information
from a user perspective is key and highlighted human-machine communication. As a last
obstacle for detecting and analyzing information disorders, there remains (vii) the open
issue of classifying them better into mis-, dis-, and malinformation. For this task, automatically
created domain-specific KGs can assist during the intent classification process (cf. [
        <xref ref-type="bibr" rid="ref30">44</xref>
        ]) and
add for example useful contextual information (cf. [
        <xref ref-type="bibr" rid="ref32">46</xref>
        ]). For instance, provenance or
temporal characteristics of information disorders could be considered in this context.
      </p>
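      <p>The mis-, dis-, and malinformation distinction itself follows the framework of Wardle and Derakhshan [2]: it hinges on two signals, falseness and intent to harm. A minimal encoding of that decision rule is sketched below; recognizing the two signals automatically is the actual research problem, so this only fixes the label mapping.</p>
      <preformat>
```python
def disorder_type(is_false, intends_harm):
    # Decision rule per the information disorder framework [2]:
    # false + harmful intent  -> disinformation
    # false, no harmful intent -> misinformation
    # genuine but weaponized  -> malinformation
    if is_false and intends_harm:
        return "disinformation"
    if is_false:
        return "misinformation"
    if intends_harm:
        return "malinformation"
    return "none"

print(disorder_type(True, False))   # prints "misinformation"
print(disorder_type(False, True))   # prints "malinformation"
```
      </preformat>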
      <p>Therefore, the main hypothesis for this research proposal is summarized below:</p>
      <sec id="sec-3-1">
        <p>A more transparent information disorder detection model may be made possible by: (i) a domain-specific cross-dataset analysis for information disorder detection using ML and DL; (ii) the enrichment of ML and DL with knowledge graph-based meaning that facilitates explainability and bias detection; and (iii) intent recognition approaches that can distinguish between mis-, dis-, and malinformation.</p>
      </sec>
      <sec id="sec-3-2">
        <p>The following research questions (RQs) are derived from this hypothesis:
RQ1) Which machine and deep learning algorithms are most effective when it comes to
information disorder detection?
RQ2) To what extent can machine and deep learning explainability and bias detection be
facilitated via knowledge graph-based enrichment?
RQ3) How can hybrid AI-based approaches be used to better distinguish between the three
information disorder types (misinformation, disinformation, and malinformation)?</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Plan and Methodology</title>
      <p>This research project is planned according to the design science research methodology (DSRM),
an information systems research method that aims to develop innovative
artifacts (cf. [51]) and consists of six phases, as shown in Figure 2 (cf. [49, 50]).</p>
      <p>The problem and motivation are to increase transparency and trust in AI-based detection
models in order to help stakeholders better cope with information disorders. The primary
goal is to improve the detection of information disorders, the explainability of models, bias
detection as well as mitigation, and intent recognition.</p>
      <p>In terms of solution objectives, the plan is to combine ML or DL algorithms with XAI, bias
detection, and KGs, in order to understand and mitigate potential bias in information disorder
detection models better. However, also for intent recognition, which is needed to distinguish
the three information disorder types (misinformation, disinformation, and malinformation)
effectively, transparency is key. Addressing the lack of training datasets (cf. [28]), one of the
envisaged outcomes and innovative artifacts (cf. [51]) is a new dataset incorporating the three
information disorder types with binary as well as non-binary labels.</p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Selected datasets covering different types of information disorders.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Information disorder type</th>
              <th>Dataset</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Fake News</td>
              <td>ISOT, BuzzFeed</td>
            </tr>
            <tr>
              <td>Misinformation</td>
              <td>LIAR, NELA-GT-2022</td>
            </tr>
            <tr>
              <td>Hate Speech</td>
              <td>ETHOS</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Design and Development aims to create one or more prototypes as artifacts that address the
issues highlighted under the solution objectives heading (cf. [50]). Figure 3 shows the research
plan. Possible artifacts include (i) statistical insights (accuracy, recall, precision, F1 score), (ii)
performance results, (iii) models, (iv) datasets, or (v) KGs. In addition, Table 1 shows five selected
datasets which cover different types of information disorders.</p>
      <p>During demonstration, the artifact is used to address the problem or instances derived from
a concrete use case scenario. This can be a case study or experimental setting (cf. [50]). For RQ1,
this includes the application of detection model prototypes. Regarding RQ2, KGs and statistical
insights from XAI are applied to analyze bias. Third, for RQ3, a new dataset containing mis-,
dis-, and malinformation as well as a model are part of the demonstration.</p>
      <p>In order to evaluate the artifact, we need to analyze how well it provides a solution to the
issue(s) it aims to address. Using selected quantitative metrics (e.g., accuracy, precision, recall,
F1 score, etc.), the objectives and goals can be compared as well as analyzed (cf. [50]).</p>
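      <p>The quantitative metrics named above can be computed directly from the confusion counts; a minimal sketch follows, with toy label vectors that are purely illustrative, not project results.</p>
      <preformat>
```python
def prf(y_true, y_pred):
    # Confusion counts for the positive (information disorder) class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy ground truth vs. predictions (illustrative only).
print(prf([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```
      </preformat>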
      <p>Finally, communication ensures that the problem and its relevance, artifacts and their
utility, consequent design, and solutions as well as their effectiveness are presented to our
stakeholders. The plan is to publish in highly ranked conferences and journals (cf. [50]).</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>This proposal aims to improve the detection of information disorders, bias detection and
mitigation, and intent recognition, and it stresses the need for more transparency and explainability
concerning AI detection models. We started by investigating the status quo, discussed the research
questions, described how to approach open issues, and presented the research plan as well as the
methodology guiding it. A limitation of this work is that it does not address multi-modal
detection. Future work includes tackling the research questions with these key objectives: (i) a
cross-dataset and detection model comparison; (ii) a bias analysis deploying XAI and KGs; and
(iii) the separation of information disorders by intent to harm.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <sec id="sec-6-1">
        <p>This research is conducted under the supervision of Prof. Sabrina Kirrane.</p>
        <p>[17] K. Hao, Even the best AI for spotting fake news is still terrible, MIT Technology Review,
2018. URL:
https://www.technologyreview.com/2018/10/03/139926/even-the-best-ai-for-spotting-fake-news-is-still-terrible/.
[18] P. Reyero, E. Daga, H. Alani, M. Fernández, A Systematic Survey of Semantic Web
Technologies for Bias in Artificial Intelligence Solutions, in: 2021.
https://api.semanticscholar.org/CorpusID:237102597.
[19] S. Tufchi, A. Yadav, T. Ahmed, A comprehensive survey of multimodal fake news detection
techniques: advances, challenges, and opportunities, International Journal of Multimedia
Information Retrieval 12 (2023). https://doi.org/10.1007/s13735-023-00296-3.
[20] B. Johnson, J. Bartola, R. Angell, S. Witty, S. Giguere, Y. Brun, Fairkit, fairkit, on the wall,
who’s the fairest of them all? Supporting fairness-related decision-making, EURO Journal
on Decision Processes 11 (2023) 100031. https://doi.org/10.1016/j.ejdp.2023.100031.
[21] Y. Jiang, X. Song, C. Scarton, A. Aker, K. Bontcheva, Categorising Fine-to-Coarse Grained
Misinformation: An Empirical Study of COVID-19 Infodemic, CoRR abs/2106.11702 (2021).
https://arxiv.org/abs/2106.11702.
[22] Y. Peskine, R. Troncy, P. Papotti, Analyzing COVID-Related Social Discourse on Twitter
using Emotion, Sentiment, Political Bias, Stance, Veracity and Conspiracy Theories, in:
Companion Proceedings of the ACM Web Conference 2023, Association for Computing
Machinery, New York, NY, USA, 2023: pp. 688–693.
https://doi.org/10.1145/3543873.3587622.
[23] M. Fernandez, H. Alani, Online Misinformation: Challenges and Future Directions, in:
Companion Proceedings of the The Web Conference 2018, International World Wide Web
Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2018: pp. 595–602.
https://doi.org/10.1145/3184558.3188730.
[24] G. Burel, T. Farrell, H. Alani, Demographics and topics impact on the co-spread of
COVID-19 misinformation and fact-checks on Twitter, Information Processing &amp; Management 58
(2021) 102732. https://doi.org/10.1016/j.ipm.2021.102732.
[25] M. Mensio, H. Alani, MisinfoMe: Who is Interacting with Misinformation?, in: M.C.
Suárez-Figueroa, G. Cheng, A.L. Gentile, C. Guéret, C.M. Keet, A. Bernstein (Eds.), Proceedings of
the ISWC 2019 Satellite Tracks (Posters &amp; Demonstrations, Industry, and Outrageous
Ideas) Co-Located with 18th International Semantic Web Conference (ISWC 2019),
Auckland, New Zealand, October 26-30, 2019, CEUR-WS.org, 2019: pp. 217–220.
https://ceur-ws.org/Vol-2456/paper57.pdf.
[26] M. Ali Tavakoli, H. Alani, G. Burel, On the Readability of Misinformation in Comparison to
the Truth, in: Text2Story@ECIR, 2023.
https://api.semanticscholar.org/CorpusID:258334770.
[27] G. Burel, H. Alani, The Fact-Checking Observatory: Reporting the Co-Spread of
Misinformation and Fact-checks on Social Media, in: Proceedings of the 34th ACM
Conference on Hypertext and Social Media, Association for Computing Machinery, New
York, NY, USA, 2023. https://doi.org/10.1145/3603163.3609042.
[28] L. Yuan, H. Jiang, H. Shen, L. Shi, N. Cheng, Sustainable Development of Information
Dissemination: A Review of Current Fake News Detection Research and Practice, Systems
11 (2023) 458. https://doi.org/10.3390/systems11090458.
[29] J.A. Leite, O. Razuvayevskaya, K. Bontcheva, C. Scarton, Detecting misinformation with
llm-predicted credibility signals and weak supervision, arXiv Preprint arXiv:2309.07601
(2023).
[30] M. Choraś, K. Demestichas, A. Giełczyk, Á. Herrero, P. Ksieniewicz, K. Remoundou, D.
Urda, M. Woźniak, Advanced Machine Learning techniques for fake news (online
disinformation) detection: A systematic mapping study, Applied Soft Computing 101 (2021)
107050. https://doi.org/10.1016/j.asoc.2020.107050.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>World Economic Forum</string-name>
          ,
          <source>The Global Risks Report 2024</source>
          , World Economic Forum,
          <year>2024</year>
          . URL: https://www.weforum.org/publications/global-risks-report-2024/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wardle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Derakhshan</surname>
          </string-name>
          , Information disorder:
          <article-title>Toward an interdisciplinary framework for research and policy making</article-title>
          ,
          <source>Council of Europe</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ortiz-Ospina</surname>
          </string-name>
          ,
          <article-title>The rise of social media</article-title>
          , Our World in Data,
          <year>2019</year>
          . URL: https://ourworldindata.org/rise-of-social-media.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>United Nations</string-name>
          ,
          <article-title>UN tackles 'infodemic' of misinformation and cybercrime in COVID-19 crisis</article-title>
          ,
          <year>2020</year>
          . URL: https://www.un.org/en/un-coronavirus-communications-team/untackling-%E2%80%98infodemic%E2%80%99-misinformation-and-cybercrime-covid-19.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cerulus</surname>
          </string-name>
          ,
          <article-title>EU medicines agency says hackers manipulated leaked coronavirus vaccine data</article-title>
          ,
          <source>Politico</source>
          ,
          <year>2021</year>
          . URL: https://www.politico.eu/article/european-medicines-agencyema-cyberattack-coronavirus-vaccine-data/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.W.</given-names>
            <surname>Green</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.E.</given-names>
            <surname>Fielding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.C.</given-names>
            <surname>Brownson</surname>
          </string-name>
          , More on Fake News, Disinformation, and
          <article-title>Countering These with Science</article-title>
          ,
          <source>Annu Rev Public Health</source>
          <volume>42</volume>
          (
          <year>2021</year>
          )
          <article-title>vi</article-title>
          . https://doi.org/10.1146/annurev-pu-42-012821-100001.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Iosifidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nicoli</surname>
          </string-name>
          , Digital Democracy, Social Media and Disinformation, Milton,
          <year>2020</year>
          . https://doi.org/10.4324/9780429318481.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Chadwick</surname>
          </string-name>
          ,
          <article-title>Web 2.0: New Challenges for the Study of E-Democracy in an Era of Informational Exuberance</article-title>
          ,
          <source>I/S: A Journal of Law and Policy for the Information Society</source>
          <volume>5</volume>
          (
          <year>2009</year>
          )
          <fpage>9</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Antunes</surname>
          </string-name>
          ,
          <article-title>Theoretical models of voting behaviour</article-title>
          ,
          <source>Exedra</source>
          <volume>4</volume>
          (
          <year>2010</year>
          )
          <fpage>145</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Colomina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sánchez Margalef</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Youngs</surname>
          </string-name>
          ,
          <article-title>The impact of disinformation on democratic processes and human rights in the world</article-title>
          ,
          <source>European Parliament - Policy Department</source>
          ,
          <year>2021</year>
          . URL: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/653635/EXPO_STU(2021)653635_EN.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Vese</surname>
          </string-name>
          ,
          <article-title>Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency</article-title>
          ,
          <source>European Journal of Risk Regulation</source>
          (
          <year>2021</year>
          )
          <fpage>41</fpage>
          . https://doi.org/10.1017/err.2021.48.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Di Mascio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Barbieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Natalini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Selva</surname>
          </string-name>
          ,
          <article-title>Covid-19 and the Information Crisis of Liberal Democracies: Insights from Anti-Disinformation Action in Italy and EU</article-title>
          ,
          <source>Partecipazione e Conflitto</source>
          <volume>14</volume>
          (
          <year>2021</year>
          )
          <fpage>221</fpage>
          -
          <lpage>240</lpage>
          . https://doi.org/10.1285/i20356609v14i1p221.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>E.</given-names>
            <surname>Aimeur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Amri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Brassard</surname>
          </string-name>
          ,
          <article-title>Fake news, disinformation and misinformation in social media: a review</article-title>
          ,
          <source>Social Network Analysis and Mining</source>
          <volume>13</volume>
          (
          <year>2023</year>
          ). https://doi.org/10.1007/s13278-023-01028-5.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.M.</given-names>
            <surname>Jones-Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mortensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don't</article-title>
          ,
          <source>American Behavioral Scientist</source>
          <volume>65</volume>
          (
          <year>2021</year>
          )
          <fpage>388</fpage>
          . https://doi.org/10.1177/0002764219869406.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Cassauwers</surname>
          </string-name>
          ,
          <article-title>Can artificial intelligence help end fake news?</article-title>
          ,
          <source>Horizon Magazine</source>
          ,
          <year>2019</year>
          . URL: https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-artificialintelligence-help-end-fake-news.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tchechmedjiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fafalios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Boland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gasquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zapilko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Todorov</surname>
          </string-name>
          ,
          <article-title>ClaimsKG: A Knowledge Graph of Fact-Checked Claims</article-title>
          , in:
          <string-name>
            <given-names>C.</given-names>
            <surname>Ghidini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Hartig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maleshkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svátek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lefrançois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gandon</surname>
          </string-name>
          (Eds.),
          <source>The Semantic Web - ISWC 2019</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          : pp.
          <fpage>309</fpage>
          -
          <lpage>324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lahby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-S.</given-names>
            <surname>Khan Pathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Maleh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.M.S.</given-names>
            <surname>Yafooz</surname>
          </string-name>
          ,
          <source>Combating Fake News with Computational Intelligence Techniques</source>
          , 1st ed., Springer Cham,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>J.Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pavlova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Content Based Fake News Detection Using Knowledge Graphs</article-title>
          , in: International Workshop on the Semantic Web,
          <year>2018</year>
          . https://api.semanticscholar.org/CorpusID:52900831.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>R.K.</given-names>
            <surname>Kaliyar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <article-title>FakeBERT: Fake news detection in social media with a BERT-based deep learning approach</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          <volume>80</volume>
          (
          <year>2021</year>
          )
          <fpage>11765</fpage>
          -
          <lpage>11788</lpage>
          . https://doi.org/10.1007/s11042-020-10183-2.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>R.</given-names>
            <surname>Denaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mensio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.M.</given-names>
            <surname>Gomez-Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alani</surname>
          </string-name>
          ,
          <article-title>Weaving a Semantic Web of Credibility Reviews for Explainable Misinformation Detection (Extended Abstract)</article-title>
          , in:
          <string-name>
            <given-names>Z.-H.</given-names>
            <surname>Zhou</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21</source>
          , International Joint Conferences on Artificial Intelligence Organization,
          <year>2021</year>
          : pp.
          <fpage>4760</fpage>
          -
          <lpage>4764</lpage>
          . https://doi.org/10.24963/ijcai.2021/646.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>F.A.</given-names>
            <surname>Lovera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.C.</given-names>
            <surname>Cardinale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.N.</given-names>
            <surname>Homsi</surname>
          </string-name>
          ,
          <article-title>Sentiment Analysis in Twitter Based on Knowledge Graph and Deep Learning Classification</article-title>
          ,
          <source>Electronics</source>
          <volume>10</volume>
          (
          <year>2021</year>
          ). https://doi.org/10.3390/electronics10222739.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>A.L.</given-names>
            <surname>Opdahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tessem</surname>
          </string-name>
          ,
          <article-title>Ontologies for finding journalistic angles</article-title>
          ,
          <source>Software and Systems Modeling</source>
          <volume>20</volume>
          (
          <year>2021</year>
          )
          <fpage>71</fpage>
          -
          <lpage>87</lpage>
          . https://doi.org/10.1007/s10270-020-00801-w.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>P.</given-names>
            <surname>Reyero Lobo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Daga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alani</surname>
          </string-name>
          ,
          <article-title>Supporting Online Toxicity Detection with Knowledge Graphs</article-title>
          ,
          <source>Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>16</volume>
          (
          <year>2022</year>
          )
          <fpage>1414</fpage>
          -
          <lpage>1418</lpage>
          . https://doi.org/10.1609/icwsm.v16i1.19398.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>M.</given-names>
            <surname>Szczepański</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pawlicki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kozik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Choraś</surname>
          </string-name>
          ,
          <article-title>New explainability method for BERT-based model in fake news detection</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>11</volume>
          (
          <year>2021</year>
          ). https://doi.org/10.1038/s41598-021-03100-6.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>A.B.</given-names>
            <surname>Hani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Adedugbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Al-Obeidat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Benkhelifa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Majdalawieh</surname>
          </string-name>
          ,
          <article-title>Fane-KG: A Semantic Knowledge Graph for Context-Based Fake News Detection on Social Media</article-title>
          , in:
          <source>2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS)</source>
          ,
          <year>2020</year>
          : pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . https://doi.org/10.1109/SNAMS52053.2020.9336542.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>A Comprehensive Survey on Automatic Knowledge Graph Construction</article-title>
          ,
          <source>ACM Comput. Surv.</source>
          <volume>56</volume>
          (
          <year>2023</year>
          ). https://doi.org/10.1145/3618295.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <article-title>DEAP-FAKED: Knowledge Graph based Approach for Fake News Detection</article-title>
          ,
          <source>2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)</source>
          (
          <year>2021</year>
          )
          <fpage>47</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>A.</given-names>
            <surname>Breit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Waltersdorfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.J.</given-names>
            <surname>Ekaputra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ekelhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Iana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Portisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Revenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.T.</given-names>
            <surname>Teije</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Harmelen</surname>
          </string-name>
          ,
          <article-title>Combining Machine Learning and Semantic Web: A Systematic Mapping Study</article-title>
          ,
          <source>ACM Comput. Surv.</source>
          <volume>55</volume>
          (
          <year>2023</year>
          ). https://doi.org/10.1145/3586163.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>V.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.A.</given-names>
            <surname>Meenai</surname>
          </string-name>
          ,
          <article-title>Pretrained Natural Language Processing Model for Intent Recognition (BERT-IR)</article-title>
          ,
          <source>Human-Centric Intelligent Systems</source>
          <volume>1</volume>
          (
          <year>2021</year>
          )
          <fpage>66</fpage>
          -
          <lpage>74</lpage>
          . https://doi.org/10.2991/hcis.k.211109.001.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arcan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Manjunath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Robin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pillai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dutta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Assem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.P.</given-names>
            <surname>McCrae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Buitelaar</surname>
          </string-name>
          ,
          <article-title>Intent Classification by the Use of Automatically Generated Knowledge Graphs</article-title>
          ,
          <source>Information</source>
          <volume>14</volume>
          (
          <year>2023</year>
          ). https://doi.org/10.3390/info14050288.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.V.</given-names>
            <surname>Phoha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zafarani</surname>
          </string-name>
          ,
          <article-title>“This is Fake! Shared it by Mistake”: Assessing the Intent of Fake News Spreaders</article-title>
          , in:
          <source>Proceedings of the ACM Web Conference 2022</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          : pp.
          <fpage>3685</fpage>
          -
          <lpage>3694</lpage>
          . https://doi.org/10.1145/3485447.3512264.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>G.K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <article-title>FakeKG: a knowledge graph of fake claims for improving automated factchecking (student abstract)</article-title>
          , in:
          <source>Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>