<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Awakened at EXIST2025: Adaptive Mixture of Transformers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandru Petrescu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena-Simona Apostol</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ciprian-Octavian Truică</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Academy of Romanian Scientists</institution>
          ,
          <addr-line>3 Ilfov, Bucharest</addr-line>
          ,
          <country country="RO">Romania</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Science and Technology Politehnica Bucharest</institution>
          ,
          <addr-line>Splaiul Independenței</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This paper presents an adaptive Mixture of Transformers architecture developed for the EXIST 2025 Lab, targeting the detection of sexist content in social media text. The proposed system combines nine transformer-based models, spanning both English-specific and multilingual variants, each specialized by language, platform, or task. A dynamic weighting mechanism automatically adjusts the contribution of each model in the ensemble based on the detected language and performance metrics, enabling robust and context-aware classification across diverse linguistic settings. Experimental results demonstrate that the 2025 architecture achieves competitive performance compared to previous years, surpassing the 2023 iteration in several Subtasks and both previous iterations in Subtask 1.3. However, results for certain Subtasks indicate areas for further optimization. The findings highlight the effectiveness of adaptive model selection and weighting in ensemble architectures for harmful content detection and suggest promising directions for future research in multilingual and context-sensitive text classification.</p>
      </abstract>
      <kwd-group>
        <kwd>Mixture of Transformers</kwd>
        <kwd>Text Classification</kwd>
        <kwd>Learning with Disagreements</kwd>
        <kwd>Sexism detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The increase of harmful content on social media platforms presents a significant challenge for both
researchers and practitioners, particularly when it comes to the detection and mitigation of sexist
material. Addressing these issues requires robust, adaptable, and multilingual solutions capable of
operating effectively across diverse linguistic and contextual settings. Building on previous work in
the field, this paper introduces an enhanced approach for the EXIST 2025 Lab [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], focusing on the
detection of sexist content in textual data using an adaptive mixture of transformer-based models.
      </p>
      <p>
        Our objective is to improve upon earlier iterations of the Mixture of Transformers [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] architecture by
overcoming identified limitations and leveraging recent advancements in transformer models. While
the EXIST Lab encompasses multiple content types—including text, images, and videos—this work
concentrates exclusively on text classification, reflecting both the prevalence of textual interactions on
social networks and the unique challenges posed by language-based harmful content.
      </p>
      <p>The proposed system integrates nine distinct transformer models, including both English-specific
and multilingual variants, each tailored to specific languages, tasks, or social media platforms. Central
to our approach is a dynamic weighting mechanism that automatically adjusts the contribution of each
model within the ensemble based on the detected language and relevant performance metrics. This
adaptive strategy ensures optimal performance across a wide range of linguistic scenarios and enhances
the system’s ability to detect and classify sexist content accurately.</p>
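<p>A minimal sketch of the dynamic weighting idea: each transformer emits a probability that the input is sexist, and the ensemble output is a weighted average whose weights depend on the detected language. The model names are shortened and the weight values are illustrative, not the ones learned by the system:</p>
```python
# Illustrative per-language weights: multilingual models get more mass
# on Spanish input, platform-specific English models on English input.
WEIGHTS = {
    "en": {"twitter-roberta": 0.5, "bert-toxic": 0.3, "xlm-roberta": 0.2},
    "es": {"twitter-roberta": 0.2, "bert-toxic": 0.2, "xlm-roberta": 0.6},
}

def ensemble_probability(per_model_probs, language):
    """Weighted average of each model's P(sexist) under the detected language."""
    weights = WEIGHTS[language]
    total = sum(weights.values())
    return sum(weights[m] * p for m, p in per_model_probs.items()) / total

probs = {"twitter-roberta": 0.9, "bert-toxic": 0.7, "xlm-roberta": 0.4}
print(round(ensemble_probability(probs, "en"), 3))  # 0.74
print(round(ensemble_probability(probs, "es"), 3))  # 0.56
```
<p>The same per-model outputs thus yield different ensemble decisions depending on the language, which is the mechanism behind the context-aware behavior described above.</p>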
      <p>By advancing the state of the art in mixture-of-experts architectures and addressing the complex
problem of learning with disagreements, this paper aims to contribute effective tools for reducing
harmful content and fostering safer online communities.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        In recent years, researchers have addressed harmful content detection using three primary approaches.
The most prevalent method involves combining word and transformer-based embeddings with deep
learning techniques to classify textual data [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6 ref7">4, 5, 6, 7, 3</xref>
        ]. Another research direction focuses on
enriching contextual understanding by incorporating metadata, such as social context [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or tracking
how information spreads within a network [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Additionally, emerging strategies advocate for the
application of network immunization techniques to stop the spread and dissemination of harmful
content [
        <xref ref-type="bibr" rid="ref10">10, 11, 12, 13, 14, 15</xref>
        ]. Furthermore, holistic systems and architectures have also been developed
to monitor and analyze social media content in real-time to detect and stop the spread of harmful
content [16, 17].
      </p>
      <p>
        The objectives of this lab focus on reducing harmful content on social networks in general, with
a particular focus on sexist material. Our goal is to enhance the approach developed by
our team in previous editions, [18] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], by addressing some of the shortcomings identified in the
earlier Mixture of Transformers architecture. While the lab covers three types of content (text, images,
and videos), we focus on text.
      </p>
      <p>
        As the current literature also examines how harmful content spreads online [
        <xref ref-type="bibr" rid="ref10">10, 12, 11</xref>
        ],
and how Mixture of Experts architectures can be applied to pretrained BERT models [19], we can later start a
discussion on how its effects can be mitigated on social platforms [13, 14, 15].
      </p>
      <p>Our system architecture, showcased in Figure 1, is built to effectively detect sexist content in social
media textual content by utilizing a mixture of transformer-based models combined with another learner
capable of automatically adjusting the contribution of each transformer within the ensemble. The
architecture integrates a total of nine distinct transformer models, including four that support multiple
languages, each specialized in either the language, task, or platform from which the content comes.
The central concept is to dynamically select and weight the models based on the detected language of
the input tweet and relevant performance metrics, ensuring optimal results across various linguistic
settings.</p>
      <p>
        In order to have proper experiments, we have chosen the same transformer model repository as the
one in the 2024 architecture [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], provided by Hugging Face. Our selection of English-specific models
for these tasks includes:
• twitter-roberta: a model pre-trained on Twitter data, ideal for social media language.
• bert-toxic-comment-classification: a BERT-based model fine-tuned for detecting toxic content.
• distilbert-uncased-english: a smaller, faster, and more efficient version of BERT.
• MiniLM-L12-H384: a highly compact but powerful language model.
• roberta-hate-speech-dynabench-r4: a RoBERTa variant specifically trained for hate speech
detection.
      </p>
      <p>For our system’s multilingual functionality, we selected the following transformer models:
• twitter-xlm-roberta-base-sentiment: an XLM-RoBERTa variant fine-tuned for sentiment
analysis on Twitter data across multiple languages.
• twitter-xlm-roberta-base-sentiment-multilingual: this model is similar to the above but
fine-tuned on a broader, more diverse multilingual sentiment dataset.
• distilbert-base-multilingual-cased-sentiments: a more compact and efficient version of BERT,
pre-trained on a vast multilingual corpus and fine-tuned for sentiment analysis.
• xlm-roberta: a multilingual model pre-trained on an enormous dataset covering over 100
languages.</p>
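<p>The language-based routing over this repository can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the multilingual flags simply mirror Table 2 (the checkpoint IDs are the Hugging Face ones listed there):</p>
```python
# Model repository keyed by Hugging Face checkpoint ID; the boolean flag
# marks whether the checkpoint is multilingual, following Table 2.
MODEL_REPOSITORY = {
    "cardiffnlp/twitter-roberta-base-sentiment-latest": False,
    "cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual": True,
    "cardiffnlp/twitter-xlm-roberta-base-sentiment": True,
    "JungleLee/bert-toxic-comment-classification": False,
    "distilbert/distilbert-base-uncased-finetuned-sst-2-english": False,
    "lxyuan/distilbert-base-multilingual-cased-sentiments-student": True,
    "microsoft/Multilingual-MiniLM-L12-H384": False,
    "papluca/xlm-roberta-base-language-detection": True,
    "facebook/roberta-hate-speech-dynabench-r4-target": False,
}

def models_for(language):
    """English tweets can use every model; other languages only the multilingual ones."""
    if language == "en":
        return list(MODEL_REPOSITORY)
    return [m for m, multilingual in MODEL_REPOSITORY.items() if multilingual]

print(len(models_for("en")), len(models_for("es")))  # 9 4
```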
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <p>
        Building upon the mixture architectures introduced in our 2024 work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the 2025 Lab implementation
employs three transformer configurations with automated weight optimization:
• Full Ensemble - Configuration 1: Uses all nine transformer models, with weights dynamically
adjusted by the learning algorithm.
• Top Performers - Configuration 2: Combines specialized leaders from distinct categories:
twitter-roberta (platform-optimized), twitter-xlm-roberta-base-sentiment-multilingual
(multilingual analysis), and bert-toxic-comment-classification (task-specific detection).
• Best and Worst - RoBERTa - Configuration 3: Averages the RoBERTa variants' contributions
for Subtasks 1.1 and 1.2 and picks the best one and the worst three to add some noise, resulting in:
twitter-roberta, twitter-xlm-roberta-base-sentiment, roberta-hate-speech-dynabench-r4, and
twitter-xlm-roberta-base-sentiment-multilingual.
      </p>
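<p>The three configurations above can be written out declaratively; the short model names follow the lists in Section 2, and this is only a data sketch of which checkpoints each configuration uses:</p>
```python
CONFIGURATIONS = {
    "full_ensemble": [  # Configuration 1: all nine models
        "twitter-roberta", "bert-toxic-comment-classification",
        "distilbert-uncased-english", "MiniLM-L12-H384",
        "roberta-hate-speech-dynabench-r4", "twitter-xlm-roberta-base-sentiment",
        "twitter-xlm-roberta-base-sentiment-multilingual",
        "distilbert-base-multilingual-cased-sentiments", "xlm-roberta",
    ],
    "top_performers": [  # Configuration 2: one leader per category
        "twitter-roberta",
        "twitter-xlm-roberta-base-sentiment-multilingual",
        "bert-toxic-comment-classification",
    ],
    "best_and_worst_roberta": [  # Configuration 3: selected RoBERTa variants
        "twitter-roberta", "twitter-xlm-roberta-base-sentiment",
        "roberta-hate-speech-dynabench-r4",
        "twitter-xlm-roberta-base-sentiment-multilingual",
    ],
}

print({name: len(models) for name, models in CONFIGURATIONS.items()})
```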
      <p>In Table 1 we showcase the contribution of each model in the ensemble for the main configuration
(Full Ensemble) for Subtasks 1.1 and 1.2. We analyze the importance of each model in the ensemble
(Imp-1 for Subtask 1.1 and Imp-2 for Subtask 1.2) and the contribution to the output percentage (%-1
for Subtask 1.1 and %-2 for Subtask 1.2). Based on these results, we derived Configurations 2
and 3. For the ensemble weights, we considered only the single-label classification
Subtasks 1.1 and 1.2 and not Subtask 1.3, which is multi-label classification, for this version of the
system. Each Subtask and configuration employs its own learning module, with the same learning
strategy: maximize the hard-label metric. This dynamic weighting strategy enables context-aware
prioritization of models, improving both detection accuracy and cross-lingual robustness.</p>
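<p>As a toy illustration of what such a learning module does, the sketch below searches for ensemble weights that maximize a hard-label metric on validation data. Accuracy and random search stand in for the paper's F1 objective and actual learning algorithm, so treat the whole block as an assumption-laden simplification:</p>
```python
import random

def hard_accuracy(weights, model_probs, labels):
    """Hard-label accuracy of the weighted ensemble (threshold at 0.5)."""
    correct = 0
    for probs, label in zip(model_probs, labels):
        score = sum(w * p for w, p in zip(weights, probs)) / sum(weights)
        correct += int((score >= 0.5) == label)
    return correct / len(labels)

def fit_weights(model_probs, labels, n_models, trials=500, seed=0):
    """Random search over weight vectors, keeping the best-scoring one."""
    rng = random.Random(seed)
    best_w = [1.0] * n_models
    best_acc = hard_accuracy(best_w, model_probs, labels)
    for _ in range(trials):
        w = [rng.random() for _ in range(n_models)]
        acc = hard_accuracy(w, model_probs, labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Model 0 is perfectly informative, model 1 is anti-correlated:
model_probs = [(0.9, 0.1), (0.1, 0.9), (0.8, 0.2), (0.2, 0.8)]
labels = [1, 0, 1, 0]
weights, acc = fit_weights(model_probs, labels, n_models=2)
print(acc)  # expect 1.0 once model 0 is weighted above model 1
```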
      <p>For our experiments, we propose a mixture of English-only and multi-lingual transformer-based
models, presented in Table 2, as we want to showcase our mixture of models architecture, as seen
in Figure 1, based on the language of the tweets. The output module leverages 3 types of mixtures,
described above. Since the structure of this competition is that Subtasks 1.2 and 1.3 leverage the output
of Subtask 1.1, our system does the same, so we propagate the mixtures, meaning that for Subtasks 1.2
and 1.3, for each mixture type, the corresponding mixture from Subtask 1.1 is used. For all the learning
tasks, we used early stopping with a patience of 3 epochs, the best-model strategy, and the following
hyperparameters:
1. learning rate = 2e-5
2. training batch size = 32
3. evaluation batch size = 32
4. weight decay = 0.01
5. maximum number of epochs = 50
</p>
      <table-wrap id="tbl2">
        <label>Table 2</label>
        <caption>
          <p>Transformer model repository and multilingual support</p>
        </caption>
        <table>
          <thead>
            <tr><th>Model</th><th>IsMultiLingual</th></tr>
          </thead>
          <tbody>
            <tr><td>cardiffnlp/twitter-roberta-base-sentiment-latest</td><td>No</td></tr>
            <tr><td>cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual</td><td>Yes</td></tr>
            <tr><td>cardiffnlp/twitter-xlm-roberta-base-sentiment</td><td>Yes</td></tr>
            <tr><td>JungleLee/bert-toxic-comment-classification</td><td>No</td></tr>
            <tr><td>distilbert/distilbert-base-uncased-finetuned-sst-2-english</td><td>No</td></tr>
            <tr><td>lxyuan/distilbert-base-multilingual-cased-sentiments-student</td><td>Yes</td></tr>
            <tr><td>microsoft/Multilingual-MiniLM-L12-H384</td><td>No</td></tr>
            <tr><td>papluca/xlm-roberta-base-language-detection</td><td>Yes</td></tr>
            <tr><td>facebook/roberta-hate-speech-dynabench-r4-target</td><td>No</td></tr>
          </tbody>
        </table>
      </table-wrap>
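<p>The training setup described above can be sketched as data plus a tiny early-stopping helper: stop when the validation metric has not improved for 3 consecutive epochs and keep the best checkpoint (the best-model strategy). The hyperparameter names are our reading of the list above, not verbatim from any framework:</p>
```python
HYPERPARAMS = {
    "learning_rate": 2e-5,
    "train_batch_size": 32,
    "eval_batch_size": 32,
    "weight_decay": 0.01,
    "max_epochs": 50,
    "early_stopping_patience": 3,
}

def best_epoch(val_scores, patience=3):
    """Index of the best-scoring epoch seen before early stopping triggers."""
    best_i, stale = 0, 0
    for i in range(1, len(val_scores)):
        if val_scores[i] > val_scores[best_i]:
            best_i, stale = i, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_i

print(best_epoch([0.61, 0.68, 0.70, 0.69, 0.69, 0.68, 0.75]))  # 2
```
<p>Note how the late 0.75 is never reached: training halts after three stale epochs, and the epoch-2 checkpoint is kept.</p>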
      <p>According to the official competition guidelines, the evaluation metrics include ICM-Hard, ICM-Hard
Norm, F1-Score, Cross Entropy, Majority class, Minority class, and Oracle most voted. Our system
is specifically optimized for these metrics. For Subtasks 1.1 and 1.2, we focus on maximizing the
F1-Score [20], while for Subtask 1.3, we employ a custom Mean Squared Error as the primary metric.</p>
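<p>The two optimization targets can be sketched as follows: binary F1 for the hard-label Subtasks 1.1 and 1.2, and a mean squared error over soft label distributions for Subtask 1.3. The paper's MSE variant is custom, so the plain per-component average below is only an assumption:</p>
```python
def f1_score(y_true, y_pred):
    """Binary F1 over hard labels (1 = sexist)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def soft_mse(true_dists, pred_dists):
    """Mean squared error between annotator label distributions and predictions."""
    total, n = 0.0, 0
    for t, p in zip(true_dists, pred_dists):
        for ti, pi in zip(t, p):
            total += (ti - pi) ** 2
            n += 1
    return total / n

print(f1_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
```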
      <p>Regarding hyperparameter tuning, each model is individually optimized as if it were solving the
Subtask independently in the current setup; the Weight Adjuster then learns the weights from the
outputs of each individual transformer, according to the chosen configuration.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>
        The proposed solution achieved strong leaderboard results, and we are analyzing its performance
in comparison to the 2024 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and 2023 [18] approaches. Table 3 presents the overall evaluation
metrics—both soft and hard—for all languages.
      </p>
      <p>For Subtask 1.1, the proposed 2025 architecture obtains better results than the 2023 architecture,
but falls short of the 2024 one, both in soft and hard evaluations, with a larger difference in the soft
one. Surprisingly, the best-performing configuration is number 3 (Best and Worst - RoBERTa), closely
followed by number 1 (Full Ensemble), with number 2 (Top Performers) somewhat further behind.</p>
      <p>For Subtask 1.2, the results are considerably worse than both the 2024 and 2023 systems in all metrics,
for both soft and hard evaluations, which makes this a main point of interest for further improvement.
Here, the best-performing configuration is number 2 (Top Performers),
followed closely by number 1 (Full Ensemble), and then, by a large margin, by number 3
(Best and Worst - RoBERTa).</p>
      <p>For Subtask 1.3, we have managed to improve our results over the 2023 and 2024 iterations by some
margin, both in the soft and hard evaluations. The best-performing configuration is again number 2 (Top
Performers), followed by a small difference by number 1 (Full Ensemble), and then again by a small
difference by number 3 (Best and Worst - RoBERTa).</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future directions</title>
      <p>The adaptive Mixture of Transformers architecture developed for EXIST 2025 demonstrates the value
of leveraging diverse transformer models over certain categories, such as multilingual, task-specific,
and platform-specific. By learning the weights for the transformer ensemble according to the provided
configurations, the system achieves robust and context-aware performance across multiple languages
and evaluation settings. Experimental results show that the 2025 solution performs competitively on the
leaderboard, surpassing the 2023 approach in several Subtasks and offering improvements in Subtask 1.3
over both previous iterations. However, Subtask 1.2 results indicate areas where further optimization is
needed, particularly in both soft and hard evaluation metrics.</p>
      <p>The analysis of different mixture configurations reveals that adaptive selection and weighting of
transformer models can significantly influence performance, with certain configurations excelling
in specific Subtasks. The architecture remains resource-efficient and easily upgradable, supporting
ongoing experimentation and refinement.</p>
      <p>For future work, leveraging the Subtask 1.1 findings, we noticed that adding noise to the ensemble
produced an interesting result, which needs to be analyzed on other datasets in order to confirm this
behavior.</p>
      <p>From the Subtask 1.2 findings, we can also apply multiple strategies, such as better metrics for multi-class
classification, not focusing only on the hard-label evaluation, and choosing better models for the model
repository.</p>
      <p>As for the findings from Subtask 1.3, an ensemble could also be custom-tailored to its results using
the soft-label evaluation, in contrast to the hard-label evaluations considered in the current iteration
of the system.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The research presented in this paper was supported in part by (1) The Academy of Romanian Scientists
through the funding of the project “SCAN-NEWS: Smart system for deteCting And mitigatiNg
misinformation and fake news in social media” (AOS, R-TEAMS-III); (2) The Academy of Romanian Scientists
through the funding of the project “NetGuardAI: Intelligent system for harmful content detection and
immunization on social networks” (AOS, R-TEAMS-IV); and (3) the National University of Science and
Technology, POLITEHNICA Bucharest, through the PubArt program.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
      <p>[11] A. Petrescu, C.-O. Truică, E.-S. Apostol, A. Paschke, EDSA-Ensemble: An Event Detection Sentiment Analysis Ensemble Architecture, IEEE Transactions on Affective Computing (2024) 1–18. doi:10.1109/TAFFC.2024.3434355.
[12] C.-O. Truică, E.-S. Apostol, T. Ștefu, P. Karras, A Deep Learning Architecture for Audience Interest Prediction of News Topic on Social Media, in: International Conference on Extending Database Technology (EDBT 2021), 2021, pp. 588–599. doi:10.5441/002/EDBT.2021.69.
[13] A. Petrescu, C.-O. Truică, E.-S. Apostol, P. Karras, Sparse Shield: Social Network Immunization vs. Harmful Speech, in: ACM International Conference on Information and Knowledge Management (CIKM 2021), ACM, 2021, pp. 1426–1436. doi:10.1145/3459637.3482481.
[14] C.-O. Truică, E.-S. Apostol, R.-C. Nicolescu, P. Karras, MCWDST: A Minimum-Cost Weighted Directed Spanning Tree Algorithm for Real-Time Fake News Mitigation in Social Media, IEEE Access 11 (2023) 125861–125873. doi:10.1109/ACCESS.2023.3331220.
[15] E.-S. Apostol, Özgur Coban, C.-O. Truică, CONTAIN: A Community-based Algorithm for Network Immunization, Engineering Science and Technology, an International Journal 55 (2024) 1–10 (101728). doi:10.1016/j.jestch.2024.101728.
[16] E.-S. Apostol, C.-O. Truică, A. Paschke, ContCommRTD: A Distributed Content-Based Misinformation-Aware Community Detection System for Real-Time Disaster Reporting, IEEE Transactions on Knowledge and Data Engineering (2024) 1–12. doi:10.1109/tkde.2024.3417232.
[17] C.-O. Truică, A.-T. Constantinescu, E.-S. Apostol, StopHC: A Harmful Content Detection and Mitigation Architecture for Social Media Platforms, in: IEEE International Conference on Intelligent Computer Communication and Processing (ICCP 2024), 2024, pp. 1–5. doi:10.1109/ICCP63557.2024.10793051.
[18] A. Petrescu, Leveraging MiniLMv2 Pipelines for EXIST2023, in: Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), volume 3497 of CEUR Workshop Proceedings, CEUR-WS.org, 2023, pp. 1037–1043.
[19] L. Hallee, R. Kapur, A. Patel, J. P. Gleghorn, B. Khomtchouk, Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings, 2024. URL: https://arxiv.org/abs/2401.15713. arXiv:2401.15713.
[20] C.-O. Truică, C. A. Leordeanu, Classification of an Imbalanced Data Set using Decision Tree Algorithms, University Politehnica of Bucharest Scientific Bulletin - Series C Electrical Engineering and Computer Science 79 (2017) 69–84.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] L. Plaza, J. Carrillo-de Albornoz, I. Arcos, P. Rosso, D. Spina, E. Amigó, J. Gonzalo, R. Morante, Overview of EXIST 2025: Learning with Disagreement for Sexism Identification and Characterization in Tweets, Memes, and TikTok Videos. Experimental IR Meets Multilinguality, Multimodality, and Interaction, in: Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025), 2025.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] L. Plaza, J. Carrillo-de Albornoz, I. Arcos, P. Rosso, D. Spina, E. Amigó, J. Gonzalo, R. Morante, Overview of EXIST 2025: Learning with Disagreement for Sexism Identification and Characterization in Tweets, Memes, and TikTok Videos (Extended Overview), in: CLEF 2025 Working Notes, 2025.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] A. Petrescu, C.-O. Truică, E.-S. Apostol, Language-based Mixture of Transformers for EXIST2024, in: Working Notes of the Conference and Labs of the Evaluation Forum, volume 3740 of CEUR Workshop Proceedings, 2024, pp. 1157–1164.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] V.-I. Ilie, C.-O. Truică, E.-S. Apostol, A. Paschke, Context-Aware Misinformation Detection: A Benchmark of Deep Learning Architectures Using Word Embeddings, IEEE Access 9 (2021) 162122–162146. doi:10.1109/ACCESS.2021.3132502.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] C.-O. Truică, E.-S. Apostol, MisRoBÆRTa: Transformers versus Misinformation, Mathematics 10 (2022) 1–25 (569). doi:10.3390/math10040569.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] C.-O. Truică, E.-S. Apostol, A. Paschke, Awakened at CheckThat! 2022: Fake News Detection using BiLSTM and Sentence Transformer, in: Working Notes of the Conference and Labs of the Evaluation Forum, 2022, pp. 749–757.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] C.-O. Truică, E.-S. Apostol, It's all in the Embedding! Fake News Detection using Document Embeddings, Mathematics 11 (2023) 1–29 (508). doi:10.3390/math11030508.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] C.-O. Truică, E.-S. Apostol, P. Karras, DANES: Deep Neural Network Ensemble Architecture for Social and Textual Context-aware Fake News Detection, Knowledge-Based Systems 294 (2024) 1–13 (111715). doi:10.1016/j.knosys.2024.111715.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] C.-O. Truică, E.-S. Apostol, M. Marogel, A. Paschke, GETAE: Graph Information Enhanced Deep Neural NeTwork Ensemble ArchitecturE for Fake News Detection, Expert Systems with Applications 275 (2025) 126984. doi:10.1016/j.eswa.2025.126984.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] A. Petrescu, C.-O. Truică, E.-S. Apostol, Sentiment Analysis of Events in Social Media, in: 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP), IEEE, 2019, pp. 143–149. doi:10.1109/iccp48234.2019.8959677.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>