<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Team karami-sh at PAN: Transformer-based Ensemble Learning for Multi-Author Writing Style Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammad Karami Sheykhlan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Saleh Kheiri Abdoljabbar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mona Nouri Mahmoudabad</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Mohaghegh Ardabili</institution>
          ,
          <addr-line>Daneshgah St., Ardabil, 5619911367</addr-line>
          ,
          <country country="IR">Iran</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Tabriz</institution>
          ,
          <addr-line>Bahman Boulevard, Tabriz, 5166616471</addr-line>
          ,
          <country country="IR">Iran</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>Our study addresses the intricate task of detecting style changes within documents authored by multiple individuals. The primary aim is to pinpoint instances where authors transition within the text. This task holds significant importance in author identification, particularly in situations where no comparative texts are available. By discerning variations in writing style, we can unveil instances of plagiarism, identify instances of gift authorship, authenticate claimed authorships, and potentially develop novel technologies to support writing endeavors. Our methodology employs sophisticated ensemble learning techniques, incorporating fine-tuned Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa) and Eficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA) models for medium and hard subtask and fine-tuned RoBERTa for easy subtask, to efectively address this complex challenge across diverse datasets, such as those found in Reddit comments. Our team's approach achieved F1-scores of 97.2%, 66.4%, and 64.2% for the Easy, Medium, and Hard subtasks, respectively.</p>
      </abstract>
      <kwd-group>
        <kwd>Ensemble learning</kwd>
        <kwd>Multi-author style change detection</kwd>
        <kwd>Natural language processing</kwd>
        <kwd>Transformers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Hashemi and Shi [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] fine-tuned a RoBERTa model and employed data augmentation and ensemble learning techniques. Their models ranked
first in two subtasks and second in the others, showcasing strong performance. Chen et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] present a
study employing contrastive learning techniques for analyzing writing style. They refine the sentence-segment
embeddings produced by a pre-trained model’s encoder so that stylistically similar sentences lie
closer together and dissimilar ones farther apart. Using this optimized encoder, they
generate sentence embeddings by combining label information with paragraph sample pairs and classify
them through a fully connected layer. Experimental results demonstrate F1-scores of 0.9145, 0.8203,
and 0.6755 on Task 1, Task 2, and Task 3 of the official test set, respectively. Kucukkaya et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
delve into the task of multi-author writing style detection, which involves identifying shifts in writing
style within text documents. They frame this challenge as a natural language inference task, pairing
consecutive paragraphs. Their strategy emphasizes paragraph transitions and token truncation for
input. Using various Transformer-based encoders with warmup training, they submit a model version
that surpasses baselines and other proposed versions in experimentation. Specifically, they employ
a transition-focused natural language inference approach based on Decoding-enhanced BERT with
Disentangled Attention (DeBERTa) with warmup training for easy and medium setups, while opting
for the same model without transitions for the hard setup.
      </p>
      <p>
        While feature extraction methods like TF-IDF have yielded satisfactory results in author profiling
and Authorship Attribution tasks [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], their performance is less effective on the Hard
dataset of the style change detection (SCD) task, where the significant similarity among sample texts limits
their usefulness. Recent progress in this field has shown that BERT and similar language models,
known for their complexity and extensive parameter counts, perform exceptionally well in SCD tasks.
      </p>
      <p>In this study, we investigated the efficacy of transformer-based models for detecting changes in
authorial style at the paragraph level. We first fine-tuned three transformer-based models on the
training dataset and then employed ensemble learning techniques to further improve the performance
of our approach.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Task and Datasets</title>
      <p>The datasets provided by PAN@CLEF for writing style change detection are categorized into three difficulty levels:
Easy, Medium, and Hard. Each level presents unique challenges for detecting shifts in authorship
within documents. On the Easy level, documents contain paragraphs covering diverse topics, allowing
approaches to utilize topic information effectively in detecting authorship changes. Conversely, the
Medium level entails documents with limited topical variety, requiring a stronger emphasis on stylistic
analysis to address the detection task. Finally, the Hard level poses the most challenging scenario, where
all paragraphs within a document focus on the same topic, demanding approaches to rely solely on
stylistic cues for accurate detection.</p>
      <p>Moreover, for each subtask, PAN@CLEF provides distinct datasets comprising multiple documents,
each containing paragraphs. Accompanying these datasets are ground truth files, which furnish
essential information: the number of authors associated with each document and the identification of
consecutive paragraphs where style changes occur, signifying transitions in authorship. These datasets
are partitioned into training and validation sets to facilitate experimental setup. In our approach, we
treat every pair of consecutive paragraphs as a sample, concatenating them and assigning a label
indicating whether a style change occurred between the two paragraphs (labeled as 1) or if they were
written by the same author (labeled as 0).</p>
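<p>The pairing scheme described above can be sketched in plain Python; the <code>paragraphs</code> and <code>changes</code> structures mirror the ground-truth format (one change flag per consecutive paragraph pair), and the names are illustrative:</p>

```python
def build_samples(paragraphs, changes):
    """Turn one document into (text, label) samples.

    Each pair of consecutive paragraphs becomes one sample; the label
    is 1 if a style change occurs between them, 0 otherwise.
    """
    assert len(changes) == len(paragraphs) - 1
    samples = []
    for i in range(len(paragraphs) - 1):
        # Concatenate the two consecutive paragraphs into one input text.
        text = paragraphs[i] + " " + paragraphs[i + 1]
        samples.append((text, changes[i]))
    return samples
```

<p>Applied to a three-paragraph document with <code>changes = [0, 1]</code>, this yields two samples, the second labeled as a style change.</p>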
    </sec>
    <sec id="sec-3">
      <title>3. System Overview</title>
      <sec id="sec-3-1">
        <title>3.1. Data preparation</title>
        <p>For each subtask, our methodology involves concatenating two consecutive paragraphs and assigning
them a binary label, thereby framing each subtask as a binary classification problem. In our study, we
harness three state-of-the-art transformer-based models, BERT, RoBERTa, and ELECTRA,
together with their corresponding tokenizers. Given the 512-token input limit of these
models and the rarity of samples exceeding it, we adopt a uniform policy of truncating
longer samples to 512 tokens. To maintain a consistent attention distribution across the
paragraphs within each sample, the truncation removes tokens from the end of the sequence.
This approach lets us leverage the capabilities of these models while respecting the practical
constraints of our dataset.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Transformer-based Models</title>
        <p>
          Transformer models have revolutionized natural language processing, capturing complex relationships
in text data through self-attention mechanisms. Unlike traditional models, they process all words in a
sentence simultaneously, enabling efficient handling of longer sequences. Pre-trained on large corpora,
models like BERT and Generative Pre-trained Transformer (GPT) learn rich language representations,
adaptable to various tasks with minimal labeled data. Widely used in machine translation,
summarization [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], and text classification such as hate speech detection [11], transformers are fundamental in
modern NLP systems. In our study, we employed three popular pre-trained transformer models, namely
BERT, RoBERTa, and ELECTRA.
        </p>
        <p>BERT, RoBERTa, and ELECTRA are prominent transformer-based models in NLP. BERT, introduced
by Google, utilizes bidirectional training and transformer architecture to capture contextual information
efectively. RoBERTa, an improvement upon BERT, optimizes training strategies and scales model size,
achieving superior performance on various NLP tasks. ELECTRA, employing a novel pre-training
approach, replaces tokens with plausible alternatives and trains a discriminator to distinguish between
real and replaced tokens, enhancing efficiency and learning effectiveness.</p>
        <p>Ultimately, we augmented each language model with a binary classification layer to detect changes
in writing style. We fine-tuned the models for each subtask using their respective datasets. Additionally,
we aggregated the datasets from all three subtasks and fine-tuned the RoBERTa and ELECTRA models
on this combined dataset.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Ensemble learning</title>
        <p>Hard-voting ensemble learning combines the predictions of multiple models, known as base
learners, by taking a majority vote over their predicted labels, yielding a stronger and more robust
predictor. Whereas traditional ensemble methods such as bagging and boosting typically combine
many diverse but weak models, our ensemble integrates several complex, high-performing
models to tackle a challenging task. This technique leverages the collective intelligence of diverse models,
each trained on different aspects of the data, to improve overall predictive accuracy and generalization.</p>
        <p>In this study, we developed five single models for each subtask, all based on three fine-tuned models:
BERT, RoBERTa, and ELECTRA. Initially, we fine-tuned all three models with the dataset corresponding
to each subtask. Then, we combined the datasets of all three subtasks and fine-tuned the RoBERTa
and ELECTRA models on them. Our goal with ensemble learning is to leverage the strengths of each
approach to enhance system performance. Detailed specifics will be provided in the next section.</p>
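<p>A minimal sketch of the hard-voting step, assuming each fine-tuned model has already emitted a 0/1 label per paragraph pair (variable names are illustrative):</p>

```python
from collections import Counter


def hard_vote(predictions):
    """Majority vote over per-model label predictions.

    `predictions` is a list of lists: one list of 0/1 labels per model,
    aligned over the same samples. With an odd number of models, binary
    ties cannot occur; otherwise a tie falls to the label seen first.
    """
    ensembled = []
    for sample_preds in zip(*predictions):
        # Pick the most frequent label among the models for this sample.
        ensembled.append(Counter(sample_preds).most_common(1)[0][0])
    return ensembled


# Three models voting on four samples:
m1 = [0, 1, 1, 0]
m2 = [0, 1, 0, 0]
m3 = [1, 1, 1, 0]
print(hard_vote([m1, m2, m3]))
```

<p>For the four samples above the majority votes are 0, 1, 1, and 0; a single dissenting model is outvoted each time.</p>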
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Hyperparameter tuning and Evaluation</title>
        <p>In this study, we utilized Google Colaboratory’s GPU to fine-tune the BERT, RoBERTa, and ELECTRA
models. Due to token constraints, we limited our consideration to 512 tokens. The hyperparameter
learning rate for all three transformer models was set to 2e-5. For the dataset associated with the
"easy" subtask, we employed an epoch value of 10, while for the other two subtasks, we used 7 epochs.
Moreover, for the models fine-tuned on the combined dataset of all three subtasks (RoBERTa and ELECTRA), the
epoch value was set to 3.</p>
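<p>The settings above can be collected into a single configuration sketch (the dictionary layout is illustrative, not the authors' actual code):</p>

```python
# Hyperparameters reported above; field names are illustrative.
TRAINING_CONFIG = {
    "max_tokens": 512,        # input limit of BERT, RoBERTa, and ELECTRA
    "learning_rate": 2e-5,    # shared by all three transformer models
    "epochs": {
        "easy": 10,
        "medium": 7,
        "hard": 7,
        "combined": 3,        # fine-tuning on all subtasks' data at once
    },
}
```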
        <p>The F1-score is a balanced measure of a model’s accuracy, combining precision and recall. Precision
measures the accuracy of positive predictions, while recall measures the model’s ability to find all
positive instances. The macro F1-score averages F1-scores across all classes, providing a balanced
evaluation, especially in datasets with class imbalances. Following our experimentation and analysis of
results from evaluation sets, we identify the most efective model for each subtask. Subsequently, we
execute the selected model on an unseen test set through the TIRA platform [12].</p>
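<p>The macro F1-score described above can be sketched in plain Python; it is equivalent in spirit to a library routine such as scikit-learn's <code>f1_score(average="macro")</code>:</p>

```python
def macro_f1(y_true, y_pred, labels=(0, 1)):
    """Macro-averaged F1: compute per-class F1 from precision and
    recall, then take the unweighted mean so both classes count
    equally regardless of class imbalance."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

<p>A perfect prediction yields 1.0, and a single error on a small set pulls both classes' F1 down, which is why the macro average is a balanced measure under class imbalance.</p>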
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Results</title>
        <p>In this section, we present the results of our experiments on the SCD task. Table 1 summarizes the
performance of the tested models. We conducted 12 different experiments for each subtask, both with
standalone models and ensemble learning. Our findings show that for the "easy" subtask, the pre-trained
RoBERTa model achieved the highest macro F1-score on the provided validation dataset. However,
for the remaining subtasks, ensemble learning models outperformed standalone ones. Specifically, the
ensemble of RoBERTa fine-tuned on the combined dataset of all subtasks (Combined1), ELECTRA fine-tuned
on the same combined dataset (Combined2), and the ELECTRA, RoBERTa, and BERT models trained on the
medium subtask dataset yielded the highest macro F1-score. Moreover, for the "Hard" subtask, the fine-tuned RoBERTa and ELECTRA models, along
with Combined1, delivered the best performance across all experiments. Therefore, for each subtask’s
test dataset, we selected the approach with the best performance. The results of our work on the unseen
dataset are presented in Table 2.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper provides an overview of our team’s performance in the PAN Shared Task for the SCD task.
We conducted a series of diverse experiments to pinpoint the most effective strategy for identifying
shifts in writing style within consecutive paragraphs. Initially, we trained five distinct language models,
meticulously fine-tuning each to grasp the intricacies of our task. Following this, we delved into an
extensive exploration of various combinations of these models, tailoring our approaches to suit the
unique demands of each subtask. For the "easy" subtask, we opted to leverage the capabilities of the
RoBERTa model exclusively, given its robust performance in preliminary assessments. However, for the
more complex subtasks, we adopted an ensemble learning approach, harnessing the collective power of
multiple models to tackle the nuanced challenges presented.
</p>
      <p>[11] M. K. Sheykhlan, J. Shafi, S. Kosari, Pars-HAO: Hate speech and offensive language detection on
Persian social media using ensemble learning, Authorea Preprints (2023).
[12] M. Fröbe, M. Wiegmann, N. Kolyada, B. Grahm, T. Elstner, F. Loebe, M. Hagen, B. Stein, M. Potthast,
Continuous Integration for Reproducible Shared Tasks with TIRA.io, in: J. Kamps, L. Goeuriot,
F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, U. Kruschwitz, A. Caputo (Eds.), Advances in
Information Retrieval. 45th European Conference on IR Research (ECIR 2023), Lecture Notes in
Computer Science, Springer, Berlin Heidelberg New York, 2023, pp. 236–241.
URL: https://link.springer.com/chapter/10.1007/978-3-031-28241-6_20. doi:10.1007/978-3-031-28241-6_20.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freitag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korenčić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Panchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smirnova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Stamatatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taulé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ustalov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          , E. Zangerle,
          <article-title>Overview of PAN 2024: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification</article-title>
          , in:
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fourteenth International Conference of the CLEF Association (CLEF 2024)</source>
          , Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of the Multi-Author Writing Style Analysis Task at PAN 2024</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galuščáková</surname>
          </string-name>
          , A. G. S. de Herrera (Eds.),
          <source>Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</source>
          , CEUR-WS.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jacobo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dehesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rojas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gómez-Adorno</surname>
          </string-name>
          ,
          <article-title>Authorship verification machine learning methods for Style Change Detection in texts</article-title>
          , in: M. Aliannejadi, G. Faggioli, N. Ferro, M. Vlachos (Eds.),
          <source>Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>2652</fpage>
          -
          <lpage>2658</lpage>
          . URL: https://ceur-ws.org/Vol-3497/paper-217.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L. Z. J.</given-names>
            <surname>Zia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liua</surname>
          </string-name>
          ,
          <article-title>Style Change Detection Based On Bi-LSTM And Bert</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanbury</surname>
          </string-name>
          , M. Potthast (Eds.),
          <source>CLEF 2022 Labs and Workshops, Notebook Papers</source>
          , CEUR-WS.org,
          <year>2022</year>
          . URL: http://ceur-ws.org/Vol-3180/paper-234.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hashemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <article-title>Enhancing Writing Style Change Detection using Transformer-based Models and Data Augmentation</article-title>
          , in: M. Aliannejadi, G. Faggioli, N. Ferro, M. Vlachos (Eds.),
          <source>Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>2613</fpage>
          -
          <lpage>2621</lpage>
          . URL: https://ceur-ws.org/Vol-3497/paper-212.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>A Writing Style Embedding Based on Contrastive Learning for Multi-Author Writing Style Analysis</article-title>
          , in: M. Aliannejadi, G. Faggioli, N. Ferro, M. Vlachos (Eds.),
          <source>Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>2562</fpage>
          -
          <lpage>2567</lpage>
          . URL: https://ceur-ws.org/Vol-3497/paper-206.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Kucukkaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sahin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Toraman</surname>
          </string-name>
          , ARC-NLP at PAN 23:
          <article-title>Transition-Focused Natural Language Inference for Writing Style Detection</article-title>
          , in: M. Aliannejadi, G. Faggioli, N. Ferro, M. Vlachos (Eds.),
          <source>Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>2659</fpage>
          -
          <lpage>2668</lpage>
          . URL: https://ceur-ws.org/Vol-3497/paper-218.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahgouy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Giglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rahgooy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheykhlan</surname>
          </string-name>
          , E. Mohammadzadeh,
          <article-title>Cross-domain Authorship Attribution: Author Identification using a Multi-Aspect Ensemble Approach</article-title>
          , in: L. Cappellato, N. Ferro, D. Losada, H. Müller (Eds.),
          <source>CLEF 2019 Labs and Workshops, Notebook Papers</source>
          , CEUR-WS.org,
          <year>2019</year>
          . URL: http://ceur-ws.org/Vol-2380/.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H. B.</given-names>
            <surname>Giglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahgouy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rahgooy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Sheykhlan</surname>
          </string-name>
          , E. Mohammadzadeh,
          <article-title>Author profiling: Bot and gender prediction using a multi-aspect ensemble approach</article-title>
          , in: L. Cappellato, N. Ferro, D. E. Losada, H. Müller (Eds.),
          <source>Working Notes of CLEF 2019 - Conference and Labs of the Evaluation Forum, Lugano, Switzerland, September 9-12, 2019</source>
          , volume
          <volume>2380</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2019</year>
          . URL: https://ceur-ws.org/Vol-2380/paper_231.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lapata</surname>
          </string-name>
          ,
          <article-title>Text summarization with pretrained encoders</article-title>
          , arXiv preprint arXiv:1908.08345 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>