<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Sentence-Level Style Change Detection with RoBERTa for Multi-Author Writing Style Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Harjas Rohra</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nirbhay Shah</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sheetal Sonawane</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Pune Institute of Computer Technology</institution>
          ,
          <addr-line>Pune</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>Style change detection aims to identify points within a document where authorship shifts occur, which is crucial for tasks like plagiarism detection, authorship verification, and writing support. This submission addresses the intrinsic style change detection task from PAN, which involves identifying sentence-level style changes in multi-author documents under varying topical constraints. We employ a RoBERTa-based model that captures subtle stylistic differences between consecutive sentences. Our approach achieves F1 scores of 0.823, 0.766, and 0.667 on the easy, medium, and hard datasets, respectively, demonstrating robust performance across increasing levels of difficulty.</p>
      </abstract>
      <kwd-group>
        <kwd>PAN 2025</kwd>
        <kwd>Multi-Author Writing Style Analysis</kwd>
        <kwd>Style Change Detection</kwd>
        <kwd>Authorship Attribution</kwd>
        <kwd>RoBERTa</kwd>
        <kwd>Pre-trained Language Model</kwd>
        <kwd>Transformers</kwd>
        <kwd>Fine-Tuning</kwd>
        <kwd>Stylometric Analysis</kwd>
        <kwd>CLEF 2025</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The objective of the style change detection task is to locate the places in a multi-author document
where authorship shifts [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This raises a key question in authorship analysis: is it possible to find
stylistic evidence that many authors are present in a text that was authored collaboratively? Solving
this problem is particularly relevant in scenarios where reference documents are unavailable, as it
enables plagiarism detection purely through internal stylistic cues. Beyond that, style change detection
has practical applications in verifying claimed authorship, identifying instances of ghostwriting or gift
authorship, and supporting tools for collaborative writing.
      </p>
      <p>
        Over the years, the PAN shared task on multi-author writing style analysis has evolved significantly.
Earlier editions focused on identifying whether a document was authored by one or more individuals
(2018), estimating the number of contributing authors (2019) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and detecting style changes at the
paragraph level (2020–2022) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        , 
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In 2022, the task was extended to pinpoint style changes at the
sentence level [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Until then, many of the datasets exhibited high topical variability, which allowed
participants to exploit shifts in content as proxies for style variation. Recognizing this, the 2023 and 2024
editions placed greater emphasis on controlling for topic, compelling systems to rely on finer-grained
stylistic features rather than content-driven cues [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Previous work on multi-author writing style analysis spans a wide range of methodologies. Multiple
efforts adopt a binary classification framework: the document is divided into text segments that are
compared to determine whether they are written by the same author or co-written by two different
authors [
        <xref ref-type="bibr" rid="ref9 ref10 ref11 ref13 ref14">9, 10, 11, 13, 14</xref>
        ].
      </p>
      <p>
        Zlatkova et al. explore ensemble learning by stacking LightGBM classifiers
trained over TF-IDF encodings. They also explore the use of multi-layer perceptrons for the same task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        Weerasinghe et al. explore the task on small and large datasets, using logistic regression for the
former and neural networks for the latter. The authors evaluate their models with metrics such as ROC
AUC and F1-score, submitting them via TIRA [
        <xref ref-type="bibr" rid="ref10 ref18">10, 18</xref>
        ].
      </p>
      <p>
        Deibel et al. make use of MLP and Bidirectional LSTM architectures. They use the models in an
ensemble setup to capture both sequential and non-linear correlations [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        A notable approach is the one used by Shams Alshamasi et al. which diverges from the usual binary
classification approach to employ a clustering-based approach. The aim is to create clusters of segments
sharing the same author. They use algorithms such as K-Means and DBSCAN. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]
      </p>
      <p>
        A prominent trend in this task is to use transformer architectures such as BERT and RoBERTa for
binary classification [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15</xref>
        ]. We continue this trend, utilizing a fine-tuned RoBERTa model for binary
classification. This paper describes our approach in detail.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset</title>
      <p>To support the development and testing of style change detection models, three datasets corresponding
to the three difficulty levels (easy, medium, and hard) are made available. All datasets contain annotated
ground truth reflecting sentence-level style changes, with the exception of the test segment.</p>
      <p>Each dataset is divided into three segments: the training set (70% of the data) is accompanied by
ground truth and is used to train and develop models; the validation set (15%) also includes ground
truth and serves to tune and evaluate model performance; the test set (15%) contains no ground
truth and is reserved for the final evaluation of submitted systems. This structured split ensures fair
benchmarking across all difficulty levels while supporting robust model development and optimization.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>This section describes the implementation of our approach. We approached the
problem as a binary classification task, predicting whether a particular sentence pair shares the same
authorship or not. Our goal was to leverage the capabilities of pre-trained language models to achieve
stable precision and recall on both seen and unseen data. To achieve this, it was necessary
to prepare training data and conduct fine-tuning in a manner that avoids overfitting. The approach
can be explained in two phases: 1) data preparation and 2) model fine-tuning.</p>
      <sec id="sec-4-1">
        <title>4.1. Data Preparation</title>
        <p>To prepare the dataset for fine-tuning, we created sentence pairs from the input documents. Using the
provided solution vector, we identified sentence pairs written by the same author (negative class) and
by different authors (positive class). However, the resulting dataset was overwhelmingly imbalanced,
leading to a bias toward same-author pairs. To address this, we divided our documents into training
and validation sets. From each training document, we sourced an equal number of positive and negative
sentence pairs to ensure the training data was balanced. For the validation set, we included all possible
sentence pairs to ensure the model is evaluated on data that closely resembles realistic inputs.</p>
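        <p>The pairing-and-balancing step can be sketched as follows. The solution-vector format (one 0/1 flag per consecutive sentence pair) follows the PAN convention; the helper names themselves are illustrative, not part of our released pipeline.</p>

```python
import random

def build_pairs(sentences, changes):
    """Pair each sentence with its successor and label the pair with the
    ground-truth flag (1 = authorship changes, 0 = same author)."""
    assert len(changes) == len(sentences) - 1
    return [((sentences[i], sentences[i + 1]), changes[i])
            for i in range(len(changes))]

def balance(pairs, seed=0):
    """Downsample the majority class so positives and negatives are equal."""
    pos = [p for p in pairs if p[1] == 1]
    neg = [p for p in pairs if p[1] == 0]
    rng = random.Random(seed)
    k = min(len(pos), len(neg))
    return rng.sample(pos, k) + rng.sample(neg, k)
```

        <p>Balancing is applied to training documents only; validation documents keep every consecutive pair, as described above.</p>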
        <p>To prepare sentence pairs for our model we performed tokenization using pretrained tokenizers for
the model architecture (RoBERTa in our case). For our truncation strategy we set the maximum
token length to 256 tokens, on the assumption that individual sentences would rarely exceed this
restriction. This was a practical adaptation that allowed us to balance capturing
sufficient contextual information against maintaining computational efficiency.</p>
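        <p>With the Hugging Face tokenizer this encoding step amounts to a single call; <code>encode_pair</code> below is an illustrative helper, not a function from our codebase.</p>

```python
from transformers import AutoTokenizer

# Pretrained tokenizer matching the model architecture (RoBERTa here).
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def encode_pair(sent_a, sent_b, max_len=256):
    # Both sentences are packed into one input sequence, joined by
    # RoBERTa's special tokens; anything beyond max_len is truncated.
    return tokenizer(sent_a, sent_b,
                     truncation=True, max_length=max_len,
                     padding="max_length", return_tensors="pt")

enc = encode_pair("The sky darkened over the bay.", "Rain seemed inevitable.")
```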
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Model Fine-Tuning</title>
        <p>
          The pretrained RoBERTa-base [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] model was chosen to be the base model for our approach. RoBERTa
has the ability to develop rich linguistic representations for high performance on downstream tasks.
        </p>
        <p>
          To avoid overfitting and improve generalization, we employed weight decay regularization [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Weight decay penalizes large weights in the model, effectively adding an L2 regularization
term. This prevented the model from simply memorizing author-specific quirks and encouraged it to
focus on comparing stylometric features of sentence pairs.
        </p>
        <p>ℒ<sub>total</sub> = ℒ<sub>CE</sub> + λ ∑<sub>i</sub> ‖w<sub>i</sub>‖<sup>2</sup>,

where ℒ<sub>CE</sub> is the standard cross-entropy loss, λ is the weight decay coefficient, and w<sub>i</sub> are the
trainable weights of the model.</p>
        <p>For accelerated training and efficient hardware usage we utilized fp16, which allowed
us to train weights with 16-bit floating-point arithmetic instead of standard 32-bit precision. We relied on
dynamic loss scaling to maintain training stability despite the lower precision.</p>
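        <p>Both choices map directly onto the Hugging Face <code>TrainingArguments</code> consumed by the <code>Trainer</code> API. The hyperparameter values below are assumptions for illustration, not the ones from our submission; the <code>Trainer</code> handles dynamic loss scaling automatically when <code>fp16=True</code>.</p>

```python
from transformers import TrainingArguments

# Illustrative configuration sketch; exact values are assumptions.
args = TrainingArguments(
    output_dir="./checkpoints",
    weight_decay=0.01,     # L2-style penalty from the loss above
    fp16=True,             # 16-bit mixed precision (requires a GPU)
    num_train_epochs=3,
    save_strategy="epoch", # checkpoint after each epoch
)
```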
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Classification Head</title>
        <p>The model utilizes a standard classification head to generate the final predictions. It consists of a
dropout layer attached to the final hidden layer to counter overfitting, a linear layer, and an activation
layer, where we utilized softmax for inference.</p>
        <p>This architecture enables the model to learn the subtle stylistic differences between sentence pairs
and make accurate binary predictions on authorship change.</p>
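        <p>A minimal PyTorch sketch of such a head follows; the hidden size of 768 matches roberta-base, while the dropout rate is an assumption for illustration.</p>

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Dropout -> linear -> softmax, as described above."""
    def __init__(self, hidden_size=768, num_labels=2, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden):
        logits = self.linear(self.dropout(hidden))
        # Softmax is applied at inference; during training the raw
        # logits feed the cross-entropy loss instead.
        return torch.softmax(logits, dim=-1)

head = ClassificationHead().eval()
with torch.no_grad():
    probs = head(torch.zeros(2, 768))  # probabilities for 2 sentence pairs
```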
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimentation</title>
      <sec id="sec-5-1">
        <title>5.1. Experimental Setup</title>
        <p>
          This section outlines the experimental setup used for training and evaluation, followed by the results
obtained on the official PAN 2025 datasets [
          <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
          ] for the easy, medium, and hard difficulty levels.
We fine-tuned the roberta-base model using the Hugging Face Trainer API. The model was trained
using the settings summarized in Table 3. Training was conducted on a Tesla P100 GPU with mixed
precision enabled (fp16=True) to optimize GPU memory and computation time.
        </p>
        <p>Balanced sentence pairs (equal positive and negative) were used for training, while all sentence pairs
were used in validation to mimic real-world data distributions. The model was checkpointed after each
epoch, and the best-performing checkpoint was used for final inference.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Results</title>
        <p>We evaluated the model’s ability to detect style changes across all three PAN-provided difficulty levels:
easy, medium, and hard. The results are presented in Table 4. Evaluation metrics include precision,
recall, and F1-score.</p>
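        <p>For the positive (style change) class these metrics reduce to the usual binary definitions, sketched here for clarity:</p>

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision/recall/F1, with 1 = style change as positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```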
        <p>The model performs strongly on the easy and medium datasets, where topic variation provides additional
clues for authorship change. In the hard setting, where topical consistency is enforced, the model still
maintains reasonable performance, demonstrating its ability to identify fine-grained stylistic differences
without relying on content-based signals.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we focused on the PAN 2025 task of detecting style change at the sentence level in
multi-author writings. Our solution frames the task as a binary classification problem, using the
RoBERTa-base model fine-tuned on balanced sentence pairs. We employed data balancing, regularization, and
precision-aware training strategies to enhance the model’s robustness and generalizability.</p>
      <p>Our model achieved F1 scores of 0.823, 0.766, and 0.667 on the easy, medium, and hard datasets respectively,
improving upon naive baselines and providing competitive performance even under stringent topical
constraints. These outcomes point to the model’s capacity to detect subtle stylistic variations without
relying substantially upon topic changes. Future directions could leverage more advanced architectures
or contrastive learning strategies for improved performance, particularly in topic-consistent
settings.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>We thank the PAN organizers for providing the datasets and evaluation framework. We also acknowledge
the computational resources provided by Pune Institute of Computer Technology.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          , et al.,
          <source>Overview of the Multi-Author Writing Style Analysis Task at PAN</source>
          <year>2024</year>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galuščáková</surname>
          </string-name>
          , A. G. S. de Herrera (Eds.), Working Notes of CLEF 2024 -
          <article-title>Conference and Labs of the Evaluation Forum, CEUR-WS</article-title>
          .org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          , et al.,
          <source>Overview of PAN</source>
          <year>2024</year>
          <article-title>: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification</article-title>
          , in: L.
          <string-name>
            <surname>Goeuriot</surname>
          </string-name>
          et al. (Eds.),
          <source>Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF</source>
          <year>2024</year>
          ), Lecture Notes in Computer Science, Springer,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tschuggnall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Specht</surname>
          </string-name>
          , et al.,
          <source>Overview of the Style Change Detection Task at PAN</source>
          <year>2019</year>
          , in: L.
          <string-name>
            <surname>Cappellato</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Losada</surname>
          </string-name>
          , H. Müller (Eds.),
          <article-title>CLEF 2019 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Specht</surname>
          </string-name>
          , et al.,
          <source>Overview of the Style Change Detection Task at PAN</source>
          <year>2020</year>
          , in: L.
          <string-name>
            <surname>Cappellato</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Eickhoff</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Névéol (Eds.),
          <article-title>CLEF 2020 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          , et al.,
          <source>Overview of the Style Change Detection Task at PAN</source>
          <year>2021</year>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maistro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Piroi</surname>
          </string-name>
          (Eds.),
          <article-title>CLEF 2021 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          , et al.,
          <source>Overview of the Style Change Detection Task at PAN</source>
          <year>2022</year>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanbury</surname>
          </string-name>
          , M. Potthast (Eds.),
          <article-title>CLEF 2022 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>RoBERTa: A Robustly Optimized BERT Pretraining Approach</article-title>
          , arXiv preprint arXiv:1907.11692,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kosson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Messmer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Jaggi</surname>
          </string-name>
          , Rotational Equilibrium:
          <article-title>How Weight Decay Balances Learning Across Neural Networks</article-title>
          ,
          <source>arXiv preprint arXiv:2305.17212</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zlatkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kopev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mitov</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Atana</surname>
          </string-name>
          ,
          <article-title>An ensemble rich multi-aspect approach for robust style change detection</article-title>
          ,
          <source>PAN at CLEF</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Weerasinghe</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Greenstadt</surname>
          </string-name>
          ,
          <article-title>Feature Vector Diference based Neural Network and Logistic Regression Models for Authorship Verification</article-title>
          , PAN at CLEF,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Deibel</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Löfflad</surname>
          </string-name>
          ,
          <article-title>Style Change Detection on Real-World Data using an LSTM-powered Attribution Algorithm</article-title>
          , PAN at CLEF,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Alshamasi</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Menai</surname>
          </string-name>
          ,
          <article-title>Ensemble-Based Clustering for Writing Style Change Detection in Multi-Authored Textual Documents</article-title>
          , PAN at CLEF,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          , Y. Han, and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <article-title>Team Chen at PAN: Integrating R-Drop and Pre-trained Language Model for Multi-author Writing Style Analysis</article-title>
          ,
          <source>PAN at CLEF</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alvi</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Samad</surname>
          </string-name>
          , Team Gladiators at PAN:
          <article-title>Improving Author Identification: A Comparative Analysis of Pre-Trained Transformers for Multi-Author Classification</article-title>
          , PAN at CLEF,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wegmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schraagen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <article-title>Same Author or Just Same Topic? Towards Content-Independent Style Representations</article-title>
          ,
          <source>Proceedings of the 7th Workshop on Representation Learning for NLP, ACL</source>
          , Dublin, Ireland,
          <year>2022</year>
          , pp.
          <fpage>249</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Greiner-Petter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Karlgren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          , P. Nakov, A. Panchenko, M. Potthast, A. Shelmanov, E. Stamatatos, B. Stein, Y. Wang, M. Wiegmann, and E. Zangerle,
          <article-title>Overview of PAN 2025: Voight-Kampff Generative AI Detection, Multilingual Text Detoxification, Multi-Author Writing Style Analysis, and Generative Plagiarism Detection</article-title>
          , CLEF 2025, Lecture Notes in Computer Science, Springer, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] E. Zangerle, M. Mayerl, M. Potthast, and B. Stein,
          <article-title>Overview of the Multi-Author Writing Style Analysis Task at PAN 2025</article-title>
          , Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum, CEUR Workshop Proceedings, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] M. Fröbe, M. Wiegmann, N. Kolyada, B. Grahm, T. Elstner, F. Loebe, M. Hagen, B. Stein, and M. Potthast,
          <article-title>Continuous Integration for Reproducible Shared Tasks with TIRA.io</article-title>
          , in: J. Kamps, L. Goeuriot, F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, U. Kruschwitz, A. Caputo (Eds.), Advances in Information Retrieval - 45th European Conference on Information Retrieval (ECIR 2023), Dublin, Ireland, April 2-6, 2023, Proceedings, Part III, Lecture Notes in Computer Science, vol. 13982, Springer,
          <year>2023</year>
          , pp. 236-241.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>