<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Team NYCU-NLP at PAN 2024: Integrating Transformers with Similarity Adjustments for Multi-Author Writing Style Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tzu-Mi Lin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yu-Hsin Wu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lung-Hao Lee</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Artificial Intelligence Innovation, National Yang Ming Chiao Tung University</institution>
          ,
          <country country="TW">Taiwan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>This paper describes our NYCU-NLP system design for the multi-author writing style analysis task of the PAN Lab at CLEF 2024. We propose a unified architecture integrating transformer-based models with similarity adjustments to identify author switches within a given multi-author document. We first fine-tune the RoBERTa, DeBERTa and ERNIE transformers to detect differences in writing style between two given paragraphs. The output prediction is then determined by an ensemble mechanism. We also use similarity adjustments to further enhance multi-author analysis performance. The experimental data contains three difficulty levels to reflect simultaneous changes of authorship and topic. Our submission achieved macro F1-scores of 0.964, 0.857 and 0.863 for the easy, medium and hard levels, respectively, ranking first and second, respectively, for the hard and medium levels out of 16 and 17 participating teams.</p>
      </abstract>
      <kwd-group>
        <kwd>Pre-trained Language Models</kwd>
        <kwd>Embedding Similarity</kwd>
        <kwd>Authorship Analysis</kwd>
        <kwd>Plagiarism Detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The PAN Lab hosts a series of shared tasks for digital text forensics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Following the achievements
of the past Style Change Detection (SCD) tasks at the PAN Lab [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], this multi-author
writing style analysis task seeks to identify all positions of writing style change at the paragraph level within
a multi-authored document. Given a single document combined from separate comments by different
users on Reddit, the developed system should determine the positions at which the author changes,
at three levels of difficulty: 1) Easy: the document contains multiple paragraphs on multiple topics; 2)
Medium: the paragraphs in the document contain fewer topics; and 3) Hard: the document consists of
multiple paragraphs on a single topic. All documents may contain an arbitrary number of style changes,
which only occur between paragraphs.
      </p>
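      <p>The task definition above implies a simple input/output contract: a document with N paragraphs has N-1 paragraph boundaries, and the system emits one binary label per boundary. A minimal sketch of this pairing (our reading of the task description; the helper name and I/O shape are illustrative, not the official submission format):</p>

```python
# A document with N paragraphs has N-1 boundaries between consecutive
# paragraphs; each boundary receives a binary author-change label.

def boundary_pairs(paragraphs):
    """Return the consecutive paragraph pairs whose boundaries get labeled."""
    return [(paragraphs[i], paragraphs[i + 1]) for i in range(len(paragraphs) - 1)]

doc = ["Paragraph by author A.", "Another paragraph.", "Paragraph by author B."]
pairs = boundary_pairs(doc)   # 2 boundaries for 3 paragraphs
labels = [0, 1]               # e.g., an author switch only at the second boundary
assert len(pairs) == len(labels)
```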
      <p>This paper describes our developed NYCU-NLP (National Yang Ming Chiao Tung University, Natural
Language Processing Lab) system. Our solution explores the use of three pre-trained transformers:
RoBERTa, DeBERTa and ERNIE, which are fine-tuned on the downstream classification task of detecting
changes in writing style. The system output is then determined by a majority voting-based
ensemble mechanism. We also take advantage of the property that sentences belonging to the same topic
show greater similarity in the semantic vector space, using embedding similarity adjustments
to enhance prediction performance at the easy and medium levels, which include paragraphs on different
topics. Our final submission received macro F1-scores of 0.964, 0.857 and 0.863 respectively at the easy,
medium and hard levels. These results ranked our method first and second, respectively, for the hard
and medium levels, out of 16 and 17 participating teams.</p>
      <p>The rest of this paper is organized as follows. Section 2 reviews related studies. Section 3 describes
our proposed NYCU-NLP system. Section 4 presents evaluation results and performance comparisons.
Conclusions are finally drawn in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The BERT transformer was used as the paragraph representation to train a random forest classifier
for the SCD task [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Siamese neural networks were used to measure the paragraph similarities and
identify authorship changes [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Individual transformers were trained independently and then assembled
together for the final authorship change prediction [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The SCD task was regarded as a natural language
inference task and solved using the DeBERTaV3 transformer [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. A prompt-based approach was used
to train a transformer model for the SCD task [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. RoBERTa, BERT, and ELECTRA transformers were
combined with a binary classification layer to solve the SCD task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The SCD task was also regarded
as an authorship verification problem based on the term-document matrix [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The mT0-xl transformer was used
as the teacher model to train a smaller student model via a knowledge distillation
mechanism [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. A contrastive learning method was presented to train the DeBERTa transformer to
ensure paragraphs written by the same author are close in the semantic space [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>In summary, transformer-based models usually obtained promising results in previous SCD
tasks, which motivates us to explore how to use transformers more effectively to solve the
multi-author writing style analysis task at PAN 2024.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The NYCU-NLP System</title>
      <p>Our NYCU-NLP system explores the following three pre-trained transformer models:</p>
      <p>
        • a Robust optimized BERT pre-training approach (RoBERTa) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
      </p>
      <p>
        RoBERTa enhances BERT [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] by removing the next sentence prediction objective that simplifies
the training process, and using a dynamic masking strategy that improves model robustness.
Furthermore, RoBERTa benefits from training with significantly larger batch sizes, enhancing the
stability and effectiveness of the training process. These modifications result in a more robust
pre-trained language model that achieves superior performance on various natural language
processing tasks.
• Decoding-enhanced BERT with disentangled attention (DeBERTa) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
      </p>
      <p>
        DeBERTa improves BERT [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] by using a disentangled attention mechanism and an enhanced
mask decoder. Each word is represented using content and position vectors and then disentangled
matrices are used to compute attention weights. In the enhanced mask decoder architecture,
absolute positions are used to predict the masked tokens for model pre-training.
• Enhanced Representation through Knowledge Integration (ERNIE) [16]
      </p>
      <p>
        Inspired by the masking strategy of BERT [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], ERNIE is designed to learn language
representations by entity-level masking and phrase-level masking. ERNIE 2.0 is an advanced version
of ERNIE [17], which uses continuous multitask learning and a variety of pre-training tasks to
enhance language comprehension. A continuous learning methodology is used to progressively
integrate multiple tasks, which allows the model to proceed without forgetting what it has learned
previously. In addition, ERNIE 2.0 proposes several new pre-training tasks, including word-aware,
structure-aware, and semantic-aware tasks to respectively capture lexical information, syntactic
information, and semantic information.
      </p>
      <p>We fine-tune the language model of each individual pre-trained transformer and connect a Multi-Layer
Perceptron (MLP) as the classifier. Each pair of consecutive paragraphs is used for fine-tuning, along with
its labeled class (where ‘1’ means an author change and ‘0’ otherwise). We then use a voting-based ensemble
mechanism [18], in which each transformer model makes an independent classification (i.e., a vote of 0 or 1)
for each testing instance. The final system output is determined by a majority of votes.</p>
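      <p>The majority-voting step can be sketched as follows (a minimal illustration, assuming each of the three fine-tuned models contributes one binary vote per paragraph pair; the per-model votes here are made up for the example):</p>

```python
# Minimal sketch of the voting-based ensemble described above: three binary
# classifiers (e.g., RoBERTa, DeBERTa, ERNIE) each vote 0 (no change) or
# 1 (author change), and the majority decides the system output.

def majority_vote(votes):
    """Return 1 if more than half of the binary votes are 1, else 0."""
    return 1 if sum(votes) > len(votes) / 2 else 0

votes = [1, 0, 1]                 # hypothetical votes for one paragraph pair
assert majority_vote(votes) == 1  # two of three models predict a change
```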
      <p>We suggest that two paragraphs on a similar topic should obtain a higher embedding similarity.
Therefore, a multilingual LaBSE [19] embedding is used to represent each paragraph as a semantic
vector, and we measure the cosine similarity between the two paragraph embedding vectors. If the
similarity exceeds a predefined threshold, the topics of the two paragraphs are considered highly
similar, and we modify the ensemble prediction from 1 (change) to 0, based on the assumption that
paragraphs with similar topics usually reflect no change of author. Since the paragraphs of a document at the easy and medium levels may contain
a variety of topics, we only adopt this similarity adjustment mechanism at these two levels.</p>
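      <p>The similarity adjustment can be sketched as follows (a minimal illustration: the real system uses LaBSE paragraph embeddings, whereas the short vectors below are stand-ins; the function names are ours):</p>

```python
import math

# Sketch of the similarity adjustment: flip an ensemble "change" prediction (1)
# to "no change" (0) when two paragraph embeddings are highly similar, i.e.,
# the paragraphs likely share a topic. Stand-in vectors replace LaBSE output.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def adjust_prediction(pred, vec_a, vec_b, threshold=0.8):
    """Only 1 -> 0 adjustments are made; 0 predictions are left untouched."""
    if pred == 1 and cosine_similarity(vec_a, vec_b) > threshold:
        return 0
    return pred

vec_a, vec_b = [0.9, 0.1, 0.2], [0.8, 0.2, 0.25]  # very similar stand-in embeddings
assert adjust_prediction(1, vec_a, vec_b) == 0     # similar topics: no author change
assert adjust_prediction(0, vec_a, vec_b) == 0     # 0 stays 0
```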
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <sec id="sec-4-0">
        <title>4.1. Data</title>
        <p>
          The experimental datasets were mainly provided by the task organizers [20]. Each level has 4,200 documents
for model training and 900 documents for system validation. We also use an additional 4,200 documents
each from the SCD-2023 task [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] to fine-tune the transformers for the medium and hard levels.
        </p>
      </sec>
      <sec id="sec-4-1">
        <title>4.2. Settings</title>
        <p>The pre-trained RoBERTa1, DeBERTa2, and ERNIE 2.03 models were downloaded from HuggingFace
[21]. All models were fine-tuned on a server using an Nvidia Titan RTX GPU (24GB memory). The
hyper-parameter values were optimized as follows: maximum sequence length of 256; learning rate
0.00005; dropout 0.25; 10 epochs; and batch size 60. LaBSE4 was downloaded from TensorFlow Hub
and the similarity adjustment threshold was set to 0.8. The system was deployed on the TIRA platform
[22] to evaluate performance on the various difficulty levels using the macro-averaged F1-score.</p>
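        <p>The fine-tuning settings above can be collected into a single configuration object (a minimal sketch; the dictionary key names are our own, since the paper does not prescribe a configuration format):</p>

```python
# Hyper-parameters reported in Section 4.2, gathered into one configuration
# dict. Key names are illustrative, not taken from the authors' code.

FINETUNE_CONFIG = {
    "max_seq_length": 256,        # maximum tokenized length of a paragraph pair
    "learning_rate": 5e-5,        # i.e., 0.00005
    "dropout": 0.25,
    "epochs": 10,
    "batch_size": 60,
    "similarity_threshold": 0.8,  # LaBSE cosine-similarity adjustment threshold
}

assert FINETUNE_CONFIG["learning_rate"] == 0.00005
```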
        <p>Model sources: 1 https://huggingface.co/roberta-base; 2 https://huggingface.co/microsoft/deberta-base; 3 https://huggingface.co/nghuyong/ernie-2.0-base-en; 4 https://tfhub.dev/google/LaBSE.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.3. Results</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This study describes the design, implementation and evaluation of our NYCU-NLP system for the
multi-author writing style analysis task at PAN 2024. We selected pre-trained transformer models
as starting points and fine-tuned them on the corresponding downstream classification task. Our unified
architecture used a voting-based ensemble mechanism to determine the final system output. We also
adopted embedding similarity to adjust the system output at the easy and medium levels. Our submitted
system ranked first of 17 participating systems at the hard level and second of 16 systems at the medium
level.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This study is partially supported by the National Science and Technology Council, Taiwan, under the
grant NSTC 111-2628-E-A49-029-MY3. This work was also financially supported by the Co-creation
Platform of the Industry Academia Innovation School, NYCU.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[16] Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, H. Wang, ERNIE 2.0: A continual pre-training
framework for language understanding, arXiv preprint arXiv:1907.12412 (2019).
[17] Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, Q. Liu, ERNIE: Enhanced language representation with
informative entities, Proceedings of ACL 2019 (2019) 1441–1451. doi:10.18653/v1/P19-1139.
[18] L.-H. Lee, Y.-S. Wang, C.-Y. Chen, L.-C. Yu, Ensemble multi-channel neural networks for scientific
language editing evaluation, IEEE Access (2021) 158540–158547. doi:10.1109/ACCESS.2021.3130042.
[19] F. Feng, Y. Yang, D. Cer, N. Arivazhagan, W. Wang, Language-agnostic BERT sentence
embedding, Proceedings of ACL 2022 (2022) 878–891. doi:10.18653/v1/2022.acl-long.62.
[20] E. Zangerle, M. Mayerl, M. Potthast, B. Stein, Overview of the Multi-Author Writing Style Analysis
Task at PAN 2024, in: G. Faggioli, N. Ferro, P. Galuščáková, A. G. S. de Herrera (Eds.), Working
Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CEUR-WS.org, 2024.
[21] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf,
M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger,
M. Drame, Q. Lhoest, A. M. Rush, HuggingFace's Transformers: State-of-the-art natural language
processing (2019). doi:10.48550/arXiv.1910.03771.
[22] M. Fröbe, M. Wiegmann, N. Kolyada, B. Grahm, T. Elstner, F. Loebe, M. Hagen, B. Stein, M. Potthast,
Continuous Integration for Reproducible Shared Tasks with TIRA.io, in: J. Kamps, L. Goeuriot,
F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, U. Kruschwitz, A. Caputo (Eds.), Advances in
Information Retrieval. 45th European Conference on IR Research (ECIR 2023), Lecture Notes in
Computer Science, Springer, Berlin Heidelberg New York, 2023, pp. 236–241. URL: https://link.springer.com/chapter/10.1007/978-3-031-28241-6_20. doi:10.1007/978-3-031-28241-6_20.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freitag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korenčić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Panchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smirnova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Stamatatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taulé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ustalov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          , E. Zangerle,
          <article-title>Overview of PAN 2024: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction</article-title>
          .
          <source>Proceedings of the Fourteenth International Conference of the CLEF Association (CLEF</source>
          <year>2024</year>
          ), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <source>Overview of the Style Change Detection Task at PAN</source>
          <year>2022</year>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanbury</surname>
          </string-name>
          , M. Potthast (Eds.),
          <article-title>CLEF 2022 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2022</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3180</volume>
          /paper-186.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of the Multi-Author Writing Style Analysis Task at PAN 2023</article-title>
          , in: M.
          <string-name>
            <surname>Aliannejadi</surname>
            , G. Faggioli,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , M. Vlachos (Eds.),
          <article-title>CLEF 2023 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2023</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3497</volume>
          /paper-201. pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Iyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vosoughi</surname>
          </string-name>
          ,
          <article-title>Style Change Detection Using BERT-Notebook for PAN at CLEF 2020</article-title>
          , in: L.
          <string-name>
            <surname>Cappellato</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Eickhoff</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Névéol (Eds.),
          <article-title>CLEF 2020 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2020</year>
          . URL: http://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>2696</volume>
          /.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nath</surname>
          </string-name>
          ,
          <article-title>Style change detection using Siamese neural networks-Notebook for PAN at CLEF 2021</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maistro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Piroi</surname>
          </string-name>
          (Eds.),
          <article-title>CLEF 2021 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2021</year>
          . URL: http://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>2936</volume>
          /paper-183.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.-M.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-W.</given-names>
            <surname>Tzeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Ensemble Pre-trained Transformer Models for Writing Style Change Detection</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanbury</surname>
          </string-name>
          , M. Potthast (Eds.),
          <article-title>CLEF 2022 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2022</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3180</volume>
          / paper-210.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Kucukkaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sahin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Toraman</surname>
          </string-name>
          , ARC-NLP at PAN 23:
          <article-title>Transition-Focused Natural Language Inference for Writing Style Detection</article-title>
          , in: M.
          <string-name>
            <surname>Aliannejadi</surname>
            , G. Faggioli,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , M. Vlachos (Eds.),
          <article-title>CLEF 2023 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2023</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3497</volume>
          /paper-218.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <source>Style Change Detection based on Prompt</source>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanbury</surname>
          </string-name>
          , M. Potthast (Eds.),
          <article-title>CLEF 2022 Labs and Workshops, Notebook Papers, CEUR-WS</article-title>
          .org,
          <year>2022</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3180</volume>
          /paper-197.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hashemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <article-title>Enhancing Writing Style Change Detection using Transformer-based Models and Data Augmentation</article-title>
          , in: M.
          <string-name>
            <surname>Aliannejadi</surname>
            , G. Faggioli,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , M. Vlachos (Eds.), Working Notes of CLEF 2023 -
          <article-title>Conference and Labs of the Evaluation Forum, CEUR-WS</article-title>
          .org,
          <year>2023</year>
          , pp.
          <fpage>2613</fpage>
          -
          <lpage>2621</lpage>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3497</volume>
          /paper-212.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jacobo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dehesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rojas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gómez-Adorno</surname>
          </string-name>
          ,
          <article-title>Authorship verification machine learning methods for Style Change Detection in texts</article-title>
          , in: M.
          <string-name>
            <surname>Aliannejadi</surname>
            , G. Faggioli,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , M. Vlachos (Eds.), Working Notes of CLEF 2023 -
          <article-title>Conference and Labs of the Evaluation Forum, CEUR-WS</article-title>
          .org,
          <year>2023</year>
          , pp.
          <fpage>2652</fpage>
          -
          <lpage>2658</lpage>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3497</volume>
          /paper-217.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <article-title>Encoded Classifier Using Knowledge Distillation for Multi-Author Writing Style Analysis</article-title>
          , in: M.
          <string-name>
            <surname>Aliannejadi</surname>
            , G. Faggioli,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , M. Vlachos (Eds.), Working Notes of CLEF 2023 -
          <article-title>Conference and Labs of the Evaluation Forum, CEUR-WS</article-title>
          .org,
          <year>2023</year>
          , pp.
          <fpage>2629</fpage>
          -
          <lpage>2634</lpage>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3497</volume>
          /paper-214.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
[12]
          <string-name><given-names>H.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Han</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Li</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Han</surname></string-name>,
          <article-title>A Writing Style Embedding Based on Contrastive Learning for Multi-Author Writing Style Analysis</article-title>,
          in: M. Aliannejadi, G. Faggioli, N. Ferro, M. Vlachos (Eds.),
          <source>Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>,
          CEUR-WS.org, <year>2023</year>, pp. <fpage>2562</fpage>-<lpage>2567</lpage>.
          URL: https://ceur-ws.org/Vol-<volume>3497</volume>/paper-206.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
[13]
          <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Ott</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Goyal</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Du</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Joshi</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>O.</given-names> <surname>Levy</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Lewis</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Zettlemoyer</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Stoyanov</surname></string-name>,
          <article-title>RoBERTa: A robustly optimized BERT pretraining approach</article-title>
          (<year>2019</year>). doi:https://doi.org/10.48550/arXiv.1907.11692.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
[14]
          <string-name><given-names>J.</given-names> <surname>Devlin</surname></string-name>,
          <string-name><given-names>M.-W.</given-names> <surname>Chang</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Lee</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Toutanova</surname></string-name>,
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>,
          <source>Proceedings of NAACL-HLT 2019</source>
          (<year>2019</year>)
          <fpage>4171</fpage>-<lpage>4186</lpage>.
          doi:https://doi.org/10.48550/arXiv.1810.04805.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
[15]
          <string-name><given-names>P.</given-names> <surname>He</surname></string-name>,
          <string-name><given-names>X.</given-names> <surname>Liu</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Gao</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Chen</surname></string-name>,
          <article-title>DeBERTa: Decoding-enhanced BERT with disentangled attention</article-title>,
          <source>International Conference on Learning Representations</source>
          (<year>2021</year>). doi:https://doi.org/10.48550/arXiv.2006.03654.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>