<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Team cnlp-nits-pp at PAN: Leveraging BERT for Accurate Authorship Verification: A Novel Approach to Textual Attribution</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Annepaka Yadagiri</string-name>
          <email>annepaka22rs@cse.nits.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dimpal Kalita</string-name>
          <email>kalitadimpal112@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abhishek Ranjan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ashish Kumar Bostan</string-name>
          <email>kumarashishbostan@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Parthib Toppo</string-name>
          <email>toppoparthib@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Partha Pakray</string-name>
          <email>partha@cse.nits.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science &amp; Engineering, National Institute of Technology</institution>
          ,
          <addr-line>Silchar, Assam</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>The launch of AI text-generation tools has attracted considerable interest from the academic and business worlds. Effectively handling a broad spectrum of human inquiries, these tools offer clear, thorough responses that far outperform earlier open-source chatbots in security and usability. People are interested in learning how powerful AI has become compared with human specialists. However, concerns about the possible detrimental effects that large language models like ChatGPT can have on society, including fake news, plagiarism, and social security problems, are beginning to surface. In this work, the dataset is provided by CLEF PAN-24 and comprises human-written text alongside text generated by 13 different AI models: alpaca-7b, bigscience-bloomz-7b1, chavinlo-alpaca-13b, gemini-pro, gpt-3.5-turbo-0125, gpt-4-turbo-preview, meta-llama-llama-2-7b-chat-hf, meta-llama-llama-2-70b-chat-hf, mistralai-mistral-7b-instruct-v0.2, mistralai-mixtral-8x7b-instruct-v0.1, qwen-qwen1.5-72b-chat-8bit, text-bison-002, and vicgalle-gpt2-open-instruct-v1, which yields a highly imbalanced dataset for comparing human-written and AI-generated text. We examine the features of ChatGPT's replies, the distinctions and shortcomings of human experts, and the prospects for LLMs based on the PAN-24 dataset. We conducted extensive human assessments and linguistic examinations of AI-generated content compared to human content, yielding several intriguing findings. We then conduct in-depth research on the best ways to identify whether a given text was produced by AI or by humans. We construct three distinct detection systems, investigate critical variables affecting their performance, and test them in various contexts. Our solution for this task uses the BERT model with a preprocessing pipeline, achieving classification results with over 97.6% ROC-AUC across all results included in this challenge.</p>
      </abstract>
      <kwd-group>
        <kwd>Large Language Models</kwd>
        <kwd>AI-Generated Content Detection</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Generative AI</kwd>
        <kwd>BERT</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Task</title>
      <p>
        In cooperation with the Voight-Kampf Task at the ELOQUENT Lab, the Generative AI Authorship
Verification Task at PAN uses a builder-breaker approach. ELOQUENT participants research innovative
text creation and obfuscation techniques to evade detection, while PAN participants develop systems to
distinguish between human and AI-generated content [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Detecting whether text is human- or AI-generated is challenging for several reasons. First,
AI-generated text from LLMs like GPT-4 is often highly coherent and contextually appropriate, making
it difficult to distinguish from human writing. Additionally, LLMs can mimic human writing styles
and nuances, further complicating detection. Statistical methods used to differentiate text, such as
analyzing word frequency and sentence structure, often find significant overlap between human and
AI text. Moreover, detection models trained on specific text types may not perform well on others,
requiring extensive retraining and resources. Ethical and practical concerns also arise, such as the risk
of false positives and negatives, privacy issues in data analysis, and the ongoing need to adapt to new
AI techniques. Addressing these issues involves continuous advancements in detection algorithms and
comprehensive research efforts.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Dataset Description</title>
      <p>
        This section describes the dataset; Section 3.2 then discusses the model's overall structure,
and Section 3.3 focuses on the key points of model training.
The dataset, acquired via CLEF 2024 PAN [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], consists of about 1,087 rows of text composed by humans
and approximately 14,131 rows of text produced by AI. The text comprises a combination of authentic
and fraudulent news stories drawn from different 2021 U.S. news headlines. Initially, the dataset contained
numerous JSON encodings, which were removed in the first step. During further analysis of the cleaned
dataset, NaN values were identified. These were addressed by consolidating all data into a single
data frame. Using linguistic analysis, features such as average line length, vocabulary, word density,
and POS tags were extracted from the text column. Figure 1 provides an overview of the data
processing steps. From this dataset, we extracted feature statistics; Table 1 presents these statistics
and the feature-extraction data.
      </p>
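      <p>As a rough illustration, the cleaning steps above might look as follows in pandas; the file names and column names here are assumptions for the sketch, not the exact ones used in our pipeline.</p>
      <preformat>
# Minimal preprocessing sketch (file and column names are assumed).
import json
import pandas as pd

def load_jsonl(path):
    """Parse a JSONL file into a data frame, skipping malformed rows."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                rows.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # drop residual JSON-encoding artifacts
    return pd.DataFrame(rows)

# Consolidate human and AI rows into a single data frame and drop NaN rows.
human = load_jsonl("human.jsonl").assign(label=0)
ai = load_jsonl("machines.jsonl").assign(label=1)
df = pd.concat([human, ai], ignore_index=True).dropna(subset=["text"])
      </preformat>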
    </sec>
    <sec id="sec-3">
      <title>3. System Overview</title>
      <p>This section examines the linguistic differences between human-written and AI-generated texts. Next,
the performance of existing detection algorithms is assessed using the PAN-24 dataset [2]. Finally, the
criteria used by deep learning-based detection methods are investigated.</p>
      <sec id="sec-3-1">
        <title>3.1. Vocabulary Features</title>
        <p>This section examines the vocabulary characteristics of the PAN-24 dataset. The study focuses on
the word choices made by AI models and humans when responding to identical queries. Given
the diversity of texts written by humans and AI, these differences are analyzed during the statistical
procedure. The following traits were computed: in addition to the lexicon measure (V), which measures
the total number of unique words used in all responses, and average length (L), which measures the
average number of words in each text, an additional characteristic named word density (D) is proposed.
Word density is determined by the formula D = 100 × V / (L × N), where N is the number of answers.
Density quantifies the degree to which words are employed intensively in a text. For instance, if 1,000
words of text are published but only 100 distinct words are used, the density is 100 × 100 / 1,000 =
10. The higher the density, the more different words are used in the same text length [3].</p>
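        <p>The following is a minimal sketch of these statistics for a list of responses; whitespace tokenization is an assumption made for illustration.</p>
        <preformat>
# Vocabulary (V), average length (L), and word density (D) as defined above.
def word_density(texts):
    tokens = [t.lower() for text in texts for t in text.split()]
    V = len(set(tokens))   # total number of unique words in all responses
    N = len(texts)         # number of answers
    L = len(tokens) / N    # average number of words per text
    D = 100 * V / (L * N)  # equivalently 100 * V / (total word count)
    return V, L, D

# Example from the text: 1,000 published words with 100 distinct words
# give D = 100 * 100 / 1,000 = 10.
        </preformat>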
        <p>Lexical analysis. Within the domain of NLP, every word can be categorized into one of several
lexical categories. The part-of-speech (POS) tagging task aims to identify each word’s grammatical
class within a given phrase. In this section, the lexical distributions of various AI-generated and human
texts in the PAN-24 dataset are computed using the POS module in NLTK [4]. The data is then arranged
according to lexical percentage. As illustrated in Figures 2 and 3, various parts of speech are displayed.
Figures 4 and 5 present punctuation and adposition tags, respectively. Finally, Figures 6 and 7 show
determiners and pronouns. The statistics for the top ten lexical categories are displayed. Nouns (NOUN)
make up the largest proportion of all lexical categories, while punctuation (PUNCT), verbs (VERB),
adpositions (ADP), adjectives (ADJ), and determiners (DET) constitute most of the remaining categories.</p>
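        <p>A small sketch of this computation with NLTK is shown below; note that NLTK's built-in 'universal' mapping labels punctuation as '.' and particles as 'PRT', slightly different names from the PUNCT/PART tags reported in the figures.</p>
        <preformat>
# Percentage distribution of lexical categories using NLTK's POS module.
from collections import Counter
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("universal_tagset", quiet=True)

def pos_distribution(text):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text),
                                           tagset="universal")]
    counts = Counter(tags)
    return {tag: 100 * n / len(tags) for tag, n in counts.most_common()}
        </preformat>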
        <p>When comparing human-written texts to AI-generated texts, the following observations can be made:
AI-generated texts have higher proportions of nouns (NOUN), verbs (VERB), determiners (DET),
adjectives (ADJ), auxiliaries (AUX), coordinating conjunctions (CCONJ), and particles (PART) than
human-written texts. This suggests that the rich knowledge embedded in AI-generated texts offers a
more varied vocabulary, enhancing their informativeness.</p>
        <p>Human-written texts contain higher proportions of adverbs (ADV) and punctuation (PUNCT) than
AI-generated texts. This indicates that humans prioritize structure, consistency, and logical flow, areas in
which AI-generated texts are comparatively weaker.</p>
      </sec>
      <sec id="sec-3-1b">
        <title>3.2. Model</title>
        <p>We use a BERT-based sequence classification model [5], a transformer-based model designed to understand the
context of a word in search queries. Unlike traditional models that process text sequentially (either
left-to-right or right-to-left), BERT considers the entire sequence of words simultaneously. This bidirectional
approach allows BERT to grasp the context of a word based on its surrounding words, leading to better
performance on NLP tasks.</p>
        <p>Key Features of BERT:
• Bidirectional Training: BERT uses a Transformer architecture that reads text bidirectionally.
This helps the model understand the context of each word more comprehensively.
• Pre-training and Fine-tuning: BERT involves two main stages:
– Pre-training: The model is trained on a large corpus of text, learning to predict
missing words in sentences (Masked Language Modeling) and whether one sentence follows another (Next Sentence
Prediction).
– Fine-tuning: The pre-trained BERT model is then fine-tuned on tasks such as text
classification, named entity recognition, or question answering using task-specific data.</p>
        <p>Our team extracts features from the original dataset, including the text and numerical columns.
Initially, this dataset was used as training data for 3 epochs to train a new model, referred to as Model
A, using BERT. BERT, an enhancement over previous models, incorporates a larger number
of parameters, more extensive training data, and larger batch sizes. It is trained on significantly
more data than a CNN-BiLSTM, whose training takes considerably longer. This extensive training allows BERT
representations to generalize more effectively to downstream tasks and deliver superior performance
compared to other models. As a result, the BERT model demonstrates high accuracy and faster processing
speeds, as illustrated in Figure 8.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Model Training</title>
        <p>First, using an Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz and an NVIDIA A800-SXM4-80GB GPU as the hardware
platform, we split the dataset into 80% training and 20% validation sets. Each training iteration
utilizes a batch size of 32. The data consists of a text column along with numerical columns ([‘Vocabulary’,
‘Noun Count’, ‘Verb Count’, ‘AUX Count’, ‘NUM Count’, ‘PRON Count’, ‘ADV Count’, ‘INTJ Count’,
‘PART Count’]). This dataset is fed into the BERT model for sequence classification, incorporating
the numerical features using PyTorch and Hugging Face Transformers.</p>
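        <p>A minimal sketch of this setup is shown below, assuming the consolidated data frame df from Section 2; the use of scikit-learn for the split is an assumption.</p>
        <preformat>
# Sketch of the 80/20 split and BERT setup (column names from the text).
from sklearn.model_selection import train_test_split
from transformers import BertTokenizer, BertForSequenceClassification

NUM_COLS = ['Vocabulary', 'Noun Count', 'Verb Count', 'AUX Count',
            'NUM Count', 'PRON Count', 'ADV Count', 'INTJ Count',
            'PART Count']

train_df, val_df = train_test_split(df, test_size=0.2, random_state=42,
                                    stratify=df['label'])
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
                                                      num_labels=2)
        </preformat>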
        <p>The CustomDataset class inherits from torch.utils.data.Dataset.</p>
        <p>• __init__: Initializes the dataset with text, numerical data, and labels, converting numerical data
and labels to tensors.
• __len__: Returns the length of the dataset.
• __getitem__: Tokenizes text data, processes numerical features, and returns a dictionary with
input IDs, attention mask, and label for a given index.
We then load a pre-trained BERT tokenizer and model from Hugging Face’s model hub, create instances
of the CustomDataset class for the training and validation sets, and create data loaders for both
datasets with a batch size of 32. AdamW is used as the optimizer with weight decay, and CrossEntropyLoss as the
loss function for classification. The data is then given to the model via model.train(). After 3 epochs of training, Model
A is generated. Model A’s training round takes about 20 minutes, while prediction takes about 15
minutes. Based on the output of Model A, our team established the following criteria
to evaluate the classification between texts and labels: in the data processing step, we uniformly label
human-written text as ‘0’ and AI-generated text as ‘1’. Based on these labels, we predict
whether a given text was human-written or AI-generated. After training, we hold out 20 percent of the
data for testing purposes to check whether the model's predictions are correct, which gives a good
estimate of its accuracy. The model training process is shown in Figure 9.</p>
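        <p>The sketch below illustrates the CustomDataset class and training loop described above; the maximum sequence length, learning rate, and the way the numerical features are carried alongside each batch are assumptions, since they are not specified here.</p>
        <preformat>
# Sketch of CustomDataset and the 3-epoch training loop for Model A.
import torch
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, texts, numerical, labels, tokenizer, max_length=256):
        self.texts = list(texts)
        self.numerical = torch.tensor(numerical, dtype=torch.float)
        self.labels = torch.tensor(labels, dtype=torch.long)
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer(self.texts[idx], truncation=True,
                             padding='max_length',
                             max_length=self.max_length,
                             return_tensors='pt')
        return {'input_ids': enc['input_ids'].squeeze(0),
                'attention_mask': enc['attention_mask'].squeeze(0),
                'numerical': self.numerical[idx],  # carried with the batch
                'labels': self.labels[idx]}

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
train_loader = DataLoader(CustomDataset(train_df['text'],
                                        train_df[NUM_COLS].values,
                                        train_df['label'].values, tokenizer),
                          batch_size=32, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # 3 epochs produce Model A
    for batch in train_loader:
        optimizer.zero_grad()
        out = model(input_ids=batch['input_ids'].to(device),
                    attention_mask=batch['attention_mask'].to(device))
        loss = loss_fn(out.logits, batch['labels'].to(device))
        loss.backward()
        optimizer.step()
        </preformat>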
        <sec id="sec-3-2-1">
          <title>3.3.1. Execution Steps</title>
          <p>We have written software that can be run from the command line. The script requires two arguments:
an input file (an absolute path to the input JSONL file) and an output directory (an absolute path to
the location where the results will be written).</p>
          <p>We execute the command as follows in the terminal:
python3 model.py &lt;input_file_path&gt; &lt;output_directory&gt;
Here, model.py is the main Python file that loads and runs the model. The &lt;input_file_path&gt; is
the path of the file containing the input texts, and the &lt;output_directory&gt; is the directory where
the output file is saved.</p>
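          <p>A minimal sketch of such an entry point is given below; the JSONL field names id, text1, and text2 follow Section 3.6, while the output field name and file name are assumptions.</p>
          <preformat>
# model.py (sketch): read pairs from the input JSONL, score both texts,
# and write one comparative score per line to the output directory.
import json
import os
import sys

def main():
    input_file, output_dir = sys.argv[1], sys.argv[2]
    os.makedirs(output_dir, exist_ok=True)
    out_path = os.path.join(output_dir, 'answers.jsonl')
    with open(input_file, encoding='utf-8') as fin, \
         open(out_path, 'w', encoding='utf-8') as fout:
        for line in fin:
            case = json.loads(line)
            # detector and comparative_score are defined in Section 3.6.
            s1 = detector.predict(case['text1'])
            s2 = detector.predict(case['text2'])
            record = {'id': case['id'],
                      'is_human': comparative_score(s1, s2)}
            fout.write(json.dumps(record) + '\n')

if __name__ == '__main__':
    main()
          </preformat>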
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Hyperparameters</title>
        <p>The precise adjustments a user makes to control the learning process are known as hyperparameters.
The best/optimal hyperparameters for a learning algorithm must be selected during training to yield
the most meaningful results. The hyperparameters used in our recommended techniques are shown in
Table 2; we selected these values by analyzing the performance of the suggested methods with different
combinations of hyperparameters.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.5. Features Extracted</title>
        <p>Feature extraction in NLP involves transforming raw text data into a structured representation that
machine learning algorithms can use for various NLP tasks. The following features were extracted, and
our model was trained on those parameters.</p>
        <sec id="sec-3-4-1">
          <title>3.5.1. Average Line Length</title>
          <p>In NLP, average line length is the mean number of characters or words per line in a text dataset like
PAN-24. A sample text has been taken from this dataset. For example,
• Text: “President Joseph R. Biden Jr. calls for unity and a renewed commitment to democracy".
• Average characters per line: 74
• Average words per line: 12</p>
        </sec>
        <sec id="sec-3-4-2">
          <title>3.5.2. Vocabulary</title>
          <p>In NLP, vocabulary (vocab) refers to the set of unique words or tokens in a text dataset like PAN-24. A
sample text has been taken from this dataset. For example,
• Text: “Biden’s inauguration is impacted by the pandemic and security threats."
• Vocabulary: "Biden’s," “inauguration," “is," “impacted," “by," “the," “pandemic," “and," “security,"
“threats"
• Size of vocabulary: 10
• Number of lines: there is 1 line in the text.</p>
        </sec>
        <sec id="sec-3-4-3">
          <title>3.5.3. Word Density</title>
          <p>In NLP, word density measures how many unique words (vocabulary) appear per unit of text, calculated
as 100 times the vocabulary size divided by the product of the number of lines and the average line
length.</p>
          <p>A sample text has been taken from this dataset. For example, “A new chapter of American democracy
begins amidst unprecedented times."</p>
          <p>Step-by-Step Calculation:
– Unique words: "A," "new," "chapter," "of," "American," "democracy," "begins," "amidst,"
"unprecedented," "times"
– Vocabulary size: 10
– Line 1: "A new chapter of American democracy begins amidst unprecedented times." (70
characters)
– Number of lines: 1
– Average line length: 70 characters</p>
          <p>• Word Density Calculation: The word density (D) can be calculated using the formula:
D = (100 × Vocabulary Size) / (No. of Lines × Average Line Length)    (1)
Where:
D : Word Density
Vocabulary Size : Number of unique words in the text
No. of Lines : Total number of lines in the text
Average Line Length : Average number of characters per line</p>
          <p>So, the word density of the text is D = (100 × 10) / (1 × 70) ≈ 14.29.</p>
        </sec>
        <sec id="sec-3-4-4">
          <title>3.5.4. POS Tags</title>
          <p>Part-of-speech (POS) tags are labels assigned to each word in a text to indicate its grammatical category,
such as noun, verb, adjective, etc. POS tagging is a fundamental task in NLP that helps in understanding
sentences’ syntactic structure and meaning. Explanation of POS tags:</p>
          <p>• Noun
– Definition: Words representing people, places, things, or ideas.
– Examples: “cat," “city," “happiness."
– Usage: “The cat is sleeping."
• Verb
– Definition: Words that describe actions, states, or occurrences.
– Examples: “run," “is," “seem."
– Usage: "She runs every morning."
• Punctuation
– Definition: Symbols used to separate sentences and their elements and to clarify meaning.
– Examples: ".", ",", "!"
– Usage: "Hello, world!"
• Determiner
– Definition: Determiners are words placed before nouns to specify quantity or definiteness.
– Examples: "the," "a," "some."
– Usage: "The apple is red."
• Pronoun
– Definition: Pronouns are words that replace nouns.
– Examples: "he," "they," "it."
– Usage: "She loves her dog."
• Proper Noun
– Definition: Proper nouns are specific names of people, places, or organizations.
– Examples: "John," "Paris," "Google."
– Usage: "Google is a search engine."
• Adjective
– Definition: Adjectives are words that describe or modify nouns.
– Examples: "happy," "blue," "tall."
– Usage: "The tall building is new."
• Auxiliary Verb
– Definition: Auxiliary verbs are used with main verbs to express tense, mood, or voice.
– Examples: "is," "have," "will."
– Usage: "She is running."
• Adverb
– Definition: Adverbs modify verbs, adjectives, or other adverbs.
– Examples: "quickly," "very," "well."
– Usage: "He ran quickly."
• Particles
– Definition: Particles are small words with grammatical functions that do not fit into other
categories.
– Examples: "to" (in "to go"), "not" (in "do not")
– Usage: "She decided to go."
• Subordinating conjunctions
– Definition: Subordinating conjunctions connect clauses to show a relationship between
them.
– Examples: "because," "although," "if"
– Usage: "She stayed home because it was raining."
• Numerals
– Definition: Numerals are words that represent numbers.
– Examples: "one," "two," "third."
– Usage: "She has two cats."
• X
– Definition: Other categories of words that do not fit into the standard parts of speech.
– Examples: Foreign words, typos
– Usage: "She said ’ciao’ as she left."</p>
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>3.6. Implementation</title>
        <p>There are three major steps in our implementation, as follows:
• Tokenization and Model Loading: This part sets up the tokenizer and the model. The data is first
described by 19 distinct features, as shown in Table 1, from which only suitable features are kept. The tokenizer
and model configuration are loaded from the ‘bert-base-uncased’ pre-trained model, and the
actual model weights are loaded from a specified path. The model is set to evaluation mode and
moved to the appropriate device (CPU or GPU).
• TextDetector Class: This class takes a text string as input, tokenizes it, and then uses the model
to get the logits (logits are a neural network model’s raw, unnormalized outputs). The logits are
converted to probabilities using the softmax function. It assumes a binary classification model
and returns the second class’s probability (index 1).
• Comparative Score Function:</p>
        <p>comparative_score(score1, score2, epsilon=1e-3)
This function compares two scores with a small threshold (epsilon) to avoid floating-point
precision issues. It returns a value between 0 and 1 based on the comparison:
– Returns a value between 0.5 and 1 if the first score is significantly higher.
– Returns a value between 0 and 0.5 if the second score is significantly higher.</p>
        <p>– Returns 0.5 if the scores are very close (within epsilon).</p>
        <p>In the final step of calculating the result, the script reads each line of the input file and parses it as JSON, extracts
the two texts (text1 and text2), and computes scores for both texts. It uses the comparative score
function to determine a final score. Finally, the results are written to a JSONL file in the specified
output directory.</p>
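        <p>A sketch of the TextDetector class and one plausible implementation of the comparative-score mapping is given below; the exact interpolation between 0 and 1 is our assumption, since only the output ranges are specified above.</p>
        <preformat>
import torch

class TextDetector:
    """Tokenize a text, run the BERT model, and return P(class 1)."""
    def __init__(self, model, tokenizer, device='cpu'):
        self.model, self.tokenizer, self.device = model, tokenizer, device

    def predict(self, text):
        enc = self.tokenizer(text, truncation=True, max_length=512,
                             return_tensors='pt').to(self.device)
        with torch.no_grad():
            logits = self.model(**enc).logits
        # softmax over the two classes; index 1 is the second class
        return torch.softmax(logits, dim=-1)[0, 1].item()

def comparative_score(score1, score2, epsilon=1e-3):
    if abs(score1 - score2) &lt; epsilon:
        return 0.5                            # scores too close to call
    if score1 &gt; score2:
        return 0.5 + 0.5 * (score1 - score2)  # in (0.5, 1.0]
    return 0.5 - 0.5 * (score2 - score1)      # in [0.0, 0.5)
        </preformat>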
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Evaluation Metrics</title>
        <p>Systems are assessed using the PAN authorship verification tasks as a benchmark. The metrics listed
below are employed.</p>
        <sec id="sec-4-1-1">
          <title>4.1.1. ROC-AUC</title>
          <p>The area under the receiver operating characteristic (ROC) curve. The ROC curve plots the true
positive rate against the false positive rate at different threshold settings, and the area under it
summarizes discrimination performance; higher values indicate better discrimination. It offers an overall
assessment of a model’s capacity to distinguish between positive and negative classes.</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>4.1.2. Brier</title>
          <p>The Brier score’s complement (mean squared loss). For binary classification problems, the Brier score
calculates the mean squared difference between the predicted probability and the actual outcome (0 or 1).
Lower Brier scores indicate better calibration and accuracy of the probabilities predicted by the model.</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>4.1.3. C@1</title>
          <p>A modified accuracy score that assigns non-answers (score = 0.5) the average accuracy of the remaining
instances. C@1 quantifies the percentage of cases in which the model’s top-ranked prediction
corresponds with the ground-truth label. It is a typical assessment metric for recommendation or
information retrieval systems.</p>
        </sec>
        <sec id="sec-4-1-4">
          <title>4.1.4. F1</title>
          <p>The harmonic mean of precision and recall. The F1 score is calculated by taking the harmonic
mean of recall, the ratio of true positive predictions to all actual positives, and precision, the ratio of
true positive predictions to all predicted positives. Higher values indicate better performance,
striking a balance between recall and precision.</p>
        </sec>
        <sec id="sec-4-1-5">
          <title>4.1.5. F0.5u</title>
          <p>A precision-weighted F measure (modified F0.5 measure) that considers non-answers (score = 0.5) to be
false negatives. The F0.5 score is comparable to the F1 score but weights recall less than precision. It can
be helpful when recall is less important than precision, such as in situations where false positives are
more expensive than false negatives.</p>
        </sec>
        <sec id="sec-4-1-6">
          <title>4.1.6. Mean</title>
          <p>The mean of all of the preceding measures, indicating the average performance
across all samples or occurrences in the evaluation dataset.</p>
        </sec>
        <p>These metrics collectively provide insights into different aspects of model performance, including
discrimination ability, calibration, accuracy, ranking quality, and the balance between precision and
recall.</p>
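        <p>For instance, two of these measures can be approximated with scikit-learn as in the sketch below; the official PAN evaluator remains authoritative, and the labels and probabilities shown are hypothetical.</p>
        <preformat>
# Sanity-check ROC-AUC, the Brier complement, and F1 on toy predictions.
from sklearn.metrics import roc_auc_score, brier_score_loss, f1_score

y_true = [1, 0, 1, 1, 0]            # hypothetical ground-truth labels
y_prob = [0.9, 0.2, 0.7, 0.6, 0.4]  # hypothetical predicted probabilities

print('ROC-AUC:', roc_auc_score(y_true, y_prob))
print('Brier complement:', 1 - brier_score_loss(y_true, y_prob))
print('F1:', f1_score(y_true, [int(p &gt; 0.5) for p in y_prob]))
        </preformat>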
        <p>Table 3 shows the results, initially pre-filled with the official baselines provided by the PAN organizers.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Baseline Models</title>
        <p>Baseline models are simple reference models used to establish a benchmark for evaluating the
performance of more complex models in machine learning and natural language processing tasks. These
models provide a standard or point of comparison, allowing researchers and practitioners to assess
whether new models offer improvements in accuracy, efficiency, or other relevant metrics. By comparing
against baseline models, it is possible to quantify the gains achieved by novel techniques and ensure
that the advancements are meaningful and not merely coincidental. The following LLM detection baselines are
used as references for the model results. These baselines are re-implementations
from the original papers:
• Baseline Binoculars [6]
• Baseline DetectGPT [7]
• Baseline PPMd [8]
• Baseline Unmasking [9]
• Baseline Fast-DetectGPT [10]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper presents a bootstrap dataset of real and fake news items encompassing multiple 2021
U.S. news headlines, built on the PAN-24 shared task dataset, which includes about 1,087 rows of
human-written text and roughly 14,131 rows of AI-generated text from 13 different LLMs. Based on the
PAN-24 dataset, we conduct broad studies including human-written content assessments, linguistic
analysis, and AI-generated content detection experiments. The human-written content assessments and
linguistic analysis provide us with knowledge about the specific contrasts between human-written
and AI-generated text, which motivates our consideration of LLMs’ future directions. The
AI-generated content detection experiments yield several important conclusions that can
provide helpful guidance for the research and improvement of AIGC-detection instruments. We make
all our data, code, and models publicly available to facilitate related research and applications at our
GitHub repository AI vs Human.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We express our gratitude to the National Institute of Technology Silchar’s Department of Computer
Science and Engineering and the Center for Natural Language Processing (CNLP) for providing the
necessary infrastructure and assistance for this study.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Karlgren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Dürlich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gogoulou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Talman</surname>
          </string-name>
          , E. Stamatatos,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of the “Voight-Kampf” Generative AI Authorship Verification Task at PAN</article-title>
          and
          <article-title>ELOQUENT 2024</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galuščáková</surname>
          </string-name>
          , A. G. S. de Herrera (Eds.), Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CEUR Workshop Proceedings, CEUR-WS.org, 2024.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. A. Ayele, N. Babakov, J. Bevendorf, X. B. Casals, B. Chulvi, D. Dementieva, A. Elnagar, D. Freitag, M. Fröbe, D. Korenčić, M. Mayerl, D. Moskovskiy, A. Mukherjee, A. Panchenko, M. Potthast, F. Rangel, N. Rizwan, P. Rosso, F. Schneider, A. Smirnova, E. Stamatatos, E. Stakovskii, B. Stein, M. Taulé, D. Ustalov, X. Wang, M. Wiegmann, S. M. Yimam, E. Zangerle, Overview of PAN 2024: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification, in: L. Goeuriot, P. Mulhem, G. Quénot, D. Schwab, L. Soulier, G. M. D. Nunzio, P. Galuščáková, A. G. S. de Herrera, G. Faggioli, N. Ferro (Eds.), Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF 2024), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York, 2024.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] B. Guo, X. Zhang, Z. Wang, M. Jiang, J. Nie, Y. Ding, J. Yue, Y. Wu, How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection, arXiv preprint arXiv:2301.07597 (2023).</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] S. Bird, NLTK: The Natural Language Toolkit, in: Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, 2006, pp. 69–72.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] C. A. C. Sáenz, K. Becker, Understanding stance classification of BERT models: an attention-based framework, Knowledge and Information Systems 66 (2024) 419–451.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Hans, A. Schwarzschild, V. Cherepanova, H. Kazemi, A. Saha, M. Goldblum, J. Geiping, T. Goldstein, Spotting LLMs with Binoculars: Zero-shot detection of machine-generated text, 2024. URL: https://arxiv.org/abs/2401.12070. arXiv:2401.12070.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] E. Mitchell, Y. Lee, A. Khazatsky, C. D. Manning, C. Finn, DetectGPT: Zero-shot machine-generated text detection using probability curvature, 2023. URL: https://arxiv.org/abs/2301.11305. arXiv:2301.11305.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] D. Sculley, C. Brodley, Compression and machine learning: a new perspective on feature space vectors, in: Data Compression Conference (DCC’06), 2006, pp. 332–341. doi:10.1109/DCC.2006.13.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] J. Bevendorf, B. Stein, M. Hagen, M. Potthast, Generalizing unmasking for short texts, in: J. Burstein, C. Doran, T. Solorio (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 654–659. URL: https://aclanthology.org/N19-1068. doi:10.18653/v1/N19-1068.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] G. Bao, Y. Zhao, Z. Teng, L. Yang, Y. Zhang, Fast-DetectGPT: Efficient zero-shot detection of machine-generated text via conditional probability curvature, 2024. URL: https://arxiv.org/abs/2310.05130. arXiv:2310.05130.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>