<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Conference and Labs of the Evaluation Forum, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Generative AI Authorship Verification based on ChatGLM</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Haotian Lei</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiangyu Liu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guo Niu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yan Zhou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuexia Zhou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Foshan University</institution>
          ,
          <addr-line>Foshan</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>12</lpage>
      <abstract>
<p>In this paper, we use the LoRA method to fine-tune the large language model ChatGLM. To balance the data distribution in the dataset, we modified the labels and transformed the task into a multi-class classification task. This enables the large language model to better learn the differences in expression among different authors on the same topic, thereby learning the writing styles of humans and machines. During inference, we remap the final output into a binary classification, distinguishing whether a text was authored by a human or a machine. This approach aims to achieve the task of Generative AI Authorship Verification. The evaluation results on the PAN corpus test dataset indicate that this method is effective, with a mean score greater than 0.7.</p>
      </abstract>
      <kwd-group>
<kwd>Generative AI Authorship Verification</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>LoRA</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As artificial intelligence-generated content (AIGC) technology continues to advance, large language
models (LLMs) such as ChatGPT, ChatGLM [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and Qwen [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] are improving at an astonishing rate and
being increasingly adopted across various sectors. The text generated by these models has reached a level
comparable to that of human peers, enabling them to provide highly fluent and meaningful responses
to a wide variety of user queries. The rapid development and widespread adoption of LLMs highlight
their potential to revolutionize how we interact with technology, offering significant improvements in
efficiency and user experience.
      </p>
      <p>However, with these advancements, several issues have also surfaced. One major concern is the rapid
spread of fake news, as LLMs can generate realistic and convincing false information that can be quickly
disseminated across various platforms. Additionally, there is the manipulation of public opinion through
social media comments, where LLMs are used to produce a large volume of persuasive and biased posts,
swaying public perception and discourse. Another significant problem is academic dishonesty, with
students using LLMs to complete their assignments, which undermines the integrity of the educational
process and presents challenges for educators in assessing genuine student performance.</p>
      <p>
This paper presents our approach for the Generative AI Authorship Verification task [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] on PAN
2024. For this task, our approach is to use LLMs to counteract LLMs. Our work is based on ChatGLM,
utilizing the LoRA [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] method for fine-tuning. Additionally, to better fine-tune the LLM, we modified
the training dataset content and its labels to make it easier to distinguish between human and machine
writing, thereby improving its reasoning ability on the test set. Finally, we submitted our results on
TIRA [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        As large language models rapidly advance in generating extremely high-quality text, powerful LLMs
provide unprecedented convenience to people. These models not only understand and process complex
language inputs but also excel in generating coherent and contextually appropriate text, making
them valuable tools for applications such as customer service, content creation, educational support,
and more. However, the false text generated by powerful LLMs raises ethical and legal concerns.
Moreover, it has become increasingly difficult for people to rely on their own experience to determine
whether a piece of text was written by a human or a machine. RoFT [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] attempted to involve users
in detecting machine-generated text. However, only 15.8% of annotations correctly identified the
detection boundary. This has led researchers to consider using more accurate methods to combat and
detect false text. Consequently, various methods have been developed to detect and differentiate these
generated texts. Zellers [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] illustrated the generation of machine-produced fake news by proposing a
GPT-based news generator called GROVER. They also used GROVER itself to classify and detect fake
news. GLTR [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] detects generated text in a zero-shot manner by utilizing token prediction probabilities
from available pre-trained NLP models, such as BERT [10] and GPT-2 [11]. OpenAI recently released
an AI text classifier by fine-tuning a GPT model [12], using LLMs to counteract the misuse of LLMs.
Similarly, we fine-tune a large language model, ChatGLM, using the LoRA method to achieve Generative
AI Authorship Verification.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. System Overview</title>
<p>For this task, we utilize the dataset provided by PAN, which includes both genuine and fake news
articles spanning multiple headlines from the United States in 2021. It consists of one JSON file per
author, covering 13 different machine authors and one human author. Each file contains articles on the
same topics; the IDs and line order of the articles are identical across files, so the same line always
corresponds to the same topic. Each file contains 24 topics and 1087 articles.</p>
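<p>Because the files are aligned line by line, pairing human and machine articles on the same topic reduces to zipping files by index. A minimal sketch, assuming JSON-lines files with illustrative "id" and "text" fields (the actual PAN field names may differ):</p>

```python
import json

# Hedged sketch of reading the PAN files: each author's file is JSON lines, and
# line i of every file covers the same topic, so files can be aligned by index.
# The record fields ("id", "text") are assumptions for illustration.

def read_articles(jsonl_text: str) -> list[dict]:
    """Parse one author file's content into article records."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

def pair_by_line(human: list[dict], machine: list[dict]) -> list[tuple[dict, dict]]:
    """Align articles by line order: same line index => same topic."""
    assert len(human) == len(machine), "aligned files must have equal length"
    return list(zip(human, machine))

human = read_articles('{"id": "human/1", "text": "a human-written article"}')
machine = read_articles('{"id": "m01/1", "text": "a machine-written article"}')
pairs = pair_by_line(human, machine)
```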
<p>This could be framed as a binary classification task: determining whether a text is written by
a human or not. However, due to the extremely high quality of text generated by large language
models, we believe that simply treating it as a binary classification task may not yield satisfactory
results. Since we also use a large language model to perform this task, we modified the dataset labels
to implement a multi-class classification task, enabling the model to learn different textual features.</p>
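<p>Table 1 defines the exact label mapping; as a minimal sketch of the idea, each of the 14 "authors" (one human, 13 machines) receives its own class id. The author names below are illustrative placeholders, not the actual PAN identifiers:</p>

```python
# Hedged sketch of the relabeling: each "author" (1 human + 13 machine models)
# gets its own class id, turning the binary task into 14-way classification.
# Author names here are illustrative placeholders.
authors = ["human"] + [f"machine_{i:02d}" for i in range(1, 14)]
label_of = {name: idx for idx, name in enumerate(authors)}

def relabel(author: str) -> int:
    """Map an author name to its multi-class label (0 = human)."""
    return label_of[author]
```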
<p>Table 1 shows the specific modifications made to the labels in the dataset. Fortunately, the dataset
provided by PAN is very well-organized, with each type of "author" providing the same number of
articles, namely 1087. Framed as a multi-class task, there are no exaggerated data proportions such as
"human:machines = 1:13", which could otherwise lead to unbalanced training data. Finally, during
inference, we limit the output to a number between 0 and 1. The closer the number is to 1, the more
likely the text is written by a human; conversely, the closer the number is to 0, the more likely it is
written by a machine. When the probabilities are equal, we default to considering the first text as
written by a human and the second as written by a machine.</p>
      <p>This paper employs LoRA technology to fine-tune ChatGLM. LoRA is a method used for fine-tuning
large language models, aimed at enhancing the model’s performance on specific tasks. The core idea of
LoRA is to expand the language representation of the model by introducing domain-specific corpora,
making it more specialized and adaptable, as shown in Equation 1:
h = W₀x + ΔWx = W₀x + BAx = (W₀ + BA)x
(1)
where W₀ ∈ ℝ^(d×k) represents the weight matrix of the pre-trained model, and ΔW represents the change
in the weights. ΔW = BA represents the update obtained through fine-tuning, expressed as a low-rank
decomposition with B ∈ ℝ^(d×r), A ∈ ℝ^(r×k), and rank r ≪ min(d, k).</p>
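<p>Equation 1 can be checked numerically; a minimal sketch with small assumed dimensions (d = 6, k = 4, r = 2, not ChatGLM's actual layer shapes):</p>

```python
import numpy as np

# Minimal numerical sketch of Equation 1; dimensions are illustrative assumptions.
d, k, r = 6, 4, 2                      # rank r << min(d, k)

rng = np.random.default_rng(0)
W0 = rng.normal(size=(d, k))           # frozen pre-trained weight matrix W0
A = rng.normal(size=(r, k))            # trainable, initialized from N(0, sigma^2)
B = np.zeros((d, r))                   # trainable, initialized to zero, so BA = 0 at start

x = rng.normal(size=k)

# h = W0 x + delta_W x = W0 x + B A x = (W0 + B A) x
h = W0 @ x + B @ (A @ x)
h_merged = (W0 + B @ A) @ x            # merged form usable at inference time
```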
<p>During fine-tuning, the weight parameters W₀ of the pre-trained model are frozen, and only the
parameters A and B are trained. As shown in Figure 1, LoRA integrates the trained bypass weight
parameters with the pre-trained model weights without introducing additional pathways for inference,
making it suitable for real-time requirements in vertical domains.</p>
<p>Figure 1: The LoRA architecture. The pretrained weights are frozen; the rank-r bypass matrices A (initialized from N(0, σ²)) and B (initialized to zero) are trained, and their product is added to the output h for an input x of dimension d.</p>
<p>In this work, we set the LoRA rank to 4, the batch size to 8, and the learning rate to 1e-4,
using FP16 for training. We completed the fine-tuning on a single A800 GPU. For ChatGLM, we set
top-p to 0.7, the max length to 2, and the temperature to 0.2.</p>
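<p>Rank 4 keeps the trainable parameter count tiny relative to the frozen weights. As rough arithmetic (the layer size d = k = 4096 is an assumption for illustration, not ChatGLM's exact shape):</p>

```python
# Parameter-count arithmetic for rank-4 LoRA on one d x k weight matrix:
# LoRA trains r*(d + k) parameters instead of the full d*k.
d = k = 4096                   # assumed layer size for illustration
r = 4                          # LoRA rank used in this work

full_params = d * k            # parameters in the frozen base matrix W0
lora_params = r * (d + k)      # trainable parameters in A (r x k) and B (d x r)

print(full_params)             # 16777216
print(lora_params)             # 32768, about 0.2% of the full matrix
```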
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>We submitted our system to TIRA and utilized the evaluation metrics provided by TIRA, which
specifically include the following:</p>
      <p>ROC-AUC: The area under the ROC (Receiver Operating Characteristic) curve.</p>
      <p>Brier: The complement of the Brier score (mean squared loss).</p>
      <p>C@1: A modified accuracy score that assigns non-answers (score = 0.5) the average accuracy of the
remaining cases.</p>
      <p>F1: The harmonic mean of precision and recall.</p>
      <p>F0.5u: A modified F0.5 measure (precision-weighted F measure) that treats non-answers (score = 0.5)
as false negatives.</p>
<p>Mean: The arithmetic mean of all the metrics above.</p>
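<p>Two of the less standard metrics can be sketched in plain Python from the descriptions above; the official TIRA/PAN evaluator is the authoritative implementation and may differ in details:</p>

```python
# Hedged sketches of the Brier complement and C@1, implemented from the
# descriptions in the text; labels are 1 (human) / 0 (machine).

def brier_complement(y_true: list[int], y_score: list[float]) -> float:
    """1 minus the mean squared error between scores and binary labels."""
    n = len(y_true)
    return 1.0 - sum((s - t) ** 2 for t, s in zip(y_true, y_score)) / n

def c_at_1(y_true: list[int], y_score: list[float]) -> float:
    """Accuracy that credits non-answers (score == 0.5) with the average
    accuracy of the answered cases."""
    n = len(y_true)
    answered = [(t, s) for t, s in zip(y_true, y_score) if s != 0.5]
    if not answered:
        return 0.0
    n_correct = sum(1 for t, s in answered if (s > 0.5) == (t == 1))
    n_unanswered = n - len(answered)
    return (n_correct + n_unanswered * n_correct / len(answered)) / n

y_true = [1, 0, 1, 0]           # ground-truth authorship
y_score = [0.9, 0.1, 0.5, 0.2]  # model scores; 0.5 counts as a non-answer
```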
      <p>We evaluated the performance of our model on the new test set provided by PAN, and the test
results are shown in Table 2. Our test results are higher than Baseline Unmasking and Baseline
FastDetectGPT, but lower than Baseline Binoculars, Baseline Fast-DetectGPT (Mistral), and Baseline PPMd.
Overall, our approach performs poorly on the new test dataset, suggesting that our model's
generalization ability is not satisfactory.</p>
<p>Table 3 further shows the average accuracy of our model on different dataset variants, particularly
on the test sets of nine variants. Our model's minimum value across all variants was 0.219, with the
25th and 75th percentiles at 0.691 and 0.776, respectively, a median of 0.725, and a maximum of 0.907.
Our method surpasses PPMd, Unmasking, and Fast-DetectGPT at the 25th percentile, 75th percentile,
and maximum. Compared to the quantile results of other participants, however, our model is surpassed
in most metrics at the 25th percentile. This indicates that there is still a significant gap between our
approach and the current state-of-the-art methods.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
<p>In this paper, we propose a method for Generative AI Authorship Verification at PAN 2024. We modified
the labels in the training dataset to transform it into a multi-class classification task. We fine-tuned
ChatGLM with the aim of enabling the large language model to better understand the writing styles of
different authors, thus learning the differences between machine and human writing. We utilized LoRA
for fine-tuning, as the LoRA method can extend the language representation of the model, making it more
specialized and adaptive. From the results, it appears that the method performs poorly on the new
test dataset, indicating that our approach lacks some degree of generalization ability. In subsequent
work, it is advisable to employ more effective methods to augment the data and enhance the model's
classification ability on open sets.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
<p>This work is supported by the National Natural Science Foundation of China (No. 61972091) and the
Natural Science Foundation of Guangdong Province of China (No. 2022A1515010101, No. 2021A1515012639).</p>
      <p>[10] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional
transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018). [11] A. Radford,
J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al., Language models are unsupervised multitask
learners, OpenAI blog 1 (2019) 9. [12] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L.
Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., GPT-4 technical report, arXiv
preprint arXiv:2303.08774 (2023).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xia</surname>
          </string-name>
          , et al.,
          <article-title>Glm-130b: An open bilingual pre-trained model</article-title>
          ,
          <source>arXiv preprint arXiv:2210.02414</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ge</surname>
          </string-name>
          , Y. Han,
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          , et al.,
          <source>Qwen technical report, arXiv preprint arXiv:2309.16609</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freitag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korenčić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Panchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smirnova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Stamatatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taulé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ustalov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          , E. Zangerle,
          <article-title>Overview of PAN 2024: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification</article-title>
          , in: L.
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Mulhem</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Quénot</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Schwab</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Soulier</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. M. D. Nunzio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Galuščáková</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. G. S. de Herrera</surname>
          </string-name>
          , G. Faggioli, N. Ferro (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF</source>
          <year>2024</year>
          ), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          , E. Stamatatos,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of the Voight-Kampf Generative AI Authorship Verification Task at PAN 2024</article-title>
          , in: G.
          <string-name>
            <given-names>F. N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galuščáková</surname>
          </string-name>
          , A. G. S. de Herrera (Eds.), Working Notes of CLEF 2024 -
          <article-title>Conference and Labs of the Evaluation Forum, CEUR-WS</article-title>
          .org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wallis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Allen-Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          , Lora:
          <article-title>Low-rank adaptation of large language models</article-title>
          ,
          <source>arXiv preprint arXiv:2106.09685</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kolyada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grahm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Elstner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Loebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <article-title>Continuous Integration for Reproducible Shared Tasks with TIRA.io</article-title>
          , in: J.
          <string-name>
            <surname>Kamps</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Crestani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Maistro</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          <string-name>
            <surname>Kruschwitz</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Caputo (Eds.),
          <source>Advances in Information Retrieval. 45th European Conference on IR Research (ECIR</source>
          <year>2023</year>
          ), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2023</year>
          , pp.
          <fpage>236</fpage>
          -
          <lpage>241</lpage>
. doi:10.1007/978-3-031-28241-6_20.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dugan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ippolito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirubarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Callison-Burch</surname>
          </string-name>
          ,
          <article-title>Roft: A tool for evaluating human detection of machine-generated text</article-title>
, arXiv preprint arXiv:2010.03070
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zellers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holtzman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Rashkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bisk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Roesner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <article-title>Defending against neural fake news</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gehrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rush</surname>
          </string-name>
          , Gltr:
          <article-title>Statistical detection and visualization of generated text</article-title>
, arXiv preprint arXiv:1906.04043
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>