<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Literal Re-translation as a Method for AI Text Disguise and Detection Evasion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Poojan Vachharajani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Netaji Subhas University of Technology</institution>
          ,
          <addr-line>New Delhi</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>This paper presents the PJs-team's submission to the Voight-Kampf 2025 shared task, where we act as a "breaker" aiming to evade AI-generated text detection. Our core strategy, termed "literal re-translation," involves prompting a large language model to first generate text in Hindi and then perform a literal, word-for-word translation into English. Two variations were tested: a baseline system (v1) with a direct re-translation prompt, and an enhanced system (v2) which was additionally instructed to simulate human-like imperfections such as grammatical errors and awkward phrasing. In the task's evaluation framework, lower detection scores indicate more successful evasion. Our official results show that the enhanced v2 system was significantly more effective, achieving a Brier score of 0.425 compared to the baseline's 0.722. This demonstrates that while literal re-translation introduces some ambiguity, the key to successful evasion lies in explicitly prompting the model to be "less perfect" and to mimic specific human textual flaws.</p>
      </abstract>
      <kwd-group>
        <kwd>AI Text Detection</kwd>
        <kwd>Detection Evasion</kwd>
        <kwd>Prompt Engineering</kwd>
        <kwd>Generative AI</kwd>
        <kwd>Authorship Verification</kwd>
        <kwd>Literal Translation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <sec id="sec-2-1">
        <p>Both of our submissions utilized the Anthropic Claude 3.5 Sonnet model [5]. No model fine-tuning was performed. The entire generation process was controlled through a single system prompt, and the final English text was extracted from the model’s XML output.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.1. Submission 1: Baseline Literal Re-translation (PJs-team-v1)</title>
      </sec>
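      <p>The single-prompt pipeline described above can be sketched as follows. This is an illustrative reconstruction, not our actual harness: the helper name, the canned sample response, and the regex-based extraction are assumptions; only the tag names come from the v1 prompt in Listing 1.</p>

```python
import re

def extract_english(model_output: str) -> str:
    """Pull the literal English translation out of the model's tagged output.

    The v1 system prompt (Listing 1) asks the model to wrap its answer in
    <hindi_response> and <english_literal_translation> tags; only the
    English half is submitted to the task.
    """
    match = re.search(
        r"<english_literal_translation>(.*?)</english_literal_translation>",
        model_output,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("model output did not contain the expected tags")
    return match.group(1).strip()

# Canned stand-in for a real Claude 3.5 Sonnet response (no API call here).
sample = (
    "<hindi_response>...Hindi text...</hindi_response>\n"
    "<english_literal_translation>\n"
    "In this matter your help is very necessary.\n"
    "</english_literal_translation>"
)
print(extract_english(sample))  # In this matter your help is very necessary.
```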
      <sec id="sec-2-3">
        <p>The first system served as our baseline. It was designed to test the core hypothesis that the structure of "translationese" could, by itself, be enough to confuse detectors. The prompt instructed the model to generate a Hindi response and then a literal, grammatically correct English translation.</p>
        <p>You are given a user prompt along with optional context and style or genre information.
Your task is to write a fluent Hindi response based on this input, followed by a literal
word-for-word English translation.</p>
        <p>The English must preserve the structure and vocabulary of the Hindi sentence as closely
as possible, but grammar should be correct.</p>
        <p>Requirements:
- The Hindi response must be approximately 500 words long.
- The English translation must be literal (word-by-word aligned with the Hindi), without
adding or omitting meaning.
- The English output must have correct grammar but should not rephrase or interpret
freely.
- Use only full stops and commas in the English output. No other punctuation.
- Capitalize correctly in English where needed.
- Ensure both the Hindi and English parts match in length and structure as closely as
possible.
- Output must strictly follow the XML tag format below.
&lt;hindi_response&gt;
...your Hindi response here...
&lt;/hindi_response&gt;
&lt;english_literal_translation&gt;
...your literal English translation here...
&lt;/english_literal_translation&gt;</p>
      </sec>
      <sec id="sec-2-4">
        <title>Listing 1: System prompt for the baseline v1 system.</title>
        <sec id="sec-2-4-1">
          <title>2.2. Submission 2: Enhanced Re-translation with Imperfections (PJs-team-v2)</title>
          <p>Our second system was designed to be more adversarial. It built upon the re-translation concept but
added explicit instructions for the model to mimic human flaws, directly countering heuristics that look
for overly perfect text.</p>
          <p>You are given a user prompt along with optional context and style or genre information.
Your task is to write a fluent Hindi response (~500 words), followed by a literal
English translation.</p>
          <p>Additional instructions:
- Make the Hindi text natural and varied. Use mixed sentence lengths and structures.
- Occasionally repeat or restate ideas, just like a human might.
- Use culturally inconsistent idioms, regionalisms, or slight awkwardness.
- Do not use exact keywords from the input. Instead, paraphrase or use alternate wording.
- Ensure the English translation is a **literal word-by-word** translation of the Hindi,
preserving structure and phrasing.
- Do not improve grammar in English; keep common mistakes.
- Only use full stops and commas in English, with correct capitalization.
- If a genre is specified, make the tone match that genre, including typical human flaws
in it (like filler words in podcast, dramatization in fanfiction, etc.)
Output format:
&lt;Hindi_response&gt;
...your Hindi response here...
&lt;/Hindi_response&gt;</p>
        </sec>
      </sec>
      <sec id="sec-2-5">
        <title>Listing 2: System prompt for the enhanced v2 system.</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and Analysis</title>
      <sec id="sec-3-1">
        <p>
          In the Voight-Kampf evaluation, breaker submissions like ours are judged on their ability to fool the builder’s detection systems. The provided metrics, such as the Brier score and C@1, measure the performance of these detectors on our generated text. Consequently, lower scores indicate a more successful evasion strategy, as they signify poorer detector performance [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The official results are presented in Table 1.
        </p>
        <p>The results clearly show a significant performance gap between our two approaches. [Table 1: official mean scores, 0.59864 for PJs-team-v1 and 0.34568 for PJs-team-v2.]</p>
        <list list-type="bullet">
          <list-item>
            <p>PJs-team-v1 (Baseline): This system was largely unsuccessful. The high Brier score of 0.722 indicates that the detection systems were able to identify its output as AI-generated with high confidence. The structural artifacts from literal re-translation, when constrained by correct grammar, were not sufficient to fool the detectors.</p>
          </list-item>
          <list-item>
            <p>PJs-team-v2 (Enhanced): This system was highly successful. Its Brier score of 0.425 is dramatically lower, demonstrating that the detection systems struggled significantly to classify its output. The text generated by this system was far more likely to be mistaken for human writing.</p>
          </list-item>
        </list>
      </sec>
      <sec id="sec-3-3">
        <p>The superior performance of v2 provides a clear insight: the key to effective evasion was not the re-translation method itself, but the explicit instruction to be imperfect. By prompting the model to introduce awkward phrasing, repetition, and grammatical errors, we created text that successfully bypassed detectors tuned to flag the unnatural perfection of typical AI output. Examples of outputs generated by both systems (v1 and v2) are provided in the Appendix for reference.</p>
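        <p>For intuition on the metric: a plain Brier score is the mean squared error between a detector’s predicted probability that a text is machine-generated and the true label. The task leaderboard reports a rescaled variant in which higher values favor the detector, which is why lower reported scores mean better evasion; under the plain definition below, a confused detector instead yields a larger squared error. The sketch uses hypothetical detector probabilities, not task data:</p>

```python
def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and true 0/1 labels."""
    assert len(probs) == len(labels) and probs
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

# Hypothetical detector outputs on four AI-generated texts (true label = 1).
confident = [0.95, 0.90, 0.85, 0.90]  # detector is sure: texts get caught
confused  = [0.55, 0.40, 0.60, 0.50]  # detector hovers near 0.5: evasion works
labels = [1, 1, 1, 1]

print(brier_score(confident, labels))  # ~0.011, near-perfect detection
print(brier_score(confused, labels))   # ~0.243, barely better than guessing
```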
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <sec id="sec-4-1">
        <p>Our participation in the Voight-Kampf 2025 task demonstrates that while the "literal re-translation" method shows some promise, its true potential for evading AI text detection is only realized when combined with adversarial prompting. Our baseline system was easily detected, whereas our enhanced system, explicitly instructed to mimic human errors, proved to be a highly effective evasion method.</p>
      </sec>
      <sec id="sec-4-2">
        <p>This finding underscores a critical vulnerability in current detection paradigms: they are susceptible to systems that are prompted to be deliberately and convincingly imperfect. Future work in detection must evolve to identify more fundamental fingerprints of machine generation beyond surface-level stylistic polish.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used the Anthropic Claude 3.5 Sonnet model to generate
the text submissions for the Voight-Kampf task as described in the methodology. The author also used
a generative AI assistant for grammar checking, spell checking, and formatting the LaTeX code for this
paper. After using these tools, the author reviewed and edited the content as needed and takes full
responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-6">
      <title>Appendix: Example Outputs</title>
      <sec id="sec-6-1">
        <title>PJs-team-v1 (Baseline Output Examples)</title>
        <p>Example 1:</p>
        <p>Dear Sir,</p>
        <p>I am writing to you about this important matter. I am Prince Joe Eboh, the Chairman of the Niger Delta Development Commission (NDDC) of Nigeria. We have an additional amount of $25 million from petroleum contracts, which needs to be discreetly transferred out of the country.</p>
        <p>Due to local laws, government employees are not permitted to hold foreign accounts. Therefore, we are searching for a foreign partner who can temporarily receive these funds. In this transaction, you will be provided 20% of this amount as a reward for your assistance. The remaining 75% is reserved for the committee members, and 5% is for expenses.</p>
        <p>This is a confidential transaction, and it needs to be completed within 21 working days. Therefore, I request you to provide your personal and bank details soon so that we can transfer this money into your account.</p>
      </sec>
      <sec id="sec-6-2">
        <title>PJs-team-v2 (Enhanced Output Examples)</title>
        <p>Example 1:</p>
        <p>Most respected sir/madam,</p>
        <p>I am writing this letter to you in an extremely confidential matter. I am Prince Joe Eboh, chairman of Niger Delta Development Commission. Our organization has recently completed many petroleum contracts, from which 25 million US dollars excess money has become available.</p>
        <p>I have chosen you because you appear to be an honest and trustworthy person. Our country’s laws do not permit government employees to keep money in foreign accounts. Therefore I humbly request you to accept this amount temporarily in your account.</p>
        <p>For your help, we promise to give you 20% of total amount. 75% will be for committee members and 5% will be kept for expenses. This is a completely legitimate proposal, which needs to be handled in extremely confidential manner.</p>
        <p>Please send your full name, bank account details, passport number and phone number to complete this transaction. We want to complete this transaction within 21 working days. Time is very important.</p>
        <p>I assure you that this is a golden opportunity. With help of respected person like you, we can successfully complete this work. Please keep this letter completely confidential and reply as soon as possible.</p>
        <p>With thanks,</p>
        <p>Prince Joe Eboh</p>
        <p>Chairman, Contract Award Committee</p>
        <p>Niger Delta Development Commission</p>
        <p>Example 2:</p>
        <p>Most respected sir/madam,</p>
        <p>I Barrister Martins Jide, a senior advocate, am presenting this extremely confidential proposal to you. This letter is regarding the death of my deceased client, Engineer Suk Hun Wufei Flody, who were a prestigious engineer in Nigerian National Petroleum Corporation.</p>
        <p>With extreme sorrow have to inform that in August 2003 in a terrible gas explosion his entire family became heavenly. This accident has given me deep pain. My client had deposited $9.3 million American dollars in a prominent bank of Ghana.</p>
        <p>After detailed investigation also, I have not found any legitimate heir. Therefore, I am making a special request to you. Can you help me in receiving this wealth? This will be a completely legal process, in which you will claim as distant relative.</p>
        <p>I have prepared a proper proposal: 55% amount for me, 40% for you, and 5% for legal expenses and taxes. All necessary documents are ready, which will prove this claim.</p>
        <p>This is an extremely sensitive matter, therefore complete secrecy is necessary. If you are interested in this proposal, then please send your phone number and fax number. Time is very less, therefore quick response is expected.</p>
        <p>With trust,</p>
        <p>Barrister Martins Jide</p>
        <p>Senior Advocate</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Bevendorf</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiegmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karlgren</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dürlich</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gogoulou</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Overview of the “voight-kampf” generative AI authorship verification task at PAN</article-title>
          and
          <article-title>ELOQUENT 2024</article-title>
          . In Working Notes of the Conference and
          <article-title>Labs of the Evaluation Forum</article-title>
          ,
          <string-name>
            <surname>CLEF</surname>
          </string-name>
          <year>2024</year>
          (Vol.
          <volume>3740</volume>
          , pp.
          <fpage>2486</fpage>
          -
          <lpage>2506</lpage>
          ).
          <article-title>CEUR-WS.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bevendorf</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Casals</surname>
            ,
            <given-names>X. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chulvi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dementieva</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elnagar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freitag</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Zangerle</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2024</year>
          , March).
          <source>Overview of PAN</source>
          <year>2024</year>
          <article-title>: multi-author writing style analysis, multilingual text detoxification, oppositional thinking analysis, and generative ai authorship verification</article-title>
          .
          <source>In European Conference on Information Retrieval</source>
          (pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          ). Cham: Springer Nature Switzerland.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Karlgren</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artemova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bojar</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Engels</surname>
            ,
            <given-names>M. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikhailov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Šindelář</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Velldal</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Øvrelid</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2025</year>
          ).
          <source>Overview of ELOQUENT</source>
          <year>2025</year>
          :
          <article-title>Shared Tasks for Evaluating Generative Language Model Quality</article-title>
          . In
          <string-name>
            <surname>Carrillo-de-Albornoz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plaza</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>García Seco de Herrera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mothe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piroi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spina</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Faggioli</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ferro</surname>
          </string-name>
          , N. (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 16th International Conference of the CLEF Association (CLEF</source>
          <year>2025</year>
          ). Springer Lecture Notes in Computer Science.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Bevendorf</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karlgren</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiegmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shelmanov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mansurov</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsivgun</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurevych</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nakov</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamatatos</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Potthast</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2025</year>
          ).
          <article-title>Overview of the “VoightKampf” Generative AI Authorship Verification Task at PAN</article-title>
          and
          <article-title>ELOQUENT 2025</article-title>
          .
          <article-title>In 26th Working Notes of the Conference and Labs of the Evaluation Forum</article-title>
          ,
          <string-name>
            <surname>CLEF</surname>
          </string-name>
          <year>2025</year>
          . CEUR Workshop Proceedings.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Anthropic.</surname>
          </string-name>
          (
          <year>2024</year>
          , June 21).
          <article-title>Introducing Claude 3.5 Sonnet</article-title>
          .
          <source>Retrieved from Anthropic website.</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>