<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Does ChatGPT Have a Poetic Style?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Melanie Walsh</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Anna Preus</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Elizabeth Gronski</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>English Department, University of Washington</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Information School, University of Washington</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>1201</fpage>
      <lpage>1219</lpage>
      <abstract>
        <p>Generating poetry has become a popular application of LLMs, perhaps especially of OpenAI's widely-used chatbot ChatGPT. What kind of poet is ChatGPT? Does ChatGPT have its own poetic style? Can it successfully produce poems in different styles? To answer these questions, we prompt the GPT-3.5 and GPT-4 models to generate English-language poems in 24 different poetic forms and styles, about 40 different subjects, and in response to 3 different writing prompt templates. We then analyze the resulting 5.7k poems, comparing them to a sample of 3.7k poems from the Poetry Foundation and the Academy of American Poets. We find that the GPT models, especially GPT-4, can successfully produce poems in a range of both common and uncommon English-language forms in superficial yet noteworthy ways, such as by producing poems of appropriate lengths for sonnets (14 lines), villanelles (19 lines), and sestinas (39 lines). But the GPT models also exhibit their own distinct stylistic tendencies, both within and outside of these specific forms. Our results show that GPT poetry is much more constrained and uniform than human poetry, showing a strong penchant for rhyme, quatrains (4-line stanzas), iambic meter, first-person plural perspectives (we, us, our), and specific vocabulary like “heart,” “embrace,” “echo,” and “whisper.” The chatbot's ability to produce poetry was frequently mentioned in early journalistic reports [36], and the question of whether it produced “good” or “bad” poetry quickly became the subject of hot debate [12, 5]. While these debates point to the significance of LLM-generated poetry as a topic, in this paper we are less interested in subjectively evaluating the quality of ChatGPT-generated poems and more interested in quantifying and describing key patterns in the poetry produced. What kind of poet is ChatGPT? Does ChatGPT have its own poetic style? 
Can it successfully produce poems in different forms and styles, like sonnets (14 lines with regular rhyme and iambic pentameter) or villanelles (19 lines with alternating first and third lines)? In this paper, we present initial results from a larger study of LLM-generated poetry.</p>
      </abstract>
      <kwd-group>
        <kwd>LLMs</kwd>
        <kwd>ChatGPT</kwd>
        <kwd>poetry</kwd>
        <kwd>style</kwd>
        <kwd>form</kwd>
        <kwd>text generation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>[Figure 1: A GPT-4 response to the prompt “Write a poem about the subject of social commentaries in the following form or style: limerick,” annotated for iambic meter and AABB end rhyme. Rather than a limerick, the model produced quatrains with iambic meter and end rhyme; the poem’s full text is reproduced in Section 4.4.]</p>
      <p>We then analyze the resulting 5.7k poems, comparing them to a sample of 3.7k poems from
the Poetry Foundation and the Academy of American Poets. We find that the GPT models,
especially GPT-4, can successfully produce poems in a range of both common and uncommon
English-language forms in superficial yet noteworthy ways, such as by producing poems of
appropriate lengths for sonnets (14 lines), villanelles (19 lines), and sestinas (39 lines). But we
find that the GPT models also exhibit their own distinct stylistic tendencies, both within and
outside of these specific forms.</p>
      <p>Our results show that the poetry produced by GPT-3.5 and GPT-4 is much more constrained
and uniform than human poetry. Unless otherwise prompted (and sometimes when otherwise
prompted), both GPT models have a tendency to produce rhymed lines in something like iambic
meter—a regular pattern of unstressed and stressed syllables that characterizes the majority of
English-language verse before the 20th century [29]. Both models also have a tendency to
organize poetic lines into quatrains (4-line stanzas). And they display other distinct signatures,
such as a curiously dominant first-person plural perspective and a penchant for words like
“heart,” “embrace,” “echoes,” and “whispers.” We release our code,1 and we plan to share our
ChatGPT poetry corpus at a later stage of the project, with the aim of encouraging further
analysis through computational and/or more traditional literary studies approaches.
1 https://github.com/melaniewalsh/chatgpt_poetry</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The history of computational poetry generation dates back to at least the mid-20th century [15].
Poets, researchers, and hobbyists have experimented with a range of technical approaches [9],
from rule-based systems [19
        <xref ref-type="bibr" rid="ref8">, 8</xref>
        ], to Markov chains [2], to, most recently, neural networks and
LLMs [
        <xref ref-type="bibr" rid="ref1 ref13 ref18 ref25 ref3 ref32 ref33">34, 35, 18, 13, 1, 3, 25</xref>
        ].
      </p>
      <p>
        While computational poetry has been an active area of inquiry for many decades [20, 30, 10
        <xref ref-type="bibr" rid="ref21 ref27 ref7">, 7,
27, 21</xref>
        ], it has arguably remained the purview of specialists until recently. But since the release
of ChatGPT in November 2022, hundreds of millions of people have used and experimented
with LLMs, opening up computational poetry generation to a broader public.
      </p>
      <p>
        Training data and memorization are key considerations for LLM-generated poetry from
ethical, legal, and technical perspectives. Models’ ability to produce poetry is intimately tied to
their training data, which partly consists of literary works by both living and dead writers.
Much popular and scholarly attention related to LLMs and literature has focused—rightfully—
on the ethics and legality of such literary training data [28, 17, 3
        <xref ref-type="bibr" rid="ref1">1, 32</xref>
        ]. We believe these
concerns are vitally important to examinations of ChatGPT’s style, which is built from the words
of other writers. We also think that it is valuable to ask questions about LLMs’ poetic
capacities because it can help inform debates about LLMs and creativity while also advancing our
understanding of how poetry is being used and propagated in the contemporary world.
      </p>
      <p>In their work on poetry memorization in ChatGPT specifically, D’Souza and Mimno [6] show
that the most likely factor for a poem’s memorization by the model was its inclusion in the 1983
Norton Anthology of Literature. This finding suggests that canonical poetry is
disproportionately represented in the GPT models, which could influence the kind of poetry they produce.
In a similar vein, we show in prior work [33] that 41% of a curated sample of poems from the
Poetry Foundation and the Academy of American Poets (which we also use as a comparison
corpus in this study) are likely memorized by GPT-4. Our previous analysis suggests that this
memorization may enhance the models’ ability to classify the form of the poems, but the results
are not conclusive. More work is needed to evaluate the impact that memorization may have
on poetry generation.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Data + Methods</title>
      <sec id="sec-3-1">
        <title>3.1. Human Poetry Corpus</title>
        <p>To provide a baseline comparison for our ChatGPT-generated poetry and to guide our
prompting, we curate a dataset of poems, styles, and subjects from the Poetry Foundation and the
Academy of American Poets. Both organizations are well-respected poetry institutions with
websites that host tens of thousands of poems spanning hundreds of years, and many of the
poems are tagged by style and subject on the websites.</p>
        <p>
          We scrape up to 400 poems from these two sources for 23 different poetic forms or styles,
which we also use as prompts for our ChatGPT-generated poetry corpus. Following prior work
[
          <xref ref-type="bibr" rid="ref31">33</xref>
          ], we select poems in the following categories: fixed forms, unfixed forms, and formal
elements (which consist of both meters and stanza forms) (see Table 1). In total, the sample
includes 3,874 poem/style pairs, or 3,692 unique poems.
        </p>
        <p>We manually remove prefatory text—such as dedications, dates, epigraphs, or other
contextual information—from human-authored poems with traditionally fixed lengths (e.g., sonnets,
villanelles, sestinas) if the poem is within 10 lines of the conventional length. We do not remove
prefatory material from other poems; however, based on our qualitative analysis and review,
we do not believe prefatory material is extensive in most other poems or significantly impacts
results.</p>
        <p>While the Poetry Foundation and the Academy of American Poets are among the largest
tagged poetry collections available, they are also defined by various kinds of bias that are
important to note. They both focus on English-language poetry, and the Academy of American
Poets focuses especially on American poetry. Not all of the poems in their collections are
tagged, and it is unclear why some poems are tagged and others are not. Neither site hosts a
representative collection of poems, in terms of poets’ gender, race, sexuality, and time period
(it is also difficult to know what a representative collection would be). They also over-represent
prestigious and canonical poetry, which may be of particular note in comparison with ChatGPT
since the model may be trained on (and perhaps even encouraged to produce) more popular,
commercial, and colloquial poetry.</p>
        <sec id="sec-3-1-1">
          <title>Subjects (Table 2)</title>
          <p>General: activities, arts &amp; sciences, living, love, mythology &amp; folklore, nature, religion, relationships, social commentaries</p>
          <p>Occasions: anniversary, birth, birthdays, engagement, farewells, funerals, recovery, graduation, gratitude, toasts, weddings</p>
          <p>Holidays: cinco de mayo, christmas, easter, father’s day, halloween, hanukkah, independence day, kwanzaa, memorial day, mother’s day, new year, passover, ramadan, thanksgiving, yom kippur</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Styles (Table 1)</title>
          <p>Fixed: limerick, pantoum, ghazal, ballad, villanelle, sonnet, sestina, haiku</p>
          <p>Unfixed: epic, monologue, ars poetica, aubade, pastoral, ode, elegy, visual poetry, ekphrasis, prose poem</p>
          <p>Formal Elements: meters: common measure, blank verse, free verse; stanza forms: quatrain, tercet, couplet</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Writing Prompt Templates</title>
          <p>General: Write a poem about the subject of X in the following form or style: Y.</p>
          <p>Figurative: Write a poem about the subject of X in the following form or style: Y. Do not use the actual word(s) X or Y in the poem.</p>
          <p>Specific: Write a poem about the subject of X in the following form or style: Y. Make the poem about something specific.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. ChatGPT-Generated Poetry Corpus</title>
        <p>
          To create our ChatGPT poetry corpus, we prompt GPT-3.5 Turbo and GPT-4 [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] to generate
poems in response to 3 different writing prompt templates, in 24 different styles/forms, and
about 40 different subjects. The styles and subjects are selected from the tagging schema on
the Poetry Foundation’s website. We use zero-shot prompts (i.e., prompts that do not provide
desired example outputs) because we are interested in testing the model’s “out-of-the-box”
capabilities in a mostly unmediated form.
        </p>
        <p>We select styles and subjects from the Poetry Foundation because they offer an extensive
and diverse poetic taxonomy that is developed by an authoritative external source and that is
reflective of one of the largest existing collections of human poetry. For our “subjects,” we select
the 40 broadest level “topics” from the Poetry Foundation’s tagging schema, which include the
subcategories “subjects,” “occasions,” and “holidays” (see Table 2). For our “styles,” we select
the 23 styles and poetic forms described in Section 3.1. We add the style of “a poem” because
we are interested in the models’ responses to the generic idea of a poem without a specified
form. These combinations result in 2,880 generated poems per model, with 120 poems per style
(per model) and 72 poems per subject (per model) (see Table 1).</p>
        <p>
          We model the construction of our 3 writing prompt templates on popular approaches
demonstrated on social media, in journalistic articles, and by LLM companies [14, 12,
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]:
1. General: Write a poem about the subject of X in the following form or style: Y.
2. Figurative: Write a poem about the subject of X in the following form or style: Y. Do
not use the actual word(s) X or Y in the poem.
3. Specific: Write a poem about the subject of X in the following form or style: Y. Make
the poem about something specific.
        </p>
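The prompt construction above amounts to a cross product of templates, styles, and subjects. The following is an illustrative sketch, not the study's actual generation code; the subject and style lists here are truncated stand-ins for the full sets of 40 subjects and 24 styles:

```python
from itertools import product

# Truncated stand-in lists (the study uses 40 subjects and 24 styles/forms).
subjects = ["social commentaries", "memorial day", "nature"]
styles = ["limerick", "sonnet", "a poem"]

templates = {
    "general": "Write a poem about the subject of {X} in the following form or style: {Y}.",
    "figurative": ("Write a poem about the subject of {X} in the following form or style: {Y}. "
                   "Do not use the actual word(s) {X} or {Y} in the poem."),
    "specific": ("Write a poem about the subject of {X} in the following form or style: {Y}. "
                 "Make the poem about something specific."),
}

# Every template x subject x style combination yields one zero-shot prompt.
prompts = [
    {"template": name, "subject": x, "style": y, "prompt": t.format(X=x, Y=y)}
    for name, t in templates.items()
    for x, y in product(subjects, styles)
]
```

With the full lists, this grid yields 3 x 24 x 40 = 2,880 prompts per model, matching the corpus size reported above.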
        <p>We include our “figurative” and “specific” templates after observing the models’ tendency to
repeat the words in the prompts and to be vague. These templates push the model to create
more diverse outputs. We believe that prompting significantly impacts the kind of poetry that
the GPT models produce, and we reflect on this more in Section 5.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Poetic Length &amp; Structure</title>
        <p>We measure the number of lines and the number and kind of stanzas across all the poems by
parsing line breaks. We visualize these distributions as boxplots (Figure 2) and heatmaps
(Figure 3), revealing the most common lengths and shapes of the poems across styles and forms.
These results show that when we prompt the models to generate poems in forms with typically
fixed lengths—such as sonnets (14 lines)—they largely adhere to this convention, with notable
improvement in GPT-4. While GPT-3.5 and GPT-4 both generate sonnets with a median
length of 14 lines, Figures 2 and 3 show that there is much more variability in GPT-3.5. As
displayed in the boxplot, the upper 75% quartile extends to 32 lines, and the range extends to
55 lines. By contrast, the entire range of GPT-4 sonnets (minus outliers) falls at exactly 14 lines.
Line lengths for sestinas (typically 39 lines) and villanelles (typically 19 lines) follow a
similar pattern. The median lengths are appropriately 39 and 19 lines for both models, but GPT-4
demonstrates much more consistency, displaying a smaller interquartile range and spread of
outliers (the same consistency is also displayed in the heatmap in Figure 3).</p>
        <p>Interestingly, for these three forms, GPT-4 hews closer to “conventional” lengths than our
sample of poems from the Poetry Foundation and the Academy of American Poets. In a small
percentage of these human poems, we find that the longer lengths come from explicit or implicit
play with or defiance of the forms. For example, Bino A. Realuyo concludes his 15-line poem,
“Euler’s Equation,” with the line: “a rebellion, the fifteenth line of a sonnet.” More often, in these
longer poems, authors include a given form in multiples, such as Algernon Charles Swinburne’s
“double sestina” (12 stanzas of 12 lines each) in “The Complaint of Lisa.”</p>
        <p>An obvious aberration for the GPT models is their atypically long limerick style. Where
a traditional limerick is usually about 5 lines long, the median length for both GPT models
is 25 lines. Upon closer inspection, it is clear that both models frequently bundle multiple,
appropriately-lengthed limericks together. The heatmap in Figure 3 shows that the models
often produce several limericks in a row. This is also the case for GPT-3.5’s atypically long
sonnets, which are usually multiple sonnets packed into one. While these multiples resemble
the long poems that we observe in our human poetry sample, we think this tendency more
likely suggests that, in certain cases, the GPT models know how to produce a particular kind
of poem but don’t know when to stop.</p>
        <p>Aside from limericks, the GPT models can broadly produce poems of appropriately diverse
lengths for a range of fixed forms, but they demonstrate a penchant for producing poems of
an almost “default” size when left to their own devices. For both models, the median
length for a generic “poem” is 36 lines (see Figure 2), and the overall median length across all
styles is 32 lines.</p>
        <p>Another striking feature of the GPT poems is the dominance of 4-line stanzas, or quatrains.
We find that while just 16.7% of the human-authored stanzas are quatrains, a whopping 66.8%
of all GPT-3.5 stanzas and 59.6% of all GPT-4 stanzas are quatrains (Table 3). The heatmaps
in Figure 4 visually demonstrate how common quatrains are across the GPT-generated poems,
showing clear line breaks in regular 4-line intervals, with no such regularity evident in the
human poems.</p>
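The length and stanza measurements above reduce to parsing blank-line-separated stanzas. Here is a minimal sketch of that parsing (an illustration, not the project's released analysis code):

```python
def poem_shape(text: str) -> dict:
    """Count lines and stanza shapes by parsing line breaks.

    Stanzas are assumed to be separated by blank lines, a simplification
    of the paper's line-break parsing.
    """
    stanzas = [s for s in text.strip().split("\n\n") if s.strip()]
    stanza_lengths = [len([line for line in s.splitlines() if line.strip()])
                      for s in stanzas]
    n_quatrains = sum(1 for n in stanza_lengths if n == 4)
    return {
        "n_lines": sum(stanza_lengths),
        "n_stanzas": len(stanzas),
        "stanza_lengths": stanza_lengths,
        # Share of stanzas that are quatrains, as a percentage.
        "pct_quatrains": 100.0 * n_quatrains / len(stanza_lengths) if stanza_lengths else 0.0,
    }
```

Applied across a corpus, the `pct_quatrains` figure is the kind of statistic behind the quatrain percentages reported above.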
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Collective Perspective</title>
        <p>We measure the normalized frequency of pronouns (Table 4) in each corpus, expressed per 100
words. We find that poems produced by GPT-3.5 and GPT-4 tend to use more first-person plural
pronouns (“we,” “us,” “our”) and fewer first-person singular pronouns (“i,” “me,” “myself”) than
poems written by humans (see Figure 5). For example, GPT-4 produced the following limerick
about Memorial Day in response to our figurative prompt (which specifies not to include the
style or subject words in the poem):</p>
        <p>In May we stand strong, hearts ablaze,
For those who’ve seen war’s smoky haze.</p>
        <p>We honor the brave,
Who life for us gave,</p>
        <p>In silence, we give them our praise.</p>
        <p>This limerick continues on for 20 more lines in 5-line, rhyming (AABBA) stanzas, consistent
with the results presented in Section 4.1.</p>
        <p>Because there are a large number of “holiday” and “occasion” subject prompts like
“Memorial Day,” which perhaps encourage meditation on collective experiences, we also show
normalized frequency for the GPT-generated poems with these subjects removed (see the dotted
lines in Figure 5). Without these subjects, the normalized frequency for the first-person plural
decreases slightly, and it increases slightly for the third-person. But the curious dominance
of the first-person plural is still present. We think this pattern may reflect the models’
preprogrammed attitudes toward inclusivity, as well as their obvious lack of first-person singular
experiences, but more work is needed to explore this trend further.</p>
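The normalized pronoun frequency used above can be sketched as follows; the tokenizer and pronoun sets here are simplifications for illustration, not the study's exact choices:

```python
import re
from collections import Counter

FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_rates(text: str) -> dict:
    """Pronoun frequency normalized per 100 words of the text."""
    words = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
    counts = Counter(words)
    total = len(words)

    def per_100(group):
        return 100.0 * sum(counts[w] for w in group) / total if total else 0.0

    return {"first_singular": per_100(FIRST_SINGULAR),
            "first_plural": per_100(FIRST_PLURAL)}
```

Averaging these rates over each corpus gives per-corpus comparisons of the kind plotted in Figure 5.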
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Most Distinctive Words</title>
        <p>
          We also analyze the most distinctive opening words and overall words across the poems using
Monroe, Colaresi, and Quinn’s [23] “fightin’ words” algorithm, which uses weighted log-odds
ratios with an informative Dirichlet prior. This method is designed to robustly compare word
usage across unevenly distributed text corpora. We specifically use an implementation by
Hessel [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]2 and restrict the vocabulary to words that appear in a minimum of 10 poems. We
remove stopwords for our overall word analysis but not for our first word analysis. In Figures
6 and 7, we display words with the highest Z-scores for each category, representing the most
distinctive words.
        </p>
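The weighted log-odds statistic with an informative Dirichlet prior can be sketched as below. This is a compact reimplementation for illustration, not the Hessel implementation used in the study, and the `prior_scale` value is an arbitrary assumption:

```python
import math
from collections import Counter

def fightin_words(corpus_a, corpus_b, prior_scale=10.0):
    """Weighted log-odds ratios with an informative Dirichlet prior
    (after Monroe, Colaresi, and Quinn), given two token lists.

    Returns a word-to-z-score dict; higher z means more distinctive
    of corpus_a, lower z more distinctive of corpus_b.
    """
    ca, cb = Counter(corpus_a), Counter(corpus_b)
    prior = ca + cb  # informative prior from the combined corpora
    total_prior = sum(prior.values())
    na, nb = sum(ca.values()), sum(cb.values())
    a0 = prior_scale  # total prior mass (an assumption; tune per corpus size)
    z = {}
    for w in prior:
        aw = a0 * prior[w] / total_prior
        ya, yb = ca[w], cb[w]
        # Log-odds of w in each corpus, smoothed by the prior.
        delta = (math.log((ya + aw) / (na + a0 - ya - aw))
                 - math.log((yb + aw) / (nb + a0 - yb - aw)))
        var = 1.0 / (ya + aw) + 1.0 / (yb + aw)  # approximate variance
        z[w] = delta / math.sqrt(var)
    return z
```

Ranking words by z-score on the GPT-generated versus human corpora surfaces distinctive vocabulary of the kind shown in Figures 6 and 7.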
        <p>The most distinctive opening word in both the GPT-3.5- and GPT-4 generated poems is “In”
(see Figure 6). This preposition is included across a wide range of poems and contexts, such as:
In autumn’s blaze of golden hue... (GPT-3.5 — pantoum, Thanksgiving)
In the darkest days, a flicker of light...(GPT-3.5 — ars poetica, Hanukkah)
In the girth of world-kaleidoscope, we are birthed into living, (GPT-4 — free verse,
living)
The next most distinctive first word in GPT-4 poems is “Upon,” which also seems to be a
frequent way for the model to initiate iambic meter:
Upon a stage where shadows nightly reign... (GPT-4 — sonnet, Halloween)
Upon this day, we sing the laborer’s song,... (GPT-4 — sonnet, Labor Day)</p>
        <p>Upon the chill of winter’s breath descends,... (GPT-4 — blank verse, Hanukkah)
The word “upon” is an iamb (the basic unit of iambic meter), meaning it consists of an
unstressed syllable followed by a stressed syllable. This is also the case for other distinctive
GPT-4 first words, such as “beneath,” “behold,” and “within.” First words in poems from the
Poetry Foundation and Academy of American Poets show no such distinctive patterns, mostly
consisting of articles and pronouns.</p>
        <p>Overall, the distinguishing vocabulary for the GPT models consists of words associated with
love (“heart,” “love,” “souls”), words that rhyme (“grace,” “embrace”), and words that are
acoustic (“echo,” “whisper”). For GPT-3.5, words like “embrace,” “grace,” “dance,” and “dreams” are
touchstones. At least one of these words shows up in 87% of the GPT-3.5 poems. For GPT-4,
either “echo” or “whisper” shows up in 75% of the poems. For example, both words appear in
this dramatic monologue about the arts &amp; sciences:</p>
        <p>Look upon me, ageless I stand, the crossing of arts and sciences,
In the echoing hallways of knowledge, beneath glimmering frescoed edifices.</p>
        <p>I am the whisperer in marbled alcoves, the scribe of thinkers’ existence,
Caught in endless dialogue, between creativity and discipline’s persistence.</p>
        <p>-GPT-4 (dramatic monologue, arts &amp; sciences)</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Prosody Analysis (Rhyme &amp; Meter)</title>
        <p>“Prosody” refers to patterns of sound in poetry, encompassing rhyme and meter. Analyzing
prosody across a large corpus poses challenges even in human-authored poems because it relies
on the pronunciation of particular words in relation to each other. The text of a poem does not
provide direct access to its prosody because the same word may have different pronunciations
in different forms of English, and even with the same pronunciation a word might be stressed or
unstressed depending on its context. For example, in Alfred, Lord Tennyson’s “Ulysses,”
the word “I” is unstressed at the beginning of a line and stressed toward the end: “I cannot rest
from travel: I will drink.” Analyzing prosody across GPT-generated poems poses additional
challenges because unlike human-authored poems, which are often either clearly free verse or
clearly aimed at embodying a particular metrical pattern, the meter of GPT outputs can be less
precise and harder to define with a single metrical label.</p>
        <p>To measure prosody, we thus conduct both a quantitative and qualitative analysis. For our
manual analysis, we take a random sample of poems in each form and, drawing on our domain
expertise, hand-annotate various prosodic elements where they are discernible, including
dominant meter, line-length (in terms of poetic feet—tetrameter, pentameter, etc.), rhyme scheme,
and stanza patterns. We evaluate 144 poems produced by GPT-3.5 (6 in each form) and 144
poems produced by GPT-4 (6 in each form) for just over 5% of the GPT-generated corpus. We
also analyze 138 poems from the human-authored corpus (6 poems in 23 forms), making up
just over 3.7% of the human-authored corpus.</p>
        <p>Over 80% of the GPT-generated poems in our random sample contain patterns of end rhyme,
as compared with around 50% of the human-authored poems. Over 60% of the GPT-authored
poems had a dominant iambic meter, compared to just under 40% of poems from the human
corpus. When we break these results down between the GPT-3.5 and GPT-4 models, it appears
that the dominance of iambic meter is lessening somewhat in the newer model. Only around
53% of GPT-4-authored poems had a dominant iambic meter compared to almost 74% of
GPT-3.5-authored poems. Rather than indicating a shift in the model’s default tendencies in relation
to poetry, we think this change may reflect GPT-4’s increased ability not to produce iambic
meter when it is prompted to produce poems in forms that do not traditionally include regular
meter—for example, haiku, prose poetry, or free verse.</p>
        <p>We also conduct a quantitative analysis of rhyme across all poems using Parrish’s [26]
Python implementation of the CMU Pronouncing Dictionary.3 Using this dictionary, we
measure how many lines end with AA, ABAB, ABBA, or ABCB rhymes. Around 90% of the poems
generated by both the GPT-3.5 and GPT-4 models contained at least one end rhyme, compared
to around 65% of our human-authored corpus. Moreover, for certain forms like sonnets and
ballads, the models always produced poems with at least one end rhyme. This was also the
case for unfixed forms like aubade or pastoral, which do not typically require specific patterns
of rhyme. The percentage of rhyming lines for all styles and sources is displayed in Figure 8.</p>
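The end-rhyme check can be illustrated as follows. The study uses Parrish's `pronouncing` library and the CMU Pronouncing Dictionary; to stay self-contained, this sketch substitutes a tiny hand-coded rhyming-part table, so it only recognizes the listed words:

```python
# Tiny stand-in for the CMU Pronouncing Dictionary: each word maps to its
# "rhyming part" (final stressed vowel through the end of the word).
# For illustration only; the real analysis looks words up via `pronouncing`.
RHYME_PART = {
    "ablaze": "EY1 Z", "haze": "EY1 Z", "praise": "EY1 Z",
    "brave": "EY1 V", "gave": "EY1 V",
    "light": "AY1 T", "night": "AY1 T",
}

def end_words(poem: str):
    """Last word of each non-empty line, lowercased, punctuation stripped."""
    return [line.split()[-1].strip(".,;:!?\"'").lower()
            for line in poem.splitlines() if line.strip()]

def has_end_rhyme(poem: str) -> bool:
    """True if any AA couplet, or ABAB/ABBA/ABCB pattern in a 4-line window,
    appears among the poem's end words."""
    parts = [RHYME_PART.get(w) for w in end_words(poem)]

    def rhymes(i, j):
        return parts[i] is not None and parts[i] == parts[j]

    for i in range(len(parts) - 1):   # AA: adjacent lines rhyme
        if rhymes(i, i + 1):
            return True
    for i in range(len(parts) - 3):   # 4-line windows
        # ABAB and ABCB share the (i, i+2) / (i+1, i+3) checks; ABBA pairs (i, i+3).
        if rhymes(i, i + 2) or rhymes(i + 1, i + 3) or rhymes(i, i + 3):
            return True
    return False
```

Counting poems for which this check succeeds, per style and per corpus, yields rhyme percentages of the kind plotted in Figure 8.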
        <p>Taken together, our analyses suggest that there is a sort of default poetic mode in GPT
models, which favors quatrains, iambic meter, and end rhyme. The models can be prompted to
produce writing in other styles, but sometimes the persistent iambic/quatrain/end rhyme style
still breaks through. For example, although prompted to produce a social commentary-related
limerick, which would typically be a five-line stanza with anapestic meter and AABBA rhyme,
the GPT-4 model produced the following five quatrains with iambic meter and AABB rhyme:</p>
        <p>A world viewed through a smartphone’s lens,
Each day a stream of trends begins,
Tales of fortune, tales of woes,
Each one judged as the wind blows.</p>
        <p>In the glow of public opinions’ glare,
Endless chatter fills the air.
Russian bots to kitty memes,
Nothing’s truly as it seems.</p>
        <p>Their thoughts they tweet, in speech so free,
Behind the screens, who can they be?
Worldly wisdom, or just noise?
Undiscovered truth, or toys?</p>
        <p>As they debate who’s wrong or right,
Their fingers dance in virtual fight.
From east to west, the judgments fly,
Sometimes truth, oftentimes lie.</p>
        <p>We see the world in constant chime,
A universe of views online,
In every tweet and post we glean,
A reflection of a world unseen.</p>
        <p>While this poem is a funny and topical take on a social commentary (“Russian bots to kitty
memes / Nothing’s truly as it seems”), it also highlights some of the model’s limitations. Even
when instructed to produce more specific or varied forms, the model’s default poetic mode—
quatrains, iambic meter, and end rhyme—often resurfaces, favoring structured patterns over
more diverse styles.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>
        By showing that GPT-generated poetry contains clear stylistic quirks and characteristics, we
build upon emerging research on the style of LLM-generated texts. For example, after
analyzing millions of biomedical article abstracts, Kobak, González-Márquez, Horvát, and Lause
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] show that “hundreds of words have abruptly increased their frequency after ChatGPT
became available.” They reveal particular spikes in the use of “style-affecting verbs and
adjectives that ChatGPT-like LLMs prefer,” such as “delve,” “significant,” and “crucial.” Although
Kobak, González-Márquez, Horvát, and Lause [16] are primarily interested in LLMs’ impact
on academic research, their findings contribute to growing knowledge about LLM style in
specific genres. Other related research in this area has focused on analyzing bias and stereotypes
present in LLM-generated texts [4]. We believe there are exciting opportunities for digital
humanities scholars and language experts to study more of the artistic dimensions of
LLM-generated texts.
      </p>
      <p>The results of our poetry prompting experiments highlight both advancements and notable
limitations with GPT-generated poetry. The models’ ability to produce poems of appropriate
lengths for a wide variety of forms and styles—without any fine-tuning—marks a significant
development in automatic poetry generation, especially since the models manage to do so while
incorporating rhyme and meter and maintaining general clarity. Yet, overall, the models also
exhibit far less variation, diversity, and creativity than the human-authored poems. However,
we want to flag that prompting—what the user asks the model to generate and how that ask
is constructed—plays a major role in shaping the poetry that the models produce. In related
experiments, when we prompted the models with specific author names, our results seemed to
shift and become more complex. In this specific study, our goal was not to produce the most
creative and interesting poetry possible, but rather to understand the broad contours of the
models and their outputs. If we wanted to produce more interesting poetry, we would likely
use different prompts.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>We prompt the GPT-3.5 and GPT-4 models to generate English-language poems in 24 different
poetic forms or styles, about 40 different subjects, and in response to 3 different templates. We
compare these GPT-generated poems to a sample of poems from the Poetry Foundation and
the Academy of American Poets, showing that the GPT models are much more formulaic and
constrained than the human-authored poetry. We argue that the GPT models have a “default”
poetic mode, characterized by quatrains with rhymed lines in iambic meter; first-person plural
perspectives; and the repetition of words like “heart,” “embrace,” “echoes,” and “whispers.” This
default mode sometimes breaks through even when otherwise prompted. We share the code
that we used to conduct this analysis, and we share all the public domain human-authored
poems and form/style annotations from prior work. We plan to share our ChatGPT poetry
corpus at a later stage of the project. In future work, we plan to explore a wider range of
prompts (potentially including author names) and models, and to study the poems more closely
with traditional literary studies approaches.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
<p>We would like to thank Zoe LeBlanc, John Ladd, Matt Lavin, and Gabi Kirilloff for early
feedback on this project. This research was partly supported by the NEH-funded “AI for
Humanists” project and by an “AI, Creativity, and Humanities” collaboration grant from the Simpson
Center for the Humanities at the University of Washington.
[31] C. Veltman. “AI is contentious among authors. So why are some feeding it their own
writing?” In: NPR (2024). url: https://www.npr.org/2024/04/30/1246686825/authors-using-ai-artificial-intelligence-to-write
[32] G. D. Vynck. “AI learned from their work. Now they want compensation.” In: Washington
Post (2023). url: https://www.washingtonpost.com/technology/2023/07/16/ai-programs-training-lawsuits-fair-use/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] R. Agarwal and K. Kann. “Acrostic Poem Generation”. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Ed. by B. Webber, T. Cohn, Y. He, and Y. Liu. Online: Association for Computational Linguistics, 2020, pp. 1230-1240. doi: 10.18653/v1/2020.emnlp-main.94. url: https://aclanthology.org/2020.emnlp-main.94.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. Astigarraga, J. M. Martínez-Otzeta, I. Rodriguez, B. Sierra, and E. Lazkano. “Markov Text Generator for Basque Poetry”. In: Text, Speech, and Dialogue. Ed. by K. Ekštein and V. Matoušek. Cham: Springer International Publishing, 2017, pp. 228-236. doi: 10.1007/978-3-319-64206-2_26.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] T. Chakrabarty, V. Padmakumar, and H. He. “Help me write a Poem - Instruction Tuning as a Vehicle for Collaborative Poetry Writing”. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Ed. by Y. Goldberg, Z. Kozareva, and Y. Zhang. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, 2022, pp. 6848-6863. doi: 10.18653/v1/2022.emnlp-main.460. url: https://aclanthology.org/2022.emnlp-main.460.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. Cheng, E. Durmus, and D. Jurafsky. Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models. 2023. doi: 10.48550/arXiv.2305.18189. url: http://arxiv.org/abs/2305.18189.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] L. Clarke. ChatGPT Is Pretty Bad At Poetry, According To Poets. 2023. url: https://www.vice.com/en/article/7kx9d9/chatgpt-is-pretty-bad-at-poetry-according-to-poets.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] L. D'Souza and D. Mimno. “The Chatbot and the Canon: Poetry Memorization in LLMs”. In: Computational Humanities Research. 2023. url: https://ceur-ws.org/Vol-3558/paper5712.pdf.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Das and B. Gambäck. “Poetic Machine: Computational Creativity for Automatic Poetry Generation in Bengali”. In: 2014. url: https://www.semanticscholar.org/paper/Poetic-Machine%3A-Computational-Creativity-for-Poetry-Das-Gamb%C3%A4ck/40c52a0ad0322ee0e02105d578d561c35edbb5e2.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] P. Gervás. “An expert system for the composition of formal Spanish poetry”. In: Knowledge-Based Systems 14.3 (2001), pp. 181-188. doi: 10.1016/s0950-7051(01)00095-8. url: https://www.sciencedirect.com/science/article/pii/S0950705101000958.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] H. Gonçalo Oliveira. “Automatic generation of poetry: an overview”. In: (2009).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] H. Gonçalo Oliveira, A. Cardoso, and F. Pereira. “Tra-la-Lyrics: An approach to generate text based on rhythm”. In: 2007, pp. 47-55.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] J. Hessel. jmhessel/FightingWords. 2024. url: https://github.com/jmhessel/FightingWords.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] W. Hunter. What Poets Know That ChatGPT Doesn't. 2023. url: https://www.theatlantic.com/books/archive/2023/02/chatgpt-ai-technology-writing-poetry/673035/.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] H. Jhamtani, S. V. Mehta, J. Carbonell, and T. Berg-Kirkpatrick. “Learning Rhyming Constraints using Structured Adversaries”. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Ed. by K. Inui, J. Jiang, V. Ng, and X. Wan. Hong Kong, China: Association for Computational Linguistics, 2019, pp. 6025-6031. doi: 10.18653/v1/D19-1621. url: https://aclanthology.org/D19-1621.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Jim Al-Khalili [jimalkhalili]. I just asked ChatGPT to write me a poem on the two-slit experiment in quantum mechanics in the style of Robert Burns. My work on this earth is done. https://t.co/e4dTTodT62. Tweet. 2023. url: https://x.com/jimalkhalili/status/1621454981097209857.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] J. Joyce. “Poetry Generation and Analysis”. In: Advances in Computers. Ed. by M. Rubinoff and M. C. Yovits. Vol. 13. Elsevier, 1975, pp. 43-72. doi: 10.1016/s0065-2458(08)60655-5. url: https://www.sciencedirect.com/science/article/pii/S0065245808606555.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] D. Kobak, R. González-Márquez, E.-Á. Horvát, and J. Lause. Delving into ChatGPT usage in academic writing through excess vocabulary. 2024. doi: 10.48550/arXiv.2406.07016. url: http://arxiv.org/abs/2406.07016.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] C. S. Kulkarni. “Ethical Implications of Large Language Models in Content Generation”. In: Journal of Artificial Intelligence, Machine Learning and Data Science 1.8 (2022), pp. 62-67. doi: 10.51219/JAIMLD/chinmay-shripad-kulkarni/32. url: https://urfjournals.org/open-access/ethical-implications-of-large-language-models-in-content-generation.pdf.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] J. H. Lau, T. Cohn, T. Baldwin, J. Brooke, and A. Hammond. “Deep-speare: A joint neural model of poetic language, meter and rhyme”. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Ed. by I. Gurevych and Y. Miyao. Melbourne, Australia: Association for Computational Linguistics, 2018, pp. 1948-1958. doi: 10.18653/v1/P18-1181. url: https://aclanthology.org/P18-1181.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] H. Manurung. “Chart Generation of Rhythm Patterned Text”. In: First International Workshop on Literature in Cognition and Computers 1 (1999), pp. 15-19.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] H. Manurung, G. Ritchie, and H. Thompson. Towards A Computational Model Of Poetry Generation. Technical Report. The University of Edinburgh, 2000. url: https://era.ed.ac.uk/handle/1842/3460.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] F. Mélanie-Becquet, C. Plancq, C. Grunspan, M. Maignant, M. Raffard, M. Roussel, F. Ghedini, and T. Poibeau. “Exploring Combinatorial Methods to Produce Sonnets: An Overview of the Oupoco Project”. In: Digital Humanities Quarterly 18.1 (2024). url: https://www.digitalhumanities.org/dhq/vol/18/1/000734/000734.html.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] Microsoft. How to Write Poetry Using Copilot | Microsoft Bing. 2023. url: https://www.microsoft.com/en-us/bing/do-more-with-ai/write-poetry-with-bing-compose.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] B. L. Monroe, M. P. Colaresi, and K. M. Quinn. “Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict”. In: Political Analysis 16.4 (2008), pp. 372-403. doi: 10.1093/pan/mpn018. url: https://www.cambridge.org/core/journals/political-analysis/article/fightin-words-lexical-feature-selection-and-evaluation-for-identifying-the-content-of-political-conflict/81B3703230D21620B81EB6E2266C7A66.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] OpenAI et al. GPT-4 Technical Report. 2024. doi: 10.48550/arXiv.2303.08774. url: http://arxiv.org/abs/2303.08774.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] A. Ormazabal, M. Artetxe, M. Agirrezabal, A. Soroa, and E. Agirre. PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation. 2022. url: http://arxiv.org/abs/2205.12206.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] A. Parrish. aparrish/pronouncingpy. 2024. url: https://github.com/aparrish/pronouncingpy.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] A. Parrish. Articulations. Using electricity. Denver, Colorado: Counterpath, 2018.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] H. Tan, M. Duan, D. Liu, Haojie Lu, Yuexin Mu, L. Zhou, A. Ren, Y. Tan, and K. Zhong. “Rethinking Literary Plagiarism in LLMs through the Lens of Copyright Laws”. In: 2024. url: https://openreview.net/forum?id=sWZy2Xirwt.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] M. Tarlinskaja. “Meter and Mode: English Iambic Pentameter, Hexameter, and Septameter and Their Period Variations”. In: Style 21.3 (1987), pp. 400-426. url: https://www.jstor.org/stable/42946214.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] M. Tsan, A. Hon, and A. Chun. “Automatic Haiku generation using VSM”. In: (2008).</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[33] M. Walsh, A. Preus, and M. Antoniak. “Sonnet or Not, Bot? Poetry Evaluation for Large Models and Datasets”. In: Findings of the Association for Computational Linguistics: EMNLP 2024. arXiv, 2024. doi: 10.48550/arXiv.2406.18906. url: http://arxiv.org/abs/2406.18906.</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>[34] Z. Wang, W. He, H. Wu, H. Wu, W. Li, H. Wang, and E. Chen. “Chinese Poetry Generation with Planning based Neural Network”. In: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Ed. by Y. Matsumoto and R. Prasad. Osaka, Japan: The COLING 2016 Organizing Committee, 2016, pp. 1051-1060. url: https://aclanthology.org/C16-1100.</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>[35] X. Yi, M. Sun, R. Li, and W. Li. “Automatic Poetry Generation with Mutual Reinforcement Learning”. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Ed. by E. Riloff, D. Chiang, J. Hockenmaier, and J. Tsujii. Brussels, Belgium: Association for Computational Linguistics, 2018, pp. 3143-3153. doi: 10.18653/v1/D18-1353. url: https://aclanthology.org/D18-1353.</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>[36] M. Zahn. “What is ChatGPT, the artificial intelligence text bot that went viral?” In: ABC News (2022). url: https://abcnews.go.com/Technology/chatgpt-artificial-intelligence-text-bot-viral/story?id=94857599.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>