<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>PunDerstand @ CLEF JOKER 2024: Who's Laughing Now? Humor Classification by Genre &amp; Technique</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ryan Rony Dsilva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nidhi Bhardwaj</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Purdue University</institution>
          ,
          <addr-line>West Lafayette, IN</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
<p>Humor is subject to individual interpretation, with each person perceiving it differently. Given that humor itself is subjective, this work explores the classification of humor by genre and technique through three approaches: manual guided annotation, multi-class classification using BERT-based models with and without sampling, and prompting with large language models. Our experiments revealed insights into the performance of different models and approaches on the humor classification task and open up further discussion on using guidelines from the annotation to aid large language models.</p>
      </abstract>
      <kwd-group>
<kwd>humor classification</kwd>
        <kwd>BERT</kwd>
        <kwd>large language models</kwd>
        <kwd>humor theory</kwd>
        <kwd>guided annotation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>
        For this task, the dataset contained manually annotated examples from the JOKER 2023 corpus [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] as
well as new data.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Guided Annotation</title>
        <p>To classify the type of humor present in sentences, we implemented a guided annotation process.
This method involved developing a comprehensive codebook that provided explicit guidelines for
categorizing sentences into predefined humor types. To minimize bias arising from preconceived
notions of humor, we assigned pseudo names to the categories. This anonymization aimed to ensure
that annotators based their classification decisions solely on the structural and contextual cues detailed
in the codebook, rather than on any prior subjective understanding of the humor types. Two annotators
were tasked with categorizing humor based on guidelines outlined in the provided codebook. Sentences
where both annotators agreed on the humor category were considered final and included for submission,
while those with disagreement were excluded. Upon reaching consensus, a total of 350 sentences were
submitted as the final annotated dataset. The codebook outlined specific characteristics and markers for
each humor type. The instructions were derived from the definitions of each category and the patterns
observed in the training dataset. The detailed codebook used in this process is included in the appendix.</p>
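<p>The agreement-based filtering described above can be sketched as follows (a minimal illustration in Python; the function and variable names are ours, not the authors'): only sentences on which both annotators agree are retained for submission.</p>

```python
# Sketch of the agreement-based filtering described above (names are ours,
# not from the paper): keep only sentences for which both annotators chose
# the same category, and report simple percent agreement.

def filter_by_agreement(annotations_a, annotations_b):
    """annotations_*: dict mapping sentence -> category label."""
    kept = {s: lab for s, lab in annotations_a.items()
            if annotations_b.get(s) == lab}
    agreement = len(kept) / len(annotations_a) if annotations_a else 0.0
    return kept, agreement

# Toy example with the paper's pseudo-anonymized category codes:
a = {"s1": "WS", "s2": "SC", "s3": "EX"}
b = {"s1": "WS", "s2": "IR", "s3": "EX"}
kept, agreement = filter_by_agreement(a, b)
# s2 is dropped because the annotators disagree; agreement is 2/3
```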
        <sec id="sec-2-1-1">
          <title>Construction of the Codebook</title>
          <p>
            To classify wit in sentences, we adopted definitions from [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], which describe wit as involving an
unexpected twist or element that generates humor. In the training dataset, sentences containing wit
often exhibited patterns like the use of words with multiple meanings or homophones. These linguistic
features were incorporated to facilitate easier identification of wit.
          </p>
          <p>
            For categorizing sentences as incongruous or absurd, the training dataset revealed a common bipartite
structure: the first part typically posed a question, followed by an unexpected or unrealistic answer.
Absurd humor was detected by identifying nonsensical situations that elicited humor [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. Any humorous
characteristic that appeared illogical or unrealistic was classified under this category.
          </p>
          <p>
            Self-deprecation was identified by structural cues indicating that a sentence negatively addressed
oneself [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]. The humor in self-deprecating jokes arises from highlighting one’s weaknesses and flaws in
an embarrassing yet unexpected manner, as it is uncommon for individuals to discuss their shortcomings
openly [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ].
          </p>
          <p>
            To identify exaggeration, we focused on detecting hyperbolic terms [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] within sentences that
dramatically described situations as better or worse than they actually were [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ]. For sarcasm, annotators
identified elements of contempt, often indicated by negative polarity words used to criticize someone
[
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]. Sarcasm, a form of irony, employs implied meanings to mock or deride [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]. We sought sentences
with implied meanings that aimed to ridicule weaknesses or events negatively.
          </p>
          <p>
            Irony was characterized by having two elements: a literal meaning and an implied meaning, with the
two needing to differ to produce a humorous effect [
            <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
            ].
          </p>
          <p>
            Annotators followed the sequence defined in the codebook to ensure that categories such as sarcasm
[
            <xref ref-type="bibr" rid="ref16">16</xref>
            ] and exaggeration [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ], which are specific types of irony, were correctly classified only when their
unique elements were present. Irony served as an overarching category encompassing these specific
humor types, providing a structured framework for accurate annotation.
          </p>
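<p>The ordered decision procedure the codebook implies can be sketched as follows (a hypothetical illustration; the marker predicates are toy stand-ins, not the actual codebook rules): each category's markers are tested in sequence, so that specific types such as sarcasm and exaggeration are checked before the broader irony category.</p>

```python
# Hypothetical sketch of the codebook's ordered elimination procedure:
# try each category's markers in sequence and stop at the first match,
# so specific types (e.g. sarcasm, exaggeration) are tested before the
# overarching irony category. The predicates below are toy examples.

def classify(sentence, checkers):
    """checkers: ordered list of (label, predicate) pairs."""
    for label, predicate in checkers:
        if predicate(sentence):
            return label
    return None

checkers = [
    ("SC", lambda s: "yeah right" in s.lower()),   # toy sarcasm marker
    ("EX", lambda s: "million" in s.lower()),      # toy hyperbole marker
    ("IR", lambda s: True),                        # fallback: broader irony
]

label = classify("I waited a million years for this.", checkers)
# "EX" wins even though the irony fallback would also match
```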
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Multi-Class Classification with DeBERTa</title>
        <p>
          In our study, we employ the DeBERTa [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] model as the base model for our experiments. DeBERTa has
been recognized as one of the leading choices for encoder models due to its superior performance in
various natural language processing tasks. We fine-tune the DeBERTa model using our training dataset
and conduct two separate experimental runs. The first run involves using the dataset in its original
form, without any modifications to address class imbalances. In this approach, we aim to evaluate the
model’s performance on the raw, imbalanced data. In the second experimental run, we address the class
imbalance by implementing an under-sampling strategy. This method ensures that the representation
of each class is balanced, preventing any single class from having a disproportionately high number
of samples. Specifically, we cap the number of samples for the majority classes at 250. For the
fine-tuning process, we utilize the deberta-v3-large model. The fine-tuning parameters are meticulously
chosen to optimize performance. The learning rate is set to 2 × 10<sup>−5</sup>, with a training batch size of 8
and an evaluation batch size of 16. The model is trained for 5 epochs, and we apply a weight decay of
0.01 to regularize the training process.
        </p>
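<p>The under-sampling step and the reported hyperparameters can be sketched as follows (our reconstruction, not the authors' code; the checkpoint name and random seed are assumptions):</p>

```python
# Hedged sketch (ours, not the authors' code) of the under-sampling step:
# cap each class at 250 examples, as described in Section 2.2. The config
# dict mirrors the fine-tuning settings reported in the text.
import random

def undersample(examples, cap=250, seed=0):
    """examples: list of (text, label); cap majority classes at `cap`."""
    random.seed(seed)
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append((text, label))
    balanced = []
    for label, items in by_label.items():
        random.shuffle(items)       # pick a random subset of majority classes
        balanced.extend(items[:cap])
    return balanced

# Reported fine-tuning configuration (values from the paper's text; the
# checkpoint identifier is our assumption):
config = {
    "model": "microsoft/deberta-v3-large",
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 16,
    "num_train_epochs": 5,
    "weight_decay": 0.01,
}

data = ([("t%d" % i, "AID") for i in range(300)]
        + [("u%d" % i, "WS") for i in range(40)])
balanced = undersample(data)
# AID is capped to 250 samples; WS keeps all 40
```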
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Prompting with LLMs</title>
        <p>
          In our methodology involving large language models (LLMs), we utilized GPT-4o [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], the most recent
model developed by OpenAI. Our approach incorporated the few-shot prompting technique, which
involves providing the model with a limited number of examples to guide its responses. Specifically, we
included one example for each class, which served as a template to demonstrate the desired output format
and content. Detailed descriptions of these prompts can be found in the appendix. The methodology
was inspired by [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] where humor theories were embedded into the prompts. For reproducibility of
our results, we set the random seed to 2024 and configured the temperature parameter to 0 as outlined
in the OpenAI documentation. By setting the temperature to 0, we aimed to reduce the model’s output
variability, thereby enhancing consistency and repeatability in the generated responses.
        </p>
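<p>The few-shot set-up can be sketched as follows (an illustrative reconstruction, not the authors' code; the instruction wording is condensed from Appendix A, and only the two examples with recoverable labels are shown):</p>

```python
# Illustrative sketch (our reconstruction) of the few-shot prompt set-up
# for GPT-4o: a system instruction, one labeled example per class (two
# shown here, taken from Appendix A), and the sentence to classify.
import json

INSTRUCTION = (
    "You are an expert in linguistics and humor. Classify the text into one "
    "of the following types of humor: Irony (IR), Sarcasm (SC), Exaggeration "
    "(EX), Absurdity & Incongruity (AID), Self-Deprecating (SD), Wit (WS). "
    "Respond with valid JSON with only one key 'output'."
)

FEW_SHOT = [  # example sentences and labels from Appendix A
    ("I tried to learn how to make puns, but no pun in ten did.", "WS"),
    ("Did you hear about the pasta that got locked out of the house? Gnocci.",
     "AID"),
]

def build_messages(text):
    messages = [{"role": "system", "content": INSTRUCTION}]
    for example, label in FEW_SHOT:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant",
                         "content": json.dumps({"output": label})})
    messages.append({"role": "user", "content": text})
    return messages

# The actual call would then be along the lines of:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=build_messages(text),
#                                  temperature=0, seed=2024)

msgs = build_messages("Good day, this is your trashcan speaking.")
```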
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
<p>Precision, recall, accuracy, and F-score are reported; the metrics in the tables below are
computed for both the training and test datasets.</p>
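<p>The weighted-average metrics can be computed as follows (a stdlib reimplementation for illustration; the paper does not specify the evaluation tooling used):</p>

```python
# Minimal sketch of how the weighted-average metrics in Tables 1-2 can be
# computed: per-class precision/recall/F1, weighted by class support.
from collections import Counter

def weighted_prf(y_true, y_pred):
    labels = sorted(set(y_true))
    support = Counter(y_true)
    n = len(y_true)
    prec = rec = f1 = 0.0
    for lab in labels:
        tp = sum(t == p == lab for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == lab for p in y_pred)
        p_ = tp / pred_pos if pred_pos else 0.0
        r_ = tp / support[lab]
        f_ = 2 * p_ * r_ / (p_ + r_) if p_ + r_ else 0.0
        w = support[lab] / n          # weight = class support fraction
        prec += w * p_
        rec += w * r_
        f1 += w * f_
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    return prec, rec, f1, acc

y_true = ["WS", "WS", "SC", "AID"]
y_pred = ["WS", "SC", "SC", "AID"]
p, r, f, a = weighted_prf(y_true, y_pred)
# accuracy is 3/4 = 0.75
```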
      <p>Table 1 and Table 2 present the weighted average performance metrics for the four approaches on
both the training and test sets. On the training set, Guided Annotation shows moderate performance
with an accuracy of 0.7126 and a balanced F-score of 0.7148, evaluated on a smaller subset (support of 87).
DeBERTa exhibits the highest performance across all metrics, with an accuracy of 0.7983 and an F-score
of 0.7939, indicating strong overall performance. DeBERTa with under-sampling has slightly lower accuracy
(0.7854) and F-score (0.7906) but maintains high precision (0.8124). GPT-4o lags significantly, with
the lowest accuracy (0.4496) and F-score (0.4563), suggesting a need to revise the methodology
used with LLMs. On the test set, DeBERTa’s performance decreases compared to the training set, with an
accuracy of 0.6870 and an F-score of 0.6731, indicating some loss in generalization. DeBERTa with
under-sampling also shows a decrease in performance, with an accuracy of 0.6787 and an F-score of 0.6768,
but it maintains higher precision than recall, suggesting it still identifies relevant instances well.
GPT-4o again shows the lowest performance with an accuracy of 0.4668 and an F-score of 0.4733, consistent
with its training performance.</p>
      <p>Table 3 and Table 4 provide class-wise performance metrics for the approaches on both the training
and test sets. Guided Annotation shows varying performance across classes, with high F-scores in SC
(0.8718 on training, 0.9000 on test) and AID (0.8000 on training, 0.8333 on test), but very low performance
in EX (0.1538 on training, 0.2222 on test) and WS (0.5000 on training, 0.4000 on test). DeBERTa performs
consistently well across most classes, especially in AID (0.9671 on training, 0.8889 on test) and SD
(0.8996 on training, 0.7736 on test), but struggles in EX (0.3988 on training, 0.2614 on test) and WS
(0.5201 on training, 0.4762 on test). DeBERTa with under-sampling also shows strong performance, particularly in
AID (0.8974 on training, 0.8795 on test) and SD (0.8583 on training, 0.7773 on test), with improved
performance in EX (0.5745 on training, 0.4240 on test) and WS (0.6021 on training, 0.5000 on test)
compared to its non-sampled counterpart, indicating better generalization.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>The task of humor classification, particularly identifying specific types of humor, remains a complex
challenge due to the subjective nature of humor perception. Our study presented a comprehensive
approach involving guided annotation, fine-tuning the DeBERTa model, and using prompting with
GPT-4o. The results highlight the effectiveness of DeBERTa in both original and sampled forms,
showcasing its strong performance across various humor types. However, GPT-4o demonstrated
significant limitations, suggesting that current LLMs may require further refinement or alternative
methodologies to handle the nuances of humor classification effectively. Future research should focus
on integrating the guided annotation approach directly into the prompting process for large language
models (LLMs). By embedding detailed codebook guidelines and structural cues within the prompts, we
can provide LLMs with more context and specificity, potentially improving their performance in humor
classification tasks.</p>
    </sec>
    <sec id="sec-5">
      <title>A. Prompts</title>
<p>### Instruction ###
You are an expert in linguistics and humor. Classify the text into one of the
appropriate types of humor from the following: Irony (IR), Sarcasm (SC),
Exaggeration (EX), Absurdity &amp; Incongruity (AID), Self-Deprecating (SD), Wit (WS).
You must respond with valid JSON with only one key ‘output‘, containing the correct
classification of the sentence.
### Text ###
I tried to learn how to make puns, but no pun in ten did.
### Humor Type ###
{ "output": "WS" }
### Text ###
Did you hear about the pasta that got locked out of the house? Gnocci.
### Humor Type ###
{ "output": "AID" }
### Text ###
Amazing how fast this team can go winning from 13 straight to losing three in a row. Lol.
Horrible managing tonight. I really hope this Boone experiment is over soon.
### Text ###
Ohio news station reminds viewers what day it is during coronavirus lockdown.
### Humor Type ###
{
### Text ###
Good day, this is your trashcan speaking.
### Humor Type ###
{</p>
    </sec>
    <sec id="sec-6">
      <title>B. Guided Annotation Codebook</title>
<p>Please follow the instructions below to annotate the given sentences into the most suitable
category. Follow the order in which each category is described and move on to the next category
only if the previous category is eliminated.</p>
      <p>1. Identify words with multiple meanings or homophones (similar-sounding words), supported by
contextual clues within the sentence.
2. Note any unexpected elements or sudden changes in the sentence.
1. Look for sentences structured as a question followed by an answer. Or,
2. There exist words in the second part of the text which you might not expect on the basis of the
first part of the sentence.
3. Identify phonetic incongruities in the second part of the text.
4. Detect illogical situations or events that are unrealistic or nonsensical.
1. Look for events which might be embarrassing. Or,
2. Look for human flaws or weaknesses described and,
3. Look for specific sentence structures indicating self-reference, such as:
a) Interjection followed by ’I’, ’we’, or ’you’.
b) Conjunction followed by ’I’, ’we’, or ’you’.
c) Question followed by ’I’ or ’we’.
d) ’I’ or ’we’ followed by a verb.
e) ’I’ or ’we’ followed by a negative modal verb.
f) Frequency of ’my’, ’me’, and ’I’.
g) Presence of negative polarity.</p>
      <sec id="sec-6-1">
        <title>Category 5</title>
        <p>1. Identify situations or events described in a manner better or worse than normal. And,
2. Assess the description of events and their impacts for overly dramatic elements.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Category 6</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W.</given-names>
            <surname>Ruch</surname>
          </string-name>
          , Psychology of humor, in: V.
          <string-name>
            <surname>Raskin</surname>
          </string-name>
          (Ed.),
          <source>The Primer of Humor Research, number 8 in Humor Research</source>
          , Mouton de Gruyter, Berlin,
          <year>2008</year>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>100</lpage>
. doi:10.1515/9783110198492.17.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ermakova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-G.</given-names>
            <surname>Bosser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. M.</given-names>
            <surname>Palma Preciado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sidorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jatowt</surname>
          </string-name>
          , Overview of JOKER - CLEF
          <article-title>-2024 track on Automatic Humor Analysis</article-title>
          , in: L.
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Mulhem</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Quénot</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Schwab</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Soulier</surname>
            ,
            <given-names>G. M.</given-names>
          </string-name>
          <string-name>
            <surname>Di Nunzio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Galuščáková</surname>
            ,
            <given-names>A. G.</given-names>
          </string-name>
          <string-name>
            <surname>Seco de Herrera</surname>
          </string-name>
          , G. Faggioli, N. Ferro (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF</source>
          <year>2024</year>
          ),
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ermakova</surname>
          </string-name>
          , A.-G. Bosser,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Thomas-Young</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. M.</given-names>
            <surname>Palma Preciado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sidorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jatowt</surname>
          </string-name>
          ,
          <article-title>CLEF 2024 JOKER lab: Automatic Humour Analysis</article-title>
          , in: N.
          <string-name>
            <surname>Goharian</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Tonellotto</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Lipani</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>McDonald</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Macdonald</surname>
          </string-name>
          , I. Ounis (Eds.),
          <source>Advances in Information Retrieval: 46th European Conference on Information Retrieval</source>
          ,
          <string-name>
            <surname>ECIR</surname>
          </string-name>
          <year>2024</year>
          , Glasgow, UK, March
          <volume>24</volume>
          -28, Proceedings,
          <string-name>
            <surname>Part</surname>
            <given-names>VI</given-names>
          </string-name>
          , volume
          <volume>14613</volume>
          of Lecture Notes in Computer Science, Springer, Cham,
          <year>2024</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>43</lpage>
. doi:10.1007/978-3-031-56072-9_5.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V. M.</given-names>
            <surname>Palma Preciado</surname>
          </string-name>
          , et al.,
          <article-title>Overview of the clef 2024 joker task 2: Humour classification according to genre and technique</article-title>
          , in: G.
          <string-name>
            <surname>Faggioli</surname>
          </string-name>
          , et al. (Eds.),
          <source>Working Notes of the Conference and Labs of the Evaluation Forum (CLEF</source>
          <year>2024</year>
          ), CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2024</year>
. URL: http://ceur-ws.org.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ermakova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-G.</given-names>
            <surname>Bosser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jatowt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>The joker corpus: English-french parallel data for multilingual wordplay recognition</article-title>
          ,
          <source>in: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23)</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2023</year>
          , pp.
          <fpage>2796</fpage>
          -
          <lpage>2806</lpage>
. doi:10.1145/3539618.3591885.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Graesser</surname>
          </string-name>
          ,
          <article-title>Wit and humor in discourse processing</article-title>
          ,
          <source>Discourse processes 11</source>
          (
          <year>1988</year>
          )
          <fpage>35</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>O.</given-names>
            <surname>Couder</surname>
          </string-name>
          ,
          <article-title>Problem solved? absurdist humour and incongruity-resolution</article-title>
          ,
          <source>Journal of Literary Semantics</source>
          <volume>48</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kamal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Abulaish</surname>
          </string-name>
          ,
          <article-title>Self-deprecating humor detection: A machine learning approach</article-title>
          , in: Computational Linguistics:
          <article-title>16th International Conference of the Pacific Association for Computational Linguistics</article-title>
          ,
          <string-name>
            <surname>PACLING</surname>
          </string-name>
          <year>2019</year>
          , Hanoi, Vietnam,
          <source>October 11-13</source>
          ,
          <year>2019</year>
          ,
          <source>Revised Selected Papers 16</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>483</fpage>
          -
          <lpage>494</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Puhlik-Doris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Larsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gray</surname>
          </string-name>
          , K. Weir,
<article-title>Individual differences in uses of humor and their relation to psychological well-being: Development of the humor styles questionnaire</article-title>
          ,
          <source>Journal of research in personality 37</source>
          (
          <year>2003</year>
          )
          <fpage>48</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Troiano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Strapparava</surname>
          </string-name>
          , G. Özbal,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Tekiroğlu</surname>
          </string-name>
          ,
          <article-title>A computational exploration of exaggeration</article-title>
          ,
          <source>in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>3296</fpage>
          -
          <lpage>3304</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Patro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baruah</surname>
          </string-name>
          ,
          <article-title>A simple three-step approach for the automatic detection of exaggerated statements in health science news</article-title>
          , in: P. Merlo,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tiedemann</surname>
          </string-name>
          , R. Tsarfaty (Eds.),
          <source>Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics:</source>
          Main Volume,
          <article-title>Association for Computational Linguistics</article-title>
          , Online,
          <year>2021</year>
          , pp.
          <fpage>3293</fpage>
          -
          <lpage>3305</lpage>
. URL: https://aclanthology.org/2021.eacl-main.289. doi:10.18653/v1/2021.eacl-main.289.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Bharti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Babu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Jena</surname>
          </string-name>
          ,
          <article-title>Parsing-based sarcasm sentiment recognition in twitter data</article-title>
          ,
          <source>in: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining</source>
          <year>2015</year>
          ,
          <year>2015</year>
          , pp.
          <fpage>1373</fpage>
          -
          <lpage>1380</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>J. D. Campbell</surname>
            ,
            <given-names>A. N.</given-names>
          </string-name>
          <string-name>
            <surname>Katz</surname>
          </string-name>
          ,
          <article-title>Are there necessary conditions for inducing a sense of sarcastic irony?</article-title>
          ,
          <source>Discourse Processes</source>
          <volume>49</volume>
          (
          <year>2012</year>
          )
          <fpage>459</fpage>
          -
          <lpage>480</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Grice</surname>
          </string-name>
          ,
          <article-title>Logic and conversation</article-title>
          , in: Speech acts, Brill,
          <year>1975</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sperber</surname>
          </string-name>
          , D. Wilson,
          <article-title>Irony and the use-mention distinction</article-title>
          ,
          <source>Philosophy</source>
          <volume>3</volume>
          (
          <year>1981</year>
          )
          <fpage>143</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Colston</surname>
          </string-name>
          ,
          <article-title>Irony and sarcasm</article-title>
          ,
          <source>in: The Routledge handbook of language and humor</source>
          , Routledge,
          <year>2017</year>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>249</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sperber</surname>
          </string-name>
          ,
          <article-title>On verbal irony</article-title>
          ,
          <source>Lingua</source>
          <volume>87</volume>
          (
          <year>1992</year>
          )
          <fpage>53</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>DeBERTav3: Improving deBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing</article-title>
          ,
          <source>in: The Eleventh International Conference on Learning Representations</source>
          ,
          <year>2023</year>
          . URL: https://openreview.net/forum?id=sE7-XhLxHA.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Achiam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Adler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Akkaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. L.</given-names>
            <surname>Aleman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Almeida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Altenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Altman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Anadkat</surname>
          </string-name>
          , et al.,
          <article-title>GPT-4 technical report</article-title>
          ,
          <source>arXiv preprint arXiv:2303.08774</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Dsilva</surname>
          </string-name>
          ,
          <article-title>Augmenting Large Language Models with Humor Theory To Understand Puns</article-title>
          (
          <year>2024</year>
          ). URL: https://hammer.purdue.edu/articles/thesis/Augmenting_Large_Language_Models_with_Humor_Theory_To_Understand_Puns/25674792. doi:10.25394/PGS.25674792.v1.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          1.
          <article-title>Determine if the sentence conveys negative polarity, showing contempt or criticism</article-title>
          . And,
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          2.
          <article-title>Assess whether the sentence criticizes something or mocks a phenomenon or event</article-title>
          . And,
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          3.
          <article-title>Verify if the sentence's meaning differs from its literal interpretation</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          1.
          <article-title>Identify the literal meaning of the sentence</article-title>
          . And,
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          2.
          <article-title>Discern any implied meanings, ensuring they differ from the literal interpretation</article-title>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>