<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Sarcasm Detection in Dravidian Languages</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aioshi Chowdhury</string-name>
          <email>aioshichowdhury2004@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Soumyajit Paul</string-name>
          <email>soumyajitpaul2002@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Souvik Kundu</string-name>
          <email>souvikkundu1284@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anurag Kumar Thakur</string-name>
          <email>iamanurag0708@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anuska Sarkar</string-name>
          <email>anuskasarkar06@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anusmita Ray Chaudhuri</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anirban Ray</string-name>
          <email>anirbanmark1429@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Debasree Mitra</string-name>
          <email>debasree.mitra2005@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Engineering, Adamas University</institution>
          ,
          <addr-line>Kolkata</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Engineering, JIS College of Engineering</institution>
          ,
          <addr-line>Kalyani</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Developing a robust sarcasm detection system for low-resource Indian languages, mainly Tamil and Malayalam, is a major challenge in NLP. These Dravidian languages have distinctive linguistic characteristics, and very few annotated datasets are available. This makes sarcasm detection difficult, because it relies on hints that aren't always obvious and depends on a person's understanding of the situation or topic; hence, a model capable of capturing those aspects is required. To address this problem, a deep learning-based model is proposed in which convolution layers perform automatic feature extraction, max-pooling reduces dimensionality, and bidirectional GRUs capture long-range dependencies and contextual information in sentences. The model performs very well on both classes, scoring an F1 of 0.94 for non-sarcastic and 0.71 for sarcastic detection in Malayalam, and 0.94 for non-sarcastic and 0.81 for sarcastic detection in Tamil, indicating that it handled the complexities of sarcasm detection in these low-resource languages successfully. Our system ranked 10th for sarcasm detection in Tamil and 12th in Malayalam.</p>
      </abstract>
      <kwd-group>
        <kwd>NLP</kwd>
        <kwd>bidirectional GRUs</kwd>
        <kwd>deep learning</kwd>
        <kwd>linguistic characteristics</kwd>
        <kwd>Dravidian languages</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Survey</title>
      <p>
        Rajnish Pandey and Jyoti Prakash Singh [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] effectively described a hybrid BERT-LSTM model for sarcasm detection in code-mixed social media posts in both English and Hindi. Meanwhile, “Multi-modal sarcasm detection and humor classification in code-mixed conversations”, authored by Bedi, Manjot (2021) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], introduces a multi-modal approach that detects sarcasm using images, videos, and audio.
      </p>
      <p>
        Aggarwal, Akshita et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] focused on detecting sarcasm in Hindi-English code-mixed data using bilingual word embeddings.
      </p>
      <p>
        Rosid, Mochamad Alfan et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] introduced a model that combines convolutional and bidirectional GRU layers with a multi-head attention mechanism to detect sarcasm in Indonesian-English code-mixed text. Chanda, Supriya et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presented a method for detecting sarcasm in Tamil and Malayalam Dravidian code-mixed text that combines classical machine learning and deep learning techniques, addressing the challenges that the informal grammar and spelling of these languages pose for both Tamil and Malayalam.
      </p>
      <p>
        Bhaumik, Anik Basu, and Das, Mithun [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] used several transformer-based models to detect sarcasm in code-mixed Tamil and Malayalam text. Transformers enable the model to better handle the intricate sentence structures present in code-mixed languages. The paper showcases the emerging challenge of identifying sarcastic comments and posts in code-mixed Dravidian languages.
      </p>
      <p>
        Maity, Krishanu et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] introduced a multitasking framework that detects sarcasm, emotion, and sentiment in code-mixed memes simultaneously. The framework combines text and image data to improve detection accuracy on complex social media posts. To solve this task, they created a novel multi-modal meme dataset called MultiBully and presented a new architecture called CLIP-CentralNet, an attention-based multi-task framework for sentiment-, emotion-, and sarcasm-aided cyberbullying detection.
      </p>
      <p>
        Tejasvi, Koti, et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] applied traditional NLP techniques to detect sarcasm in Hindi-English code-mixed tweets. The paper discusses the difficulties of mixing multiple languages in one context and suggests a feature-based method to better detect sarcasm.
      </p>
      <p>
        Shah, Aditya, and Maurya, Chandresh Kumar [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] examined incongruity for detecting sarcasm in code-mixed text. The authors explain that the discrepancy between literal and intended meaning is a key indicator of sarcasm and demonstrate that it is effective in a code-mixed setting. Their model captures incongruity through FastText sub-word embeddings to detect sarcasm in text.
      </p>
      <p>
        Ratnavel, Rajalakshmi et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] suggested the use of transformer models to detect sarcasm in code-mixed Tamil data. Their model understands mixed-language sentences better than existing techniques. The paper tests four approaches, BERT, mBERT, XLM-RoBERTa, and 2-way-20-shot learning, to identify sarcasm. The 2-way-20-shot approach works better for Malayalam-English data, while on Tamil-English data it performs similarly to BERT.
      </p>
      <p>Bansal, Srijan, et al. [11] showed how strongly code-switching patterns affect the performance of NLP applications such as humor, sarcasm, and hate speech detection, and explained how identifying and exploiting these patterns can significantly improve model performance.</p>
      <p>N. Sripriya et al. [12] provided an in-depth exploration of the Dravidian languages, particularly Malayalam and Tamil. The shared task focuses on developing a dataset of code-mixed texts in Malayalam, English, and Tamil. Chakravarthi, Bharathi Raja [13], in turn, investigated the problem of detecting hope speech, which conveys positive and supportive sentiment.</p>
      <p>Chakravarthi, Bharathi Raja et al. [14] studied the typology of hate speech, identifying homophobia and transphobia in code-mixed text. The growing volume of offensive language directed at the LGBTQIA+ community, and the scarcity of methods for detecting it, motivated approaches and solutions to improve social life for these communities; as future work, the authors target optimizing the loss functions and building a multilingual homophobia and transphobia detection system for numerous languages. In this paper, we provide a novel rationale for the detection of sarcastic speech.</p>
      <p>Chakravarthi, Bharathi Raja, et al. [15] introduced their work on extracting humor in the Dravidian languages, mainly Tamil and Malayalam text. The paper covers preprocessing techniques and incorporates pre-trained models, in particular BERT and DistilBERT, alongside the conventional SVM with TF-IDF features.</p>
      <p>Chakravarthi, et al. [16] highlighted the linguistic complexities of these mixed-language texts and, through their shared task at the FIRE 2024 conference, discussed methodologies to improve sarcasm detection and thereby natural language understanding in Dravidian languages.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset Description</title>
      <p>The dataset from the DravidianCodeMix@FIRE-2024 shared task has been utilized for sarcasm detection, consisting of Tamil-English and Malayalam-English posts and comments. This dataset is code-mixed, meaning it blends words or phrases from Tamil and English, or Malayalam and English, within a single sentence or text. For prior context, Vijay, Deepanshu, et al. [17] introduced a dataset created specifically from Hindi-English code-mixed social media posts for detecting irony. They presented an annotated corpus of Hindi-English code-mixed text consisting of tweet IDs and the corresponding annotations, along with a supervised system for detecting irony in code-mixed text.</p>
      <p>Suhaimin, Mohd Suhairi Md, et al. [18] focused on the public security domain, constructing a dataset that includes English and Malay code-mixing elements.</p>
      <p>Swami, Sahil, et al. [19] created a collection of English-Hindi code-mixed tweets specifically for sarcasm detection. The paper highlights the different aspects of social media that make detecting sarcasm challenging in code-mixed language and in the public's informal writing style.</p>
      <p>Chakravarthi, Bharathi Raja, et al. [20] presented an important benchmark dataset for sarcasm detection in code-mixed Dravidian languages.</p>
      <p>Tables 1 and 2 summarize the key features of the datasets, presenting in-depth statistics for each of the three sets (Training, Development, and Test) in both languages. This breakdown offers a clear understanding of how the data is distributed and of the size of the datasets, giving insight into the structure of the code-mixed texts used for this task.</p>
      <p>[Tables 1 and 2: total number of sentences, total number of words, maximum sentence length, and non-sarcastic to sarcastic ratio for the Tamil and Malayalam Train, Dev, and Test splits.]</p>
      <p>Our data was processed and annotated at the sentence level, with each sentence labeled as either
sarcastic or non-sarcastic. To support model development and testing, the datasets for both Tamil and
Malayalam were divided into three distinct sets: the training set, the development set, and the test set.</p>
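      <p>As a hedged illustration of this sentence-level format, the sketch below builds toy (text, label) records and computes the non-sarcastic to sarcastic ratio, one of the per-split statistics summarized in Tables 1 and 2. The sentences and label spellings are invented for illustration, not taken from the shared-task files.</p>

```python
from collections import Counter

# Hypothetical sentence-level records in (text, label) form; the sentences and
# the label strings are illustrative, not copied from the shared-task data.
train = [
    ("intha padam semma", "Non-sarcastic"),
    ("oh great, innoru remake", "Sarcastic"),
    ("nalla story, nalla music", "Non-sarcastic"),
    ("climax vera level", "Non-sarcastic"),
]

def class_ratio(records):
    # Non-sarcastic to sarcastic ratio for one split.
    counts = Counter(label for _, label in records)
    return counts["Non-sarcastic"] / counts["Sarcastic"]

ratio = class_ratio(train)  # 3 non-sarcastic vs 1 sarcastic -> 3.0
```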
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>This end-to-end methodology provides a strong approach to building, training, and evaluating a model capable of detecting sarcasm in Malayalam and Tamil code-mixed and code-switched text, using a combination of convolutional and recurrent neural networks together with careful preprocessing and evaluation techniques.</p>
      <p>Text preprocessing is critical before feeding the data to the model, to ensure that the model can learn from the inputs at hand. The text is tokenized and thereby transformed into sequences of integers, with each integer indicating one word of the Malayalam or Tamil vocabulary. These sequences are then padded to a uniform input size, which the model requires for consistent processing. This preprocessing makes the model robust to variable sentence lengths, since all input arrives in the expected format, helping it detect sarcasm and generalize across multilingual text data.</p>
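      <p>A minimal pure-Python sketch of this tokenize-and-pad step follows; in practice Keras' Tokenizer and pad_sequences provide the same behavior. The example sentences and the maximum length of 6 are illustrative, not values used in the experiments.</p>

```python
def build_vocab(texts):
    # Assign each word an integer id, starting from 1 (0 is reserved for padding).
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab

def texts_to_padded(texts, vocab, maxlen):
    # Map words to ids, then pad (or truncate) every sequence to the same length.
    padded = []
    for text in texts:
        seq = [vocab.get(w, 0) for w in text.lower().split()][:maxlen]
        padded.append(seq + [0] * (maxlen - len(seq)))
    return padded

# Illustrative code-mixed sentences (not from the shared-task data).
texts = ["padam semma comedy", "enna oru twist da saami"]
vocab = build_vocab(texts)
X = texts_to_padded(texts, vocab, maxlen=6)
# X[0] -> [1, 2, 3, 0, 0, 0]: three word ids followed by padding.
```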
      <p>The design is tailored to the specific nature of multilingual text data: an embedding layer transforms integer sequences into dense vectors, and a Conv1D layer captures local patterns in the data. A bidirectional GRU layer captures dependencies from both directions, which helps with context understanding and with detecting the nuances of complicated sarcasm. These representations are further refined by additional LSTM layers; Dense layers with ReLU activation and Dropout are added to prevent overfitting, and the final Dense layer uses sigmoid activation for binary classification.
From Figure 1, it is observed that the process starts by loading the code-mixed Malayalam and Tamil text data. Next, the text is broken down into numbers (tokenized) and padded so that all inputs have the same length. In building the model, the numbers are turned into dense vectors, followed by layers that detect patterns and connections. Extra layers are added to prevent overfitting, and the final layer classifies the text into two categories. After this, the models are compared, and a report is created measuring how well they perform using metrics such as precision and F1 score.</p>
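      <p>The layer stack described above can be sketched in Keras as follows. The hyperparameters (vocabulary size, sequence length, filter and unit counts, dropout rate) are illustrative placeholders, since the exact values used in the experiments are not reported here; this is a sketch of the architecture, not the definitive implementation.</p>

```python
from tensorflow.keras import layers, models

# Hypothetical hyperparameters; the exact values used are not reported.
VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 60, 128

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),                               # padded integer sequences
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),                      # ids -> dense vectors
    layers.Conv1D(64, 5, padding="same", activation="relu"),      # local n-gram patterns
    layers.MaxPooling1D(2),                                       # reduce sequence length
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),  # context from both directions
    layers.LSTM(32),                                              # refine the sequence representation
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),                                          # guard against overfitting
    layers.Dense(1, activation="sigmoid"),                        # binary: sarcastic vs. non-sarcastic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```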
    </sec>
    <sec id="sec-5">
      <title>5. Result and Discussion</title>
      <p>An analysis of the model reveals a great deal about its performance for both Tamil and Malayalam. From the confusion matrix, it can be seen that the Malayalam model correctly classifies 2236 non-sarcastic instances and 323 sarcastic ones, but misclassifies 69 non-sarcastic instances as sarcastic and 198 sarcastic instances as non-sarcastic, as shown in Figure 2.</p>
      <p>On the other hand, the Tamil model correctly classified 4528 non-sarcastic and 1225 sarcastic instances,
while labeling 102 non-sarcastic examples as sarcastic and missing 481 sarcastic examples, as indicated
in Figure 3. These results are further depicted in the confusion matrices, where it can be visually
observed how well the model performed in each category.</p>
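      <p>The per-class F1 scores follow directly from these confusion-matrix counts. As a sanity check, the snippet below recomputes them from the numbers reported above, using the identity F1 = 2TP / (2TP + FP + FN):</p>

```python
def f1_from_counts(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN), the harmonic mean of precision and recall.
    return 2 * tp / (2 * tp + fp + fn)

# Malayalam: 2236 non-sarcastic and 323 sarcastic correct; 69 non-sarcastic
# flipped to sarcastic, 198 sarcastic flipped to non-sarcastic.
mal_sarc = f1_from_counts(tp=323, fp=69, fn=198)    # ~0.71
mal_non = f1_from_counts(tp=2236, fp=198, fn=69)    # ~0.94

# Tamil: 4528 non-sarcastic and 1225 sarcastic correct; 102 non-sarcastic
# flipped to sarcastic, 481 sarcastic missed.
tam_sarc = f1_from_counts(tp=1225, fp=102, fn=481)  # ~0.81
tam_non = f1_from_counts(tp=4528, fp=481, fn=102)   # ~0.94
```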
      <p>Comparing the results, both models achieve high accuracy in detecting non-sarcastic content, with an F1 of 0.94. The Tamil model detects sarcastic material better than the Malayalam model, with an F1 score of 0.81 compared to 0.71. Thus, although both models perform similarly in correctly classifying non-sarcasm, the Tamil model performs better at detecting sarcasm. This gap between the two sarcasm detection systems points to areas for further development, especially for the Malayalam model, which still needs improvement to increase its performance.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Scope</title>
      <p>The sarcasm detection model performs very well on both Tamil and Malayalam code-mixed texts, achieving a strong F1 score of 0.94 for detecting non-sarcastic content in both languages. However, there is a clear gap in sarcasm detection, as the Tamil model outperforms the Malayalam model, with an F1 score of 0.81 compared to 0.71. While the model handles non-sarcastic classification efficiently, the gap in sarcastic content detection suggests room for improvement, particularly for the Malayalam model.</p>
      <p>Looking ahead, future work could focus on developing a more robust fusion model that combines
the strengths of Transformers, RNNs, and GRUs. Such a hybrid approach may better capture the
nuances of sarcasm in code-mixed languages and improve overall detection accuracy. Incorporating
these advanced architectures could address the current limitations, enhancing performance for both
Tamil and Malayalam sarcasm detection tasks.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[11] S. Bansal, V. Garimella, A. Suhane, J. Patro, A. Mukherjee, Code-switching patterns can be an effective route to improve performance of downstream NLP applications: A case study of humour, sarcasm and hate speech detection, arXiv preprint arXiv:2005.02295 (2020).</p>
      <p>[12] N. Sripriya, T. Durairaj, K. Nandhini, B. Bharathi, K. K. Ponnusamy, C. Rajkumar, P. K. Kumaresan, R. Ponnusamy, C. Subalalitha, B. R. Chakravarthi, Findings of shared task on sarcasm identification in code-mixed Dravidian languages, FIRE 2023 16 (2023) 22.</p>
      <p>[13] B. R. Chakravarthi, Hope speech detection in YouTube comments, Social Network Analysis and Mining 12 (2022) 75.</p>
      <p>[14] B. R. Chakravarthi, A. Hande, R. Ponnusamy, P. K. Kumaresan, R. Priyadharshini, How can we detect homophobia and transphobia? Experiments in a multilingual code-mixed setting for social media governance, International Journal of Information Management Data Insights 2 (2022) 100119.</p>
      <p>[15] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. Chinnaudayar Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of Dravidian languages (Malayalam and Tamil) in DravidianCodeMix, in: Forum of Information Retrieval and Evaluation FIRE-2023, 2023.</p>
      <p>[16] B. R. Chakravarthi, S. N, B. B, N. K, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of sarcasm identification of Dravidian languages in DravidianCodeMix@FIRE2024, in: Forum of Information Retrieval and Evaluation FIRE-2024, DAIICT, Gandhinagar, 2024.</p>
      <p>[17] D. Vijay, A. Bohra, V. Singh, S. S. Akhtar, M. Shrivastava, A dataset for detecting irony in Hindi-English code-mixed social media text, EMSASW@ESWC 2111 (2018) 38-46.</p>
      <p>[18] M. S. M. Suhaimin, M. H. A. Hijazi, E. G. Moung, Annotated dataset for sentiment analysis and sarcasm detection: Bilingual code-mixed English-Malay social media data in the public security domain, Data in Brief 55 (2024) 110663.</p>
      <p>[19] S. Swami, A. Khandelwal, V. Singh, S. S. Akhtar, M. Shrivastava, A corpus of English-Hindi code-mixed tweets for sarcasm detection, arXiv preprint arXiv:1805.11869 (2018).</p>
      <p>[20] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. C. Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of Dravidian languages (Malayalam and Tamil) in DravidianCodeMix, in: Forum of Information Retrieval and Evaluation FIRE-2023, 2023.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Bert-lstm model for sarcasm detection in code-mixed social media post</article-title>
          ,
          <source>Journal of Intelligent Information Systems</source>
          <volume>60</volume>
          (
          <year>2023</year>
          )
          <fpage>235</fpage>
          -
          <lpage>254</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Akhtar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <article-title>Multi-modal sarcasm detection and humor classification in code-mixed conversations</article-title>
          ,
          <source>IEEE Transactions on Affective Computing</source>
          <volume>14</volume>
          (
          <year>2021</year>
          )
          <fpage>1363</fpage>
          -
          <lpage>1375</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aggarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wadhawan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Maurya</surname>
          </string-name>
          ,
          <article-title>"Did you really mean what you said?": Sarcasm detection in Hindi-English code-mixed data using bilingual word embeddings</article-title>
          ,
          <source>arXiv preprint arXiv:2010.00310</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Rosid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Siahaan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saikhu</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection in indonesian-english code-mixed text using multihead attention-based convolutional and bi-directional gru</article-title>
          , IEEE Access (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection in tamil and malayalam dravidian code-mixed text</article-title>
          .,
          <source>in: FIRE (Working Notes)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Bhaumik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection in dravidian code-mixed text using transformer-based models</article-title>
          .,
          <source>in: FIRE (Working Notes)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>249</fpage>
          -
          <lpage>258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Maity</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          ,
          <article-title>A multitask framework for sentiment, emotion and sarcasm aware cyberbullying detection from multi-modal code-mixed memes</article-title>
          ,
          <source>in: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1739</fpage>
          -
          <lpage>1749</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Tejasvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rishikesh</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection for hindi english code mixed twitter data</article-title>
          ,
          <source>International Journal for Research in Applied Science and Engineering Technology</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>159</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. K.</given-names>
            <surname>Maurya</surname>
          </string-name>
          ,
          <article-title>How effective is incongruity? Implications for code-mix sarcasm detection</article-title>
          ,
          <source>arXiv preprint arXiv:2202.02702</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Ratnavel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Joshua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Varsini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection in tamil code-mixed data using transformers</article-title>
          ,
          <source>in: International Conference on Speech and Language Technologies for Low-resource Languages</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>430</fpage>
          -
          <lpage>442</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>