<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Sarcasm Detection in Dravidian Languages Using Bi-directional LSTM</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sulaksha B K</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shruthi Priyaa G K</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amritha P</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>C. Jerin Mahibha</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Meenakshi Sundararajan Engineering College</institution>
          ,
          <addr-line>Chennai, Tamil Nadu</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Sarcasm is a form of speech in which one expresses a thing but intends the opposite, typically to express disdain, derision, or ridicule. Detecting sarcasm using machine learning is intricate because of its dependence on contextual nuances, the lack of tone, and its subtlety, which make it hard to interpret from text alone. This paper addresses the issue using a supervised learning model, a bidirectional Long Short-Term Memory (LSTM) network, applied to Tamil and Malayalam text data to classify sarcastic texts, and saves the test results in a Comma Separated Values (CSV) file. It uses tokenization and sequence padding for text preprocessing and is trained on a labeled dataset to capture the semantic characteristics and features of the given text. In the shared task "Sarcasm Identification in Dravidian Languages", the proposed best-performing Bi-directional LSTM model achieved third position in both the Tamil-English subtask (Macro-F1 score: 0.72) and the Malayalam-English subtask (Macro-F1 score: 0.74).</p>
      </abstract>
      <kwd-group>
        <kwd>Bi-directional LSTM</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Dravidian Languages</kwd>
        <kwd>Sequence Padding</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Sarcasm Detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The word sarcasm derives from the Greek verb "sarkazein," which means to speak bitterly. Sarcastic words are often used in a humorous way to mock people [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Sarcasm requires shared knowledge between speaker and audience; it is a profoundly contextual phenomenon. Most computational approaches to sarcasm detection, however, treat it as a purely linguistic matter, using information such as lexical cues and their corresponding sentiment as predictive features [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Detecting sarcastic comments on social media has received much attention in recent years, as such comments frequently use positive words to describe negative attributes or characteristics.
      </p>
      <p>
        Consider the sentence, "Wow, thanks a lot for arriving on time." Taken literally, it appears to be a sincere expression of gratitude. But if context shows that the individual is often late, the positive words "thanks a lot" signal that the speaker is being sarcastic. In the shared task "Sarcasm Identification in Dravidian Languages", the proposed best-performing Bi-directional LSTM model achieved third position in both the Tamil-English subtask (Macro-F1 score: 0.72) and the Malayalam-English subtask (Macro-F1 score: 0.74). The remainder of this paper is organized into sections and subsections as follows: Related Works discusses existing systems and previously proposed models; Dataset Description details the training, development, and test datasets for both Malayalam and Tamil; Proposed Methodology describes the methods and models used in this work; Results presents the prediction dataset generated by the developed model and the classification reports for both Dravidian languages; Future Enhancements addresses areas of improvement; and the paper concludes by discussing the challenges in the field, followed by the references. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        S. M. Sarsam, H. Al-Samarraie, A. I. Alzahrani, B. Wright [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed a model to detect sarcasm on Twitter using machine learning algorithms such as the Support Vector Machine (SVM). Their review revealed that SVM was the most commonly used adapted machine learning algorithm (AMLA) for sarcasm detection on Twitter. In addition, combining a Convolutional Neural Network (CNN) with SVM was found to offer high prediction accuracy. Mondher Bouazizi et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] came up with a pattern-based approach to detect sarcasm on Twitter. They divided words into two classes: a first one, referred to as "CI", containing words whose content is important, and a second one, referred to as "GFI", containing words whose grammatical function is more important. If a word belongs to the first category, it is lemmatized; otherwise, it is replaced by a designated expression. The approach reached an accuracy of 83.1% with a precision of 91.1%. Meriem et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] came up with a fuzzy approach to solve the task. This approach focused mainly on predicting the right label based on a measure known as the Sarcasm Score Measure, which quantifies the degree of sarcasm on which the prediction is based. The model was implemented on two datasets, SemEval2014 and the Bamman et al. dataset, attaining F1-scores of 75.9 and 74.8 percent, respectively. Both binary and multi-class classifications were used in this task. The work of Santiago Castro et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] explored the role of multimodality and conversational context in sarcasm detection and introduced a new resource to further enable research in this area. More specifically, the paper made the following contributions: 1. it curated a new dataset, MUStARD, for multimodal sarcasm research with high-quality annotations, including both multimodal and conversational context features; 2. it exemplified various scenarios where incongruity in sarcasm is evident across different modalities, stressing the role of multimodal approaches to this problem; 3. it introduced several baselines and showed that multimodal models are significantly more effective than their unimodal variants; and 4. it provided the preceding turns in the dialogue as context information. The advent of internet and communication technologies, particularly online social networks, has modernized how people interact and communicate digitally; people tend to express more on social media and in anonymous reviews than in real-life interactions. Amir et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed the use of IndicBERT for detecting sarcasm from social media text; the model effectively captured sarcasm cues and contextual information within the text. S. Amir, B.
C. Wallace, H. Lyu, P. C. M. J. Silv [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A pragmatic and intelligent model for sarcasm detection in social media text was proposed by Mayank Shrivastava and Shishir Kumar [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which is based on Google BERT (Bidirectional Encoder Representations from Transformers) and can handle the volume, velocity, and veracity of data. Madhumitha M et al. used different transformer models for detecting sarcasm from Tamil text (M. M, K. Akshatra M, T. J, C. Mahibha, D. Thenmozhi) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] formulated the task of sarcasm detection as a sequence classification problem by leveraging the natural shifts in various emotions over the course of a piece of text. The model proposed by D. Jain, A. Kumar, G. Garg [14] is a hybrid of a bidirectional long short-term memory network with a softmax attention layer and a Convolutional Neural Network for real-time sarcasm detection. The methodology used an ELMo-embedding-based Convolutional Neural Network model and a TF-IDF-based Gaussian Naive Bayes classifier. D. Krishnan, J. M. C, T. Durairaj [15] used the dataset provided by SemEval-2022 to discern and classify different types of irony within textual content. Kalaivani and Thenmozhi [16] performed sentiment analysis on the Dravidian-CodeMix-FIRE2021 dataset [17], where comments in three languages were trained: Tamil, Malayalam, and Kannada. They used the pre-defined BERT model to perform this task.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset Description</title>
      <p>This research integrates datasets of Tamil and Malayalam texts, meticulously designed to advance sarcasm analysis models. These datasets encompass a broad spectrum of text types, including film reviews, social media comments, and general online interactions, and are annotated to differentiate between sarcastic and non-sarcastic texts. Texts were systematically collected from Tamil and Malayalam film review platforms, social media networks, and fan forums, ensuring comprehensive representation across the various contexts in which sarcasm is conveyed. The model was trained with two datasets for each language:</p>
      <p>• Training Dataset
• Development Dataset</p>
      <p>[Table 1 shows example Malayalam instances with their labels, e.g., "Screenshot edukkan vannth njan", "Mollyhood is getting bigger and bigger", "Sha rukh Khan nte fan padam pole undallo", "Waiting from die hard rajuvetan fan..", "Raju ettan fansinte watsapp group undenkil pls add me..", each marked Sarcastic or Non-sarcastic. Table 2 shows example Tamil instances, e.g., "Apadilam nadakathu nadakavum koodathu.. Mass dialogue", "Avara Contol Pannunga Pls... Vera level expression... Thala...", "Thala Ajith sir....mass konjam kammiya irukuravanga like pannunga", "Sun pictures nalla panringa THALAIVAR DHARISANAM", "ASURAN trailer .... Kollla massu.... asuran dhanush", each marked Sarcastic or Non-sarcastic.]</p>
      <p>The datasets are categorized into two primary classes. Non-sarcastic texts communicate direct
sentiments or opinions without employing irony. For instance, these may include well-wishes for an
actor’s opportunity in a film or positive feedback on a trailer’s quality. In contrast, sarcastic texts
utilize irony to convey sentiments that often contradict their literal meaning. Such texts may mock the
exaggerated aspects of a film’s presentation or deride its quality despite seemingly favorable remarks.</p>
      <p>The datasets reveal that sarcastic remarks frequently use hyperbolic expressions and present stark
contrasts with literal statements, creating complex challenges for the analysis. Table 1 and Table 2 are
some of the example instances from the training and development dataset given for prediction of labels.
The Dataset provided for this task is referenced from multiple datasets [18] [19] [20] [21] [22]</p>
      <p>These annotated datasets are crucial for developing advanced machine learning models capable of effectively identifying and interpreting both sarcasm and nuanced sentiments in Tamil and Malayalam texts. The insights and tools derived from this research have substantial implications for automated sarcasm analysis applications, including social media monitoring, customer feedback evaluation, and film reviews.</p>
      <p>By addressing the intricacies of sarcasm in both languages, this study makes a significant contribution
to the broader field of natural language processing.</p>
      <p>The dataset is categorized into two primary labels.</p>
      <p>• Sarcastic
• Non-Sarcastic</p>
      <p>Id 30: Mammookka fans inu like adikkan ulla comment
Id 31: Now 5.2k dislikes Trailer varunnathine "Andha bayam irukkanam"
Id 32: Lucifer le item dance video song erakkan patvo... Illa le
Id 33: Pls support me pls My channel subscribe pls Pls Pls
Id 34: Prithviraj or Mammootty cheyyanda role aanu yenna
Id 35: Ippo penpidi kazhinju.! Kunju pidipikaan thudungayo? manasilayo yeda mone</p>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed Methodology</title>
      <p>In this section, the methodology used for the complex task of detecting sarcasm in Dravidian languages (Tamil and Malayalam) is explored. The aim is to dissect the detailed process, emphasizing its different phases, and to clarify how each step plays a significant role in achieving the main objective. The proposed strategy utilizes Natural Language Processing (NLP) techniques and machine learning models to address this linguistic challenge.</p>
      <sec id="sec-4-1">
        <title>4.1. Data Preparation</title>
        <p>The Dataset Preparation involves loading and combining the training and development datasets and extracting the texts and their labels, which are used for prediction.</p>
        <sec id="sec-4-1-1">
          <title>4.1.1. Loading and Combining the Dataset</title>
          <p>The training, development, and testing datasets are imported and loaded into the program. The training dataset is combined with the development dataset, with their labels merged, to create a larger dataset, which helps improve the model's ability to generalize.</p>
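          <p>A minimal sketch of this step, assuming pandas and hypothetical column names ("text", "label"); in practice the splits would come from the task's files via pd.read_csv:</p>

```python
import pandas as pd

# Hypothetical stand-ins for the loaded training and development splits.
train_df = pd.DataFrame({
    "text": ["great trailer", "wow, thanks a lot for arriving on time"],
    "label": ["Non-sarcastic", "Sarcastic"],
})
dev_df = pd.DataFrame({
    "text": ["nice movie"],
    "label": ["Non-sarcastic"],
})

# Combine the two splits into one larger dataset so the model can generalize better.
combined_df = pd.concat([train_df, dev_df], ignore_index=True)

# Extract the texts and their labels for the prediction pipeline.
texts = combined_df["text"].tolist()
labels = combined_df["label"].tolist()
```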
        </sec>
        <sec id="sec-4-1-2">
          <title>4.1.2. Text and Label Extraction</title>
          <p>The texts in the dataset are tokenized using the Tokenizer class in Keras, which converts the text to integer sequences. The tokenizer builds a dictionary of the most frequent words, and the resulting sequences are padded to a uniform length, which is necessary for input into neural networks. The extracted labels are then encoded into numeric format using a label encoder; this step converts the labels into binary format for classification.</p>
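          <p>The tokenization, padding, and label-encoding steps can be illustrated with a dependency-free sketch of what the Keras Tokenizer, pad_sequences, and a label encoder accomplish (the example texts and the maximum length of 5 are hypothetical):</p>

```python
from collections import Counter

def build_vocab(texts):
    """Index words by frequency, most frequent first (1-based; 0 is padding)."""
    counts = Counter(word for text in texts for word in text.lower().split())
    return {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, vocab):
    """Convert each text to a list of word indices, dropping unknown words."""
    return [[vocab[w] for w in t.lower().split() if w in vocab] for t in texts]

def pad(sequences, maxlen):
    """Pre-pad with zeros / pre-truncate to a uniform length, as Keras does by default."""
    return [[0] * (maxlen - len(s[-maxlen:])) + s[-maxlen:] for s in sequences]

def encode_labels(labels):
    """Map label names to integers in sorted order (binary here: 0 and 1)."""
    mapping = {name: i for i, name in enumerate(sorted(set(labels)))}
    return [mapping[l] for l in labels], mapping

texts = ["wow thanks a lot", "thanks thanks"]
vocab = build_vocab(texts)                        # "thanks" is the most frequent word
padded = pad(texts_to_sequences(texts, vocab), maxlen=5)
encoded, mapping = encode_labels(["Sarcastic", "Non-sarcastic"])
```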
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Model Design</title>
        <p>The model begins with an embedding layer, which converts input tokens into dense vectors and captures their semantic meanings. This is followed by three Bi-Directional LSTM layers, each processing the sequence in both forward and backward directions. The final Bi-Directional LSTM layer integrates the representations before passing the output to the dense layers.</p>
        <sec id="sec-4-2-1">
          <title>4.2.1. Embedding Layer</title>
          <p>The embedding layer helps deep learning models represent real-world data domains very effectively. It maps each word index to a 256-dimensional vector, which helps the model understand the semantic relationships between words. These high-level embeddings facilitate more precise and meaningful analysis, thus improving the model's overall performance.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>4.2.2. Bi-Directional LSTM Layers</title>
          <p>A bidirectional LSTM, or Bi-LSTM, is a sequence model that contains two LSTM layers, processing input in both forward and backward directions; it generally performs better than a unidirectional LSTM. Three Bi-Directional LSTM layers are used to capture forward and backward contextual information from the text. The first layer consists of 128 units, the second of 64 units, and the third of 32 units. The first two layers return full sequences, and the third layer returns the final output.</p>
        </sec>
        <sec id="sec-4-2-3">
          <title>4.2.3. Dense Layer</title>
          <p>A dense layer with 32 units and ReLU activation is added to reduce the dimensionality of the output after the LSTM layers. A final dense layer with 1 unit and sigmoid activation is added for binary classification. The combination of these layers helps the model generalize by balancing feature extraction and classification.</p>
        </sec>
        <sec id="sec-4-2-4">
          <title>4.2.4. Dropout Layer</title>
          <p>A dropout layer is added after each LSTM and dense layer to prevent overfitting by randomly dropping 50% of the neurons during training. This prevents the model from becoming too reliant on any specific set of neurons. By applying dropout, the model becomes more generalized, thereby improving performance on unseen data.</p>
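          <p>Under the hyperparameters described above, the architecture can be sketched in Keras roughly as follows; the vocabulary size and sequence length are placeholders, since the paper does not state them, and "adam" is an assumed optimizer:</p>

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense, Dropout

VOCAB_SIZE = 10000  # placeholder: size of the tokenizer's word index
MAX_LEN = 50        # placeholder: padded sequence length

model = Sequential([
    # Map word indices to dense 256-dimensional vectors.
    Embedding(input_dim=VOCAB_SIZE, output_dim=256),
    # Three Bi-LSTM layers; the first two return full sequences, with 50% dropout after each.
    Bidirectional(LSTM(128, return_sequences=True)),
    Dropout(0.5),
    Bidirectional(LSTM(64, return_sequences=True)),
    Dropout(0.5),
    Bidirectional(LSTM(32)),
    Dropout(0.5),
    # Dense layers for feature compression and binary classification.
    Dense(32, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# A dummy batch of two padded sequences confirms the (batch, 1) sigmoid output.
preds = model(np.zeros((2, MAX_LEN), dtype="int32"))
```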
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Model Compilation and Training</title>
        <p>The model is compiled with binary cross-entropy as the loss function, accuracy as the metric, and a gradient-based optimizer. The model is trained on the combined dataset, using the combined padded sequences as input and the combined encoded labels as targets. 10% of the training data is reserved for validation. Training is performed over 10 epochs with a batch size of 32, which controls the number of samples processed before the model's internal parameters are updated.</p>
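        <p>The batch-size arithmetic implied here can be made concrete; with a hypothetical combined dataset of 10,000 samples, 10% held out for validation, a batch size of 32, and 10 epochs, the number of parameter updates works out as follows:</p>

```python
import math

num_samples = 10_000    # hypothetical combined dataset size
val_fraction = 0.10     # 10% of the training data reserved for validation
batch_size = 32
epochs = 10

train_samples = int(num_samples * (1 - val_fraction))       # samples actually trained on
updates_per_epoch = math.ceil(train_samples / batch_size)   # one parameter update per batch
total_updates = updates_per_epoch * epochs                  # updates over the whole run
```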
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Predictions on Test Data</title>
        <p>The labels of the test data are predicted by the trained model using the padded test sequences. The predictions are thresholded at 0.5 to convert them into binary classes (0 or 1), and the predicted classes are mapped back to their original labels (Sarcastic and Non-Sarcastic). The predicted labels are saved to a CSV file, and a classification report is generated. Together, tokenization, model training, and meticulous evaluation helped to unravel the intricacies of sarcasm detection in Dravidian languages.</p>
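        <p>The thresholding, label mapping, and CSV output can be sketched as follows; the probabilities and IDs are hypothetical, and an in-memory buffer stands in for the output file:</p>

```python
import csv
import io

ids = [30, 31, 32, 33, 34]                 # hypothetical test-comment IDs
probs = [0.91, 0.12, 0.50, 0.49, 0.73]     # hypothetical sigmoid outputs

# Threshold at 0.5 to obtain binary classes (treating exactly 0.5 as positive
# is a convention choice).
binary = [1 if p >= 0.5 else 0 for p in probs]

# Map the binary classes back to the original label names.
label_map = {0: "Non-sarcastic", 1: "Sarcastic"}
predicted = [label_map[b] for b in binary]

# Save the two-column (ID, Predicted Label) output described in the Results section.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["ID", "Predicted Label"])
writer.writerows(zip(ids, predicted))
```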
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>In Tables 5 and 6, a sample of the dataset predicted by the model is given. The dataset contains two columns: ID and Predicted Label. In Tables 7 and 8, the performance of the model for both languages is presented. From the classification reports, the key statistic related to the performance of the model is accuracy: accuracies of 0.78 and 0.82 are achieved on the Tamil and Malayalam datasets, respectively.</p>
      <p>The classification result plays a major role in identifying the strengths and weaknesses of the
model which helps in fine-tuning process and can thus improve the performance of the model. The
performance of the proposed models is examined using a range of evaluation metrics, with a primary
focus on F1-Score, accuracy, recall, macro-averaged F1-score, and weighted average F1-score. The
organizers thoughtfully provided test data for both Dravidian languages, which served as the foundation
for the model evaluation.</p>
      <p>The results of the sarcasm detection task from the organizers are shown in Table 5. The model secured the third rank in both the Tamil and Malayalam classification tasks. Using the Bi-directional LSTM model, the labels were predicted for the comments given in the dataset. It provided an accuracy of 0.78 and 0.82 for the Tamil and Malayalam datasets, respectively, and Macro-F1 scores of 0.72 and 0.45 were achieved on the Tamil and Malayalam datasets, respectively. It is evident that, out of 6339 comments in the Tamil dataset, 1717 are sarcastic and 4621 are non-sarcastic. Similarly, in the Malayalam dataset, out of 2827 comments, 512 are Sarcastic and 2314 are Non-sarcastic.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Future Enhancements</title>
      <p>The computer science field is not static but dynamic: a technology that is popular today can become outdated almost immediately. Future enhancements refer to the improvements that can be made in later stages of the project to increase its accuracy, efficiency, and performance metrics.</p>
      <p>In this sarcasm detection model, an enhancement could be made by using the advanced method SMOTE (Synthetic Minority Over-sampling Technique), or by oversampling the minority class (sarcastic comments) and undersampling the majority class (non-sarcastic comments), especially in the Malayalam dataset. Pre-trained language models like BERT (Bidirectional Encoder Representations from Transformers) or mBERT (Multilingual BERT) could also be incorporated, which can enhance the model's ability to detect sarcastic comments.</p>
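      <p>The rebalancing idea can be illustrated with simple random over- and undersampling (SMOTE itself synthesizes new minority feature vectors by interpolation; this dependency-free sketch only resamples whole examples, and the toy data is hypothetical):</p>

```python
import random

def rebalance(samples, labels, seed=0):
    """Oversample the minority class and undersample the majority class
    until every class has the same target size (the mean class size)."""
    rng = random.Random(seed)
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    target = sum(len(v) for v in by_class.values()) // len(by_class)
    out = []
    for label, items in by_class.items():
        if len(items) >= target:
            chosen = rng.sample(items, target)                   # undersample majority
        else:
            chosen = [rng.choice(items) for _ in range(target)]  # oversample minority
        out.extend((sample, label) for sample in chosen)
    rng.shuffle(out)
    return out

comments = [f"comment {i}" for i in range(10)]
labels = ["Non-sarcastic"] * 8 + ["Sarcastic"] * 2
balanced = rebalance(comments, labels)   # 5 of each class after rebalancing
```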
      <p>Another enhancement could be adding contextual features to the model, such as user behavior and replies in a conversation, which can be useful for detecting sarcastic comments. Sarcasm detection models pretrained on English datasets could also be fine-tuned and used for the Tamil and Malayalam languages.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>Detecting sarcasm in the Dravidian languages Tamil and Malayalam presents challenges due to the languages' linguistic diversity, cultural differences, and complex sentence structures. Using advanced deep learning models like Bidirectional LSTMs, along with specialized tokenization and embedding techniques, has led to significant progress in sarcasm detection. Sarcasm in Tamil and Malayalam often hinges on deeper contextual cues, including tone, cultural context, and socio-political factors. Incorporating language-specific factors such as grammar, and improving the model's ability to understand context, can significantly enhance the accuracy of the model. Moreover, extending the work to other languages promises to broaden the scope and applicability of the methodology. In conclusion, this research represents a valuable step forward in the field of sarcasm detection for Dravidian languages (Tamil and Malayalam). By comparing the proposed approach and findings with prior studies, it can contribute to the ongoing discourse and innovation in this area, helping to drive the development of more precise and robust sarcasm detection systems.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[Continuation of the reference list:]
Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020, pp. 1505–1508.
[14] D. Jain, A. Kumar, G. Garg, Sarcasm detection in mash-up language using soft-attention based bi-directional LSTM and feature-rich CNN, Applied Soft Computing 91 (2020) 106198.
[15] D. Krishnan, J. M. C, T. Durairaj, GetSmartMSEC at SemEval-2022 task 6: Sarcasm detection using contextual word embedding with Gaussian model for irony type identification, in: G. Emerson, N. Schluter, G. Stanovsky, R. Kumar, A. Palmer, N. Schneider, S. Singh, S. Ratan (Eds.), Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), Association for Computational Linguistics, Seattle, United States, 2022, pp. 827–833. URL: https://aclanthology.org/2022.semeval-1.114. doi:10.18653/v1/2022.semeval-1.114.
[16] A. Kalaivani, D. Thenmozhi, Multilingual sentiment analysis in Tamil, Malayalam, and Kannada code-mixed social media posts using mBERT, FIRE (Working Notes) (2021) 1020–1028.
[17] C. J. Mahibha, S. Kayalvizhi, D. Thenmozhi, Sentiment analysis using cross lingual word embedding model (2021).
[18] N. Sripriya, T. Durairaj, K. Nandhini, B. Bharathi, K. K. Ponnusamy, C. Rajkumar, P. K. Kumaresan, R. Ponnusamy, C. Subalalitha, B. R. Chakravarthi, Findings of shared task on sarcasm identification in code-mixed Dravidian languages, FIRE 2023 16 (2023) 22.
[19] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. C. Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of Dravidian languages (Malayalam and Tamil) in DravidianCodeMix, in: Forum of Information Retrieval and Evaluation FIRE-2023, 2023.
[20] B. R. Chakravarthi, Hope speech detection in YouTube comments, Social Network Analysis and Mining 12 (2022) 75.
[21] B. R. Chakravarthi, A. Hande, R. Ponnusamy, P. K. Kumaresan, R. Priyadharshini, How can we detect homophobia and transphobia? Experiments in a multilingual code-mixed setting for social media governance, International Journal of Information Management Data Insights 2 (2022) 100119.
[22] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. Chinnaudayar Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of Dravidian languages (Malayalam and Tamil) in DravidianCodeMix, in: Forum of Information Retrieval and Evaluation FIRE - 2023, 2023.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Reyes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Veale</surname>
          </string-name>
          ,
          <article-title>A multidimensional approach for detecting irony in twitter, Language resources</article-title>
          and evaluation
          <volume>47</volume>
          (
          <year>2013</year>
          )
          <fpage>239</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bamman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Contextualized sarcasm detection on twitter</article-title>
          ,
          <source>in: proceedings of the international AAAI conference on web and social media</source>
          , volume
          <volume>9</volume>
          ,
          <year>2015</year>
          , pp.
          <fpage>574</fpage>
          -
          <lpage>577</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Indirakanth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Udayakumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Durairaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bharathi</surname>
          </string-name>
          ,
          <article-title>Sarcasm identification of dravidian languages (malayalam and tamil</article-title>
          ), in: FIRE (Working Notes),
          <year>2023</year>
          , pp.
          <fpage>327</fpage>
          -
          <lpage>335</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N</given-names>
            , B. B,
            <surname>N. K</surname>
          </string-name>
          , T. Durairaj,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Rajkumar, Overview of sarcasm identification of dravidian languages in dravidiancodemix@fire2024, in: Forum of Information Retrieval and Evaluation FIRE -</article-title>
          <year>2024</year>
          , DAIICT , Gandhinagar,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Sarsam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Al-Samarraie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. I.</given-names>
            <surname>Alzahrani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wright</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection using machine learning algorithms in twitter: A systematic review</article-title>
          ,
          <source>International Journal of Market Research</source>
          <volume>62</volume>
          (
          <year>2020</year>
          )
          <fpage>578</fpage>
          -
          <lpage>598</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bouazizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. O.</given-names>
            <surname>Ohtsuki</surname>
          </string-name>
          ,
          <article-title>A pattern-based approach for sarcasm detection on twitter</article-title>
          ,
          <source>IEEE Access 4</source>
          (
          <year>2016</year>
          )
          <fpage>5477</fpage>
          -
          <lpage>5488</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Meriem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hlaoua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Romdhane</surname>
          </string-name>
          ,
          <article-title>A fuzzy approach for sarcasm detection in social networks</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>192</volume>
          (
          <year>2021</year>
          )
          <fpage>602</fpage>
          -
          <lpage>611</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hazarika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pérez-Rosas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mihalcea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Poria</surname>
          </string-name>
          ,
          <article-title>Towards multimodal sarcasm detection (an _obviously_ perfect paper)</article-title>
          ,
          <source>arXiv preprint arXiv:1906.01815</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Sulaksha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krishnaveni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Steeve</surname>
          </string-name>
          , et al.,
          <article-title>SIS@LT-EDI-2023: Detecting signs of depression from social media text</article-title>
          , in:
          <source>Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>137</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Amir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Carvalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <article-title>Modelling context with user embeddings for sarcasm detection in social media</article-title>
          ,
          <source>arXiv preprint arXiv:1607.00976</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shrivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>A pragmatic and intelligent model for sarcasm detection in social media text</article-title>
          ,
          <source>Technology in Society</source>
          <volume>64</volume>
          (
          <year>2021</year>
          )
          <fpage>101489</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>M</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Akshatra M</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>J</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mahibha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thenmozhi</surname>
          </string-name>
          ,
          <article-title>Sarcasm detection in dravidian languages using transformer models</article-title>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Papagelis</surname>
          </string-name>
          ,
          <article-title>Leveraging transitions of emotions for sarcasm detection</article-title>
          , in:
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>