<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Approaches for Accurate Sarcasm Identification in Tamil and Malayalam Languages</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Motheeswaran K</string-name>
          <email>motheeswarank.22aid@kongu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kogilavani Shanmugavadivel</string-name>
          <email>kogilavani.sv@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Malliga Subramanian</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sanjai R</string-name>
          <email>sanjair.22aid@kongu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohammed Sameer B</string-name>
          <email>mohammedsameerb.22aid@kongu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Transformer-LSTM Network.</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of AI, Kongu Engineering College</institution>
          ,
          <addr-line>Perundurai, Erode</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sarcasm Detection</institution>
          ,
          <addr-line>Natural Language Processing, Bidirectional Long Short-Term Memory (Bi-LSTM), Hybrid</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Sarcasm recognition is one of the most difficult problems in natural language processing (NLP), especially for low-resource languages like Tamil and Malayalam, which have distinctive linguistic and cultural traits. This work studies two deep learning techniques for sarcasm identification in these languages: a Bidirectional Long Short-Term Memory (Bi-LSTM) model and a hybrid model that combines several neural network architectures. The Bi-LSTM model captures context in both the forward and backward directions, which helps it comprehend the intricate linguistic structure of sarcastic statements. The hybrid model combines multiple architectures to exploit long-range dependencies alongside local feature extraction, improving sarcasm detection. We apply comprehensive preprocessing to the Tamil and Malayalam sarcasm datasets, including tokenization, padding, and label encoding. Our hybrid model outperformed the Bi-LSTM in accuracy and F1-score, ranking 5th on the Tamil dataset with a macro-F1 (MF1) score of 0.70 and 7th on the Malayalam dataset with an MF1 score of 0.67. These findings demonstrate how difficult sarcasm is to detect in these languages and how valuable hybrid architectures are for overcoming the difficulties of low-resource settings. This paper shows the effectiveness of deep learning models in sarcasm detection and lays the groundwork for future sentiment analysis research.</p>
      </abstract>
      <kwd-group>
        <kwd>Sarcasm Detection</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Bidirectional Long Short-Term Memory (Bi-LSTM)</kwd>
        <kwd>Hybrid Transformer-LSTM Network</kwd>
        <kwd>Malayalam</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Sarcasm is a sophisticated and subtle style of speech in which a sentence’s literal meaning frequently
differs from its intended meaning. Sarcasm is commonly employed in everyday interactions, especially
on social media, where people use it to convey irony, humor, or criticism. Natural language processing
(NLP) systems find it difficult to identify sarcasm in written text because it requires a grasp of the
words themselves as well as their context, tone, and subtle cultural undertones.</p>
      <p>Deep learning models have demonstrated a lot of promise for NLP tasks.
Long Short-Term Memory (LSTM) networks, for example, have proven useful in capturing
contextual information in sequential data. Bidirectional LSTMs, or Bi-LSTMs, have drawn particular
attention because of their capacity to recognize both past and future context in a sentence. This property
makes Bi-LSTMs well suited to identifying complicated expressions like sarcasm, which frequently require
comprehending context from many perspectives. Sarcastic expressions, however, can be buried in
linguistic or cultural quirks that are difficult for a single model architecture to represent, which makes
sarcasm identification in low-resource languages all the more difficult.</p>
      <p>
        This research investigates two deep learning-based methods for sarcasm identification in Tamil and
Malayalam in order to overcome these issues: a bidirectional LSTM model and a hybrid model. The
Bi-LSTM model makes use of its capacity to record context sequentially from both sides, whereas the hybrid
model incorporates several neural network designs to improve performance through the simultaneous
capture of local and global textual information [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. With these models we aim to increase sarcasm detection
accuracy and to provide insights into the linguistic nuances of Tamil and Malayalam.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>
        The paper by Farhan [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] utilizes Glove embeddings in conjunction with the Bi-LSTM model, contextual
information is eficiently captured, as evidenced by the 86.35% accuracy rate in sarcasm detection
across several examples. The author Ramkumar[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] employs RNN (LSTM or GRU) to capture temporal
relationships and CNN (spectrograms) to extract features.Utilizing spectral characteristics, the
CNNRNN Hybrid Model achieves 91% accuracy in classifying Tamil slang words when combined with LSTM.
Shelke [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] combined Transformer and FCNets model achieves 93.130% accuracy in sarcasm identification
in Tamil and Malayalam, outperforming Bi-LSTM and hybrid models.
      </p>
      <p>
        In a shared challenge on code-mixed Dravidian languages, [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] study tests various models for
sarcasm identification in Tamil and Malayalam, obtaining accurate results utilizing Bi-LSTM and hybrid
techniques.The macro-F1 score is used to measure the system’s performance.The study by rizwana
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] employs RNN, LSTM, and BiLSTM architectures with attention techniques to improve Malayalam
accented Automatic Speech Recognition (AASR). Word Error Rate (WER) was lowered by 50-65% using
attention mechanisms that used Tempogram and MFCC feature extraction[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        The paper by [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] addresses dialect-based ambiguity in sentiment analysis in Tamil regional languages
by focusing on hybrid optimal models like M-BERT, M-Roberta, and M-XLM-Roberta. These models
use dynamic parameter changes, adaption mechanisms, and fine-tuning techniques to attain 95%
accuracy.The study of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] ofers a hybrid deep learning architecture that leverages word- and
characterlevel features to identify ofensive posts in Dravidian languages. It uses CNN and DNN. It improves
language-specific word embeddings to improve hate speech identification in code-mixed MIoT posts[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        The study by Gulecha [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduces a CNN-BiGRU model for Tamil and Tanglish objectionable
text identification that makes use of fastText embeddings. For real-world use, Twitter data is used for
real-time testing.The study by [12] improves the sentiment categorization of Malayalam tweets by using
a hybrid deep learning strategy that combines CNN with LSTM, Bi-LSTM, and GRU models. Using deep
neural network approaches, this design achieves significant performance gains over baseline models.
      </p>
      <p>Thandil [13] develops an end-to-end multi-dialect Malayalam speech recognition system
with machine learning, LSTM-RNN, and deep-CNN approaches. Accent differences in speech are addressed by the
hybrid technique, which improves recognition performance [14]. To identify abusive language
in Tamil, Malayalam, and Kannada, deep learning models such as Bi-LSTM and hybrid
networks with convolutional layers and bidirectional RNNs have been used; the training and prediction
performance of these models was enhanced in [15].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem and System Description</title>
      <p>The challenge lies in identifying sarcasm in Tamil and Malayalam, given their distinct linguistic
characteristics. To increase F1-scores and the accuracy of sarcasm detection, the system makes use of a
hybrid model that combines several neural networks and a bidirectional LSTM.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset Description</title>
        <p>The dataset used for this study came from the Codalab website, which has YouTube video comments
with code-mixed Tamil-English and Malayalam-English sentences. This dataset helps develop a machine
learning model for Malayalam and Tamil sarcasm detection [16].</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Preprocessing</title>
        <p>Tokenization is used in text preprocessing to divide text into tokens, padding is used to guarantee a
consistent sequence length, and label encoding is used to translate categorical labels into numerical
values. Word embeddings are then applied to convert the tokens into dense vectors appropriate for
model input.</p>
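        <p>The three steps above can be sketched in a few lines of plain Python. This is an illustrative toy, not the authors' pipeline; the sample comments, the whitespace tokenizer, and the chosen sequence length are assumptions.</p>

```python
def tokenize(texts):
    """Split each comment on whitespace and map words to integer ids (0 is reserved for padding)."""
    vocab = {}
    sequences = []
    for text in texts:
        seq = []
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1
            seq.append(vocab[word])
        sequences.append(seq)
    return sequences, vocab

def pad(sequences, maxlen):
    """Truncate or right-pad every sequence to the same length."""
    return [seq[:maxlen] + [0] * max(0, maxlen - len(seq)) for seq in sequences]

def encode_labels(labels):
    """Translate categorical labels into numerical values."""
    mapping = {c: i for i, c in enumerate(sorted(set(labels)))}
    return [mapping[l] for l in labels], mapping

texts = ["semma comedy da", "super movie", "semma movie da"]   # toy comments
labels = ["Sarcastic", "Non-sarcastic", "Sarcastic"]

seqs, vocab = tokenize(texts)
padded = pad(seqs, maxlen=4)          # uniform sequence length for the model
y, label_map = encode_labels(labels)  # e.g. Non-sarcastic -> 0, Sarcastic -> 1
```

        <p>An embedding layer would then map each integer id in the padded sequences to a dense vector.</p>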
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>The methodology uses a hybrid model and a bidirectional LSTM (Bi-LSTM) model to identify sarcasm
in Tamil and Malayalam text. These methods are intended to capture contextual elements in text data,
both local and global, in order to handle the special difficulties associated with sarcasm identification.</p>
      <sec id="sec-4-1">
        <title>4.1. Bidirectional LSTM</title>
        <p>Embedding Layer: Dense word embeddings are created from raw text in this first layer. In a
continuous vector space, every word is represented as a vector that captures its semantic meaning and
relationships with other words. The model can comprehend and analyze text in a numerical format
thanks to this representation.</p>
        <p>Bidirectional LSTM Layer: The Bidirectional Long Short-Term Memory (Bi-LSTM)
network is the central component of the model. A Bi-LSTM
processes sequences in both directions simultaneously, in contrast to a normal LSTM, which processes
text in one direction only (either forward or backward). Understanding complex expressions
like sarcasm requires the model to capture context from both the past and
the future inside a sequence, which is exactly what this bidirectional processing enables.
• Forward LSTM: Processes the input sequence from beginning to end.
• Backward LSTM: Processes the input sequence from end to beginning.</p>
        <p>By combining the results from both sides, the model is able to comprehend context more
thoroughly.</p>
        <p>Dropout Layer: Applied after the LSTM layer, this layer randomly eliminates a portion of the neurons
during training in order to minimize overfitting. This keeps the model from becoming overly
dependent on particular neurons and helps it generalize more effectively to new data.</p>
        <p>Dense Layers: The final classification is carried out by dense layers, which come after the Bi-LSTM
and dropout layers. Using activation functions such as Softmax or Sigmoid, these layers map the
information collected by the LSTM layers to output classes (such as sarcastic or non-sarcastic).</p>
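        <p>The forward/backward combination described above can be illustrated with a miniature NumPy recurrence. A plain tanh cell stands in for the full LSTM gating, and all dimensions and weights here are illustrative assumptions, not the paper's configuration.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, hidden_dim, seq_len = 8, 4, 5

W_x = rng.normal(scale=0.1, size=(hidden_dim, embed_dim))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights

def run_rnn(embeddings):
    """Run a simple tanh recurrence over the rows of `embeddings`."""
    h = np.zeros(hidden_dim)
    for x_t in embeddings:
        h = np.tanh(W_x @ x_t + W_h @ h)
    return h

sentence = rng.normal(size=(seq_len, embed_dim))   # stand-in word embeddings

h_forward = run_rnn(sentence)                      # reads the sentence left to right
h_backward = run_rnn(sentence[::-1])               # reads the same sentence right to left
h_bidir = np.concatenate([h_forward, h_backward])  # context from both directions

# A dense layer with Sigmoid/Softmax would then map h_bidir to the output classes.
```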
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Hybrid Transformer-LSTM Network</title>
        <p>Embedding Layer: Text is first transformed into dense vectors that represent the semantic meaning
of words by the model’s embedding layer. As a result, the network can operate using numerical
representations of the word relationships.</p>
        <p>Components of Multi-Head Attention: The hybrid model includes Multi-Head Attention, which
enables the model to focus on several textual elements at once. This facilitates the identification of
subtle sarcastic patterns and long-range dependencies within the text. The attention mechanism makes
sure that important terms or phrases that indicate sarcasm are highlighted in the text so that it can be
understood more quickly.</p>
        <p>LSTM Components: The model incorporates bidirectional LSTM layers to capture the sequential
flow and relationships between words. These layers ensure that the model comprehends the context
around the text’s sarcastic expressions by processing information both forward and backward.</p>
        <p>Transformer Block: Enhancing the attention mechanism even further, the hybrid model allows for
more effective learning by incorporating a transformer block with Add &amp; Normalize layers. Through
improved feature extraction from the input sequence, this stabilizes the network.</p>
        <p>Feed Forward Network (FFN): To extract higher-level patterns, the attention mechanism and LSTM
are followed by a small, fully connected network. In order to classify sarcasm at the end, this layer assists
the model in integrating the features acquired from the attention and LSTM components.</p>
        <p>Output Layer: The last layer employs a sigmoid activation function to produce the likelihood
that the text in question is sarcastic. Binary classification over this probability lets the model
recognize sarcasm accurately.</p>
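        <p>The core of the Multi-Head Attention component above is scaled dot-product attention, sketched below in NumPy. A real transformer block adds learned per-head projections, the Add &amp; Normalize residual steps, and the FFN; the dimensions and inputs here are illustrative assumptions.</p>

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each position attends to every other position, upweighting the tokens that matter."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8
X = rng.normal(size=(seq_len, d_model))  # token representations from earlier layers

out, weights = attention(X, X, X)        # self-attention over the whole sequence
```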
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Result</title>
      <sec id="sec-5-1">
        <title>5.1. Bi-LSTM</title>
        <p>The sarcasm detection datasets in Tamil and Malayalam were used to assess the Bi-LSTM model. The
model achieved an accuracy of 0.80 on the Malayalam dataset, somewhat better than the 0.76 it reached
on the Tamil dataset. The Bi-LSTM model showed a moderate degree
of success in identifying sarcasm in both languages thanks to its capacity to capture both forward and
backward dependencies. Though it worked well, it could not fully capture the complex patterns of
sarcasm.</p>
      </sec>
      <sec id="sec-5-1">
        <title>5.2. Hybrid Transformer-LSTM Network</title>
        <p>The hybrid model, combining elements of a Transformer and an LSTM, scored better than the Bi-LSTM
model on both datasets. On the Tamil dataset it reached an accuracy of 0.78, outperforming the Bi-LSTM;
on the Malayalam dataset it did even better, achieving an accuracy of 0.83. Sarcasm could be detected
more accurately thanks to this model’s capacity to record both local characteristics, via the Transformer
components, and long-range dependencies, via the LSTM. In addition, the hybrid model ranked fifth on
the Tamil dataset with an F1 score of 0.70 and seventh on the Malayalam dataset with an F1 score of 0.67.</p>
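        <p>For reference, the macro-F1 (MF1) behind these rankings is the unweighted mean of per-class F1 scores, so the sarcastic and non-sarcastic classes count equally. A minimal computation on toy labels (illustrative only):</p>

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# 1 = sarcastic, 0 = non-sarcastic (toy example)
score = macro_f1([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```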
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>The hybrid model demonstrated its effectiveness by offering a reliable method for recognizing sarcastic
phrases in Malayalam and Tamil. Integrating Transformer components with Long Short-Term Memory
(LSTM) networks in the Hybrid Transformer-LSTM Network greatly improved performance, and the model’s
accuracy levels were noteworthy. Because of its dual architecture, the model can successfully capture both
local aspects and long-range contextual relationships, making it an excellent candidate for tasks
needing a sophisticated grasp of sarcasm. Furthermore, the Bi-LSTM model showed its strength in sequence
modeling and feature extraction, although it had trouble understanding the nuances of sarcasm in languages
with few resources. Overall, the hybrid model outperforms the other models, demonstrating both its
versatility across different language tasks and its promise for more accurate sarcasm detection.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT for drafting content and for grammar
and spelling checks. After using this tool/service, the author(s) reviewed and edited the content as
needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] S. N, B. R. Chakravarthi, B. B, N. K, S. C. Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of sarcasm identification of dravidian languages in dravidiancodemix@fire-2024, in: Forum of Information Retrieval and Evaluation FIRE - 2024, DAIICT, Gandhinagar, 2024.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Farhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shoukat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aslam</surname>
          </string-name>
          ,
          <article-title>Automatic sarcasm detection on cross-platform social media datasets: A glove and bi-lstm based approach</article-title>
          .,
          <source>Journal of Universal Computer Science (JUCS) 30</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramkumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Anne</surname>
          </string-name>
          ,
          <article-title>Cnn-rnn hybrid model to classify a local language slangs using spectral features</article-title>
          ,
          <source>in: 2024 International Conference on Inventive Computation Technologies (ICICT)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>600</fpage>
          -
          <lpage>607</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Shelke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Wagh</surname>
          </string-name>
          ,
          <article-title>Enhanced sarcasm and emotion detection through unified model of transformer and fcnets</article-title>
          ,
          <source>Journal of Electrical Systems</source>
          <volume>20</volume>
          (
          <year>2024</year>
          )
          <fpage>551</fpage>
          -
          <lpage>565</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sripriya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Durairaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nandhini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bharathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rajkumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Navaneethakrishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>Findings of shared task on sarcasm identification in code-mixed dravidian languages</article-title>
          .,
          <source>in: FIRE</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Thandil</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. B. K.P,</surname>
          </string-name>
          <article-title>Exploring deep spectral and temporal feature representations with attention-based neural network architectures for accented malayalam speech- a low-resourced language</article-title>
          ,
          <source>European Chemical Bulletin</source>
          (
          <year>2023</year>
          ). doi:10.53555/ecb/2023.12.si5a.0388.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. Chinnaudayar Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of Dravidian languages (Malayalam and Tamil) in DravidianCodeMix, in: Forum of Information Retrieval and Evaluation FIRE - 2023, 2023.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>N. K.</surname>
          </string-name>
          ,
          <article-title>Unravelling emotional tones: A hybrid optimized model for sentiment analysis in tamil regional languages</article-title>
          ,
          <source>Journal of Machine and Computing</source>
          (
          <year>2024</year>
          ). doi:10.53759/7669/jmc202404012.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saumya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Detecting dravidian ofensive posts in miot: A hybrid deep learning framework</article-title>
          ,
          <year>2023</year>
          . doi:10.1145/3592602.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] B. R. Chakravarthi, N. Sripriya, B. Bharathi, K. Nandhini, S. C. Navaneethakrishnan, T. Durairaj, R. Ponnusamy, P. K. Kumaresan, K. K. Ponnusamy, C. Rajkumar, Overview of the shared task on sarcasm identification of dravidian languages (malayalam and tamil) in dravidiancodemix, in: Forum of Information Retrieval and Evaluation FIRE - 2023, 2023.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Gulecha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Neelamegam Rajaram Subramanian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Abirami</surname>
          </string-name>
          ,
          <article-title>Offensive text detection for tamil language</article-title>
          ,
          <source>in: International Conference on Speech and Language Technologies for Low-resource Languages</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] S. S, P. K. V, Hybrid deep learning approach for sentiment classification of malayalam tweets, International Journal of Advanced Computer Science and Applications (2022). doi:10.14569/ijacsa.2022.01304103.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] R. K. Thandil, K. Mohamed Basheer, V. Muneer, End-to-end multi-dialect malayalam speech recognition using deep-cnn, lstm-rnn, and machine learning approaches, in: International Conference on Computational Intelligence and Data Engineering, Springer, 2022, pp. 37–49.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] B. R. Chakravarthi, A. Hande, R. Ponnusamy, P. K. Kumaresan, R. Priyadharshini, How can we detect homophobia and transphobia? experiments in a multilingual code-mixed setting for social media governance, International Journal of Information Management Data Insights 2 (2022) 100119.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] S. K, B. Premjith, S. Kp, Amrita_cen_nlp@dravidianlangtech-eacl2021: Deep learning-based offensive language identification in malayalam, tamil, and kannada, 2021.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] B. R. Chakravarthi, Hope speech detection in youtube comments, Social Network Analysis and Mining 12 (2022) 75.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>