<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.18653/v1</article-id>
      <title-group>
<article-title>Findings of Memotion 2: Sentiment and Emotion Analysis of Memes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Parth Patwa</string-name>
          <email>parthpatwa@g.ucla.edu</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sathyanarayanan Ramamoorthy</string-name>
          <email>sathyanarayanan.r18@iiits.in</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nethra Gunti</string-name>
          <email>nethra.g18@iiits.in</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shreyash Mishra</string-name>
          <email>shreyash.m19@iiits.in</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>S Suryavardan</string-name>
          <email>suryavardan.s19@iiits.in</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aishwarya Reganti</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amitava Das</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tanmoy Chakraborty</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amit Sheth</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asif Ekbal</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chaitanya Ahuja</string-name>
        </contrib>
        <kwd-group>
          <kwd>Memes</kwd>
          <kwd>Sentiment Analysis</kwd>
          <kwd>Dataset</kwd>
          <kwd>Multimodality</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>AI Institute, University of South Carolina</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IIIT Delhi</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IIIT Sri City</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>IIT Patna</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of California Los Angeles</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>Wipro AI labs</institution>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>1</volume>
      <fpage>1532</fpage>
      <lpage>1543</lpage>
      <abstract>
<p>Memes are an important part of Internet culture, and their popularity has increased in recent years. Memes can be used to express humor and opinions, or even to spread hate and misinformation; hence, they are of research interest. In this paper, we describe the Memotion 2 shared task, which was organized as a part of the De-Factify workshop at AAAI'22. The shared task studies memes through three sub-tasks - Task A: sentiment analysis, Task B: emotion analysis, Task C: emotion intensity detection. A total of 44 teams participated in the Memotion 2.0 shared task; of them, 8 teams submitted their predictions on the test set for Tasks A and B, and 7 teams for Task C. BERT-like models were a popular choice among the participants for extracting text features, while models like ResNet50, VGG-16 and EfficientNet were used to extract image features. Most of the systems combine the modalities (text, image) in a late fusion. The best F1 scores achieved for Tasks A, B and C are 0.53, 0.82 and 0.55, respectively.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The word ‘meme’ was first used by Richard Dawkins in his 1976 book, “The Selfish Gene” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
calling them cultural units that replicate, mutate and evolve. Modern memes are an extension of
the original idea of a meme, but they spread through online platforms. Memes have
become a very popular mode of broadcast and communication over social media platforms these
days. The use of simple language to convey a message resonates with the general public,
and the reach for such posts is humongous. There is a psychological side of this phenomenon
of rapid sharing of memes. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] examined responses of individuals towards Internet videos and
memes, and found that individuals with strong affective responses to a video reported greater intent to spread it
across social platforms. Usually the content of a meme is derived from day-to-day activities,
college life, work environment, food, relationships, etc. Therefore, people develop more affinity
towards such posts, which explains the viral reach of memes across countries in
a short span of time.
      </p>
      <p>Studying the evolution of memes over the last ten years also helps us understand the
changes in online culture. Initially, memes served as an expression of an individual’s take on
a subject with some humor and little sarcasm. Because of the freedom that one enjoys while
creating memes, they were then used to vent one’s feelings on socio-political issues. Currently,
we are in a stage where the memes are being used by social media users to share their opinions
on any topic, which further helps them connect with millions of people all over the globe.
Memes are succinct and explicit in their messages. A flip side of this powerful medium is that
it is being misused to spread hatred in the community. The work by Moody and Church [3]
analyzes the role that Facebook meme pages and trolls had in the 2016 US Presidential Elections.
There has been an increase in the number of online abuses, especially on the oppressed and
weaker sections of the society. [4] shows that many Internet memes featuring fake news are
specifically directed with political agendas by agencies. An article [5] unpacking the vulgarity
of Internet memes targeting aboriginality in relation to skin colour and other racist stereotypes
is a reminder to the research community that identifying and preventing toxic content on
social media is necessary. Recently, a few models have been proposed to detect harmful memes
and their targets [6, 7].</p>
      <p>This paper presents the findings of the shared task Memotion 2, which was organized as
part of the workshop “Defactify - A workshop on Multimodal Fact-Checking and Hate Speech
Detection” at AAAI 2022. Our work is an attempt to leverage the information present in Internet
memes and encourage research teams to develop robust computational methods to classify the
sentiment, emotion and emotion intensity of multi-modal posts (memes) accurately.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Mining and understanding social media content has become a very significant task in recent
times. There exists an abundance of data in different forms and tenors. Extracting this information
can help examine diffusion and recommendations, analyze behaviour, etc. This ever-growing flow
of diverse content has attracted researchers to several applications using online data, one
of which is sentiment and emotion analysis. The task aims to identify and quantify subjective
information from given data. Most of the studies in this area focus on collecting relevant
public data and applying it to binary, polarity or scale-based classification tasks [8]. Some of
the notable contributions include textual and multi-modal datasets [9] [10] [11], workshops
[12], and modelling approaches [13]. The second aspect is hate speech detection. Automation
of such a task is challenging because of the vast variety of content and the blurry line between
hate and free speech. Research towards hate speech detection presents workshops [14] [15]
and tasks with multi-lingual [
        <xref ref-type="bibr" rid="ref3">16</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">17</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">18</xref>
        ] and multi-modal [
        <xref ref-type="bibr" rid="ref6">19</xref>
        ] [
        <xref ref-type="bibr" rid="ref7">20</xref>
        ] data [
        <xref ref-type="bibr" rid="ref8">21</xref>
        ] [
        <xref ref-type="bibr" rid="ref9">22</xref>
        ].
      </p>
      <p>
        An extensive amount of multi-modal social media data is in the form of memes. The attention
to understanding and extracting sentiment, emotion and profanity from memes is growing.
Meme analysis highlights the importance of considering both visual and textual cues to
understand the context, offensiveness, humour, etc. The previous iteration of our task, Memotion
1.0 [
        <xref ref-type="bibr" rid="ref10">23</xref>
        ] provided an annotated dataset with labels capturing humour, sarcasm and hate speech.
Other significant work in this area is from the Hateful memes challenge by Facebook [
        <xref ref-type="bibr" rid="ref11">24</xref>
        ],
the MultiOFF dataset [
        <xref ref-type="bibr" rid="ref12">25</xref>
        ] and other such tasks [
        <xref ref-type="bibr" rid="ref13 ref14">26, 27</xref>
        ]. These tasks have brought attention
to analysis of memes and (CNN, BERT, CLIP etc. based) multi-modal modelling approaches
[
        <xref ref-type="bibr" rid="ref15 ref16 ref17">6, 28, 29, 30</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Task Details</title>
      <p>The idea behind the shared task is to facilitate the research community in analyzing memes across
multiple dimensions.</p>
      <sec id="sec-3-0">
        <title>3.1. Tasks</title>
        <p>We conduct and evaluate participants in three tasks:</p>
        <list list-type="bullet">
          <list-item>
            <p>Task A: Sentiment Analysis - The task is to classify a given meme’s sentiment as
positive, negative or neutral.</p>
          </list-item>
          <list-item>
            <p>Task B: Emotion Classification - Identifying the particular emotions associated with a
given meme is the motive of this task. The system/model should indicate whether the meme is
humorous, sarcastic, offensive and/or motivational. A meme can belong to more than one
category.</p>
          </list-item>
          <list-item>
            <p>Task C: Scales/Intensity of Emotion Classes - As humans, we express the same
emotion at different levels of intensity. Hence, the third task is to quantify the extent to
which a particular emotion is expressed by a given meme. The intensities of
each emotion are:</p>
            <list list-type="simple">
              <list-item><p>Humour: not funny, funny, very funny and hilarious</p></list-item>
              <list-item><p>Sarcasm: not sarcastic, little sarcastic, very sarcastic and extremely sarcastic</p></list-item>
              <list-item><p>Offensive: not offensive, slightly offensive, very offensive and hateful offensive</p></list-item>
              <list-item><p>Motivation: not motivational, motivational</p></list-item>
            </list>
          </list-item>
        </list>
        <p>Tasks A, B and C are explained using the meme in Figure 1.</p>
      </sec>
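To make the three label spaces concrete, they can be encoded as follows. This is an illustrative sketch; the constant names and the helper `encode_task_b` are our own, not the official data format:

```python
# Task A: one of three sentiment classes (multi-class).
SENTIMENTS = ["negative", "neutral", "positive"]

# Task B: independent binary flags, so a meme may carry several emotions.
EMOTIONS = ["humorous", "sarcastic", "offensive", "motivational"]

# Task C: ordinal intensity per emotion (motivation has only two levels).
INTENSITIES = {
    "humour": ["not funny", "funny", "very funny", "hilarious"],
    "sarcasm": ["not sarcastic", "little sarcastic", "very sarcastic",
                "extremely sarcastic"],
    "offensive": ["not offensive", "slightly offensive", "very offensive",
                  "hateful offensive"],
    "motivation": ["not motivational", "motivational"],
}

def encode_task_b(labels):
    """Multi-hot vector for Task B, e.g. {'humorous', 'sarcastic'} -> [1, 1, 0, 0]."""
    return [int(e in labels) for e in EMOTIONS]

print(encode_task_b({"humorous", "sarcastic"}))  # [1, 1, 0, 0]
```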
      <sec id="sec-3-1">
        <title>3.2. Dataset</title>
        <p>
          The tasks were conducted as a part of Memotion 2 [
          <xref ref-type="bibr" rid="ref18">31</xref>
          ]. The memes were collected from various
public platforms like Reddit, Instagram, etc. and annotated with the help of Amazon Mechanical
Turk workers. The dataset consists of 10,000 meme images divided into a train-val-test split
with 7000-1500-1500. Each meme is annotated for its overall sentiment, emotion and scale of
each emotion. Images also have their corresponding OCR text extracted with the help of Google
Vision APIs. For a detailed data description, please refer to [
          <xref ref-type="bibr" rid="ref18">31</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Evaluation</title>
        <p>The challenge involves three tasks, with Task A being a multi-class classification problem of
identifying the sentiment (positive, neutral or negative) of a meme. Tasks B and C are both
multi-label classification problems of emotion detection. Scoring is done for each task separately,
and separate leaderboards are generated. For each task, we use weighted average F1 score to
measure the performance of a model. The participants had access to only the train and validation
sets. They were allowed a maximum of 3 submissions on the test set for each task, the
best of which was selected for the leaderboard.</p>
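The weighted-average F1 metric can be computed directly with scikit-learn; the labels below are made up for illustration:

```python
# Weighted-average F1, the metric used for all three tasks.
from sklearn.metrics import f1_score

y_true = ["positive", "neutral", "negative", "neutral", "positive"]
y_pred = ["positive", "negative", "negative", "neutral", "neutral"]

# 'weighted' averages per-class F1 scores, weighting each class by its support.
score = f1_score(y_true, y_pred, average="weighted")
print(round(score, 4))  # 0.6
```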
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Baselines</title>
        <p>
          The baseline models for the tasks were created keeping in mind the multi-modal nature
of the dataset. BERT was used to extract text features from the OCR text of each meme, and
ResNet-50 to extract image features from the meme image. The features were then concatenated and
passed to a classification head to predict labels for each task. For more details about the
baseline, please refer to [
          <xref ref-type="bibr" rid="ref18">31</xref>
          ].
        </p>
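A minimal PyTorch sketch of such a late-fusion classifier follows. The hidden size and head are illustrative assumptions; see [31] for the actual baseline architecture:

```python
import torch
import torch.nn as nn

class LateFusionBaseline(nn.Module):
    """Concatenate text and image features, then classify (illustrative sketch)."""
    def __init__(self, text_dim=768, image_dim=2048, num_classes=3):
        super().__init__()
        # text_dim matches BERT's pooled output (768);
        # image_dim matches ResNet-50 pooled features (2048).
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, text_feat, image_feat):
        # Late fusion: modalities are encoded separately and only
        # combined here, just before classification.
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.head(fused)

model = LateFusionBaseline()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 3])
```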
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Participating systems</title>
      <p>A total of 44 teams participated in the shared task, out of which 8 teams submitted their results for
Tasks A and B, and 7 teams for Task C. We received 6 system description
papers. In this section, we provide a summary of the methods that the teams used.</p>
      <p>
        HCIlab [
        <xref ref-type="bibr" rid="ref19">32</xref>
        ] used EfficientNet-v2 [
        <xref ref-type="bibr" rid="ref20">33</xref>
        ] for learning image embeddings and RoBERTa [
        <xref ref-type="bibr" rid="ref21">34</xref>
        ] for
learning text embeddings. These embeddings were fused using a multihop attention
mechanism [
        <xref ref-type="bibr" rid="ref22">35</xref>
        ], which was followed by a fully connected layer and a classifier. They also use auto
augmentation [
        <xref ref-type="bibr" rid="ref23">36</xref>
        ] and CCA [
        <xref ref-type="bibr" rid="ref24">37</xref>
        ] to improve performance of their system.
      </p>
      <p>
        BROWALLIA [
        <xref ref-type="bibr" rid="ref25">38</xref>
        ] used ResNet50 [
        <xref ref-type="bibr" rid="ref26">39</xref>
        ] and LSTM [
        <xref ref-type="bibr" rid="ref27">40</xref>
        ] to extract image and text embeddings,
respectively. These embeddings are concatenated and given to a classifier. Further, they use
offline gradient blending [
        <xref ref-type="bibr" rid="ref28">41</xref>
        ] to decrease overfitting. In this, they calculate the
overfitting-to-generalization ratio and use it to weigh the loss function.
      </p>
      <p>
        Yet [
        <xref ref-type="bibr" rid="ref29">42</xref>
        ] used VGG-16 [43] to extract image features and GloVe [44] followed by LSTM [
        <xref ref-type="bibr" rid="ref27">40</xref>
        ]
to extract text features. These features are fused using fully-connected layers.
      </p>
      <p>
        Little flower [45] used VGG-16 [43] followed by multi-head attention and dense layer along
with residual connections to extract image features. For extracting text features, they used
BiLSTM [
        <xref ref-type="bibr" rid="ref27">40</xref>
        ] followed by attention mechanism and fully-connected layers along with residual
connection. The text and image features are concatenated in a late fusion. To get the final
prediction, they used an ensemble method. Further, they used a weighted loss function to
account for class imbalance.
      </p>
      <p>BLUE [46] used a 3-branch network, where the branches use EfficientNetV4 [47], CLIP [48]
and sentence transformer [49] respectively, for feature extraction. These features are given to a
multi-task transformer encoder which makes the prediction. They trained the models using
CORAL [50] loss function to predict the intensity of emotions.</p>
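CORAL-style ordinal regression replaces a K-way softmax with K-1 cumulative binary targets; a minimal sketch of that target encoding (our own illustration, not BLUE's code):

```python
# CORAL-style target encoding for ordinal intensity labels (illustrative).
# An intensity level k out of K ordered levels becomes K-1 binary
# indicators answering "is the intensity greater than threshold j?".
def coral_targets(level: int, num_levels: int) -> list:
    return [int(level > j) for j in range(num_levels - 1)]

# 'very offensive' = level 2 on the 4-level offensiveness scale:
print(coral_targets(2, 4))  # [1, 1, 0]
```

Because the binary heads share one set of thresholds, predictions remain rank-consistent, which suits Task C's ordered intensity classes.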
      <p>Amazon PARS [51] used VisualBERT [52] to extract image features and BERT [53] to extract
text features. These features are fed to a transformer. The transformer is trained in a two stage
[54] multi-task manner, where the predictions of task B are fed to predict on Task C.</p>
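One way to realize such a two-stage setup is to condition the Task C head on the Task B predictions; the following is a schematic sketch under that assumption, not the team's actual architecture:

```python
import torch
import torch.nn as nn

class TwoStageHead(nn.Module):
    """Stage 1 predicts the four Task B emotion flags; stage 2 consumes those
    predictions together with the fused features to score Task C intensities
    (schematic sketch; dimensions are illustrative)."""
    def __init__(self, feat_dim=512, num_emotions=4, num_intensities=4):
        super().__init__()
        self.task_b = nn.Linear(feat_dim, num_emotions)
        # The Task C head sees the features plus the Task B probabilities.
        self.task_c = nn.Linear(feat_dim + num_emotions,
                                num_emotions * num_intensities)

    def forward(self, feats):
        b_logits = self.task_b(feats)
        c_input = torch.cat([feats, torch.sigmoid(b_logits)], dim=-1)
        return b_logits, self.task_c(c_input)

b, c = TwoStageHead()(torch.randn(2, 512))
print(b.shape, c.shape)  # torch.Size([2, 4]) torch.Size([2, 16])
```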
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>Table 2 shows the leaderboard for Task B. Five teams managed to cross the overall baseline
score, whereas three teams could not cross the baseline. The maximum scores for Humor,
Sarcasm, Offense, and Motivation are 0.9384, 0.8190, 0.5540, 0.9800, respectively, which are at an
increment of 14.41%, 16.14%, 1.94%, 2.25% from the baseline scores of each emotion, respectively.
The overall top score of 0.8229, with an increment of 8.71% from baseline score of 0.7358, is
achieved by Little Flower [45]. We can see that the ‘Motivation’ class is the easiest class to
detect. Humor is also easy to detect, possibly because most of the memes are meant to be funny.
Sarcasm is the hardest to detect and its scores vary considerably, whereas for other classes, the
teams’ scores are closer to each other.</p>
      <p>Table 3 shows the leaderboard for Task C. Four teams cross the overall baseline score for
this task, and four teams are below the baseline. The maximum scores for Humor, Sarcasm,
Offense, and Motivation are 0.4611, 0.3083, 0.5275, 0.9800, respectively, which are at an increment
of 37.69%, 21.74%, 9.94%, 2.349% from the baseline scores of each emotion, respectively. The
overall top score of 0.5564, with an increment of 9% from baseline score of 0.5105, is achieved by
Amazon PARS [51]. The performance on ‘Motivation’ is much higher than on other emotions
because motivation has only 2 intensities whereas other emotions have 4 intensities. All the
teams perform poorly when detecting the intensity of sarcasm, which shows that most neural
models fail to understand sarcasm. The best overall score is 0.5564, which shows that there is a
lot of scope for improvement.</p>
      <!-- Table 3 (Task C leaderboard) row order: Amazon PARS [51], BROWALLIA [38], BLUE [46], HCILab [32], BASELINE, Yet [42], weipengfei, Greeny -->
      <p>
        Four teams, namely, BLUE [46], HCIlab [
        <xref ref-type="bibr" rid="ref19">32</xref>
        ], Amazon PARS [51], and BROWALLIA [
        <xref ref-type="bibr" rid="ref25">38</xref>
        ]
crossed the baseline scores for all three tasks. Since Task B is a multi-task binary classification
task, its results are better than those of Task A, which is a multi-class classification task. Results
are better on Task B than on Task C, because Task C is more fine-grained.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>In this paper, we report the findings of Memotion 2. We see that all the teams used deep
learning based architectures, and most of the teams used BERT-based models to extract language
features. On the other hand, we see more variety in the models used to extract image features
(VGG, ResNet, EfficientNet, etc.). Further, most systems use late fusion to combine the image and
text modalities. The best results on Task A (Sentiment Analysis), Task B (Emotion Detection),
and Task C (Emotion Intensity Detection) are 0.53, 0.82 and 0.55, respectively, which shows that
there is a large scope for improvement. We also find that detecting the intensity of sarcasm is
very difficult for neural systems.</p>
      <p>Future work could involve adding more data and/or more languages. On the model side,
learning joint embedding or early fusion of modalities could be interesting directions. Memotion
analysis is a relatively new problem and is far from completion. We hope our work attracts
more research attention towards the analysis of memes.</p>
      <p>[2] R. E. Guadagno, D. M. Rempala, S. Murphy, B. M. Okdie, What makes a video go viral?
An analysis of emotional contagion and internet memes, Computers in Human
Behavior 29 (2013) 2312–2319. URL: https://www.sciencedirect.com/science/article/pii/
S0747563213001192. doi:10.1016/j.chb.2013.04.016.
[3] M. Moody-Ramirez, A. B. Church, Analysis of facebook meme groups used during
the 2016 US presidential election, Social Media + Society 5 (2019) 2056305118808799.
URL: https://doi.org/10.1177/2056305118808799. doi:10.1177/2056305118808799.
[4] C. A. Smith, Weaponized iconoclasm in internet memes featuring the
expression ‘fake news’, Discourse &amp; Communication 13 (2019) 303–319. URL:
https://doi.org/10.1177/1750481319835639. doi:10.1177/1750481319835639.
[5] R. Al-Natour, The digital racist fellowship behind the anti-aboriginal internet memes,
Journal of Sociology 57 (2021) 780–805. URL: https://doi.org/10.1177/1440783320964536.
doi:10.1177/1440783320964536.
[6] S. Pramanick, S. Sharma, D. Dimitrov, M. S. Akhtar, P. Nakov, T. Chakraborty, Momenta:
A multimodal framework for detecting harmful memes and their targets, arXiv preprint
arXiv:2109.05184 (2021).
[7] S. Pramanick, D. Dimitrov, R. Mukherjee, S. Sharma, M. Akhtar, P. Nakov, T. Chakraborty,
et al., Detecting harmful memes and their targets, arXiv preprint arXiv:2110.00413 (2021).
[8] L. Yue, W. Chen, X. Li, W. Zuo, M. Yin, A survey of sentiment analysis in social
media, Knowledge and Information Systems 60 (2019) 617–663. URL: https://doi.org/10.1007/
s10115-018-1236-4. doi:10.1007/s10115-018-1236-4.
[9] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, C. Potts, Learning word
vectors for sentiment analysis, in: Proceedings of the 49th Annual Meeting of the
Association for Computational Linguistics: Human Language Technologies,
Association for Computational Linguistics, Portland, Oregon, USA, 2011, pp. 142–150. URL:
http://www.aclweb.org/anthology/P11-1015.
[10] L.-P. Morency, R. Mihalcea, P. Doshi, Towards multimodal sentiment analysis: Harvesting
opinions from the web, ICMI ’11, Association for Computing Machinery, New York,
NY, USA, 2011, p. 169–176. URL: https://doi.org/10.1145/2070481.2070509. doi:10.1145/
2070481.2070509.
[11] A. Pak, P. Paroubek, Twitter as a corpus for sentiment analysis and opinion mining, in:</p>
      <p>LREC, 2010.
[12] Z. Kozareva, B. Navarro, S. Vázquez, A. Montoyo, UA-ZBSA: A headline emotion
classification through web information, in: Proceedings of the Fourth International Workshop on
Semantic Evaluations (SemEval-2007), Association for Computational Linguistics, Prague,
Czech Republic, 2007, pp. 334–337. URL: https://aclanthology.org/S07-1072.
[13] N. C. Dang, M. N. Moreno-García, F. De la Prieta, Sentiment analysis based on deep
learning: A comparative study, Electronics 9 (2020) 483. URL: http://dx.doi.org/10.3390/
electronics9030483. doi:10.3390/electronics9030483.
[14] A. Mostafazadeh Davani, D. Kiela, M. Lambert, B. Vidgen, V. Prabhakaran, Z. Waseem (Eds.),
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), Association
for Computational Linguistics, Online, 2021. URL: https://aclanthology.org/2021.woah-1.0.
[15] P. Patwa, M. Bhardwaj, V. Guptha, G. Kumari, S. Sharma, S. PYKL, A. Das, A. Ekbal,
S. Akhtar, T. Chakraborty, Overview of constraint 2021 shared tasks: Detecting english
covid-19 fake news and hindi hostile posts, in: Proceedings of the First Workshop on Combating
Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT), Springer, 2021.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dawkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>The selfish gene</article-title>
          ,
          <source>Macat Library</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Guadagno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Rempala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. M.</given-names>
            <surname>Okdie</surname>
          </string-name>
          ,
          <article-title>What makes a video go viral? An analysis of emotional contagion and internet memes</article-title>
          ,
          <source>Computers in Human Behavior</source>
          29 (
          <year>2013</year>
          ) 2312–2319. URL: https://www.sciencedirect.com/science/article/pii/S0747563213001192. doi:10.1016/j.chb.2013.04.016.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Patwa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pykl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Guptha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kumari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Akhtar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ekbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <article-title>Fighting an infodemic: Covid-19 fake news dataset, in: Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT) 2021</article-title>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>29</lpage>
          . URL: http://dx.doi.org/10.1007/978-3-030-73696-5_3. doi:10.1007/978-3-030-73696-5_3
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Siegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ruppenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klenner</surname>
          </string-name>
          ,
          <article-title>Overview of GermEval task 2, 2019 shared task on the identification of offensive language</article-title>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Basile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bosco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fersini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nozza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Patti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Rangel Pardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sanguinetti</surname>
          </string-name>
          ,
          <article-title>SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter</article-title>
          ,
          <source>in: Proceedings of the 13th International Workshop on Semantic Evaluation</source>
          ,
          Association for Computational Linguistics
          , Minneapolis, Minnesota, USA,
          <year>2019</year>
          , pp.
          <fpage>54</fpage>
          -
          <lpage>63</lpage>
          . URL: https://aclanthology.org/S19-2007. doi:10.18653/v1/S19-2007.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gibert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Karatzas</surname>
          </string-name>
          ,
          <article-title>Exploring hate speech detection in multimodal publications</article-title>
          ,
          <year>2019</year>
          . arXiv:1910.03814.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Ojha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Malmasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zampieri</surname>
          </string-name>
          ,
          <article-title>Benchmarking aggression identification in social media</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)</source>
          ,
          Association for Computational Linguistics, Santa Fe, New Mexico, USA,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . URL: https://aclanthology.org/W18-4401.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>MacAvaney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-R.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goharian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Frieder</surname>
          </string-name>
          ,
          <article-title>Hate speech detection: Challenges and solutions</article-title>
          ,
          <source>PLOS ONE</source>
          <volume>14</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . URL: https://doi.org/10.1371/journal.pone.0221152. doi:10.1371/journal.pone.0221152.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Jahan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Oussalah</surname>
          </string-name>
          ,
          <article-title>A systematic review of hate speech automatic detection using natural language processing</article-title>
          ,
          <year>2021</year>
          . arXiv:2106.00742.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bhageria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>PYKL</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pulabaigari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gambäck</surname>
          </string-name>
          ,
          <article-title>SemEval-2020 task 8: Memotion analysis - the visuo-lingual metaphor!</article-title>
          ,
          <year>2020</year>
          . arXiv:2008.03781.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kiela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Firooz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ringshia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Testuggine</surname>
          </string-name>
          ,
          <article-title>The hateful memes challenge: Detecting hate speech in multimodal memes</article-title>
          ,
          <year>2021</year>
          . arXiv:2005.04790.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>S.</given-names>
            <surname>Suryawanshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arcan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Buitelaar</surname>
          </string-name>
          ,
          <article-title>Multimodal meme dataset (MultiOFF) for identifying offensive content in image and text</article-title>
          ,
          <source>in: Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, European Language Resources Association (ELRA)</source>
          , Marseille, France,
          <year>2020</year>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>41</lpage>
          . URL: https://aclanthology.org/2020.trac-1.6.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Miliani</surname>
          </string-name>
          , G. Giorgi, I. Rama, G. Anselmi, G. Lebani,
          <article-title>DANKMEMES @ EVALITA 2020: The Memeing of Life: Memes, Multimodality and Politics</article-title>
          ,
          <year>2020</year>
          , pp.
          <fpage>275</fpage>
          -
          <lpage>283</lpage>
          . doi:10.4000/books.aaccademia.7330.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>S.</given-names>
            <surname>Suryawanshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>Findings of the shared task on troll meme classification in Tamil</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, Association for Computational Linguistics</source>
          , Kyiv,
          <year>2021</year>
          , pp.
          <fpage>126</fpage>
          -
          <lpage>132</lpage>
          . URL: https://aclanthology.org/2021.dravidianlangtech-1.16.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Guoym at SemEval-2020 task 8: Ensemble-based classification of visuo-lingual metaphor in memes</article-title>
          ,
          <source>in: Proceedings of the Fourteenth Workshop on Semantic Evaluation</source>
          , International Committee for Computational Linguistics, Barcelona (online)
          ,
          <year>2020</year>
          , pp.
          <fpage>1120</fpage>
          -
          <lpage>1125</lpage>
          . URL: https://aclanthology.org/2020.semeval-1.148. doi:10.18653/v1/2020.semeval-1.148.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>G.-A.</given-names>
            <surname>Vlad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-E.</given-names>
            <surname>Zaharia</surname>
          </string-name>
          , D.-C. Cercel, C.-G. Chiru,
          <string-name>
            <given-names>S.</given-names>
            <surname>Trausan-Matu</surname>
          </string-name>
          ,
          <article-title>UPB at SemEval-2020 task 8: Joint textual and visual modeling in a multi-task learning architecture for memotion analysis</article-title>
          ,
          <year>2020</year>
          . arXiv:2009.02779.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Enhance multimodal transformer with external label and in-domain pretrain: Hateful meme challenge winning solution</article-title>
          ,
          <year>2020</year>
          . arXiv:2012.08290.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramamoorthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gunti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Suryavardan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Reganti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Patwa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sheth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ekbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ahuja</surname>
          </string-name>
          ,
          <article-title>Memotion 2: Dataset on sentiment and emotion analysis of memes</article-title>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>T. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. N. Ngoc</given-names>
            <surname>Duy Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          , Y.-G. Kim,
          <article-title>HCILab at Memotion 2.0 2022: Analysis of sentiment, emotion and intensity of emotion classes from meme images using single and multi modalities</article-title>
          ,
          <source>in: Proceedings of De-Factify: Workshop on Multimodal Fact Checking and Hate Speech Detection</source>
          , CEUR,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>EfficientNetV2: Smaller models and faster training</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>10096</fpage>
          -
          <lpage>10106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>RoBERTa: A robustly optimized BERT pretraining approach</article-title>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pramanick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Akhtar</surname>
          </string-name>
          , T. Chakraborty,
          <article-title>Exercise? I thought you said 'extra fries': Leveraging sentence demarcations and multi-hop attention for meme affect analysis</article-title>
          ,
          <year>2021</year>
          . arXiv:2103.12377.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Cubuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mané</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasudevan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Autoaugment: Learning augmentation policies from data</article-title>
          ,
          <year>2019</year>
          . URL: https://arxiv.org/pdf/1805.09501.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>G.</given-names>
            <surname>Andrew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Arora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bilmes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Livescu</surname>
          </string-name>
          ,
          <article-title>Deep canonical correlation analysis</article-title>
          , in: S. Dasgupta, D. McAllester (Eds.),
          <source>Proceedings of the 30th International Conference on Machine Learning</source>
          , volume
          <volume>28</volume>
          of
          <source>Proceedings of Machine Learning Research</source>
          , PMLR, Atlanta, Georgia, USA,
          <year>2013</year>
          , pp.
          <fpage>1247</fpage>
          -
          <lpage>1255</lpage>
          . URL: https://proceedings.mlr.press/v28/andrew13.html.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>B.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>BROWALLIA at Memotion 2.0 2022: Multimodal memotion analysis with modified OGB strategies</article-title>
          ,
          <source>in: Proceedings of De-Factify: Workshop on Multimodal Fact Checking and Hate Speech Detection</source>
          , CEUR
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Ren,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . doi:10.1109/CVPR.2016.90.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>Long short-term memory</article-title>
          ,
          <source>Neural Comput.</source>
          <volume>9</volume>
          (
          <year>1997</year>
          )
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          . URL: https://doi.org/10.1162/neco.1997.9.8.1735. doi:10.1162/neco.1997.9.8.1735.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Feiszli</surname>
          </string-name>
          ,
          <article-title>What makes training multi-modal classification networks hard?</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>12695</fpage>
          -
          <lpage>12705</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Yet at Memotion 2.0 2022: Hate speech detection combining BiLSTM and fully connected layers</article-title>
          ,
          <source>in: Proceedings of De-Factify: Workshop on Multimodal Fact Checking and Hate Speech Detection</source>
          , CEUR
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>