<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Homophobia and Transphobia Detection of YouTube Comments in Code-Mixed Dravidian Languages using Deep Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>P Pranith</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>V Samhita</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D Sarath</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Durairaj Thenmozhi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Sri Sivasubramaniya Nadar College of Engineering</institution>
          ,
          <addr-line>Kalavakkam, Chennai</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Homophobia and transphobia detection is the task of identifying homophobic, transphobic, and non-anti-LGBT+ content in a given corpus. Homophobia and transphobia are both forms of toxic language directed at LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, and Queer) individuals and are described as hate speech. The shared task we worked on was to predict whether a given comment was homophobic or transphobic in nature. We were provided with a corpus of comments in Tamil, Malayalam, Tamil-English, and English. We used the IndicBERT model for the Tamil, Malayalam, and Tamil-English datasets and the LaBSE model for the English dataset. We achieved weighted average F1 scores of 0.46, 0.54, 0.39, and 0.28 for English, Malayalam, Tamil-English, and Tamil respectively.[1]</p>
      </abstract>
      <kwd-group>
        <kwd>Homophobia</kwd>
        <kwd>Transphobia</kwd>
        <kwd>LGBTQ+</kwd>
        <kwd>IndicBERT</kwd>
        <kwd>Transformers</kwd>
        <kwd>LaBSE</kwd>
        <kwd>Tokenizer</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        With the advancing reach of social media in today’s world, an increasing number of people are
getting online to consume content, share messages and consequently express their views and
opinions.[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] However, people often misuse this freedom to make comments propagating hatred
and toxicity. YouTube, especially, is a popular platform due to the ease with which users can
share content (videos, posts, shots) and like, share, and comment on said content. The
downside to this is that it gives room for more online harassment and overt cyberbullying.[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] This
often has a drastic impact on the lives of the affected individuals/communities. The LGBTQ+
community especially has often been subjected to such hate and bullying. Sexual orientation
and gender identity[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] are integral elements of a person’s identity and must,
hence, never be grounds for discrimination. Yet, individuals of the community are victims of
harmful and ignorant comments[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], abuse, and threats. The number of hate crimes against the
community continues to be on the rise with the increasing use of social media.
      </p>
      <p>The shared task focuses on identifying comments that are homophobic or transphobic
and could potentially cause great harm. Hence, we employ deep learning-based
NLP models that can be trained to detect such homophobic and transphobic content, helping
social media platforms deal with it and ensuring social media remains safe for
all.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Task Description</title>
      <p>
        Homophobia and transphobia are forms of toxic language aimed at the LGBTQ+ community. The
goal of the shared task (task B) is to develop systems that identify homophobic, transphobic,
and non-anti-LGBT+ content in the given corpus[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and can thus predict the nature of subsequent data fed into the system, enabling
measures against hate towards the LGBTQ+ community. We approach the task as binary
classification with the labels non-anti-LGBT+ and homophobic/transphobic.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset Description</title>
      <p>
        The corpus[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] consisted of a collection of comments from YouTube and other social media
platforms. The comments came in four language variants: Tamil, Malayalam, English,
and Tamil-English. Each comment was given a label indicating its nature (Homophobic,
Transphobic, or Non-anti-LGBT+).
      </p>
      <p>The data was split into three sections: one for training, one for development, and one for
testing. The Tamil, English, and Tamil-English files were provided in CSV format, while the
Malayalam file was provided in TSV format.</p>
      <p>The third and final section consisted of the test data, which was provided in CSV format. The
test data contained only the comments and their respective IDs; no labels were provided. The
test data was fed into the fully developed model to produce the required results.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>
        The task was approached as a binary-label text classification task using transformers.
The data is preprocessed and made suitable for training by creating a data frame
of all the data instances. Model training is done by fine-tuning the parameters of two
transformer models, LaBSE[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and IndicBERT[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <sec id="sec-4-1">
        <title>4.1. Data Preparation</title>
        <p>Data preprocessing is important for any machine learning problem. The available data is often
incoherent and unorganized and must hence be cleaned into a more organized format
that can easily be processed and used to draw results. The irregularities in the data are
removed in the preprocessing stage to make it suitable for training. The preprocessing steps for this
machine learning problem were as follows:
• Removal of special characters such as ‘#’, ‘@’, ‘;’ and ‘!’ - Special characters such
as ‘#’, ‘:’ and ‘@’ are very commonly used in comments on social media platforms such
as YouTube. These characters can affect the training of the model, as words that end
with special characters are treated differently by the model. Hence, they must be removed
before training so as not to interfere with the training of the model.
• Replacement of emojis with appropriate words - Emoticons convey emotional
expression in a text and hence must be preserved. In order to take into account the
emotions that they bring to a message, they are replaced with appropriate words that
convey the same meaning.</p>
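        <p>The two preprocessing steps above can be sketched in Python. This is a minimal illustration, not the exact code used: the special-character set and the emoji-to-word map are assumptions (a library such as the emoji package could supply the replacement words instead).</p>

```python
import re

# Illustrative emoji-to-word map; in practice a library such as `emoji`
# (e.g. emoji.demojize) could supply the replacement words.
EMOJI_WORDS = {
    "\U0001F600": " grinning face ",
    "\U0001F44D": " thumbs up ",
    "\U0001F4AF": " hundred points ",
}

def preprocess(comment: str) -> str:
    # Replace emojis first so their emotional content is preserved as words.
    for emoji_char, words in EMOJI_WORDS.items():
        comment = comment.replace(emoji_char, words)
    # Remove special characters such as '#', '@', ';' and '!'.
    comment = re.sub(r"[#@;!:,.?*&%]", " ", comment)
    # Collapse the extra whitespace introduced above.
    return re.sub(r"\s+", " ", comment).strip()

print(preprocess("Love this video!!! \U0001F44D #pride @user"))
# -> Love this video thumbs up pride user
```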
        <p>The training and development data were concatenated and put into a data frame. The
eighty-twenty rule was then applied: eighty percent of the data was used for training and twenty
percent was used to test the model’s performance. Label encoding was applied to the
training and testing datasets to convert the labels into machine-readable form. The
resulting F1 score, recall, and precision were evaluated.</p>
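        <p>A minimal sketch of the split and label encoding, assuming a toy data frame (the column names and example rows are illustrative, not the shared task's actual schema):</p>

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame standing in for the concatenated training+development comments.
df = pd.DataFrame({
    "text": ["comment a", "comment b", "comment c", "comment d", "comment e"],
    "category": ["Non-anti-LGBT+ content", "Homophobic",
                 "Non-anti-LGBT+ content", "Transphobic",
                 "Non-anti-LGBT+ content"],
})

# Encode labels into machine-readable form using the key given in the paper:
# 0 = Homophobic or Transphobic, 1 = Non-anti-LGBT+.
df["label"] = (df["category"] == "Non-anti-LGBT+ content").astype(int)

# Eighty-twenty split: 80% for training, 20% held out to test performance.
# random_state is fixed only so the split is reproducible.
train_texts, test_texts, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)

print(len(train_texts), len(test_texts))  # -> 4 1
```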
        <p>Key for the sample input data: 0: Homophobic or Transphobic; 1: Non-Anti-LGBTQ+.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Training the models</title>
        <p>
          The transformer models IndicBERT and LaBSE were used; their implementation is
explained below. Both models were trained for 3 epochs (the remaining hyperparameters
were kept at their default values). IndicBERT uses an AutoTokenizer with keep_accents
set to true.
4.2.1. AI4Bharat/IndicBERT
IndicBERT[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] is a multilingual ALBERT[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] model pretrained on 12 major Indian
languages. It is pretrained on the AI4Bharat monolingual corpus, which contains about 9
billion tokens, and then evaluated on a variety of tasks. IndicBERT performs on par with or better
than other multilingual models (mBERT, XLM-R, etc.) despite using fewer parameters.
        </p>
        <p>The languages that IndicBERT covers are: Assamese, Bengali, English, Gujarati, Hindi,
Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.</p>
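        <p>A hedged sketch of loading IndicBERT for fine-tuning with the Hugging Face transformers library: the checkpoint ids are the publicly hosted ones and the binary label count follows our task setup, but the surrounding training loop (3 epochs, default hyperparameters) is left to the standard Trainer API.</p>

```python
# Which checkpoint handles which dataset in our setup; the LaBSE id shown is
# the publicly hosted sentence-transformers checkpoint (an assumption here).
MODEL_FOR_LANGUAGE = {
    "tamil": "ai4bharat/indic-bert",
    "malayalam": "ai4bharat/indic-bert",
    "tamil-english": "ai4bharat/indic-bert",
    "english": "sentence-transformers/LaBSE",
}

def load_indicbert(num_labels: int = 2):
    # Imported lazily so the mapping above is usable without the dependency.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # keep_accents=True mirrors the "accents set to true" tokenizer setting.
    tokenizer = AutoTokenizer.from_pretrained(
        "ai4bharat/indic-bert", keep_accents=True)
    # A binary classification head (num_labels=2) matches the task's two labels.
    model = AutoModelForSequenceClassification.from_pretrained(
        "ai4bharat/indic-bert", num_labels=num_labels)
    return tokenizer, model
```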
        <p>
          For our implementation, the AI4Bharat/IndicBERT model was used on the vernacular
languages (Tamil, Malayalam, and Tamil-English).
4.2.2. LaBSE
LaBSE, a multilingual embedding model[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], is a powerful tool that can be used for a variety of
downstream tasks, including text classification and clustering. It works by
encoding text from several languages into a common embedding space while leveraging
semantic information to understand language. Earlier methods for creating such
embeddings, such as LASER or mUSE, rely on parallel data and map a sentence from one
language to another directly in order to enforce consistency among the sentence embeddings.
        </p>
        <p>The LaBSE model was used for the English dataset.</p>
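        <p>As an illustration of the shared embedding space, the following sketch (assuming the sentence-transformers package and its publicly hosted LaBSE checkpoint) embeds comments and compares them with cosine similarity:</p>

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two vectors in the shared embedding space.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_comments(comments):
    # Imported lazily; requires the sentence-transformers package and
    # downloads the publicly hosted LaBSE checkpoint on first use.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("sentence-transformers/LaBSE")
    # Returns one 768-dimensional vector per comment, regardless of language.
    return model.encode(comments)
```

Comments in different languages with similar meaning land close together in this space, which is what makes a single classifier over LaBSE embeddings feasible.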
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Post processing</title>
        <p>After training, sample test data was generated from twenty percent of the concatenated
training and development data and fed into the model to evaluate its performance. Once the
performance metrics were verified, the model was run on the actual test data provided.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>
        The LaBSE model used on the English dataset achieved a mean F1-score of 0.4625, and our
team was ranked third for the same. The IndicBERT model was used for the Tamil-English, Malayalam,
and Tamil datasets. It achieved a mean F1-score of 0.393 for the Tamil-English dataset, for which
we were placed third, and a mean F1-score of 0.542 for the Malayalam dataset, for which we were
placed seventh. For the Tamil dataset, the model achieved a mean F1-score of 0.228 and was
ranked eighth. The following performance metrics[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] were used to evaluate the predicted
labels. The required formulas are provided below:
      </p>
      <p>Precision: Precision is the fraction of positive predictions that are correct. It
is the ratio of true positives to total predicted positives: Precision = TP / (TP + FP).</p>
      <p>Recall: Recall is the fraction of actual positives that are correctly identified. It
is the ratio of true positives to total actual positives: Recall = TP / (TP + FN).</p>
      <p>F1-score: The F1-score is the harmonic mean of precision and recall:
F1 = 2 × Precision × Recall / (Precision + Recall). The weighted average F1-score weights
each class’s F1-score by its support.</p>
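      <p>For example, these weighted-average metrics can be computed with scikit-learn (the labels below are toy values, not task data):</p>

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy gold labels and predictions; 0 = homophobic/transphobic, 1 = non-anti-LGBT+.
y_true = [1, 1, 0, 1, 0, 1, 0, 1]
y_pred = [1, 1, 1, 1, 0, 1, 0, 0]

# average="weighted" weights each class's score by its support, matching the
# weighted average F1 reported for the shared task.
precision = precision_score(y_true, y_pred, average="weighted")
recall = recall_score(y_true, y_pred, average="weighted")
f1 = f1_score(y_true, y_pred, average="weighted")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# -> precision=0.75 recall=0.75 f1=0.75
```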
    </sec>
    <sec id="sec-6">
      <title>6. Error Analysis</title>
      <p>Both the AI4Bharat/IndicBERT and LaBSE models fail to attain perfect scores in identifying
homophobic or transphobic comments (label 0). These scores were improved by making changes to
the data preprocessing. Initially, when emojis were completely removed from the text, the
obtained scores were unsatisfactory. Thus, emoji replacement was done instead, as emoticons
provide meaning and emotion to each sentence.</p>
      <p>
        The IndicBERT model did not always predict the correct label. The
Malayalam sentence below, for instance, was incorrectly identified as a non-anti-LGBTQ+ statement. A
possible explanation is that the IndicBERT model is trained on native Indian
language scripts and not on their transliterated counterparts, such as Malayalam sentences
transliterated into English[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Hence, the model may find it difficult to identify the
transliterated sentences in the corpus properly. One solution could be to try models such
as MuRIL, which is trained on a corpus of traditional Indian scripts as well as their
transliterated versions. This could be done for all the vernacular language datasets.
e.g., Mal-06 - Mallu boy Boy avark purushan aayi purusha lingathodu koodi janichavark sthree
hormon kooduthal aanenkilum oru poorna sthree aayi jeevikan orikkalum kazhiyilla athukond
avante sareera ghadana enthano athinu anusrich manassine pakappeduthi jeevikuka allathe
ith randum ketta oru vibhakamayi enthinu jeevikanam daivam 2 vibhakathine mathrame
create cheythitullu aanineyum pennineyum
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>The paper discusses our approach to identifying comments as homophobic,
transphobic, or non-anti-LGBT+. Our implementation used a mixture of IndicBERT for the
vernacular languages and LaBSE for English. The submitted IndicBERT model achieved an F1 score
of 0.228 for Tamil, 0.542 for Malayalam, and 0.393 for Tamil-English. The LaBSE submission
received an F1 score of 0.4625.</p>
      <p>On the whole, LaBSE seems to have performed better than the IndicBERT
model on the respective datasets. In the future, performance could be improved by
performing additional preprocessing steps on the provided data and, perhaps, by making
use of external datasets to improve the accuracy of the transformer model
implementations. As a future direction, an ensemble approach could also be adopted to
achieve a more accurate model.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shumugavadivel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subramanian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          , B. B,
          <string-name>
            <given-names>S. Chinnaudayar</given-names>
            <surname>Navaneethakrishnan</surname>
          </string-name>
          , L. S.K, T. Mandl,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Palanikumar</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. Balaji</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <article-title>Overview of the Shared Task on Sentiment Analysis and Homophobia Detection of YouTube Comments in Code-Mixed Dravidian Languages</article-title>
          , in: Working Notes of FIRE 2022 -
          <article-title>Forum for Information Retrieval Evaluation</article-title>
          , CEUR,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Durairaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>McCrae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Buitelaar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <article-title>Overview of the shared task on homophobia and transphobia detection in social media comments</article-title>
          ,
          <source>in: Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>369</fpage>
          -
          <lpage>377</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sampath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thenmozhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thangasamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nallathambi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>McCrae</surname>
          </string-name>
          ,
          <article-title>Dataset for identification of homophobia and transophobia in multilingual youtube comments</article-title>
          ,
          <source>arXiv preprint arXiv:2109.00227</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>J. B. PhD</surname>
            , J.
            <given-names>S.</given-names>
            PhD, S.
          </string-name>
          <string-name>
            <surname>Catalan</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Gómez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Longueira</surname>
          </string-name>
          ,
          <article-title>Discrimination and victimization: Parade for lesbian, gay, bisexual, and transgender (lgbt) pride, in chile</article-title>
          ,
          <source>Journal of Homosexuality</source>
          <volume>57</volume>
          (
          <year>2010</year>
          )
          <fpage>760</fpage>
          -
          <lpage>775</lpage>
          . URL: https://doi.org/10.1080/00918369.2010.485880. doi:10.1080/00918369.2010.485880. PMID: 20582801.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C. G.</given-names>
            <surname>Escobar-Viera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Whitfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Wessel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shensa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Sidani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Chandler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Marshal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Primack</surname>
          </string-name>
          ,
          <article-title>For better or for worse? a systematic review of the evidence on social media use and depression among lesbian, gay, and bisexual minorities</article-title>
          ,
          <source>JMIR mental health 5</source>
          (
          <year>2018</year>
          )
          <article-title>e10496</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Arivazhagan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Language-agnostic bert sentence embedding</article-title>
          , arXiv preprint arXiv:2007.01852 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kakwani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kunchukuttan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Golla</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. N.C.</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. M. Khapra</surname>
          </string-name>
          , P. Kumar,
          <article-title>IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages</article-title>
          , in: Findings of EMNLP,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goodman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gimpel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sharma</surname>
          </string-name>
          , R. Soricut,
          <article-title>ALBERT: A lite BERT for self-supervised learning of language representations</article-title>
          , arXiv preprint arXiv:1909.11942 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bharathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Varsha</surname>
          </string-name>
          ,
          <article-title>SSNCSE_NLP@TamilNLP-ACL2022: Transformer based approach for emotion analysis in Tamil language</article-title>
          ,
          <source>in: Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>125</fpage>
          -
          <lpage>131</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Swaminathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Divyasri</surname>
          </string-name>
          , G. Gayathri,
          <string-name>
            <given-names>T.</given-names>
            <surname>Durairaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bharathi</surname>
          </string-name>
          ,
          <article-title>Pandas@ abusive comment detection in tamil code-mixed data using custom embeddings with labse</article-title>
          ,
          <source>in: Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Hate speech and offensive language identification on multilingual code mixed text using BERT</article-title>
          , in: Working Notes of FIRE 2021-
          <article-title>Forum for Information Retrieval Evaluation (Online)</article-title>
          .
          <source>CEUR</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>