<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Symposium on the irreproducible science, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multi-label Classification using BERT and Knowledge Graphs with a Limited Training Dataset</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Malick Ebiele</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucy McKenna</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Malika Bendechache</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rob Brennan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ADAPT Centre, School of Computer Science, University College Dublin</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ADAPT Centre, School of Computing, Dublin City University</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>0</volume>
      <fpage>7</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>This paper presents a new approach combining BERT and Knowledge Graphs (KGs) to solve a multi-label classification problem with limited training data. The paper introduces a method of using taxonomies and a dataset with 518 entries and 340 concepts to fine-tune BERT. It also introduces a new data augmentation technique called Perfect Binary Tree (PBT)-Flow to deal with limited or imbalanced training data. The proposed approach obtained a Recall@10 of 61.12%, a Precision@10 of 11.86% and an F1-score@10 of 18.83%. While these results seem low, they are promising given the simple architecture of the model used (BERT+2xFC), the limited size of the training data, and the large number of output concepts.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-label classification</kwd>
        <kwd>BERT</kwd>
        <kwd>Knowledge graphs</kwd>
        <kwd>Data augmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The ARK-Virus Project [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] was selected as a use case. The ARK-Virus Project is an extension of
the ARK Platform [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] for the risk management of personal protective equipment in healthcare
settings during the COVID-19 pandemic. This is discussed in more detail in the Use Case and
Requirements section below (Section 3).
      </p>
      <p>This paper has two main contributions. First, a method that uses KGs to simplify
multi-label classification by reducing the number of output concepts. Second, the presentation of
the PBT-Flow data augmentation technique for dealing with limited or imbalanced training
datasets.</p>
      <p>The remainder of this paper is structured as follows: Section 2 presents the related work.
Section 3 describes the use case and requirements. Section 4 presents the design of the proposed
approach. Section 5 details the experimental settings. Section 6 provides an evaluation and
Section 7 presents the conclusion.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Different mechanisms have been used to solve multi-label classification problems. Rios and
Kavuluru [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed a model combining a Convolutional Neural Network (CNN) and a 2-layer
Graph CNN (GCNN) to perform experiments on MIMIC II and MIMIC III. Heo et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], on the
other hand, proposed D2SBERT, a sequence of n BERT+Multilayer Perceptrons (MLPs) with an
attention layer in between, for medical discharge summary code prediction. Finally, Khezrian et
al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] introduced TagBERT (BERT+CNN+MLP) to produce tag recommendations for online
Q&amp;A communities and performed experiments on the Freecode dataset. None of these previous
works leveraged a hierarchy in the label space or performed optimised data
augmentation. This work differs from them on two levels: first, by leveraging the hierarchy of
the label space using a KG; second, by introducing and using an optimised data augmentation
technique.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Use Case and Requirements</title>
      <p>The ARK-Virus Project uses the ARK Platform, a socio-technical risk governance system, to
manage and analyse risk projects in the area of infection prevention and control. Data entered on
the ARK Platform can be annotated with concepts from controlled SKOS (Simple Knowledge
Organization System) taxonomies: the ARK Risk Terminology and the ARK Health Terminology.
These taxonomies contain a combined total of 525 concepts plus definitions. The taxonomies
use a three-layer hierarchy with the top level having a total of 141 concepts. It is also worth
mentioning that these taxonomies have been built, used, and validated by domain experts over
the past few years. The annotation of text on the ARK Platform is currently a manual process
which, given the large number of concepts, can be time-consuming. Providing a set of suggested
concepts, based on text entered into the ARK Platform, would be extremely useful to users.
This paper demonstrates how KGs and BERT can be used together to solve this multi-label
classification problem.</p>
      <p>The SKOS reference is available at https://www.w3.org/TR/skos-reference/. The taxonomies and a platform demo are available at https://openark.adaptcentre.ie/.</p>
      <p>
        The approach presented in this paper can be applied to other use cases. The only requirement
that needs to be met is a hierarchical label
space. However, the ontology of the label space should be well formed, semantically consistent,
and of high quality [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] with a reasonable number of concepts in the top layer. The KG’s structure
can negatively affect the model performance, for example, if a broad domain is modelled with
a narrow taxonomy. This will lead to a loss of semantics which will negatively impact the
performance of the proposed approach. Future work will assess how much this loss impacts
the performance of the proposed approach. In this paper, the top-level labels have been used for
one main reason: the label taxonomies only have a three-layer hierarchy, which makes the loss
of semantics acceptable compared to a much more complex multi-label classification task given
the low resources. For a deeper taxonomy, one could use the second layer, the second-to-last
layer, or any other layer. The idea is to leverage the taxonomy hierarchy to simplify the problem
at hand.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Design</title>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental Settings</title>
      <p>
        Given the limited training data, data augmentation was used to acquire more data. Data
augmentation is a technique for increasing the diversity of training data without explicitly
collecting new data [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The PBT-Flow data augmentation technique consists of applying a
set of data augmentation techniques to the input data following a Perfect Binary Tree (PBT)
structure. The nodes of the tree represent datasets and the edges indicate whether or not an
augmentation technique was applied to the data. Each node has two children: one child is the
result of applying an augmentation to the node's data and the other child is a copy of the parent
(no augmentation applied). The same augmentation technique is applied to every node at the same
depth, and each augmentation technique is applied once. The output of PBT-Flow is the
concatenation of the leaves of the PBT. PBT-Flow generates a new dataset of n·2<sup>m</sup> entries from
an input dataset of n entries and a set of m data augmentation techniques.
      </p>
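      <p>The construction above can be sketched in a few lines of Python (a minimal illustration written for this paper, not the released PBT-Flow implementation; the toy augmentation functions stand in for the actual techniques used):</p>

```python
def pbt_flow(data, augmentations):
    """Expand a dataset following a Perfect Binary Tree: at each depth,
    every node yields an augmented child and a verbatim copy, so m
    augmentations turn n entries into n * 2**m leaves before de-duplication."""
    level = [list(data)]                 # root node holds the original dataset
    for augment in augmentations:        # one augmentation technique per depth
        next_level = []
        for node in level:
            next_level.append([augment(x) for x in node])  # augmented child
            next_level.append(list(node))                  # unmodified copy
        level = next_level
    leaves = [x for node in level for x in node]  # concatenate all leaves
    return list(dict.fromkeys(leaves))            # drop duplicate entries

# Toy stand-ins for synonym replacement, back translation, etc.
out = pbt_flow(["risk of infection", "ppe reuse"],
               [str.upper, lambda s: s + "!"])
# 2 entries and 2 techniques give 2 * 2**2 = 8 entries before de-duplication.
```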
      <p>
        PBT-Flow used five augmentation techniques: two Synonym Replacements [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] (k words,
with k between 1 and 10, from the original text are replaced with their respective synonyms
using the PPDB and WordNet vocabulary databases), Back Translation [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] (the original text in
English is translated to German and then translated back), Random Swap [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] (randomly
swap k words of the original text), and Contextual Augmentation [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] (k words in the original text
are replaced with other words with paradigmatic relations). Applying the PBT-Flow technique
to the original training data of 259 entries resulted in a training dataset of 5232 entries (after
removing duplicate entries and NAs). For GAN-BERT [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], entries with fewer than 4 concepts
had their labels removed, which resulted in sets of 809 labelled and 4423 unlabelled training entries.
      </p>
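      <p>The labelled/unlabelled split used for GAN-BERT can be illustrated as follows (a hypothetical sketch: the tuple format and the helper name are assumptions made for this example, not taken from the authors' code):</p>

```python
def split_for_gan_bert(entries, min_concepts=4):
    """Partition entries for semi-supervised training: entries annotated
    with fewer than `min_concepts` concepts have their labels discarded
    and join the unlabelled pool. Entries are (text, concept_list) tuples
    (an assumed format for this sketch)."""
    labelled, unlabelled = [], []
    for text, concepts in entries:
        if len(concepts) >= min_concepts:
            labelled.append((text, concepts))
        else:
            unlabelled.append(text)  # labels dropped: joins the unlabelled set
    return labelled, unlabelled

entries = [("glove reuse policy", ["risk", "ppe", "policy", "hygiene"]),
           ("mask storage", ["ppe"])]
labelled, unlabelled = split_for_gan_bert(entries)
# The 4-concept entry stays labelled; "mask storage" becomes unlabelled text.
```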
    </sec>
    <sec id="sec-6">
      <title>6. Evaluation</title>
      <p>
        Two approaches have been used to fine-tune BERT: a supervised approach and a semi-supervised
approach based on GAN-BERT [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The Generator and the Discriminator of GAN-BERT are
defined as discussed in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, the softmax function has been replaced by the sigmoid
function and the cross-entropy loss by the binary cross-entropy loss. The classifier of the supervised
learning model is the same as the Discriminator minus one neuron in the output layer (the
output layer of the Discriminator has 116 + 1 neurons; one extra neuron to output the
probability of the input text being fake or real). The data was augmented using the five
augmentation techniques mentioned above. BERT was trained for 34 epochs and GAN-BERT for 11
epochs. Early stopping monitoring the validation loss, with patience and min_delta equal to 5
and 5e-05 respectively, was used. From Table 1 it can be seen that the GAN-BERT+PBT-Flow
model outperformed the other models by margins ranging from 0.24% (Recall@10) to 11.89% (Recall@5). In general,
models using PBT-Flow outperformed the others. These results corroborate the experimental results
presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which showed that GAN-BERT outperformed BERT when both models
are fine-tuned using very limited labelled data.
      </p>
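      <p>The substitution described above (sigmoid in place of softmax, binary cross-entropy in place of cross-entropy) can be sketched numerically (a minimal NumPy illustration of the multi-label loss only, not the GAN-BERT implementation; the 116 output neurons follow the concept count discussed above):</p>

```python
import numpy as np

def sigmoid(z):
    # Each output neuron becomes an independent Bernoulli probability.
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(logits, targets, eps=1e-12):
    """Multi-label loss: every concept logit is squashed with a sigmoid and
    penalised against its own 0/1 target, unlike softmax + cross-entropy,
    which forces the outputs to compete for a single label."""
    p = sigmoid(logits)
    return float(-np.mean(targets * np.log(p + eps)
                          + (1.0 - targets) * np.log(1.0 - p + eps)))

# One entry with 4 of 116 concepts active (illustrative shapes only).
rng = np.random.default_rng(0)
logits = rng.normal(size=116)
targets = np.zeros(116)
targets[:4] = 1.0
loss = binary_cross_entropy(logits, targets)  # positive scalar loss
```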
      <p>One can notice that the margin of improvement for GAN-BERT is greater than for BERT. This
could be due to the filtering out of inputs labelled with fewer than 4 concepts for this model.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This paper demonstrates that combining KGs and PBT-Flow improves BERT models’ performance
for multi-label classification, in both supervised and semi-supervised approaches. These results
are promising; combined with a threshold, the approach could be used to suggest good concepts. (The
source code of PBT-Flow is available at https://github.com/malick-jaures/research/tree/main/PBT-flow.)
While the results cannot be directly compared to those of state-of-the-art models, they are similar to
previously published works, especially in terms of Recall@10 [
        <xref ref-type="bibr" rid="ref6 ref8">8, 6</xref>
        ]. TagBERT [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is the state of the art
in terms of Precision@10 and F1-score@10 on the Freecode dataset with 40.25% and 46.5%,
respectively. On the other hand, the same model obtained a Recall@10 of 64.42% while TagCNN
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] achieved 94.9%.</p>
      <p>In future work, we envisage running experiments testing our approach on
public benchmarks alongside TagBERT, TagCNN and other models. We also intend to retrain
our model with data extracted from the ARK platform as soon as more data becomes available on
the platform.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This research has received funding from the ADAPT Centre for Digital Content Technology,
funded under the SFI Research Centres Programme (Grant 13/RC/2106 P2), co-funded by the
European Regional Development Fund. For the purpose of Open Access, the author has applied
a CC BY public copyright licence to any Author Accepted Manuscript version arising from this
submission.</p>
      <p>We would also like to express our gratitude to Dr. Brian Davis for his advice and support.</p>
    </sec>
    <sec id="sec-9">
      <title>Appendix</title>
    </sec>
    <sec id="sec-10">
      <title>A. Statistics of our dataset</title>
    </sec>
    <sec id="sec-11">
      <title>B. Our dataset compared to Freecode dataset</title>
      <p>Our dataset has 518 entries with 116 unique labels while the Freecode dataset has 46995 entries
with 9000 unique labels. This means that the Freecode dataset contains about 77.6 times more
labels but also 90.7 times more entries than ours. In other words, Freecode has a ratio of 5.22
entries per label while our dataset has a ratio of 4.46 entries per label. Moreover, the top 3 most
represented labels in Freecode have respectively 1390, 711, and 571 entries. In our
dataset, the top 3 most represented labels have respectively 60, 19, and 13 entries. The
least represented labels in both datasets have only 1 entry.</p>
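      <p>These ratios follow directly from the quoted entry and label counts and can be checked with simple arithmetic:</p>

```python
# Entry and label counts quoted above.
freecode_entries, freecode_labels = 46995, 9000
our_entries, our_labels = 518, 116

label_factor = freecode_labels / our_labels        # about 77.6x more labels
entry_factor = freecode_entries / our_entries      # about 90.7x more entries
freecode_epl = freecode_entries / freecode_labels  # about 5.22 entries per label
our_epl = our_entries / our_labels                 # about 4.46 entries per label
```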
      <p>Table 3 below gives the statistics of the number of concepts/tags per entry of our dataset and
the Freecode dataset.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Tsoumakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Katakis</surname>
          </string-name>
          <article-title>, Multi-label classification: An overview</article-title>
          ,
          <source>Int. J. Data Warehous. Min</source>
          .
          <volume>3</volume>
          (
          <year>2007</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>BERT: pre-training of deep bidirectional transformers for language understanding</article-title>
          , CoRR abs/1810.04805 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1810.04805.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Croce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Castellucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Basili</surname>
          </string-name>
          ,
          <article-title>GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples</article-title>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>McKenna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Duda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brennan</surname>
          </string-name>
          ,
          <article-title>Ark-virus: An ark platform extension for mindful risk governance of personal protective equipment use in healthcare</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>McKenna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vining</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Doyle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Ward</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ulfvengren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Geary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guilfoyle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shuhaiber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fogarty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Healy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tallon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brennan</surname>
          </string-name>
          ,
          <article-title>Evaluation of an access-risk-knowledge (ark) platform for governance of risk and change in complex socio-technical systems</article-title>
          ,
          <source>International Journal of Environmental Research and Public Health</source>
          <volume>18</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kavuluru</surname>
          </string-name>
          ,
          <article-title>Few-shot and zero-shot multi-label learning for structured label spaces</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Heo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Jo</surname>
          </string-name>
          ,
          <article-title>Medical code prediction from discharge summary: Document to sequence BERT using sequence attention</article-title>
          ,
          <source>CoRR abs/2106</source>
          .07932 (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2106.07932.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khezrian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Habibi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Annamoradnejad</surname>
          </string-name>
          ,
          <article-title>Tag recommendation for online q&amp;a communities based on BERT pre-training technique</article-title>
          , CoRR abs/2010.04971 (
          <year>2020</year>
          ). URL: https://arxiv.org/abs/2010.04971.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaveri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maurino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pietrobon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <article-title>Quality assessment for linked data: A survey</article-title>
          (
          <year>2016</year>
          ). doi:10.3233/SW-150175.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gangal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chandar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vosoughi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mitamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <article-title>A survey of data augmentation approaches for NLP</article-title>
          ,
          <source>CoRR abs/2105</source>
          .03075 (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2105.03075.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          ,
          <article-title>Character-level convolutional networks for text classification</article-title>
          , volume
          <volume>28</volume>
          ,
          Curran Associates, Inc.,
          <year>2015</year>
          . URL: https://proceedings.neurips.cc/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sennrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Haddow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Birch</surname>
          </string-name>
          ,
          <article-title>Improving neural machine translation models with monolingual data</article-title>
          ,
          <source>CoRR abs/1511</source>
          .06709 (
          <year>2015</year>
          ). URL: http://arxiv.org/abs/1511.06709.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zou</surname>
          </string-name>
          , EDA:
          <article-title>Easy data augmentation techniques for boosting performance on text classification tasks</article-title>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kobayashi</surname>
          </string-name>
          ,
          <article-title>Contextual augmentation: Data augmentation by words with paradigmatic relations</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kalchbrenner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Grefenstette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Blunsom</surname>
          </string-name>
          ,
          <article-title>A convolutional neural network for modelling sentences</article-title>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>