<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>De-Identification through Named Entity Recognition for Medical Document Anonymization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hermenegildo Fabregat</string-name>
          <email>gildo.fabregat@lsi.uned.es</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andres Duque</string-name>
          <email>aduque@scc.uned.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan Martinez-Romo</string-name>
          <email>juaner@lsi.uned.es</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lourdes Araujo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Departamento de Sistemas de Comunicación y Control</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Instituto Mixto de Investigación - Escuela Nacional de Sanidad</institution>
          ,
          <addr-line>IMIENS</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>663</fpage>
      <lpage>670</lpage>
      <abstract>
        <p>This paper introduces the system developed by the NLP UNED team participating in the MEDDOCAN (Medical Document Anonymization) task, framed in the IberLEF 2019 evaluation workshop. The system, DINER (De-Identification through Named Entity Recognition), consists of a deep neural network based on a core Bi-LSTM structure. Input features have been modeled to suit the particular characteristics of medical texts, and especially medical reports, which can combine short semi-structured information with long free-text paragraphs. The first results of the system on a synthetic test corpus of 1000 clinical cases, manually annotated by health documentalists, indicate the potential of the DINER system.</p>
      </abstract>
      <kwd-group>
        <kwd>named entity recognition</kwd>
        <kwd>deep learning</kwd>
        <kwd>medical document anonymization</kwd>
        <kwd>electronic health record</kwd>
        <kwd>natural language processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Nowadays, the use of digitized medical records of patients has enabled progress
in biomedical research in different areas of interest through the use of natural
language processing techniques. However, one of the main problems in
distributing these records is the personal information that appears in them. These records
are stored under strict security measures to prevent them from becoming public, and
their anonymization is a challenge that still has a long way to go. In response
to this need, the MEDDOCAN task [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] has been organized within IberLEF 2019 (Iberian Languages Evaluation Forum),
oriented to the anonymization of medical documents with protected health information (PHI).
      </p>
      <p>This task has distributed a corpus of 1000 artificially created medical
records. This corpus was manually selected by a practicing physician
and augmented with information from discharge reports and genetic medical
records. MEDDOCAN in turn is composed of two subtasks: one whose
objective is the identification and classification of named entities, and another
that focuses on the detection of sensitive tokens.</p>
      <p>This paper describes the participation of the NLP UNED team, using the
DINER system, in the MEDDOCAN task.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>The importance of anonymization or de-identification of clinical texts has
been addressed in the past in two shared tasks: the 2006 and 2014
de-identification tracks, organized by the i2b2 tranSMART Foundation (www.i2b2.org).</p>
      <p>
        In addition, the problem of de-identification has recently been addressed with
neural networks, as in the work by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], where this kind of system is used for the
first time on the i2b2 2014 corpus, outperforming the state-of-the-art systems.
The work by [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] presents another approach based on LSTM (long short-term
memory) networks. The model consists of three layers: an input layer, which generates
a vectorial representation of each word of a sentence; an LSTM layer, which outputs
another word representation sequence capturing the context information of
each word in the sentence; and an inference layer, which makes tagging decisions according
to the output of the LSTM layer, that is, it outputs a label sequence. The authors of
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] developed a hybrid system composed of four individual subsystems used to
identify PHI instances: a subsystem based on bidirectional LSTM, a subsystem based on bidirectional
LSTM with features, a subsystem based on conditional random fields (CRF), and
a rule-based subsystem. An ensemble
learning-based classifier was then deployed to combine all PHI instances predicted
by the three machine learning-based subsystems, and the results of
the ensemble classifier and the rule-based subsystem are merged
together. Finally, in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] the authors propose an algorithm based on a deep learning
architecture. They implement and compare different variants of the RNN
architecture, including Elman and Jordan networks, as well as a CRF-based model with
traditional features. They observe that the variants of the RNN architecture
outperform the baseline built using a popular CRF-based model.
      </p>
      <sec id="sec-2-1">
        <title>System description</title>
        <p>The proposed system for the detection of PHI consists of: 1) a pre-processing
phase where the data is adapted and prepared, 2) a supervised learning model based
on deep learning, and 3) a phase in which rules are applied to correct
recurrent errors.</p>
        <sec id="sec-2-1-1">
          <title>Pre-processing</title>
          <p>
            The corpus has been pre-processed and re-annotated following the BILOU
annotation schema [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. The BILOU scheme challenges classifiers to learn the Beginning,
Inside and Last tokens of the different annotations, as well as the Unit-length
segments.
Regarding the tokenization of each document, a sentence splitter was tested
using CoreNLP [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], but it was discarded after obtaining worse results and a lack of coherence
in the BILOU format. Instead, a basic split of the
documents was performed, taking into account only the line breaks and a
maximum sentence size of 150 words. In the case of longer sentences, only the
first 150 words are labelled, while shorter sentences are filled in using a
padding approach.
          </p>
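The splitting and re-annotation steps above can be sketched as follows. This is a minimal illustration: the token-level span format `(start, end, type)` and the helper names are assumptions for the example, not the task's actual annotation format.

```python
# Sketch of the pre-processing step: convert token-level entity spans into
# BILOU tags, then truncate/pad each sentence to the 150-word maximum.
# The (start, end_inclusive, type) span format is an illustrative assumption.

MAX_LEN = 150  # maximum sentence size used by DINER
PAD = "PAD"

def to_bilou(tokens, spans):
    """spans: list of (start_idx, end_idx_inclusive, entity_type)."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        if start == end:                   # Unit-length segment
            tags[start] = f"U-{etype}"
        else:
            tags[start] = f"B-{etype}"     # Beginning
            for i in range(start + 1, end):
                tags[i] = f"I-{etype}"     # Inside
            tags[end] = f"L-{etype}"       # Last
    return tags

def pad_or_truncate(seq, max_len=MAX_LEN, pad=PAD):
    """Keep only the first max_len items; fill shorter sequences with padding."""
    return (seq[:max_len] + [pad] * max_len)[:max_len]

tokens = "Nombre : Pedro Garcia Lopez".split()
print(to_bilou(tokens, [(2, 4, "NOMBRE_SUJETO_ASISTENCIA")]))
```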
          <p>After the pre-processing phase, a total of 79 classes have been obtained
(BILOU annotation + type of entity).</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>Supervised Learning Model</title>
          <p>In this section we present the supervised learning model and the different
attributes considered as input to the deep learning stack. The following four
attributes have been used:
Words A representation based on pre-trained word embeddings has been used.</p>
          <p>
            The word vectors presented in [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ] have been selected due to the richness of
the sources from which they were generated and to their high recall. These
vectors have a total of 300 dimensions and cover around 1,000,653 unique
tokens.
          </p>
          <p>
            Part-of-speech This feature has been used due to its importance in different
natural language processing tasks. The PoS-Tagging model used was the
one provided by the CoreNLP [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ] library for Spanish. This feature has been
represented in the model by embeddings generated during training. The
resulting embeddings consist of 25 dimensions.
          </p>
          <p>Casing This feature addresses the need to minimize the impact of the
simplification process applied to complex expressions found in the different instances.
This is achieved by modeling each term with an additional 8-position
one-hot vector which represents different cases: term ending in comma or in dot,
uppercased first letter or fully uppercased term, digits within the term, etc.
Chars Each term has been modeled by means of a representation based on
character embeddings, since a complex tokenization has not been applied
(only a space splitting, trying to respect the offsets in every case). The aim of
this technique is to increase the recall of the word embeddings, which are also used as
a feature. As can be seen in the following example, making use of this feature,
cases without a word embedding can still be represented.</p>
          <p>Example:
“Nombre:Pedro García-López” ⇒
“nombrepedro” “garcialopez”
As the previous example shows, after tokenizing by spaces and
removing non-alphanumeric characters, the resulting token sequence is not
interpretable in most cases by word embeddings. These char embeddings have
been trained on the corpus.</p>
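The casing feature can be sketched as a small helper. Only the flags explicitly mentioned in the text (trailing comma or dot, uppercased first letter, fully uppercased term, digits within the term) are certain; the remaining positions of the 8-position vector are assumptions for illustration.

```python
# Sketch of the 8-position casing one-hot vector. Only the flags explicitly
# mentioned in the text are certain; the last three are assumptions.
def casing_vector(term: str) -> list[int]:
    return [
        int(term.endswith(",")),              # term ending in comma
        int(term.endswith(".")),              # term ending in dot
        int(term[:1].isupper()),              # uppercased first letter
        int(term.isupper()),                  # fully uppercased term
        int(any(c.isdigit() for c in term)),  # digits within the term
        int(term.isdigit()),                  # assumed: all digits
        int(not term.isalnum()),              # assumed: punctuation in the term
        int(term.islower()),                  # assumed: fully lowercased term
    ]
```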
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Deep Learning Model</title>
      <p>Deep Learning model The model implemented for PHI detection, as shown
in Figure 1, consists of a Bi-LSTM (bidirectional long short-term memory)
followed by two dense layers. The inputs of the architecture are the vectors C,
P, W, and CH, which represent the casing, PoS-tag, word, and per-word
character information, respectively. The last dense layer corresponds to the output
layer.</p>
      <p>
        The convolutional model proposed by [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] has been used for character
sequence processing.
      </p>
      <p>[Figure 1. Architecture of the DINER model: per-word character sequences (Ch) pass through an embedding layer, a convolutional layer, max pooling and flattening; together with the word (W), PoS-tag (P) and casing (C) inputs they feed a time-distributed bidirectional layer.]</p>
      <p>Convolutional layer As Figure 1 shows, this model applies a series of
convolution and pooling operations in order to extract the most important characteristics of the
sequence of characters and reduce the dimensionality of the resulting vector.
As a result, a summary vector is obtained with the information extracted for
each word.</p>
      <p>
        Bi-LSTM. LSTMs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] have proven to offer good performance in sequential NLP
tasks. This layer responds to the need to process each term according to its
context. Each LSTM is configured with 150 neurons and a ReLU [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] as the
activation function. In order to avoid overfitting, dropouts of 0.5 and
recurrent dropouts of 0.3 have been applied.
      </p>
      <p>Dense (middle). This layer has been added in order to simplify the
information generated by the previous layers, thus reducing the solution space in
subsequent layers.</p>
      <p>Dense (output). The output layer is configured with 79 neurons and a softmax
activation function.</p>
      <p>Hyper-parameters The hyper-parameters used in the system are the
following:
– Maximum size of instance and word: 150 words and 50 characters.
– Char embeddings of 50 dimensions.
– PoS-tag embeddings of 25 dimensions.
– Word embeddings of 300 dimensions.
– The Bi-LSTM layer is composed of two LSTMs of 150 dimensions.
– Each LSTM layer uses a dropout of 0.25 and a recurrent dropout of 0.15.
– Dense layer of 50 dimensions.</p>
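The architecture and hyper-parameters above can be sketched as follows. The paper does not name the framework used, so this is a hedged Keras (TensorFlow) sketch; the vocabulary sizes, number of convolutional filters, and kernel size are illustrative assumptions not given in the text.

```python
# Hedged Keras sketch of a DINER-like model. Vocabulary sizes and the
# convolutional filter count/kernel size are illustrative assumptions.
from tensorflow.keras import layers, Model

MAX_WORDS, MAX_CHARS, N_CLASSES = 150, 50, 79

def build_diner_like_model(word_vocab=20000, pos_vocab=20, char_vocab=100):
    w_in = layers.Input((MAX_WORDS,), dtype="int32", name="W")            # word indices
    p_in = layers.Input((MAX_WORDS,), dtype="int32", name="P")            # PoS-tag indices
    c_in = layers.Input((MAX_WORDS, 8), name="C")                         # casing one-hot
    ch_in = layers.Input((MAX_WORDS, MAX_CHARS), dtype="int32", name="Ch")

    w = layers.Embedding(word_vocab, 300)(w_in)   # pre-trained in the paper
    p = layers.Embedding(pos_vocab, 25)(p_in)     # learned during training

    # Character module: embedding + convolution + max pooling, per word
    ch = layers.Embedding(char_vocab, 50)(ch_in)
    ch = layers.TimeDistributed(layers.Conv1D(30, 3, padding="same"))(ch)
    ch = layers.TimeDistributed(layers.GlobalMaxPooling1D())(ch)

    x = layers.Concatenate()([c_in, p, w, ch])
    x = layers.Bidirectional(layers.LSTM(150, return_sequences=True,
                                         activation="relu",
                                         dropout=0.25, recurrent_dropout=0.15))(x)
    x = layers.TimeDistributed(layers.Dense(50, activation="relu"))(x)
    out = layers.TimeDistributed(layers.Dense(N_CLASSES, activation="softmax"))(x)
    model = Model([c_in, p_in, w_in, ch_in], out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```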
      <sec id="sec-3-1">
        <title>Rules</title>
        <p>Two types of rules have been applied to the output of the DL architecture,
both for error correction. On the one hand, the first set of rules is oriented to
the correction of frequent errors. The vast majority of rules of this type
aim to increase precision by filtering out those cases that do
not meet a certain format. Examples are the format that telephone
numbers must comply with (a minimum of 9 digits, country codes being optional)
and the format that e-mails must comply with (at minimum, they must contain an @).</p>
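A minimal sketch of such format filters, assuming regular-expression checks; the actual rule set is not published, so these patterns are illustrative.

```python
import re

# Illustrative format filters in the spirit of the first rule set: a
# telephone candidate must contain at least 9 digits (optional country
# code), and an e-mail candidate must contain an @ symbol.
PHONE_RE = re.compile(r"^(\+\d{1,3}[\s-]?)?(\d[\s-]?){9,}$")

def keep_phone(candidate: str) -> bool:
    """Keep a phone prediction only if it looks like a phone number."""
    return PHONE_RE.match(candidate.strip()) is not None

def keep_email(candidate: str) -> bool:
    """Keep an e-mail prediction only if it contains an @."""
    return "@" in candidate
```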
        <p>On the other hand, the second set of rules aims to ensure that the final output
of the system correctly follows the output BILOU format.</p>
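The second rule set can be sketched as a pass that repairs invalid BILOU transitions. The specific repair policy below (demoting stranded Inside/Last tags) is an assumption, since the actual rules are not published.

```python
# Illustrative BILOU-coherence repair: an I- or L- tag is valid only when
# it continues a B-/I- tag of the same entity type; otherwise a stranded
# L- becomes a Unit-length tag and a stranded I- is dropped.
def repair_bilou(tags):
    fixed = list(tags)
    for i, tag in enumerate(fixed):
        if tag[0] in ("I", "L"):
            prev = fixed[i - 1] if i > 0 else "O"
            if prev[0] not in ("B", "I") or prev[2:] != tag[2:]:
                fixed[i] = "U" + tag[1:] if tag[0] == "L" else "O"
    return fixed
```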
        <p>Results on the Development Set We have carried out a set of experiments
in order to analyze the effect of the implemented rules. Tables 1, 2,
and 3 show the results on the development set with and
without the use of rules.</p>
        <p>As the tables show, the use of rules improves the system performance in
the two tasks proposed in MEDDOCAN. This improvement is reflected in an
increase of between 0.023 and 0.025 points in the F1-measure. Analyzing the results
in more depth, one can also appreciate that the rules produce a
higher improvement in precision than in recall.</p>
        <p>Error analysis After an in-depth analysis of the system’s performance, we
detected a couple of errors that the system usually makes. The errors are as
follows:
– The system makes labeling errors when confusing the hospital address with
the name of the hospital itself.
– The system is not able to identify some of the expressions of the class
“Familiar sujeto asistencia”.</p>
        <p>Results This section shows the official results of the DINER system in the two tasks
organised in MEDDOCAN. At the time of writing this paper there were no
baselines or results available from other participants, so only the results of the
participation of the DINER system are shown.</p>
        <p>Regarding the task, 18 teams participated, submitting a total of 63
system runs.</p>
        <p>Taking into account the results on the development set shown in the previous
section, our system has used rules for all the MEDDOCAN sub-tasks. Our team
only sent one run for each task and therefore only this information is shown in
the results.</p>
        <p>Tables 4, 5, and 6 show the official results on the test set.</p>
        <p>As the tables show, the results between the development and the test set are
very similar. On the one hand, this fact reflects the correct split of the corpus
into different sets and on the other hand the robustness of the DINER system.</p>
        <p>The system was trained for task 1, so the good results in task 2 are
the result of a balanced system. The fact that the results of the two evaluations
of task 2 (Strict and Merged) are so similar to each other means that the
implemented rules have provided great coherence to the system.</p>
        <p>On the other hand, the differences between these two evaluations could be
due to the fact that in the documents of the corpus some entities could be written
in a free format. In this way, spaces between the digits of a telephone number
or the format of an e-mail could be the reason for the difference between these
two evaluations of task 2.</p>
        <p>Conclusions In this paper we have described our system, DINER, and its performance in the
MEDDOCAN task of the IberLEF 2019 competition. The proposed system is
divided into two phases, the first of them making use of deep neural networks,
and the second one using hand-crafted rules.</p>
        <p>The DINER system has obtained a score of 0.943 in the F1-measure for
task 1 and 0.949 in the F1-measure for task 2 (strict evaluation).</p>
        <p>In spite of not knowing the performance of other systems or a baseline, we
can say that the performance of the DINER system is solid. One
of the main characteristics of the system is its robustness, since its performance
has been very similar between the development and test sets. On the other hand,
we would like to highlight the coherence provided to the system by the use of
rules.</p>
        <p>We plan to address improvements in the PHI extraction as a future line of
work, especially by studying how more valuable syntactic and semantic
information can be added to the network that performs PHI identification, and also
how systematic post-processing rules can be automatically extracted from the
obtained results.</p>
        <sec id="sec-3-1-1">
          <title>Acknowledgements</title>
          <p>This work has been partially supported by the Spanish Ministry of Science and
Innovation within the projects PROSA-MED (TIN2016-77820-C3-2-R) and
EXTRAE (IMIENS 2017).</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Cardellino</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Spanish billion words corpus and embeddings</article-title>
          (March
          <year>2016</year>
          ). URL http://crscardellino.me/SBWCE (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Dernoncourt</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uzuner</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szolovits</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>De-identification of patient notes with recurrent neural networks</article-title>
          .
          <source>Journal of the American Medical Informatics Association</source>
          <volume>24</volume>
          (
          <issue>3</issue>
          ),
          <fpage>596</fpage>
          -
          <lpage>606</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Hahnloser</surname>
            ,
            <given-names>R.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sarpeshkar</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mahowald</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Douglas</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seung</surname>
            ,
            <given-names>H.S.:</given-names>
          </string-name>
          <article-title>Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit</article-title>
          .
          <source>Nature</source>
          <volume>405</volume>
          (
          <issue>6789</issue>
          ),
          <fpage>947</fpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Hochreiter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidhuber</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>Long short-term memory</article-title>
          .
          <source>Neural computation 9(8)</source>
          ,
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          :
          <article-title>De-identification of clinical notes via recurrent neural network and conditional random field</article-title>
          .
          <source>Journal of biomedical informatics 75</source>
          ,
          <fpage>S34</fpage>
          -
          <lpage>S42</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Entity recognition from clinical texts via recurrent neural network</article-title>
          .
          <source>BMC medical informatics and decision making 17(2)</source>
          ,
          <volume>67</volume>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Surdeanu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bauer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finkel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bethard</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McClosky</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>The stanford corenlp natural language processing toolkit</article-title>
          . In:
          <article-title>Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations</article-title>
          . pp.
          <fpage>55</fpage>
          -
          <lpage>60</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Marimon</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalez-Agirre</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Intxaurrondo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodríguez</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lopez Martin</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krallinger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Automatic de-identification of medical texts in spanish: the meddocan track, corpus, guidelines, methods and evaluation of results</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ). vol.
          <source>TBA</source>
          , p.
          <source>TBA. CEUR Workshop Proceedings (CEUR-WS.org)</source>
          , Bilbao,
          <source>Spain (Sep</source>
          <year>2019</year>
          ), TBA
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Ratinov</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Design challenges and misconceptions in named entity recognition</article-title>
          .
          <source>In: Proceedings of the thirteenth conference on computational natural language learning</source>
          . pp.
          <fpage>147</fpage>
          -
          <lpage>155</lpage>
          . Association for Computational Linguistics (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Yadav</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ekbal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saha</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhattacharyya</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Deep learning architecture for patient data de-identification in clinical records</article-title>
          .
          <source>In: Proceedings of the clinical natural language processing workshop (ClinicalNLP)</source>
          . pp.
          <fpage>32</fpage>
          -
          <lpage>41</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          , LeCun, Y.:
          <article-title>Character-level convolutional networks for text classification</article-title>
          .
          <source>CoRR abs/1509.01626</source>
          (
          <year>2015</year>
          ), http://arxiv.org/abs/1509.01626
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>