<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Predicting movie-elicited emotions from dialogue in screenplay text: A study on “Forrest Gump”</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Benedetta Iavarone</string-name>
          <aff>Scuola Normale Superiore, Pisa</aff>
          <email>benedetta.iavarone@sns.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Felice Dell'Orletta</string-name>
          <aff>ItaliaNLP Lab, Pisa</aff>
          <email>felice.dellorletta@ilc.cnr.it</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>We present a new dataset of sentences extracted from the movie Forrest Gump, annotated with the emotions perceived by a group of subjects while watching the movie. We run experiments to predict these emotions using two classifiers, one based on a Support Vector Machine with linguistic and lexical features, the other based on BERT. The experiments showed that contextual embeddings are effective in predicting human-perceived emotions.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Emotional intelligence, described as the set of skills
that contributes to the accurate appraisal,
expression and regulation of emotions in oneself and
in others
        <xref ref-type="bibr" rid="ref10">(Salovey and Mayer, 1990)</xref>
        , is
recognised as one of the facets that make us
human and a fundamental ability of human-like
intelligence
        <xref ref-type="bibr" rid="ref6">(Goleman, 2006)</xref>
        . Emotional intelligence
has played a crucial role in numerous applications
in recent years
        <xref ref-type="bibr" rid="ref7">(Krakovsky, 2018)</xref>
        , and being
able to pinpoint expressions of human emotions is
essential to advance further in technological
innovation. Emotions can be identified in many sources,
among which there are semantics and sentiment
in texts
        <xref ref-type="bibr" rid="ref4">(Calefato et al., 2017)</xref>
        . In NLP, Sentiment
Analysis already boasts many state-of-the-art tools
that can accurately predict or classify the polarity
of a text. However, real applications often need
to go beyond the positive-negative dichotomy and
identify the emotional content of a text at a finer
granularity. Nevertheless, the task of predicting a
precise emotion from text brings many challenges,
mostly because context is needed:
emotions cannot be easily understood in isolation, as
they are conveyed by a complex of explicit (e.g.
speech) and implicit (e.g. gesture and posture)
behavioural cues. Still, there has been increasing
research interest in text-based emotion detection
        <xref ref-type="bibr" rid="ref1 ref3">(Acheampong et al., 2020)</xref>
        . In this work, we
study how textual information extracted from the
screenplay of a movie can be used to predict the
emotions perceived by a group of people while
watching the movie itself. We create a new dataset of
sentences extracted from the screenplay (data available
at www.italianlp.it/dataset_release.zip), annotated
with six different perceived emotions and their
perceived intensity, and set up a binary classification
task to predict emotional elicitation during the
viewing of the movie. We use two predictive models,
with different kinds of features that capture diverse
language information, and determine which model and
which kind of features are best for predicting
the emotions perceived by the subjects.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Data</title>
      <p>Our dataset was retrieved from studyforrest (http://studyforrest.org/), a
research project centered around the use of the movie
Forrest Gump. The project repository contains data
contributions from various research groups, divided
in three areas: (i) behavior and brain function, (ii)
brain structure and connectivity, and (iii) movie
stimulus annotations. We focused on the latter,
retrieving two types of data: the speech present in
the movie and the emotions that watching the
movie elicited in a group of subjects. As for the
speech, each screenplay line pronounced by the
characters is transcribed into sentences and
associated with two timestamps, expressed in tenths of a
second, t<sub>begin</sub> and t<sub>end</sub>, which respectively indicate
the moment of the movie at which the character
starts talking and the moment at which they stop.
Emotional data comes from the contribution to the
project given by Lettieri et al. (2019). A group
of 12 subjects was asked to watch the movie and
report the emotions they were experiencing during
the viewing, from a list of six emotions (happiness,
surprise, fear, sadness, anger, disgust). Emotion
reporting was performed by pressing the keys of a
keyboard, with which subjects could indicate the
emotion they were experiencing and its intensity,
within a range from 0 (no emotion) to 100.
</p>
      <sec id="sec-2-1">
        <title>Data creation</title>
        <p>Emotional data was collected from a continuous
output z = (z<sub>1</sub>, z<sub>2</sub>, ..., z<sub>n</sub>) from the keyboard, such
that each z<sub>i</sub> corresponds to an increment of 0.1
seconds in the playing time of the movie (z<sub>i</sub> = 0.1,
z<sub>i+1</sub> = 0.2, z<sub>i+2</sub> = 0.3, ...). Each z<sub>i</sub> is
associated with a list x<sub>i1</sub>, x<sub>i2</sub>, ..., x<sub>ij</sub>, with x<sub>j</sub> ∈ [0, 100]
and j ∈ {happiness, surprise, fear, sadness, anger,
disgust}, where each x<sub>j</sub> indicates the intensity that
one emotion assumes at a given timestamp. For our
purpose, this information was too fine-grained and
could not be mapped to textual data properly, so
we resampled the emotional information.
We generated new timestamps s = (s<sub>1</sub>, s<sub>2</sub>, ..., s<sub>m</sub>),
such that each s<sub>i</sub> corresponds to the sum of 20
consecutive z<sub>i</sub>, i.e. to an increment of 2 seconds in the
playing time of the movie. Each s<sub>i</sub> is associated
with a new list of emotional values, where each new
value is the average of the values associated with the
summed z<sub>i</sub>.</p>
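The resampling step can be sketched as follows (a minimal illustration with synthetic intensities, not the released studyforrest data):

```python
import numpy as np

# Toy emotion track: one intensity value (0-100) every 0.1 s for each of
# the 6 emotions. Shape: (n_samples, 6); here 60 s of movie -> 600 samples.
rng = np.random.default_rng(0)
z = rng.integers(0, 101, size=(600, 6))

def resample(z, factor=20):
    """Average every `factor` consecutive 0.1 s samples (20 -> 2 s bins)."""
    n_bins = len(z) // factor
    trimmed = z[: n_bins * factor]
    return trimmed.reshape(n_bins, factor, z.shape[1]).mean(axis=1)

s = resample(z)
print(s.shape)  # (30, 6): one averaged 6-emotion vector per 2-second window
```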
        <p>After resampling, we aligned the text to the
emotional data. As one of our aims is to determine
how much text is needed for accurate emotion
prediction, we considered three progressively larger
time windows for each s<sub>k</sub>, such that window<sub>i</sub> =
[s<sub>k</sub> − m, s<sub>k</sub>], where m ∈ {2, 4, 6}. For each
sentence, we retrieve its t<sub>end</sub> and align the sentence by
verifying whether s<sub>k</sub> − m ≤ t<sub>end</sub> ≤ s<sub>k</sub>, i.e. checking whether
the moment at which the sentence ends falls within
the given time window. In this way, the larger the
time window, the larger the amount of text that
gets aligned with a specific timestamp. With this
process, we created three different datasets, one for
each time window. We then removed all the lines in
which no text was aligned to s<sub>k</sub>. For each dataset,
we end up with 898 timestamps, each associated with
a line of text and 6 emotion declarations for each
of the 12 subjects.</p>
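The window-based alignment can be sketched as follows (hypothetical sentences and timestamps for illustration; `align` and its data structures are our own naming, not the authors' code):

```python
# Each sentence carries the t_end timestamp at which the character stops
# talking; timestamps are the resampled 2-second marks s_k.
sentences = [
    {"text": "Run, Forrest, run!", "t_end": 13.4},
    {"text": "Bubba was my best good friend.", "t_end": 15.8},
]
timestamps = [14.0, 16.0, 18.0]

def align(sentences, timestamps, m):
    """Attach to each s_k every sentence whose t_end lies in [s_k - m, s_k]."""
    aligned = {}
    for s_k in timestamps:
        texts = [s["text"] for s in sentences if s_k - m <= s["t_end"] <= s_k]
        if texts:  # timestamps with no aligned text are dropped
            aligned[s_k] = " ".join(texts)
    return aligned

print(align(sentences, timestamps, m=2))
# with m = 4 or 6, a sentence can fall into the window of several s_k
```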
      </sec>
      <sec id="sec-2-2">
        <title>Data statistics and data selection</title>
        <p>We first looked at the distribution of our data,
examining how many times each subject declared a
specific emotion. Whenever the subject assigned a
value different from zero to a certain emotion, we
considered that emotion as present at a given
timestamp, regardless of its intensity. If all 6 emotions
were zero at the same time (all x<sub>j</sub> = 0), we
assigned that case the class neutral. Conversely,
if any emotion was declared (at least one
x<sub>j</sub> ≠ 0), we assigned that case the class emotion,
to indicate a generic emotional response.</p>
        <p>
          As shown in Table 1, the most represented
emotions in the dataset are happiness and sadness,
while the others are underrepresented. Table 1 also
shows that the distribution of emotions is quite uneven
among the different subjects, as there were some
subjects that declared emotions frequently and
others that entered fewer declarations. This is due
to the fact that emotive phenomena are strongly
subjective, meaning that emotion processing is
specific to each person and that everyone experiences
emotions at a different granularity
          <xref ref-type="bibr" rid="ref2">(Barrett, 2006)</xref>
          .
To account for this factor, we measured the level
of agreement between the 12 subjects using Fleiss’
Kappa. Table 2 reports the percentage of agreement
for each emotion in the data. The lowest agreement
was found on surprise and disgust. As disgust is
also the least declared emotion, it is fair to assume
that the movie does not contain many moments
that elicit this emotion in the subjects. On the other
hand, the strongest agreement is found on fear and
anger, showing that these emotions are evoked in
specific scenes of the movie and that subjects had
a similar emotional response to those scenes. In
Table 3 we report examples of sentences on which
the subjects agreed the most, for all six emotions.
For every emotion, there are many sentences on
which a large number of subjects agreed,
meaning that there were various moments of the movie
that elicited the same emotions in the subjects. In
the case of disgust, the highest level of agreement
was achieved at 8 subjects, only on one sentence.
There were no other sentences for which 8 subjects
(or more) agreed. This is consistent with the fact that
disgust is the least represented emotion in the data.
        </p>
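Fleiss' kappa, used above to quantify inter-subject agreement, can be computed from a table of rating counts; the sketch below uses made-up counts for a present/absent judgement, not the paper's data:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a (n_items, n_categories) table of rating counts."""
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    p_j = ratings.sum(axis=0) / (n_items * n_raters)   # category proportions
    P_i = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy table: 5 timestamps, 12 raters each marking an emotion as
# present (column 0) or absent (column 1) -- illustrative counts only.
table = np.array([[12, 0], [11, 1], [2, 10], [0, 12], [12, 0]])
print(round(fleiss_kappa(table), 3))  # ≈ 0.801
```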
        <p>Given the information on agreement and on
the distribution of emotions, we decided not to examine
underrepresented emotions directly, even when their
agreement was strong (e.g. surprise). In order to
still account for underrepresented emotions, we
relied on the general class emotion. Hence we
assessed three different scenarios: (i) the presence of
any kind of emotion (at least one x<sub>j</sub> ≠ 0), (ii) the
presence of happiness (x<sub>happiness</sub> ≠ 0) and (iii) the
presence of sadness (x<sub>sadness</sub> ≠ 0). Furthermore,
we decided to conduct our experiments only on
two subjects, subject 4 and subject 8. We focused
on these specific subjects as they declared all
emotions evenly, without neglecting any of them, and
because the number of declarations for each
emotion was quite similar between the two subjects.</p>
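The three binary targets can be derived from a subject's intensity declarations as in this sketch (hypothetical values; `targets` is our own illustrative helper):

```python
# Emotions ordered as in the paper; x holds one subject's intensities
# (0-100) at a single timestamp.
EMOTIONS = ["happiness", "surprise", "fear", "sadness", "anger", "disgust"]

def targets(x):
    """Binary labels for the three scenarios described above."""
    d = dict(zip(EMOTIONS, x))
    return {
        "emotion": int(any(v != 0 for v in x)),   # scenario (i): any emotion
        "happiness": int(d["happiness"] != 0),    # scenario (ii)
        "sadness": int(d["sadness"] != 0),        # scenario (iii)
    }

print(targets([40, 0, 0, 10, 0, 0]))
# {'emotion': 1, 'happiness': 1, 'sadness': 1}
```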
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Emotions prediction</title>
      <p>We evaluated the three scenarios described in 2.2 in
contrast to the absence of any emotion (all xj = 0),
producing three binary classification tasks. We
relied on two sets of features: automatically extracted
linguistic and lexical features, and contextual word
embeddings from a language model.</p>
      <p>[Table 3: example sentences with the highest inter-subject
agreement for each emotion. Maximum number of agreeing subjects:
happiness 12, surprise 11, fear 12, sadness 12, anger 12, disgust 8.
Example lines include: "I had never seen anything so beautiful in my
life. She was like an angel."; "Jenny! Forrest!"; "(into radio) Ah,
Jesus! My unit is down hard and hurting! 6 pulling back to the blue
line, Leg Lima 6 out! Pull back! Pull back!"; "Bubba was my best good
friend."; "And even I know that ain't something you can find just
around the corner. Bubba was gonna be a shrimpin' Boat captain, But
instead he died right there by that river in Vietnam."; "Are you
retarded, Or just plain stupid? Look, I'm Forrest Gump."; "You don't
say much, do you?"]</p>
      <p>
For the first set of features, sentences were first
POS tagged and parsed using UDPipe
        <xref ref-type="bibr" rid="ref11">(Straka and
Straková, 2017)</xref>
        . We extracted a wide set of
features, such as the ones described in Brunato et al.
(2020). These features capture various linguistic
phenomena, ranging from raw textual information to
information related to the morpho-syntactic and
syntactic structure of the sentence (rows 1, 2 and
3 in Table 4, hereafter linguistic features).
Additionally, we extracted features that
capture lexical information (row 4 in Table
4, hereafter lexical features), as they identify sets
of characters or words that appear more frequently
within a sentence. We trained two SVM models,
one on the linguistic features (SVM<sub>ling</sub>) and one on
the lexical features (SVM<sub>lex</sub>). We trained the
models with a linear kernel and standard parameters,
performing 10-fold cross-validation to evaluate the
models' accuracy.
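A minimal sketch of the lexical-features SVM with scikit-learn, assuming character n-gram counts as a stand-in for the full lexical feature set (toy corpus, 2 folds instead of the 10 used here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus: movie lines labelled emotion (1) / neutral (0).
texts = ["Bubba was my best good friend.", "Pull back! Pull back!",
         "Hello.", "She was like an angel.", "I am Forrest.", "Run!"]
labels = [1, 1, 0, 1, 0, 1]

# Character 2-4-grams approximate the "lexical patterns" row of Table 4
# (bigrams/trigrams/quadrigrams of characters).
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LinearSVC(),
)
scores = cross_val_score(model, texts, labels, cv=2)  # paper uses 10 folds
print(scores.mean())
```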
For the second set of features, we relied on BERT
        <xref ref-type="bibr" rid="ref5 ref8">(Devlin et al., 2019)</xref>
        , a neural language model
that encodes contextual information. We retrieved
the pre-trained base model and fine-tuned it on
our data. The pre-trained BERT model already
encodes substantial information about the language, as
it has been trained on a large amount of
data; by fine-tuning it on our data, we can
exploit the information already acquired by the
model and use it for our task. We performed
different fine-tuning stages, then used the resulting
models to perform the binary classification task on
our data. We evaluated model accuracy using
10-fold cross-validation. Specifically, we tested three
different fine-tuning approaches: (1) original data
(BERT<sub>orig</sub>), (2) oversampled data to balance the
neutral class (BERT<sub>over</sub>), (3) oversampled data +
transfer-learning tuning (BERT<sub>transf</sub>). In the case
of (3), we first fine-tuned the model on data
different from ours but conceived for a similar task.
In particular, we relied on data created for
SemEval-2018 Task 1 E-c
        <xref ref-type="bibr" rid="ref9">(Mohammad et al., 2018)</xref>
        ,
containing tweets annotated with 11 emotion classes.
After this first tuning, we tuned the model again
on our oversampled data and proceeded with the
classification task.
      </p>
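The oversampling used to balance the neutral class might look like this minimal sketch (random duplication of minority-class examples; not the authors' implementation):

```python
import random

def oversample(examples, labels):
    """Duplicate minority-class examples until both classes are balanced."""
    random.seed(0)
    by_class = {0: [], 1: []}
    for x, y in zip(examples, labels):
        by_class[y].append(x)
    minority = min(by_class, key=lambda c: len(by_class[c]))
    need = len(by_class[1 - minority]) - len(by_class[minority])
    extra = random.choices(by_class[minority], k=need)
    return examples + extra, labels + [minority] * need

x, y = oversample(["a", "b", "c", "d"], [1, 1, 1, 0])
print(sorted(y))  # [0, 0, 0, 1, 1, 1] after balancing
```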
    </sec>
    <sec id="sec-4">
      <title>Results and discussion</title>
      <p>Figure 1 shows the accuracy scores for all the
models, for both subjects and the three datasets. In all
cases, the baseline was determined with a
majority classifier. The results appear similar for both
subjects.</p>
      <p>SVM models are always outperformed by BERT
ones. In all cases, SVM<sub>ling</sub> is the model with
the lowest performance, remaining below or around
the baseline value. By contrast, SVM<sub>lex</sub> tends
to perform better, despite remaining
close to the baseline in most cases. On the one hand,
this is because features that look at the
raw, morpho-syntactic and syntactic aspects of text
do not encode any relevant information regarding
the emotional cues in the text. SVM<sub>lex</sub> always
performs better than SVM<sub>ling</sub> because lexical features
look at patterns of words and characters that are
repeated in the input text and thus record information
about the lexicon of the dataset. However, as our
dataset is small, it is hard for the model to
retrieve the same lexical patterns in both the training
and test sets.</p>
      <p>BERT models outperform the SVM ones in both
happiness and sadness prediction. In the case of
emotion prediction, BERT models obtain very good
results only on the 6 seconds dataset. This is due to
the fact that, in this case, we have flattened all
emotions into a single category, thus it may be difficult
for the model to distinguish between general
emotionally charged sentences and those that are not
perceived as emotionally charged. When emotions
are specific and clearly separated, as in the happiness
and sadness cases, BERT is able to infer the
perceived emotions even from small amounts of text
(2 and 4 seconds datasets). BERT<sub>over</sub> and
BERT<sub>transf</sub> tend to perform better than
BERT<sub>orig</sub>. In the case of BERT<sub>over</sub>,
there is only a slight difference in the prediction of
happiness and sadness, as in these cases the classes
to be predicted were distributed quite evenly; in the
case of emotion prediction, the model is helped by
the higher representation of the neutral class. With
BERT<sub>transf</sub>, the performances stay in line with the
ones obtained with bare oversampling: fine-tuning
the model on similar data did not add any
more useful information. This is because
the SemEval data were too distant from the ones
in our dataset; even though the task is
similar to ours, the input text is too different from
our sentences to make an appreciable difference
in the prediction. We also tried another form of
transfer learning, tuning the model on one subject
and testing it on the other one. However, the results
were too low and we did not report them. This
is because emotion perception is a very personal
phenomenon that cannot be easily generalised to
different individuals.</p>
      <p>[Table 4: linguistic and lexical features by level of annotation.
Raw text: sentence length; word length; type/token ratio for words and
lemmas. POS tagging: distribution of POS; lexical density; inflectional
morphology of lexical verbs and auxiliaries (Mood, Number, Person,
Tense and VerbForm). Dependency parsing: depth of the whole syntactic
tree; average length of dependency links and of the longest link;
average length of prepositional chains and distribution by depth;
clause length (n. tokens/verbal heads); order of subject and object;
distribution of verbs by arity; distribution of verbal heads and verbal
roots; distribution of dependency relations; distribution of
subordinate and principal clauses; average length of subordination
chains and distribution by depth; relative order of subordinate
clauses. Lexical patterns: bigrams, trigrams and quadrigrams of
characters, words and lemmas.]</p>
      <p>To further evaluate the results, we computed the
percentage of agreement between the two models
that overall had the best performances, BERT<sub>over</sub>
and BERT<sub>transf</sub>. We defined agreement as the
percentage of sentences for which the models gave the
same output during the classification task. Table
5 reports the results for emotion, happiness and
sadness, for every time window, and for both
subject 4 and subject 8. The agreement is quite
high in all cases, and it tends to get stronger with
the amount of text on which models are trained (i.e.
6 seconds). A higher level of agreement indicates
that the models have similar behaviour, thus
making the same mistakes in the classification task. The
lowest levels of agreement are encountered on the
classification of happiness, showing that the two
models work differently in this part of the task.
Indeed, both BERT<sub>over</sub> and BERT<sub>transf</sub> obtain high
performances in predicting happiness, but the fact
that their agreement is lower suggests that they
differ in the mistakes they make in the classification.
We could exploit this information to build systems
that combine different classifiers, thereby
enhancing classification accuracy: by comparing the
cases in which two or more classifiers agree with
the cases in which they disagree, the best
classification output can be chosen accordingly.</p>
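The model-agreement measure described above reduces to a simple comparison of outputs (hypothetical predictions for illustration):

```python
def agreement(preds_a, preds_b):
    """Percentage of sentences for which two models give the same output."""
    same = sum(a == b for a, b in zip(preds_a, preds_b))
    return 100 * same / len(preds_a)

# Hypothetical binary outputs of the two best models on 8 sentences.
bert_over   = [1, 0, 1, 1, 0, 0, 1, 1]
bert_transf = [1, 0, 1, 0, 0, 1, 1, 1]
print(agreement(bert_over, bert_transf))  # 75.0
```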
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>In this paper, we presented a dataset of sentences
extracted from the movie Forrest Gump, annotated
with the emotions that a group of subjects
perceived while watching the movie, and we
studied how to predict these emotions. To do so, we
retrieved different kinds of features from the
sentences pronounced by the characters of the movie.
We showed that contextual embeddings extracted
from the sentences can accurately predict specific
emotions, even when the amount of text used for
prediction is very small. By contrast, predicting
generic emotional elicitation requires a larger
amount of text for an accurate prediction. We also
showed that lexical, morpho-syntactic and syntactic
aspects of the sentences cannot be used to infer
emotional elicitation while watching the movie.</p>
      <p>As emotional response is directly correlated
with brain activity, we plan to combine the
contextual embeddings we extracted with fMRI images
recorded while the subjects watched the movie. In this
way, we could verify whether brain images can help
increase the accuracy of perceived-emotion prediction.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We thank MoMiLab research group of IMT Lucca
for having shared with us the data they collected
on human-perceived emotions. Furthermore, we
are grateful to the studyforrest project and all its
contributors.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Acheampong et al.2020]
          <string-name>
            <given-names>Francisca Adoma</given-names>
            <surname>Acheampong</surname>
          </string-name>
          , Chen Wenyu, and
          <string-name>
            <given-names>Henry</given-names>
            <surname>Nunoo-Mensah</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Text-based emotion detection: Advances, challenges, and opportunities</article-title>
          .
          <source>Engineering Reports</source>
          , page
          <fpage>e12189</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Barrett2006]
          <string-name>
            <given-names>Lisa Feldman</given-names>
            <surname>Barrett</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Valence is a basic building block of emotional life</article-title>
          .
          <source>Journal of Research in Personality</source>
          ,
          <volume>40</volume>
          (
          <issue>1</issue>
          ):
          <fpage>35</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Brunato et al.2020]
          <string-name>
            <given-names>Dominique</given-names>
            <surname>Brunato</surname>
          </string-name>
          , Andrea Cimino, Felice Dell'Orletta,
          <string-name>
            <given-names>Giulia</given-names>
            <surname>Venturi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Simonetta</given-names>
            <surname>Montemagni</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Profiling-ud: a tool for linguistic profiling of texts</article-title>
          .
          <source>In Proceedings of The 12th Language Resources and Evaluation Conference</source>
          , pages
          <fpage>7145</fpage>
          -
          <lpage>7151</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Calefato et al.2017]
          <string-name>
            <given-names>Fabio</given-names>
            <surname>Calefato</surname>
          </string-name>
          , Filippo Lanubile, and
          <string-name>
            <given-names>Nicole</given-names>
            <surname>Novielli</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Emotxt: a toolkit for emotion recognition from text</article-title>
          .
          <source>In 2017 seventh international conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)</source>
          , pages
          <fpage>79</fpage>
          -
          <lpage>80</lpage>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Devlin et al.2019]
          <string-name>
            <given-names>Jacob</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ming-Wei</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kristina</given-names>
            <surname>Toutanova</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Bert: Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers), pages
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Goleman2006]
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Goleman</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Emotional intelligence</article-title>
          . Bantam.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Krakovsky2018]
          <string-name>
            <given-names>Marina</given-names>
            <surname>Krakovsky</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Artificial (emotional) intelligence</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Lettieri et al.2019]
          <string-name>
            <given-names>Giada</given-names>
            <surname>Lettieri</surname>
          </string-name>
          , Giacomo Handjaras, Emiliano Ricciardi, Andrea Leo, Paolo Papale, Monica Betta, Pietro Pietrini, and
          <string-name>
            <given-names>Luca</given-names>
            <surname>Cecchetti</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Emotionotopy in the human right temporo-parietal cortex</article-title>
          .
          <source>Nature communications</source>
          ,
          <volume>10</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Mohammad et al.2018]
          <string-name>
            <given-names>Saif M.</given-names>
            <surname>Mohammad</surname>
          </string-name>
          , Felipe Bravo-Marquez,
          <string-name>
            <given-names>Mohammad</given-names>
            <surname>Salameh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Svetlana</given-names>
            <surname>Kiritchenko</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Semeval-2018 Task 1: Affect in tweets</article-title>
          .
          <source>In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , New Orleans, LA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Salovey and Mayer1990]
          <string-name>
            <given-names>Peter</given-names>
            <surname>Salovey</surname>
          </string-name>
          and
          <string-name>
            <given-names>John D.</given-names>
            <surname>Mayer</surname>
          </string-name>
          .
          <year>1990</year>
          .
          <article-title>Emotional intelligence</article-title>
          .
          <source>Imagination, Cognition and Personality</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ):
          <fpage>185</fpage>
          -
          <lpage>211</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Straka and Straková2017]
          <string-name>
            <given-names>Milan</given-names>
            <surname>Straka</surname>
          </string-name>
          and Jana Straková.
          <year>2017</year>
          .
          <article-title>Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe</article-title>
          .
          <source>In Proceedings of the CoNLL</source>
          <year>2017</year>
          <article-title>Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies</article-title>
          , pages
          <fpage>88</fpage>
          -
          <lpage>99</lpage>
          , Vancouver, Canada, August. Association for Computational Linguistics.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>