<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Prediction of Happy Endings in German Novels based on Sentiment Information</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Albin Zehe</string-name>
          <email>zehe@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Becker</string-name>
          <email>becker@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lena Hettinger</string-name>
          <email>hettinger@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Hotho</string-name>
          <email>hotho@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Isabella Reger</string-name>
          <email>isabella.reger@uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fotis Jannidis</string-name>
          <email>fotis.jannidis@uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Würzburg</institution>
          ,
          <addr-line>97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>9</fpage>
      <lpage>16</lpage>
      <abstract>
<p>Identifying plot structure in novels is a valuable step towards automatic processing of literary corpora. We present an approach to classify novels as either having a happy ending or not. To achieve this, we use features based on different sentiment lexica as input for an SVM classifier, which yields an average F1-score of about 73%. In: P. Cellier, T. Charnois, A. Hotho, S. Matwin, M.-F. Moens, Y. Toussaint (Eds.): Proceedings of DMNLP, Workshop at ECML/PKDD, Riva del Garda, Italy, 2016. Copyright © by the paper's authors. Copying only for private and academic purposes.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Every child knows that stories are supposed to have a happy ending. Every adult
knows that this is not always true. In fact, in the course of the 19th century
a happy ending became a sign of popular literature, while high literature was
marked by a preference for the opposite. This makes happy endings an
interesting point of research in the field of digital literary studies, since automatically
recognizing a happy ending, as one major plot element, could help to better
understand plot structures as a whole.</p>
      <p>
        To achieve this, we need a representation of plot that a computer can work
with. In digital literature studies, it has been proposed to use emotional arousal
as a proxy [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. But can we just use existing data mining methods in combination
with sentiment features and expect good results for happy ending classification?
      </p>
      <p>In this work, we tackle the problem of identifying novels with a “happy
ending” by training a classifier. Our goal is not to present the best method for
doing so, but to show that it is generally possible. We introduce our proposed
approach, which already yields results considerably above a random baseline,
and point out some problems and their possible solutions. Our method uses
sentiment lexica in order to derive features with respect to semantic polarity and
basic emotions. To account for the structural dynamics of happy endings, these
features are built by considering the relation of different sections of the novels.
We are able to train a support vector machine (SVM) which yields an average
F1 score of 0.73 on a corpus of over 200 labelled German novels. To the best
of our knowledge, our work is the first to cover happy ending classification.</p>
      <p>The remainder of this paper is structured as follows: related work and
background information are presented in Sections 2 and 3. The features and data we
use are described in Sections 4 and 5. Then we present our results (Section 6).</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>Recently, a lot of attention has been paid to sentiment analysis in the Digital
Humanities community. In this section we cover publications constructing
features that are useful to our task, but have not actually been used to recognize
happy endings.</p>
      <p>
        Matthew Jockers proposed, in a series of blog posts, to use the “analysis of the
sentiment markers” as a novel method for detecting plot [
        <xref ref-type="bibr" rid="ref5 ref7 ref8">5, 7, 8</xref>
        ]. The basic idea
of representing plot by emotions was well received, but the following discussion
showed that his approach of using the Fourier transform (FT) and a low-pass filter
to smooth the resulting curves is not reasonable, since the FT assumes periodicity
of the signal [
        <xref ref-type="bibr" rid="ref14 ref6">6, 14</xref>
        ].
      </p>
      <p>
        Elsner constructs a representation of the plot in a story using sentiment
values, among other features, in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. He cites other works stating that sentiment
is a very important part of plot development and is therefore critical to automatic
understanding of plot.
      </p>
      <p>
        Mohammad builds emotional representations like ours in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], similar
representations are used to automatically compose music from written text.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], Goyal et al. present AESOP, a system that can identify plot units.
AESOP is partially based on affect states, which are closely related to sentiments.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Background</title>
<p>We refer to a novel as having a “happy ending” if the situation of the main
characters in the novel improves towards the end of the story or is constantly
favourable. In this paper, we propose a method for automatically predicting
whether novels have a happy ending or not, based on features derived from
sentiment analysis. We start by formally defining the task of “happy ending
classification” and introduce some concepts of sentiment analysis which are relevant
for our features.</p>
      <p>Happy ending classification. We formally define “happy ending classification”
as a simple classification task: Given a corpus C, we aim to learn a function
f : C → {0, 1}, where f(c) = 1 iff a novel c ∈ C has a “happy ending”. In this
work, we use a support vector machine (SVM) to train and test the classification
function f based on a labelled gold standard. The SVM model requires a feature
vector for each novel (cf. Section 5). We mostly use sentiment based features as
introduced in Section 4.</p>
      <p>
        Sentiment analysis. Since plot construction, and in particular happy endings,
are tightly coupled with sentiments [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], sentiment analysis provides a solid basis
for our classification. The goal of sentiment analysis is to determine the polarity
and emotions a human reader would associate with a given word, sentence or
other element of a text. In this work, we focus on word-level sentiment analysis.
      </p>
      <p>
        Polarity denotes if a word has a positive (e.g. friend) or negative (e.g. war)
connotation. It can be expressed as a ternary value (−1, 0, or 1). A word can also
be associated with a set of basic emotions. There are many definitions for basic
emotions, as discussed in [
        <xref ref-type="bibr" rid="ref10">10</xref>
]. Plutchik defines a set of eight basic emotions
in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]: joy, trust, fear, surprise, sadness, disgust, anger and anticipation.
      </p>
      <p>Generally, polarities and emotions are collected in sentiment lexica. Each
lexicon contains a set of words which it associates with a number of sentiment
values according to a set of dimensions (such as polarity or different emotions).
In Section 4 we derive different features for each novel based on such a lexicon
and in Section 5 we introduce the sentiment lexicon we use in our study.
</p>
    </sec>
    <sec id="sec-4">
      <title>Features</title>
      <p>For “happy ending classification” we derive feature vectors based on a set of
text segments which we combine to form sections. The final feature vectors are
derived based on certain characteristic values of these segments and sections
(e.g. the polarity of the final segment or the difference between the polarity of
the first and the last section).</p>
      <p>
        Negation detection. Our features are based on sentiments. However,
sentiments can be negated (e.g. “not happy”). To account for negations, we apply
the relatively simplistic technique presented in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]: we add a negation marker to
any word between a negation word and the following punctuation, inverting its
sentiment score. Following the textblob implementation,1 we multiply negated
sentiments by 0.5, improving results slightly.
      </p>
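This negation handling can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the German negation-word list, the NOT_ marker and the lexicon layout are our own assumptions.

```python
import re

NEGATION_WORDS = {"nicht", "kein", "keine", "niemals", "nie"}  # assumed list
CLAUSE_END = re.compile(r"[.,;:!?]")

def mark_negations(tokens):
    """Add a negation marker to every token between a negation word
    and the following punctuation, after Pang et al. (2002)."""
    marked, negating = [], False
    for tok in tokens:
        if CLAUSE_END.match(tok):
            negating = False
            marked.append(tok)
        elif tok.lower() in NEGATION_WORDS:
            negating = True
            marked.append(tok)
        elif negating:
            marked.append("NOT_" + tok)
        else:
            marked.append(tok)
    return marked

def sentiment(tok, lexicon, dim):
    """Look up a sentiment value; a negated word contributes its score
    inverted and damped by 0.5, as in the textblob-style scheme above."""
    if tok.startswith("NOT_"):
        return -0.5 * lexicon.get(tok[4:], {}).get(dim, 0.0)
    return lexicon.get(tok, {}).get(dim, 0.0)
```

For example, in "ich bin nicht glücklich", "glücklich" (polarity +1 in a hypothetical lexicon) would contribute −0.5.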
<p>Segments. Given a corpus of novels C, we first split each novel C ∈ C into n
segments, C = {S1, . . . , Sn}. We split evenly by word count, resulting in segments
of size ‖C‖/n, where ‖C‖ denotes the number of words in novel C. Note that the
last segment may be shorter, depending on the length of the novel and the number of
segments.2
Sentiment values for segments. We now derive a set of characteristic values
for each segment. Given a fixed lexicon L with several dimensions (e.g., the
polarity or an emotion), let vd(w) denote the value lexicon L associates with
word w according to dimension d. For example, the word “death” is strongly
associated with the dimension “sadness”, that is, vsadness(“death”) = 1.</p>
      <p>For each segment Si and each dimension d in the lexicon L, we calculate the
characteristic value v¯d(Si) as follows:</p>
<p>
v̄d(Si) = ( Σw∈Li vd(w) ) / |Li|   (1)
where Li denotes the words in Si which are covered by lexicon L.
1 https://pypi.python.org/pypi/textblob-de/. The sentiment analysis in
textblob is not fully ported to German and does not include basic emotions, so
we did not use it directly.
2 The novels were split into words using textblob-de. Words were lemmatized iff the
lexicon used in the respective experiment contained lemmatized forms.
Sections. We merge consecutive segments into sections. We consider two
prominent sections, main and final. The main section Smain = {S1, . . . , Sm} covers the
majority (75% to 98%, depending on the experimental setup) of the segments,
starting from the beginning. The final section Sfinal = {Sm+1, . . . , Sn} covers
the remaining segments and represents the “ending” of the novel. Additionally,
we consider a third section, the late-main section Slate = {S2m−n+1, . . . , Sm},
which covers the last part of Smain. This section is introduced in order to better
capture the sentiment development at the end of the novel. For example, there
may be a catastrophic event shortly before the end which is then resolved,
leading to a happy ending. Since, in our experiments, the late and the final section
are always of the same length, all sections are defined by specifying the number
of segments in the main section m.</p>
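The segment split and Equations (1) and (2) can be sketched as follows. This is a sketch under the assumption that the lexicon maps words to per-dimension scores; the function names are ours.

```python
import math

def split_segments(words, n):
    """Split a novel (a list of words) into n segments of roughly
    |C|/n words each; the last segment may be shorter."""
    size = math.ceil(len(words) / n)
    return [words[i:i + size] for i in range(0, len(words), size)]

def segment_value(segment, lexicon, dim):
    """Equation (1): average lexicon value v_d over the words of the
    segment that are covered by the lexicon (the set L_i)."""
    covered = [w for w in segment if w in lexicon]
    if not covered:
        return 0.0
    return sum(lexicon[w][dim] for w in covered) / len(covered)

def section_value(seg_values):
    """Equation (2): average the per-segment values over a section."""
    return sum(seg_values) / len(seg_values)
```

A segment containing "krieg" (sadness 1) and "freund" (sadness 0), with all other words uncovered, would thus get v̄sadness = 0.5.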
      <p>Sentiment values for sections. For each of these sections, we calculate the
characteristic value averages based on the covered segments. In particular, given
the segments of a novel, C = {S1, . . . , Sn}, and a section S, we calculate the
average characteristic value by extending v¯d to sections:</p>
<p>
v̄d(S) = ( ΣSi∈S v̄d(Si) ) / |S|   (2)
Features. Based on these characteristic values, we finally define the features
for each novel. Given n segments, a main section of size m, and a lexicon L,
the feature vector contains the following values for each dimension d: (1) the
characteristic value of the final section fd,final = v̄d(Sfinal), (2) the characteristic
value of the last segment fd,n = v̄d(Sn), (3) the difference between the main
and the final section fd,main-final = v̄d(Smain) − v̄d(Sfinal), and (4) the difference
between the late-main and the final section fd,late-final = v̄d(Slate) − v̄d(Sfinal).
The change in sentiment values towards the end of the novel is characterized by
these two differences. The differences ensure that a generally sad novel
with a significant improvement in the final segments can still be classified
as having a happy ending and that, conversely, a drop in positive emotions towards the
end of a generally happy novel can be recognized as a sad ending.</p>
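The four features per dimension can be assembled as in this sketch (the function and variable names are ours, not the paper's):

```python
def feature_vector(seg_values, m):
    """Build the four features of Section 4 for one dimension d, given
    seg_values = [v̄_d(S_1), ..., v̄_d(S_n)] and main-section length m.
    The late-main section covers S_{2m-n+1}..S_m, so it has the same
    length as the final section S_{m+1}..S_n."""
    n = len(seg_values)
    main = seg_values[:m]
    late = seg_values[2 * m - n:m]
    final = seg_values[m:]
    avg = lambda xs: sum(xs) / len(xs)
    return [
        avg(final),              # f_{d,final}: characteristic value of the final section
        seg_values[-1],          # f_{d,n}: value of the last segment
        avg(main) - avg(final),  # f_{d,main-final}
        avg(late) - avg(final),  # f_{d,late-final}
    ]
```

For a sad novel whose last two of ten segments turn positive (m = 8), both difference features come out negative, signalling the improvement towards the end.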
    </sec>
    <sec id="sec-5">
      <title>Dataset</title>
      <p>In this section, we describe our annotated corpus, as well as the sentiment lexicon
we derive our features from.</p>
      <p>Annotated novels. Our dataset consists of 212 German novels compiled from
the TextGrid Digital Library3 and the Projekt Gutenberg4, mostly written
between 1750 and 1920. The number of words in the novels ranges from less than
20,000 words up to more than 300,000. These novels have been manually labelled
3 https://textgrid.de/digitale-bibliothek
4 http://gutenberg.spiegel.de
by domain experts as either having a happy ending or not,5 based on sources
like the Kindler,6 Wikipedia7 or by reading relevant parts of the novel. Half of
the novels (106) are annotated as having a happy ending. The annotated data
can be made available upon request.</p>
      <p>
        Sentiment lexica. In this work, we employ the German version of the NRC
sentiment lexicon,8 which is provided by the original author of the English
version [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. It encompasses the following semantic dimensions: if a word is positive
(0 or 1) or negative (0 or 1), and if a word is associated with some basic emotion
(each 0 or 1). We also add another dimension, i.e., the “polarity”, which is the
negative value subtracted from the positive value.
      </p>
      <p>
        After removing duplicates and all-zero entries, which would not help in our
task, the lexicon contains 4597 entries, as exemplified in Table 1. We also
evaluated our approach on SentiWS [
        <xref ref-type="bibr" rid="ref13">13</xref>
] (polarity scores ∈ {−1, 0, 1}) and GPC
(German Polarity Clues) [
        <xref ref-type="bibr" rid="ref15">15</xref>
] (polarity scores ∈ [−1, 1]), however, achieving
inferior results.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Results and Discussion</title>
      <p>In this section we train a support vector machine (SVM)9 for classifying happy
endings of novels on the annotated corpus introduced in Section 5 using the
features presented in Section 4. For the SVM we use an RBF kernel with the
parameters C = 1 and γ = 0.01. A linear kernel gave slightly worse results, and grid
search for parameter selection did not lead to an improvement. We standardize
the features using the sklearn StandardScaler10 before classification. All tests
were run with 10-fold cross-validation.</p>
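The setup above can be reproduced as a minimal scikit-learn sketch. The random X and y stand in for the real feature vectors and labels, and the 44 columns (11 lexicon dimensions × 4 features) are our reading of Sections 4 and 5, not a value stated by the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 44))     # placeholder: one feature vector per novel
y = rng.integers(0, 2, size=212)   # placeholder: happy-ending labels

# RBF-kernel SVM with C=1, gamma=0.01; features standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1, gamma=0.01))

# 10-fold cross-validated F1 score, as in the paper's evaluation.
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
```

On the real features this configuration is reported to reach an average F1 score of about 0.73.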
<p>Baselines. Since our dataset is equally divided into novels with and without a
happy ending, the random baseline as well as the ZeroR classifier (which assigns
every novel to the largest class) reach 50% accuracy. Dropping the notion of
sections and only using the average sentiment values of the entire novel (i.e.,
one value for each dimension in L) yields an F1 score of 0.54, which is slightly
above the random baseline. Adding the scores of the last segment improves the
F1 score to about 0.66 for the best performing segment count n = 75. This
suggests that the final segment of the novel is indeed an important feature for
classifying happy endings.
5 Because this is a simple task, each novel was labelled by only a single domain expert.
6 http://www.derkindler.de
7 de.wikipedia.org
8 http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm
9 We tried some other classifiers as well, with Random Forests and Naive Bayes
reaching about the same score, while k-NN and Decision Trees performed worse.
10 http://scikit-learn.org/stable/modules/preprocessing.html
[Table 2: per-class precision, recall, F1-score and support for “happy ending” = False/True, plus avg/total; numeric values not preserved in this version.]</p>
<p>Best parameter configuration. Our method requires choosing a sentiment
lexicon and the number of segments n, as well as the length of the main section m
(in segments). We compared different configurations and found that working with
the NRC as introduced in Section 5 using n = 75 segments with a main section
of m = 71 segments worked best. Other lexica containing only polarity scores (cf.
Section 5) performed worse, suggesting that the combination of basic emotions
represents a more accurate picture of the overall mood in a novel than polarity
alone. Table 2 shows the results with the best configuration, accumulated over
20 iterations.11
Influence of segmentation and section size. In this paragraph, we describe
how changing the number of segments n and the percentage of segments assigned
to the main section Smain, that is |Smain|/n, influences the results. Figure 1 shows
the average F1-score over 20 test runs based on the NRC lexicon. Each line
corresponds to a segment count n. From a larger set of segment counts we chose
the 4 best performing ones. The x-axis corresponds to the percentage of segments
in the main section. The y-axis shows the F1-score achieved with the respective
configuration. It can be seen that splitting the novels into 50 segments mostly
works very well, but is outperformed by 75 segments with a main section of
71 segments (about 95%). Furthermore, most segmentations perform best when
using about 5% or 10% of the segments as the final section.</p>
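The parameter sweep described above can be outlined as follows. Here `evaluate` is a hypothetical stand-in for running the cross-validated classifier on features built with a given (n, m); the function names are ours.

```python
def best_configuration(segment_counts, evaluate):
    """Grid-search over segment counts n and main-section lengths m,
    mirroring Figure 1. For each n, the main section is varied between
    roughly 75% and 98% of the segments; evaluate(n, m) is assumed to
    return the cross-validated F1-score for that configuration."""
    results = {}
    for n in segment_counts:
        for m in range(int(0.75 * n), n):
            results[(n, m)] = evaluate(n, m)
    # Return the (n, m) pair with the highest score.
    return max(results, key=results.get)
```

With the paper's data, such a sweep selects n = 75 segments with a main section of m = 71.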
      <p>Limitations. Here, we list some limitations of our work and suggest possible
solutions.</p>
      <p>
        One variation we tried was to limit our features to sentences containing
explicit references to the main character of the novel. The intuition behind this is
that the concept of happy ending is closely related to the fate of protagonists of
a story. Using a domain-adapted named entity recognition (NER) toolkit [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], we
selected the main character as the one being explicitly named most often. After
11 The results vary slightly between iterations, with total F1-scores mostly between
71% and 75%. Averaging over 20 iterations yields stable results.
segmenting the novel, we then removed all sentences not mentioning this
character. Contrary to our expectation, this did not improve results but indeed led to
a significant drop in accuracy. The reason for this might be our decision to (for
now) avoid error-prone co-reference resolution or our strict choice of focusing
only on a single main character.
      </p>
      <p>Employing more sophisticated sentiment analysis would likely improve our
results. For example, while our relatively crude negation detection only led to slight
improvements, considering a more advanced set of sentiment shifters should help
to get better results.</p>
      <p>We also did not take into account that some stories are not told in
chronological order. Such stories are difficult for our system, as the happy ending may
happen at some arbitrary point in the text. Working around this problem would
require a way to identify corresponding scenes in different novels.</p>
      <p>Finally, we are currently working on a way to choose the length of the main
section individually for each novel, instead of passing it to the model as a fixed
hyperparameter.
</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>In this work, we have presented an SVM classifier for identifying novels with
happy endings. Our approach is based on features derived from sentiment lexica
and exploits structural dynamics by comparing different sections of the novels.
We consider the F1-score of 0.73 to be a good starting point for future work,
such as evaluating our method on more extensive labelled datasets.
Additionally, it would be interesting to investigate whether the same parameters yield good results
for different novel collections or different languages.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Generating music from literature</article-title>
          .
          <source>In: Proceedings of the 3rd Workshop on Computational Linguistics for Literature (CLFL)</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
. Association for Computational Linguistics, Gothenburg, Sweden (April
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Elsner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
<article-title>Abstract representations of plot structure</article-title>
          .
          <source>LiLT (Linguistic Issues in Language Technology)</source>
          <volume>12</volume>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Goyal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riloff</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
<string-name>
<surname>Daumé III</surname>
,
<given-names>H.</given-names>
</string-name>
:
          <article-title>Automatically producing plot unit representations for narrative text</article-title>
          .
          <source>In: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing</source>
          . pp.
          <fpage>77</fpage>
          -
          <lpage>86</lpage>
          . Association for Computational Linguistics (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Jannidis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krug</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Puppe</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toepfer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weimer</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reger</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
<article-title>Automatische Erkennung von Figuren in deutschsprachigen Romanen</article-title>
(
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Jockers</surname>
            ,
<given-names>M.L.</given-names>
</string-name>
:
          <article-title>A novel method for detecting plot</article-title>
          (
          <year>Jun 2014</year>
), http://www.matthewjockers.net/2014/06/05/a-novel-method-for-detecting-plot/
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Jockers</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          :
          <article-title>Requiem for a low pass filter</article-title>
          (
          <year>Apr 2015</year>
), http://www.matthewjockers.net/2015/04/06/epilogue/
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Jockers</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          :
          <article-title>The rest of the story</article-title>
          (
          <year>Feb 2015</year>
), http://www.matthewjockers.net/2015/02/25/the-rest-of-the-story/
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Jockers</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          :
          <article-title>Revealing sentiment and plot arcs with the syuzhet package</article-title>
          (
          <year>Feb 2015</year>
), http://www.matthewjockers.net/2015/02/02/syuzhet/
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>From once upon a time to happily ever after: Tracking emotions in novels and fairy tales</article-title>
          .
          <source>In: Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage</source>
          ,
          <source>Social Sciences, and Humanities</source>
          . pp.
          <fpage>105</fpage>
          -
          <lpage>114</lpage>
          . Association for Computational Linguistics, Stroudsburg, PA, USA (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
<string-name>
<surname>Turney</surname>
,
<given-names>P.D.</given-names>
</string-name>
:
<article-title>Crowdsourcing a word-emotion association lexicon</article-title>
.
<source>Computational Intelligence</source>
<volume>29</volume>
(3),
          <fpage>436</fpage>
          -
          <lpage>465</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Pang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vaithyanathan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Thumbs up?: Sentiment classification using machine learning techniques</article-title>
          .
          <source>In: Proceedings of the ACL-02 conference on Empirical methods in natural language processing-</source>
          Volume
          <volume>10</volume>
          . pp.
          <fpage>79</fpage>
          -
          <lpage>86</lpage>
          . Association for Computational Linguistics (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
<string-name>
<surname>Plutchik</surname>
,
<given-names>R.</given-names>
</string-name>
:
          <article-title>A general psychoevolutionary theory of emotion</article-title>
          .
          <source>Theories of emotion 1</source>
          ,
          <fpage>3</fpage>
          -
          <lpage>31</lpage>
          (
          <year>1980</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Remus</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Quasthoff</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
<string-name>
<surname>Heyer</surname>
,
<given-names>G.</given-names>
</string-name>
:
<article-title>SentiWS - a publicly available German-language resource for sentiment analysis</article-title>
          .
<source>In: Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC'10)</source>
          . pp.
          <fpage>1168</fpage>
          -
          <lpage>1171</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Schmidt</surname>
            ,
<given-names>B.M.</given-names>
</string-name>
:
          <article-title>Commodius vici of recirculation: the real problem with syuzhet</article-title>
          (
          <year>Apr 2015</year>
), http://benschmidt.org/2015/04/03/commodius-vici-of-recirculation-the-real-problem-with-syuzhet/
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Waltinger</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Sentiment analysis reloaded: A comparative study on sentiment polarity identification combining machine learning and subjectivity features</article-title>
          .
          <source>In: Proceedings of the 6th International Conference on Web Information Systems and Technologies (WEBIST '10)</source>
. Valencia, Spain
          (April
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>