<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>HarryMotions - Classifying Relationships in Harry Potter based on Emotion Analysis</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Data Science Chair - University of Würzburg</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Sentiment analysis has long been a topic of interest in natural language processing and computational literary studies, where it can be used to infer the relationships between fictional characters. Building on the dataset and results of Kim and Klinger (2019), we propose a classifier based on BERT that improves the results reported therein and show that we can use this classifier to determine the relations between characters in the Harry Potter novels. Our proposed sentiment classifier yields an F1-score of up to 75 % for binary classification of emotions. Aggregating these emotions over novels, we reach an F1-score of up to 68 % for the classification of a pair of characters as friendly or unfriendly.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Characters and their relations are one of the basic
building blocks of stories
        <xref ref-type="bibr" rid="ref5">(Hettinger et al., 2015)</xref>
        .
Detecting them automatically is therefore a highly
interesting task for the analysis of fictional texts.
While there exists a multitude of methods for the
extraction of character networks
        <xref ref-type="bibr" rid="ref10 ref16 ref6 ref9">(Labatut and Bost,
2019)</xref>
        , these often provide networks with
unlabelled edges, that is, no information about the kind
of relationship the characters share. Following Kim
and Klinger (2019), we work towards the goal of
detecting the polarity of relations using sentiment
analysis. To this end, we collect all chunks of text
in a novel mentioning a pair of characters and
perform sentiment analysis on these pieces of text.
While methods for sentiment analysis perform very
well for certain domains, mostly short texts like
tweets, product reviews or news articles, the task
still poses a significant challenge in other domains.
Fictional literary texts in particular are hard to
analyse, since they usually do not express emotions
explicitly; instead, emotions have to be inferred from
context and possibly world knowledge.
      </p>
      <p>
        Recently, the trend in NLP has been to use large
transformer models that have been pre-trained for
language modelling (or similar tasks not requiring
explicit annotations) on enormous datasets. We
follow this trend by fine-tuning BERT
        <xref ref-type="bibr" rid="ref3">(Devlin et al.,
2019)</xref>
        to the task of classifying emotions in
interactions between characters. We use BookNLP
        <xref ref-type="bibr" rid="ref1">(Bamman et al., 2014)</xref>
        to extract entity mentions and
co-references and then fine-tune BERT on the
emotion dataset provided by Kim and Klinger (2019).
Emotions are aggregated to detect overall relations
between characters and their development over a
novel, as exemplified in Figure 1 (cf. Section 4).
      </p>
      <p>
        Our contribution is two-fold: 1. We generally
improve results on the emotion classification tasks
from Kim and Klinger (2019). 2. We track the
emotional relations detected by our classifier over
the course of a novel and describe an easy method
to aggregate them to an overall label. We evaluate
this method on the text of the well-known Harry
Potter series
        <xref ref-type="bibr" rid="ref13">(Rowling, 1997)</xref>
        .
      </p>
      <p>The remainder of this paper is structured as
follows: after this introduction, we
present related work. In Section 3, we describe our
approaches to emotion and relation
classification as well as our results. We conclude
with a discussion of the results and some possible
directions for future work.</p>
      <p>Our work is situated at the intersection of sentiment
analysis and social network extraction.</p>
      <p>
        Character networks for works of fiction have
been studied extensively in recent years
        <xref ref-type="bibr" rid="ref10 ref16 ref6 ref9">(Labatut
and Bost, 2019)</xref>
        . Some work has been done
on extracting networks from textual summaries
        <xref ref-type="bibr" rid="ref15 ref15 ref2 ref2 ref7">(Chaturvedi et al., 2016; Srivastava et al., 2016)</xref>
        and training large neural networks to specifically
model relationships over time
        <xref ref-type="bibr" rid="ref7">(Iyyer et al., 2016)</xref>
        .
While Harry Potter novels have been explored
before
        <xref ref-type="bibr" rid="ref10 ref16 ref4 ref6 ref9">(Vilares and Gómez-Rodríguez, 2019; Everton
et al., 2019)</xref>
        , research has not yet concentrated on
emotional relations between characters.
      </p>
      <p>
        For sentiment analysis, most work has focused
on short, self-contained texts like tweets
        <xref ref-type="bibr" rid="ref12 ref6">(Islam
et al., 2019; Rosenthal et al., 2017)</xref>
        or reviews
        <xref ref-type="bibr" rid="ref11 ref14 ref17">(Maas et al., 2011; Xue et al., 2020; Socher et al.,
2013)</xref>
        . Sentiment analysis in fictional texts has
become a topic of interest, but has so far proven
difficult because of the lack of suitable datasets. Kim
and Klinger (2018) provide an extensive overview
of papers addressing the issue of sentiment analysis
in fictional texts, also addressing papers that use
emotions in the context of social network
extraction.
      </p>
      <p>However, most of these works employ rather
simple sentiment analysis methods (e.g., Zehe et al.
(2016) rely on a simple lookup in a sentiment
lexicon). Most similar to our work is Kim and Klinger
(2019), which we directly build upon. The authors
propose a new corpus of short pieces of text
annotated with the emotional relations between
characters described in these texts. They train a GRU
(Cho et al., 2014) neural network to predict the
emotions based on this corpus, showing promising
results with F1-scores of up to 67 % for undirected
binary classification (positive and negative emotions)
and 46 % for 5 basic emotions in the story-level
evaluation as described below. We extend this work
by improving the sentiment analysis model and by
aggregating the instance-level labels over full novels.</p>
    </sec>
    <sec id="sec-2">
      <title>Classifying Emotional Relations</title>
      <p>We address two tasks in this paper: mention-level
emotion classification and story-level relation
classification, which we see as two steps in a pipeline.</p>
      <sec id="sec-2-1">
        <title>Emotion Classification</title>
        <p>Following Kim and Klinger (2019), we define emotion classification as
learning a classifier that, given a short piece of text
(roughly one sentence) containing two characters,
predicts the emotion described therein. We perform
this task on different granularity levels, using either
2, 5 or 8 directed or undirected emotions.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Relation Classification</title>
        <p>We define relation classification as an aggregation of emotions discovered
by step 1 over a novel. In this paper, we distinguish
between “friendly” and “unfriendly” relations.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Method</title>
      </sec>
      <sec id="sec-2-4">
        <title>Emotion Classification</title>
        <p>
          We use a pretrained BERT-model
          <xref ref-type="bibr" rid="ref3">(Devlin et al., 2019)</xref>
          , which we fine-tune to our task using the fast-bert
library (https://github.com/kaushaltrivedi/fast-bert, based on
https://github.com/huggingface/transformers), mostly
keeping the default parameters. We train for 6 (2-,
5-class) or 12 (8-class) epochs with batch size 1.
        </p>
        <p>
          Relation Classification We extract all
interactions from a novel mentioning a pair of characters
(a, b), classify the emotions described therein and
aggregate them to an overall label. We use BookNLP
          <xref ref-type="bibr" rid="ref1">(Bamman et al., 2014)</xref>
          to perform co-reference
resolution and extract all interactions where a and
b each appear at least 20 times in the novel.
We define an interaction as a chunk of text in which a
and b appear with no more than 10 tokens between
them, regardless of sentence boundaries, with 10
additional tokens on both sides as context. We
select only pairs with at least 5 interactions in
the novel and classify the emotions in each of these
interactions using our BERT-based classifier. For
the aggregation of emotions to an overall relation,
we count the positive, negative, neutral and overall
emotions between a and b (pos(a,b), neg(a,b), neu(a,b)
and all(a,b)) and classify the relation by the share
of positive emotions: rel(a, b) = friendly if
pos(a,b) / all(a,b) ≥ λ, and unfriendly otherwise.
The share λ of positive emotions required for a
friendly relationship is a hyper-parameter.
        </p>
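        <p>A minimal sketch of the interaction extraction and the
threshold-based aggregation described above; the function names, the
emotion label strings and the default parameter values are illustrative
assumptions rather than our exact implementation:</p>
        <preformat>
```python
# Illustrative sketch of interaction extraction and relation aggregation.
# Names, labels and defaults are assumptions, not the exact implementation.

def interactions(tokens, pos_a, pos_b, gap=10, context=10):
    """Chunks where characters a and b occur with at most `gap` tokens
    between them, padded with `context` extra tokens on both sides."""
    spans = []
    for i in pos_a:
        for j in pos_b:
            if abs(i - j) - 1 > gap:  # more than `gap` tokens in between
                continue
            lo = max(0, min(i, j) - context)
            hi = min(len(tokens), max(i, j) + context + 1)
            spans.append(" ".join(tokens[lo:hi]))
    return spans


def relation(emotions, threshold=0.4):
    """friendly iff the share of positive emotions reaches the threshold."""
    if len(emotions) == 0:
        return None
    pos = sum(1 for e in emotions if e == "positive")
    return "friendly" if pos / len(emotions) >= threshold else "unfriendly"
```
        </preformat>
        <p>Here, pos_a and pos_b would be the token positions of the two
characters as obtained from co-reference resolution, and the threshold
corresponds to the hyper-parameter discussed above.</p>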
      </sec>
      <sec id="sec-2-5">
        <title>Datasets</title>
        <p>Emotion Classification For the first task, we use
the dataset provided by Kim and Klinger (2019)
and refer to this paper for a detailed description due
to space constraints. The dataset consists of 1335
samples², each annotated according to multiple
schemes. These schemes differ in the number of
emotions that are annotated (two, five or eight) and
whether the emotions are directed (from a causing
to an experiencing character) or undirected.
Relation Classification For the second task, we
have collected our own dataset. To this end, we
used BookNLP on all books from the Harry Potter
series to extract all interactions as described in
Section 3.1. In contrast to the first dataset, we use
automatically extracted characters and co-references
here. We then manually annotated all pairs of
characters for which we found interactions with their
relationship, distinguishing between friendly and
unfriendly relationships. We collected two sets of
independent annotations and, where the two
annotators disagreed, collected a third annotation as a
tie-breaker. The tie-breaker was given the option
to note that there is no (clear) relation between the
two characters. This was the case in the third novel
for the relation between Harry and Sirius Black (cf. Section 4).</p>
        <sec id="sec-2-5-1">
          <title>Evaluation</title>
          <p>[Table 1: numbers of friendly relations, unfriendly relations and
annotator disagreements per novel (HP1–HP7); the individual counts are
not recoverable from this extraction.]</p>
          <p>Table 1 provides details for the resulting
dataset, which we publish for future research.³
Emotion Classification We follow the
evaluation setup from Kim and Klinger (2019) for
emotion classification, who use multiple settings: The
dataset (cf. Section 3.2) provides annotations for
sets of two, five and eight directed or undirected
emotions. Additionally, they define different ways
of representing the entities involved in the
emotions, where some add a marker to entities or
completely mask them (making it impossible for the
model to learn that, e.g., Harry always interacts
positively with Ron). We briefly describe these
schemes in the following and give an example of
how sentences would be represented according to
each scheme:</p>
          <p>No-indicator: Entities are represented
as in the text, the model is directly
fed the unmodified sentence (e.g.,
Alice is angry with Bob).</p>
          <p>Role: Entities are marked as causing or
experiencing (e.g., &lt;e&gt;Alice&lt;/e&gt; is angry
with &lt;c&gt;Bob&lt;/c&gt;), where &lt;e&gt; marks
the experiencing character and &lt;c&gt; the
causing character.</p>
          <p>MRole: Entities are only identified by their
role (&lt;e&gt; is angry with &lt;c&gt;), &lt;e&gt;
and &lt;c&gt; as above.</p>
          <p>Entity: Entities are marked as entities with no
indication as to whether they cause or experience
the emotion (e.g., &lt;et&gt;Alice&lt;/et&gt;
is angry with &lt;et&gt;Bob&lt;/et&gt;).</p>
          <p>MEntity: Entities are masked by entity
markers (e.g., &lt;et&gt; is angry with
&lt;et&gt;).</p>
          <p>²1742 samples overall; following Kim and Klinger (2019), we
use only the subset annotated with a causing character.
³http://professor-x.de/datasets/harrymotions</p>
          <p>Relation Classification In our second
experiment, we use the emotions detected in the previous
step to detect overall relationships between
characters in the Harry Potter series by aggregating over
emotions as described in Section 3.1. In Table 3,
we report macro-averaged F1-scores as well as
accuracies for aggregating emotions as classified in
the Entity and MEntity settings for 2 and 5 emotion
classes, since we do not have role labels for the
Harry Potter corpus and the emotion classification
for 8 emotions did not perform well. Note that the
number of emotions only pertains to the emotion
classification setting; relations are always classified
as friendly or unfriendly. For the 5-class setting, we
define anger, disgust and sadness as negative
emotions, joy as positive and anticipation as neutral.
The parameter λ was optimised on HP1 and is set
to 0.4, except for 5-MEntity (λ = 0.75). Lacking a
directly comparable approach, we report sampling
from the true label distribution per novel as a
baseline (which performs better than majority vote in
our setting). We find that, on average, 2 classes
lead to better results and that we always outperform
the baseline.</p>
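          <p>The 5-class-to-polarity mapping and the label-distribution
baseline described above can be sketched as follows; the label strings,
helper names and trial count are illustrative assumptions:</p>
          <preformat>
```python
# Sketch of the 5-class-to-polarity mapping and the per-novel baseline
# that samples predictions from the true label distribution.
import random
from collections import Counter

# Mapping from the 5 emotion classes to polarity, as defined above.
POLARITY = {"anger": "negative", "disgust": "negative", "sadness": "negative",
            "joy": "positive", "anticipation": "neutral"}


def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1 over the given label set."""
    f1s = []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom > 0 else 0.0)
    return sum(f1s) / len(f1s)


def distribution_baseline(true_labels, trials=1000, seed=0):
    """Expected macro-F1 when predictions are sampled from the true
    label distribution of one novel."""
    rng = random.Random(seed)
    labels = sorted(set(true_labels))
    weights = [Counter(true_labels)[lab] for lab in labels]
    scores = [macro_f1(true_labels,
                       rng.choices(labels, weights=weights,
                                   k=len(true_labels)),
                       labels)
              for _ in range(trials)]
    return sum(scores) / trials
```
          </preformat>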
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Discussion</title>
      <p>In this section, we discuss our findings along with
some of the decisions involved in the dataset
collection and provide some insight regarding the
development of emotions over the course of a novel.
BERT vs. GRU Our BERT-based classifier
outperforms the GRU in all undirected, but not all
directed settings. Specifically, in the 8-class
directed evaluation, the GRU usually performs better
than BERT. We hypothesise two possible reasons:
a) the rather low amount of training data available
for each of the 8 emotion classes, especially in the
directed case. We assume that the GRU’s lower
number of parameters makes it easier to tune on
fewer samples. b) BERT is a bi-directional model,
while the GRU used here is uni-directional. Since
the GRU reads sentences only in their natural order,
while BERT reads in both directions, it might be easier
for the GRU to model directed relations.</p>
      <p>Dataset Collection As mentioned in Section 3.2,
we excluded some relations during the annotation
process. This is due to two reasons: a) errors in
named entity recognition and b) changing
relationships. For the first category, BookNLP returned
the entity “Felix Felicis”, which is a luck potion.
We excluded all relationships involving the potion,
but kept collective entities like “Hogwarts”. In the
second category we find the relationship between
Sirius Black and most other characters in the third
novel. For the majority of the book, Sirius is
regarded as a villain intent on killing Harry, which is
revealed to be wrong at the end of the novel,
turning the relation very positive. Since the label here
is unclear, we excluded it from the dataset.
Developing Relations As described before,
relationships can change drastically within a novel.
Two prominent examples of this in the Harry
Potter novels are the relations between Harry and
Hermione in the first novel (where they become
friends) and between Harry and Sirius Black in
the third novel (see prev. paragraph). We can use
the emotions detected by our classifier to plot a
trajectory over the novel. The polarity for
characters a and b in chapter i is then calculated as
p_i = p_(i-1) + pos(a,b,i) - neg(a,b,i), where pos(a,b,i)
and neg(a,b,i) count the positive and negative emotions
between a and b in chapter i, respectively, and p_0 := 0. We show
plots for three examples in Figure 1, using
predictions from the 2-class MEntity classification. In
all cases, the trajectory matches our expectation:
For Harry and Hermione, the relation starts
neutral with a very clear upper trend after they become
friends. For Ron, the relation quickly becomes very
positive. For Sirius, the relation is mostly negative,
while improving clearly in the final chapters.
</p>
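      <p>The chapter-wise polarity trajectory described above can be
sketched as follows; the emotion label strings are illustrative
assumptions:</p>
      <preformat>
```python
def polarity_trajectory(chapter_emotions):
    """Cumulative polarity per chapter: p_i = p_(i-1) + pos_i - neg_i.

    `chapter_emotions` holds one list of predicted emotion labels
    ('positive' / 'negative' / 'neutral') per chapter; the label
    names are illustrative assumptions.
    """
    p = 0
    trajectory = [p]  # p_0 := 0
    for emotions in chapter_emotions:
        pos = sum(1 for e in emotions if e == "positive")
        neg = sum(1 for e in emotions if e == "negative")
        p = p + pos - neg
        trajectory.append(p)
    return trajectory
```
      </preformat>
      <p>Plotting such a trajectory per chapter yields curves like those
in Figure 1.</p>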
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>We have presented an improved approach for the
classification of emotional relations between
fictional characters. By aggregating sentence level
emotions, we have built a classifier for
novelwide character relations based on emotion
analysis. While our experiments show that
aggregation yields promising results, future work
includes the development of a stronger classifier for
story-level relations. We also plan on investigating the
influence of co-reference resolution, which is
currently done automatically. Using manual labels
or improved co-reference resolution should
further improve our results: first experiments indicate
better performance for frequent characters, where
resolution errors are more easily smoothed out.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>Many thanks to Darleen Pappelau for helpfully
providing the tie-breaker annotations for the dataset.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>David</given-names>
            <surname>Bamman</surname>
          </string-name>
          ,
          <source>Ted Underwood, and Noah A Smith</source>
          .
          <year>2014</year>
          .
          <article-title>A bayesian mixed effects model of literary character</article-title>
          .
          <source>In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          , pages
          <fpage>370</fpage>
          -
          <lpage>379</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Snigdha</given-names>
            <surname>Chaturvedi</surname>
          </string-name>
          , Shashank Srivastava,
          <string-name>
            <surname>Hal Daume</surname>
            <given-names>III</given-names>
          </string-name>
          , and
          <string-name>
            <given-names>Chris</given-names>
            <surname>Dyer</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Modeling dynamic relationships between characters in literary novels</article-title>
          .
          <source>In Thirtieth AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>Kyunghyun</given-names>
            <surname>Cho</surname>
          </string-name>
          , Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and
          <string-name>
            <given-names>Yoshua</given-names>
            <surname>Bengio</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Learning phrase representations using RNN encoder-decoder for statistical machine translation</article-title>
          .
          <source>In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Jacob</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ming-Wei</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kristina</given-names>
            <surname>Toutanova</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers), pages
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          , Minneapolis, Minnesota. Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Sean</given-names>
            <surname>Everton</surname>
          </string-name>
          , Tara Everton, Aaron Green,
          <string-name>
            <given-names>Cassie</given-names>
            <surname>Hamblin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Rob</given-names>
            <surname>Schroeder</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Strong ties and where to find them: Or, why Neville (and Ginny and Seamus) and Bellatrix (and Lucius) might be more important than Harry and Tom</article-title>
          . SSRN.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Lena</given-names>
            <surname>Hettinger</surname>
          </string-name>
          , Martin Becker, Isabella Reger, Fotis Jannidis, and
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Hotho</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Genre classification on german novels</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Text-based Information Retrieval.</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Jumayel</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Robert E.</given-names>
            <surname>Mercer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Lu</given-names>
            <surname>Xiao</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Multi-channel convolutional neural network for twitter emotion and sentiment recognition</article-title>
          .
          <source>In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers), pages
          <fpage>1355</fpage>
          -
          <lpage>1365</lpage>
          , Minneapolis, Minnesota.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Mohit</given-names>
            <surname>Iyyer</surname>
          </string-name>
          , Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III.
          <year>2016</year>
          .
          <article-title>Feuding families and former friends: Unsupervised learning for dynamic fictional relationships</article-title>
          .
          <source>In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , pages
          <fpage>1534</fpage>
          -
          <lpage>1544</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Evgeny</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Roman</given-names>
            <surname>Klinger</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>A survey on sentiment and emotion analysis for computational literary studies</article-title>
          . Submitted for review to DHQ (http://www.digitalhumanities.org/dhq/).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>Evgeny</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Roman</given-names>
            <surname>Klinger</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Frowning Frodo, wincing Leia, and a seriously great friendship: Learning to classify emotional relationships of fictional characters</article-title>
          .
          <source>In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers), pages
          <fpage>647</fpage>
          -
          <lpage>653</lpage>
          , Minneapolis, Minnesota. Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Vincent</given-names>
            <surname>Labatut</surname>
          </string-name>
          and
          <string-name>
            <given-names>Xavier</given-names>
            <surname>Bost</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Extraction and analysis of fictional character networks: A survey</article-title>
          .
          <source>ACM Computing Surveys (CSUR)</source>
          ,
          <volume>52</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>Andrew L.</given-names>
            <surname>Maas</surname>
          </string-name>
          , Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts
          .
          <year>2011</year>
          .
          <article-title>Learning word vectors for sentiment analysis</article-title>
          .
          <source>In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies</source>
          , pages
          <fpage>142</fpage>
          -
          <lpage>150</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Sara</given-names>
            <surname>Rosenthal</surname>
          </string-name>
          , Noura Farra, and
          <string-name>
            <given-names>Preslav</given-names>
            <surname>Nakov</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>SemEval-2017 task 4: Sentiment analysis in Twitter</article-title>
          .
          <source>In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          , pages
          <fpage>502</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>J. K.</given-names>
            <surname>Rowling</surname>
          </string-name>
          .
          <year>1997</year>
          .
          <article-title>Harry Potter and the Philosopher's Stone</article-title>
          , 1st edition, volume
          <volume>1</volume>
          . Bloomsbury Publishing, London.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>Richard</given-names>
            <surname>Socher</surname>
          </string-name>
          , Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts
          .
          <year>2013</year>
          .
          <article-title>Recursive deep models for semantic compositionality over a sentiment treebank</article-title>
          .
          <source>In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>1631</fpage>
          -
          <lpage>1642</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>Shashank</given-names>
            <surname>Srivastava</surname>
          </string-name>
          , Snigdha Chaturvedi, and Tom Mitchell.
          <year>2016</year>
          .
          <article-title>Inferring interpersonal relations in narrative summaries</article-title>
          .
          <source>In Thirtieth AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>David</given-names>
            <surname>Vilares</surname>
          </string-name>
          and Carlos Gómez-Rodríguez.
          <year>2019</year>
          .
          <article-title>Harry Potter and the action prediction challenge from natural language</article-title>
          .
          <source>In Proceedings of NAACLHLT</source>
          , pages
          <fpage>2124</fpage>
          -
          <lpage>2130</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <given-names>Qianming</given-names>
            <surname>Xue</surname>
          </string-name>
          , Wei Zhang, and
          <string-name>
            <given-names>Hongyuan</given-names>
            <surname>Zha</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Improving domain-adapted sentiment classification by deep adversarial mutual learning</article-title>
          . Accepted to appear
          <source>in AAAI'20.</source>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>Albin</given-names>
            <surname>Zehe</surname>
          </string-name>
          , Martin Becker, Lena Hettinger, Andreas Hotho, Isabella Reger, and
          <string-name>
            <given-names>Fotis</given-names>
            <surname>Jannidis</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Prediction of happy endings in german novels</article-title>
          .
          <source>In Proceedings of the Workshop on Interactions between Data Mining and Natural Language Processing</source>
          <year>2016</year>
          , pages
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>