<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>One Graph to Rule them All: Using NLP and Graph Neural Networks to analyse Tolkien's Legendarium</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincenzo Perri</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lisi Qarkaxhija</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Albin Zehe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Hotho</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ingo Scholtes</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Chair of Informatics X - Data Science, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg</institution>
          ,
          <addr-line>D-97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Chair of Informatics XV - Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg</institution>
          ,
          <addr-line>D-97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Data Analytics Group, Department of Informatics(IfI), Universität Zürich</institution>
          ,
          <addr-line>CH-8050 Zürich</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <fpage>291</fpage>
      <lpage>317</lpage>
      <abstract>
<p>Natural Language Processing and Machine Learning have considerably advanced Computational Literary Studies. Similarly, the construction of co-occurrence networks of literary characters, and their analysis using methods from social network analysis and network science, have provided insights into the micro- and macro-level structure of literary texts. Combining these perspectives, in this work we study character networks extracted from a text corpus of J.R.R. Tolkien's Legendarium. We show that this perspective helps us to analyse and visualise the narrative style that characterises Tolkien's works. Addressing character classification, embedding and co-occurrence prediction, we further investigate the advantages of state-of-the-art Graph Neural Networks over a popular word embedding method. Our results highlight the large potential of graph learning in Computational Literary Studies.</p>
      </abstract>
      <kwd-group>
<kwd>computational literary studies</kwd>
        <kwd>character networks</kwd>
        <kwd>network analysis</kwd>
        <kwd>graph neural networks</kwd>
        <kwd>NLP</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Motivation and Background</title>
      <p>
        Computational Literary Studies (CLS) have recently taken advantage of the latest developments
in Natural Language Processing (NLP), with deep learning techniques bringing major
improvements for tasks relevant to literature analysis: Examples include named entity recognition
[
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], anaphora and coreference resolution [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ], sentiment analysis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], scene detection [54]
or genre classification [
        <xref ref-type="bibr" rid="ref52">52</xref>
        ]. Apart from NLP, machine learning has recently shown great
potential for data with complex relational structure that can be represented as a graph or network
G = (V, E), consisting of nodes u, v, … ∈ V and links (u, v) ∈ E. In CLS, this abstraction is
frequently used to study character networks, i.e. graphs where nodes represent literary
characters and links represent relationships such as, e.g., their co-occurrence in a sentence or scene,
or dialogue interactions [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Building on this abstraction, several works in CLS used (social)
network analysis to study the narrative structure of literary works [
        <xref ref-type="bibr" rid="ref32 ref47">32, 47</xref>
        ]: Considering 19th
century novels, Elson, McKeown, and Dames [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] analysed macroscopic properties of
conversation networks, i.e. fictional characters engaging in dialogue. Beveridge and Shan [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] applied
centrality measures and community detection to a network of characters that occur within
close proximity in the text of A Storm of Swords, the third novel in George R.R. Martin’s series
A Song of Ice and Fire. Using the same network extraction, Bonato, D’Angelo, Elenberg, Gleich,
and Hou [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] studied statistical properties of character networks in three popular novels.
Several authors applied centrality measures to identify important characters, e.g. in works by
Shakespeare [53], Alice in Wonderland [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], or the first novel of the Harry Potter series [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ]. In
a recent work, Agarwal, Vijay, et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] used character networks to facilitate the automated
classification of literary genres. Using J.R.R. Tolkien’s The Lord of the Rings, which we also
consider in our manuscript, Ribeiro, Vosgerau, Andruchiw, and Pinto [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ] applied social
network analysis to its network of characters. Li, Zhang, Tan, and Li [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] studied small-world and
scale-free properties of character co-occurrence networks in movie scripts, among them the
movie adaptation of The Lord of the Rings.
      </p>
      <p>
        Existing studies of character networks mainly used methods from (social) network analysis
to gain insights into the narrative structure of literary texts. At the same time, recent advances
in geometric deep learning [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and Graph Neural Networks (GNNs) provide new ways to apply
deep learning to graph-structured data, which creates interesting opportunities for the study of
character networks. A major advantage of GNNs over other network analysis or machine
learning techniques is their ability to leverage relational patterns in the topology of a graph, while
at the same time incorporating additional node or link features. This facilitates unsupervised
and (semi-)supervised learning tasks at the node, link, or graph level. Examples include graph
representation learning [
        <xref ref-type="bibr" rid="ref17 ref34">34, 17</xref>
        ], community detection [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], node and link classification [
        <xref ref-type="bibr" rid="ref21 ref49">21,
49</xref>
        ], link prediction [55], or graph classification [
        <xref ref-type="bibr" rid="ref53">56</xref>
        ]. To the best of our knowledge, no works
have combined recent advances in (i) natural language processing, e.g. to recognize entities,
resolve coreferences, extract meaningful character networks, or generate word embeddings, and
(ii) graph neural networks, e.g. to compute latent space representations of fictional characters
or address node- and link-level learning tasks. In CLS, the combination of these paradigms
can help with several tasks, e.g. (semi-)supervised character classification, semantic analyses
of character relationships, comparisons of character constellations in different works, or
automated genre classification. Addressing this gap, we combine NLP, network analysis and graph
learning to analyse J.R.R. Tolkien’s Legendarium, namely The Silmarillion, The Hobbit, and the
three volumes of The Lord of the Rings. Our contributions are:
• We apply entity recognition and coreference resolution to detect and disambiguate
characters in Tolkien’s Legendarium. We report key statistics like character mentions via
pronouns, nominal mentions or explicit references and compare them across different
works in the Legendarium. We use sequential character mentions to generate narrative
charts of the five considered works, which highlight Tolkien’s interlaced narrative style.
• We extract character networks for each work in our corpus, i.e. graphs G = (V, E) where
nodes v ∈ V capture characters in the Legendarium, while undirected edges (v, u)
represent the co-occurrence of two characters in the same sentence. Apart from generating
character networks for the five works in our corpus, we generate a single network that
captures all characters in the works that constitute Tolkien’s Legendarium. We use this
to perform a macroscopic, graph-based characterisation of Tolkien’s works.
• We address the question to what extent graph learning techniques can leverage the
topology of automatically extracted character networks, and which advantages they provide
over word embeddings that just consider word-context pairs. To this end, we evaluate
the performance of different methods for the (i) latent space representation of
characters, (ii) (semi-)supervised classification of characters, and (iii) prediction of character
co-occurrences. The results confirm that the inclusion of topological information from
character networks considerably improves the performance of these tasks.
      </p>
      <p>Analysing a single graph representation of multiple literary works unfolding in the same
fictional universe, our work demonstrates the potential of graph neural networks for
computational literary studies. To facilitate the use of our data and methods in the (digital) study of
Tolkien’s works1, we make the results of our entity recognition, coreference resolution and
character network extraction available. To foster the application of our methods to other
corpora, we also provide a set of Jupyter notebooks that reproduce our findings.2</p>
    </sec>
    <sec id="sec-2">
      <title>2. Text Corpus and Data Processing</title>
      <p>
        We consider the English full text of The Lord of the Rings (consisting of the volumes The
Fellowship of the Ring, The Two Towers and The Return of the King, each split into two books), The
Hobbit and The Silmarillion. We used the python-based NLP pipeline BookNLP3 to extract
linguistic and literary features. BookNLP uses the NLP service spaCy [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] to perform tokenisation,
that is, splitting text into a list of words and special characters (like punctuation), and to split
the text into sentences. This first step in the NLP pipeline is the basis for all further processing
steps.
      </p>
      <p>
        Entity Recognition and Coreference Resolution Entity recognition refers to the task of
detecting all references to entities (e.g., characters, locations) in a text corpus. These references
can either be explicitly named references (e.g., “Bilbo Baggins”, “Smaug”), noun phrases (e.g.,
“the hobbit”, “the dragon“) or pronouns (e.g., “she”, “they”). BookNLP uses an entity annotation
model that has been trained on a large annotated data set [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to identify named entities, noun
phrases as well as pronoun references. After these references have been detected, the next
step is coreference resolution, which is a very hard task in general [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ] and is
especially hard in the context of literary texts due to the high variation of references used
and the very long texts [
        <xref ref-type="bibr" rid="ref22 ref40">40, 22</xref>
        ]. Con昀椀rming this view, our initial analyses revealed that the
performance of BookNLP’s coreference resolution, which was trained on a data set of annotated
coreferences [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], was not satisfactory when applied to our corpus. We thus decided to focus
on named references, and resolve these using a set of simple manually-created disambiguation
rules (e.g., “Sam” → “Sam Gamgee”, “Peregrin” → “Pippin”).4 Although this approach may yield
1https://digitaltolkien.com/.
2https://github.com/LSX-UniWue/one-graph-to-rule-them-all.
3https://github.com/booknlp/booknlp.
4The full list of disambiguation rules can be found in our repository.
a low recall (i.e. there are many unidentified coreferences since pronouns and noun phrases
are not considered), we find that this coreference resolution yields high precision (i.e. almost
all resolved coreferences that we inspected manually were correct). We found this approach
preferable over a “full” coreference resolution for two reasons: First, considering our focus
on character networks, a coreference resolution with high recall but lower precision would
give rise to many spurious character co-occurrences that would harm our analyses of graph
learning techniques. Second, our corpus of Tolkien’s Legendarium is special in the sense that
it has a large number of named references, which give rise to rich character networks despite
limiting our view to named references.
      </p>
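      <p>The rule-based resolution of named references described above can be sketched as follows. The two rules shown are the examples quoted in the text; the function name is our own, and the full rule set lives in the authors’ repository.</p>

```python
# Sketch of rule-based disambiguation of named character references.
# The two rules below are the examples given in the text; the full
# rule set is in the paper's repository.
DISAMBIGUATION_RULES = {
    "Sam": "Sam Gamgee",
    "Peregrin": "Pippin",
}

def resolve_named_reference(name):
    """Map a surface form to its canonical character name.

    Names without an explicit rule are treated as already canonical,
    which trades recall (pronouns and noun phrases are ignored) for
    the high precision discussed above.
    """
    return DISAMBIGUATION_RULES.get(name, name)

print(resolve_named_reference("Sam"))      # Sam Gamgee
print(resolve_named_reference("Gandalf"))  # Gandalf
```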
      <p>
Extraction of Character Co-Occurrences After finding references to characters, we next
extract co-occurrences of pairs of characters that can be used to build character networks.
While the co-occurrence of characters does not necessarily imply an interaction between them,
due to its simplicity it is a frequently used approach to construct character networks in CLS
[
        <xref ref-type="bibr" rid="ref23 ref7 ref8">7, 23, 8</xref>
        ]. After evaluating different strategies (cf. Appendix A), we decided to extract each
co-occurrence of two characters in the same sentence.
      </p>
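      <p>The sentence-level extraction strategy can be sketched as follows; the helper function and the example sentences are our own illustration, assuming character mentions per sentence have already been resolved to canonical names.</p>

```python
from collections import Counter
from itertools import combinations

def extract_cooccurrences(sentences_with_characters):
    """Count co-occurrences of character pairs per sentence.

    `sentences_with_characters` is a list of character-name sets, one
    per sentence (i.e. the output of entity recognition plus the
    disambiguation rules). Pairs are stored in sorted order so that
    (u, v) and (v, u) count as the same undirected link.
    """
    counts = Counter()
    for characters in sentences_with_characters:
        for pair in combinations(sorted(characters), 2):
            counts[pair] += 1
    return counts

sentences = [
    {"Frodo", "Sam"},
    {"Frodo", "Sam", "Gollum"},
    {"Gandalf"},  # a single mention yields no pair
]
counts = extract_cooccurrences(sentences)
print(counts[("Frodo", "Sam")])  # 2
```

The resulting counter doubles as the weighted edge list of the character network built in Section 3.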
      <p>Descriptive Statistics of Text Corpus In Table 1, we provide key summary statistics that
characterize the different works in our corpus. Apart from differences in terms of tokens, we
find striking differences between character mentions in the five texts: The Silmarillion uses
considerably fewer pronoun mentions than the other four works, while using more explicit
references by name. We attribute this to the compressed writing style of The Silmarillion, which
rather resembles a fictional historical record that lists character names and locations and gives
a chronology of events, compared to the more conventional prose style of The Hobbit and The
Lord of the Rings.</p>
      <p>We calculate the number of character co-occurrences for each work in our corpus, which we
indicate in Table 1 and visualise in Figure 1a – Figure 1f. The overall number of co-occurrences
is influenced by the number of explicit references (cf. Table 1), since we only extract
co-occurrences if both characters are mentioned by name in the same sentence.</p>
      <p>Character-based Visualization of Narrative Structure We finally use the character
mentions in conjunction with the chapter in which they occurred to automatically produce a
narrative chart for the three volumes of The Lord of the Rings. To avoid occurrences of characters
that are only mentioned while not being present, we excluded mentions of characters within
dialogues, as detected by BookNLP. The narrative charts for the three volumes of The Lord of
the Rings are shown in the three panels of Figure 2. The columns within each panel represent
chapters. To ease the presentation, we focus on a selected set of main characters shown in
the rows. The color of each row-column cell represents the fraction of times
a character is mentioned in a given chapter, relative to the number of mentions of other key
characters presented in the narrative chart.</p>
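      <p>The normalisation behind the narrative-chart cells can be sketched as follows; the function and the per-chapter counts are hypothetical illustrations, not the authors’ implementation.</p>

```python
def chapter_fractions(mention_counts):
    """Normalise per-chapter mention counts of the selected key
    characters, so that each column of the narrative chart sums to 1.

    `mention_counts` maps a character name to its mentions in one
    chapter (mentions inside dialogue already excluded).
    """
    total = sum(mention_counts.values())
    if total == 0:
        return {name: 0.0 for name in mention_counts}
    return {name: n / total for name, n in mention_counts.items()}

chapter = {"Frodo": 30, "Sam": 20, "Gollum": 0}  # hypothetical counts
print(chapter_fractions(chapter)["Frodo"])  # 0.6
```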
      <p>
        While these charts are easy to generate, we can use them to identify major plot lines and
reveal a narrative structure that is characteristic for The Lord of the Rings: In volume I, we
see that Frodo and the Hobbits maintain a chief role throughout books I and II, i.e. both parts
of volume I of The Lord of the Rings. A shift is visible for the second half (book II), which
coincides with the Hobbits’ arrival in Rivendell. In volume II, we see a clear transition in
narrative structure that is due to Tolkien’s adoption of an interlacing narrative style [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ]. The
first half of volume II (book III) focuses on the main characters Aragorn, Legolas, and Gimli,
and their attempt to rescue Merry and Pippin from the Uruk-Hai. Leaping back in time, the
second half of volume II (book IV) focuses on the journey of Frodo, Sam, Gollum and their
encounter with Faramir, which coincides with a brief absence of Gollum as he hides from the
rangers. Following the original titles suggested for the six books that constitute the three
volumes of The Lord of the Rings [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ], these interlaces can be called The Treason of Isengard
and The Journey of the Ring-bearers.
      </p>
      <p>[Figure 1: character co-occurrence networks for (a) TLoTR, Vol. I; (b) TLoTR, Vol. II; (c) TLoTR, Vol. III; (d) The Hobbit; (e) The Silmarillion; (f) the Legendarium.]</p>
      <p>
        In volume III, we see a similar but less marked separation
between two interlaced plot lines: The first half (book V) focuses on the War of the Ring in
Gondor, while book VI continues the story of Frodo’s and Sam’s journey to Mount Doom,
followed by the Scouring of the Shire, which explains Merry’s and Saruman’s reappearance
in the last part. The original titles referring to those interlaces are The War of the Ring and
The End of the Third Age. The non-linear narrative style expressed in the narrative charts
in Figure 2 is common in early medieval literature [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Tolkien likely adopted it due to his
familiarity with medieval literature, to increase suspense and to achieve a sense of historicity
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Since these charts capture the narrative structure that we would expect, we can assume
that our character extraction, even though using only limited coreference resolution, captures
the overall occurrences of key characters rather well. In Appendix B we include additional
narrative charts for The Hobbit and The Silmarillion.
      </p>
      <p>
        <bold>3. Analysis of Character Co-occurrence Networks</bold>
Building on Section 2, we now use character co-occurrences to construct character networks for
J.R.R. Tolkien’s Legendarium. A graph or network is defined as a tuple G = (V, E), where u, v, … ∈ V
represent nodes and (u, v) ∈ E ⊆ V × V is a set of directed or undirected links. For our analysis,
we model characters as nodes V and their co-occurrences within a sentence as undirected links
E, i.e. (u, v) ∈ E ⟹ (v, u) ∈ E for all u, v ∈ V. We further add link weights w : E → ℕ, i.e.
we assign a value w((u, v)) to each link (u, v) that counts the co-occurrences of u and v. We
adopt this definition to construct character networks for each of the five works in our corpus. A
particularly interesting aspect of our corpus is that all works refer to a single Legendarium, with
frequent cross-reference, and thus, overlaps in terms of characters. We use this to construct a
single network of characters across all works, which we call Legendarium Graph. Except for a
single disconnected pair of nodes that we removed, the resulting graph has a single connected
component with 238 nodes, which enables a macroscopic analysis of Tolkien’s Legendarium.
Figure 3 shows a visualisation of the character network that has been generated using the
force-directed layout algorithm by Fruchterman and Reingold [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This algorithm simulates
attractive forces between connected nodes (and repulsive forces between all nodes) such that
their positions in the stable state of the resulting dynamical system highlight patterns in the
topology.
      </p>
      <p>
        Network Size, Density, and Mean Degree Modelling character networks enables us to
compute metrics from social network analysis, which we report in Table 2. The first columns
of Table 2 show the number of nodes n = |V| and links m = |E| and the density δ = 2m/(n(n − 1)) of each
character network, where the latter captures which fraction of possible links actually occurred.
We find that The Hobbit and The Silmarillion have the smallest and largest number of nodes,
respectively. The Hobbit has a higher link density, which is likely because the plot is focused
on a small number of strongly interacting characters. For The Lord of the Rings we see that the
number of nodes for Vol. III is smaller than for the previous two volumes, while the density is
larger. This reflects the fact that the last volume is strongly focused on interactions between
small groups of characters, e.g. Frodo, Sam and Gollum. The degree of a node v ∈ V is defined
as the number of other nodes to which it is connected by links, i.e. d(v) := |{u : (u, v) ∈ E}|.
The fourth column in Table 2 reports the mean node degree of characters, where larger mean
degrees are associated with higher link density.
      </p>
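      <p>The metrics discussed above (density, mean degree, shortest path lengths and diameter) can be computed directly from the extracted edge list; a minimal standard-library sketch on a toy graph (character names and edges are illustrative, not the actual network):</p>

```python
from collections import deque

# Toy undirected character network as an adjacency dict
# (link weights are irrelevant for the metrics below).
edges = [("Frodo", "Sam"), ("Frodo", "Gandalf"), ("Sam", "Gollum")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

n = len(adj)                     # number of nodes |V|
m = len(edges)                   # number of links |E|
density = 2 * m / (n * (n - 1))  # fraction of possible links
mean_degree = sum(len(nb) for nb in adj.values()) / n

def bfs_distances(source):
    """Shortest-path lengths from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

lengths = [d for s in adj for d in bfs_distances(s).values() if d > 0]
avg_shortest_path = sum(lengths) / len(lengths)
diameter = max(lengths)

print(density, diameter)  # 0.5 3
```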
      <p>
        Shortest paths, Diameter and Betweenness Centrality An important feature of (social)
networks is the structure of shortest paths between nodes, which provide a topological notion
of pair-wise distances [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. We calculate shortest paths between all pairs of nodes and report
the average shortest path length across all character pairs. We further calculate the diameter,
i.e. the maximum shortest path length between any pair of nodes. The results in Table 2 show
that, on the one hand, the average shortest path length is associated with the number of nodes
(smallest average shortest path length for the smallest networks The Hobbit and The Lord of
the Rings Vol. III). On the other hand, it is influenced by mean degree and link density, which
explains why The Lord of the Rings Vol. III and The Silmarillion have similar values despite the
latter having more than twice as many nodes.
      </p>
      <p>[Figure 3: force-directed visualisation of the Legendarium character network; node-label text omitted.]</p>
      <sec id="sec-2-1">
        <sec id="sec-2-1-2">
          <p>
            The shortest paths between characters allow us
to define a path-based notion of centrality. The betweenness centrality [
            <xref ref-type="bibr" rid="ref50">50</xref>
            ] of node v captures
the number of shortest paths between any pair of nodes that pass through v, i.e. nodes have
higher betweenness centrality if they are important for the “communication” between other
nodes. The second column of Table 3 reports characters with the highest betweenness centrality
for the single Legendarium graph. We note the high betweenness centrality of Galadriel, who,
despite an ephemeral appearance in The Lord of the Rings and absence in The Hobbit, is one of
the few characters that link the narrative across different ages in Tolkien’s mythology.
      </p>
      <p>
        <bold>4. Application of Graph Neural Networks</bold>
We now turn our attention to the application of state-of-the-art graph learning techniques to
the character network of Tolkien’s Legendarium, which has been introduced and characterised
in the previous section. Our analysis is focused on our guiding research question outlined in
Section 1, i.e. what additional information we can draw from the topology of the character
network, compared to an application of standard machine learning to a word embedding
technique. A major hurdle for the application of machine learning to character networks is that
standard techniques like, e.g., logistic regression, support vector machines, or neural networks
require features in a continuous vector space. Their application to discrete objects like graphs
typically requires a two-step procedure that consists of (i) a representation learning or
embedding step to extract vectorial features of nodes and/or links, and (ii) a downstream application
of machine learning to those features. This approach is limited by the fact that graphs with
complex topologies are fundamentally non-Euclidean objects [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ], which limits our ability to
find a generic vector space representation that is suitable for different learning tasks.
          </p>
          <p>
            Addressing this limitation, recent works in the field of Geometric Deep Learning [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ] and
Graph Learning have generalized deep learning to graph-structured data. Among those works,
Graph Neural Networks (GNNs) [
            <xref ref-type="bibr" rid="ref15 ref16 ref39">43, 16, 39, 15</xref>
            ] have developed into a particularly successful
paradigm. A major advantage of GNNs over other network analysis or machine learning
techniques is their ability to capture both relational patterns in the topology of a graph and
additional (vectorial) features of nodes or links. A common concept of GNNs is their use of
hidden layers with neural message passing, i.e. nodes repeatedly exchange and update feature
vectors with neighbouring nodes, thus incorporating information from their neighbourhood.
The (hidden) features generated in this way can be used to address learning tasks by means of
a perceptron model with a non-linear activation function. The gradient-based optimization of
GNNs can be thought of as an implicit way to generate a topology- and feature-aware latent
space representation of a graph that facilitates node-, link- or graph-level learning tasks [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ].
          </p>
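      <p>One round of the neural message passing described above can be sketched in numpy; this is a GCN-style mean aggregation with random placeholder weights, a conceptual illustration rather than a trained model from the paper:</p>

```python
import numpy as np

def message_passing_layer(A, X, W):
    """One GCN-style message-passing step.

    Each node averages the feature vectors of itself and its
    neighbours, then applies a learnable weight matrix W and a
    non-linear activation; the hidden layers of a GNN stack such steps.
    """
    A_hat = A + np.eye(A.shape[0])   # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat @ X) / deg            # mean over the neighbourhood
    return np.maximum(H @ W, 0.0)    # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # toy 3-node graph
X = rng.normal(size=(3, 4))             # 4 input features per node
W = rng.normal(size=(4, 20))            # hidden layer with 20 neurons
H = message_passing_layer(A, X, W)
print(H.shape)  # (3, 20)
```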
          <p>Moving beyond the social network analysis techniques applied in Section 3, in the
following we apply two state-of-the-art GNN architectures to the character network of Tolkien’s
Legendarium. We use them to address three unsupervised and supervised learning tasks: (i)
representation learning, i.e. finding a meaningful latent space embedding of characters, (ii)
(semi-)supervised node classification, i.e. assigning characters to different works in the
Legendarium, and (iii) link prediction, i.e. using a subset of links in the graph to predict missing links
in a holdout set.</p>
          <p>
            Latent Space Representation of Characters in Tolkien’s Legendarium Representation
learning is a common challenge both in natural language processing and graph learning. In
NLP, word or text embedding techniques are commonly used to generate vector space
representations that can then be used to apply downstream machine learning techniques to literary
texts. A popular word embedding technique is word2vec [
            <xref ref-type="bibr" rid="ref30 ref31">31, 30</xref>
            ], which we use as a baseline
in our analysis that only uses the text corpus while being agnostic to the topology of the
character network. We specifically use the SkipGram architecture to train a neural network with a
single hidden layer with d neurons that captures the context of words in the text corpus, i.e. a
concatenation of all works in our corpus. The weights of neurons in the hidden layer are then
interpreted as positions of words in a d-dimensional latent vector space. For our analysis, we
used the word2vec implementation in the package gensim with default parameters, i.e. we use
a latent space with d = 300 dimensions.
          </p>
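      <p>SkipGram learns from word–context pairs drawn from a sliding window over the text; the pair-extraction step behind this can be sketched as follows (our own minimal illustration, not the gensim implementation):</p>

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs as used by SkipGram.

    For each position i, every token at distance 1..window on either
    side becomes a context word for tokens[i]; word2vec learns its
    hidden-layer weights from exactly such pairs.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        for j in range(lo, min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

tokens = ["frodo", "took", "the", "ring"]
print(skipgram_pairs(tokens, window=1))
# [('frodo', 'took'), ('took', 'frodo'), ('took', 'the'),
#  ('the', 'took'), ('the', 'ring'), ('ring', 'the')]
```

In gensim, skip-gram training on such windows roughly corresponds to constructing `gensim.models.Word2Vec` with `sg=1` and the desired `vector_size`.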
          <p>
            Apart from this standard NLP approach to generate latent space embeddings, we apply two
graph representation learning techniques to generate character embeddings based on the
topology of the character network: The first approach, Laplacian Eigenmaps, uses eigenvectors
corresponding to the smallest non-zero eigenvalues of the Laplacian matrix of a graph [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. This can be seen
as a matrix factorization [
            <xref ref-type="bibr" rid="ref37">37</xref>
            ] that yields a representation of nodes in a d-dimensional
latent space, where d is chosen to be smaller than the number of nodes. To determine a
reasonable choice for the number of dimensions d, we performed an experiment in which we
evaluated the average performance of a logistic regression model for node classification (see
detailed description below) for different dimensions d of the latent space. The results of this
experiment are shown in Figure 4. As expected, we observe a tendency to underfit for a very
small (d &lt; 10) number of dimensions, while the performance saturates as we increase d
beyond a “reasonable” value that suffices to capture the topological patterns in the network. This
analysis informed our choice of d = 20 for the subsequent experiments. As a second approach
we adopt the popular graph representation learning method node2vec, which applies the
SkipGram architecture to sequences of nodes generated by a biased second-order random walk (i.e.
a walk with one-step memory) in a graph [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ]. We again use d = 20 hidden dimensions to
make the embedding comparable with the previous method. We note that choosing a value of d = 20 for
the graph representation learning techniques, which is substantially smaller than the default
value of d = 300 for word2vec, is justified because the number of nodes in the Legendarium
graph (n = 238) is much smaller than the vocabulary used by word2vec (n = 18,430).
          </p>
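          <p>A Laplacian Eigenmap embedding can be sketched in plain numpy (a simplified variant using the unnormalised Laplacian L = D − A on a hypothetical toy graph; the eigenvectors for the smallest non-zero eigenvalues give the node coordinates in the standard formulation):</p>

```python
import numpy as np

def laplacian_eigenmap(A, d):
    """Embed the nodes of adjacency matrix A into d dimensions using the
    unnormalised Laplacian L = D - A (smallest non-zero eigenvalues)."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the first (trivial) eigenvector belonging to eigenvalue 0.
    return eigvecs[:, 1:d + 1]

# Toy graph: a 5-node path.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1

X = laplacian_eigenmap(A, d=2)
print(X.shape)  # (5, 2)
```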
          <p>
            We finally adopt two Graph Neural Networks, namely Graph Convolutional Networks (GCNs)
[
            <xref ref-type="bibr" rid="ref21">21</xref>
            ] and Graph Attention Networks (GATs) [
            <xref ref-type="bibr" rid="ref49">49</xref>
            ]. Both of these deep graph learning techniques
use a variant of neural message passing, the main difference being that GATs use a learnable
attention mechanism that can place different weights on nearby nodes [
            <xref ref-type="bibr" rid="ref56">57</xref>
            ]. We trained both
architectures to address the graph learning tasks outlined below, i.e. node classification and link prediction,
using a hidden layer with d = 20 neurons. We then use the hidden layer
activations of both architectures to infer a representation of characters in a 20-dimensional latent
space. To focus exclusively on the graph topology, we treated the network
as an unweighted graph in our experiments. Additional results for experiments with weighted graphs are included
in Appendix D.
          </p>
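          <p>The core message-passing step of a GCN can be sketched in a few lines of numpy (a simplified single layer with symmetric normalisation and random weights on a hypothetical toy graph, not the trained two-layer models used in the experiments):</p>

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN message-passing layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # symmetric normalisation
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
n, d_hidden = 6, 20                            # d_hidden = 20 as in the paper
M = (rng.random((n, n)) > 0.5).astype(float)
A = np.triu(M, 1)
A = A + A.T                                    # undirected toy graph
H = np.eye(n)                                  # one-hot node features (OHE)
W = rng.normal(size=(n, d_hidden))
H1 = gcn_layer(A, H, W)
print(H1.shape)  # (6, 20)
```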
          <p>Leveraging the ability of GNNs to consider additional node attributes, we compare three
different approaches: First, we use GNNs without additional node features, which we emulate
by initializing the message passing layer with a one-hot encoding (OHE) of characters. In other
words, for a network with n nodes, we assign to each node v = 0, … , n−1 a “dummy feature” vector
x_v ∈ ℝ^n defined as x_v = (0, … , 0, 1, 0, … , 0), where the single 1 is at position v, i.e. it is
preceded by v zeros and followed by n−v−1 zeros.</p>
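          <p>This dummy feature matrix is simply the identity matrix: a minimal numpy sketch for a network with n nodes.</p>

```python
import numpy as np

n = 238                      # number of nodes in the Legendarium graph
X = np.eye(n)                # row v is the one-hot "dummy feature" vector x_v
v = 5
print(X[v].sum(), X[v, v])   # 1.0 1.0
```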
          <p>
            Second, we use the node embeddings generated by node2vec as additional node features that
are used in the message passing layers. Third, we assign the word2vec embeddings as additional node
features, thus combining NLP and graph neural networks. In Figure 5 we illustrate two latent
space representations of characters generated by (a) word2vec and (b) the combination of GCN
with word2vec features. For both figures, we used t-SNE [
            <xref ref-type="bibr" rid="ref48">48</xref>
            ] to reduce the latent space
embedding to two dimensions; nodes are coloured according to the works in which the corresponding
characters occur most frequently. A comparison of the two embeddings clearly highlights the
advantage of graph learning over a mere application of word2vec: Different from the word
embedding, the combination of GCN with word2vec generates a latent space representation
that captures the distinction of characters across different works in Tolkien’s Legendarium.
We argue that this visualisation highlights the additional information that graph neural
networks can leverage from the topology of character networks, as opposed to mere word-context
pair statistics. In the following, we investigate this interesting aspect more thoroughly in two
clearly defined learning tasks.
          </p>
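          <p>The dimensionality reduction used for these visualisations can be sketched with scikit-learn's t-SNE implementation (random vectors below are a hypothetical stand-in for the learned 20-dimensional character embeddings):</p>

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
emb = rng.normal(size=(40, 20))   # stand-in for 20-dim character embeddings

# Reduce the latent space to two dimensions for plotting.
tsne = TSNE(n_components=2, perplexity=5.0, random_state=0)
coords = tsne.fit_transform(emb)
print(coords.shape)  # (40, 2)
```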
          <p>
            Predicting Character Classes We use the methods outlined above to address a supervised
node classification problem in the Legendarium Graph, i.e. the single character network
capturing all works in Tolkien’s Legendarium. We assign three labels to nodes, corresponding
to the work (i.e. The Silmarillion, The Hobbit, and The Lord of the Rings) in which the
corresponding character is most prominent. We extract these labels automatically as
argmax_work count(c, work) / ∑_{c′ ∈ C} count(c′, work), where count(c, work) is the number of mentions of character c in
work and C is the set of all characters. We verify the labels manually, finding them to be reasonable in
all cases. The resulting three classes contain 113 (The Silmarillion), 30 (The Hobbit) and 99 (The
Lord of the Rings) characters, i.e. there is a class imbalance that we address by using the
macro-averaged f1-score, precision and recall. We highlight that the information on the different
works is withheld from all methods, i.e. for the word embedding and character network
construction we concatenate all works into a single text corpus. We train our models on a training
set of labelled characters and use them to predict the unknown classes of unlabelled characters
in a test data set. As a baseline that does not utilise the topology of the character network, we
first train a logistic regression model using the embeddings generated by word2vec. Similarly,
we train a logistic regression model on the embeddings generated by Laplacian Eigenmaps and
node2vec. For node2vec we use three different sets of hyper-parameters p and q, where for
p = q = 1 node2vec is equivalent to the graph embedding technique DeepWalk [
            <xref ref-type="bibr" rid="ref34">34</xref>
            ]. We
trained the model for 200 epochs. We finally train the GCN and GAT models either using
one-hot encoding (OHE) or assigning additional node features generated by the word and graph
embeddings as explained above. For both we used two message passing layers, and we trained
the models for 5000 epochs using an Adam optimizer with learning rate lr = 0.0001. We
evaluate all models using a 10-fold cross-validation. Average results with standard deviations are
shown in Table 4.
          </p>
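          <p>The evaluation of the embedding baselines can be sketched with scikit-learn (random embeddings and labels serve as hypothetical stand-ins for the real character embeddings and work labels): logistic regression on the embeddings, scored with the macro-averaged f1-score under 10-fold cross-validation.</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 238, 20                          # characters, embedding dimension
X = rng.normal(size=(n, d))             # stand-in for character embeddings
y = rng.integers(0, 3, size=n)          # stand-in for the three work labels

# Logistic regression on the embeddings, scored with the macro-averaged
# f1-score under 10-fold cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=10, scoring="f1_macro")
print(scores.mean())                    # chance-level on random data
```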
          <p>Interestingly, we find that the performance of character classification based on the word
embedding word2vec is worse than that of any of the graph learning techniques, thus highlighting the
added value of the graph perspective. The graph embedding technique node2vec, which uses
a SkipGram architecture to embed nodes, clearly outperforms the graph-agnostic word2vec,
despite both using the same logistic regression model for the class prediction. Moreover, we
find that a simple application of Laplacian Eigenmaps, i.e. a mathematically principled matrix
decomposition, yields node classification performance comparable to the neural
network-based node2vec embedding. The best precision is achieved for a Graph Convolutional Network
(GCN) with additional node features generated by node2vec, while the best f1-score and recall
are achieved for GCN with additional word2vec embeddings. We attribute this to the fact that
the combination of word embeddings and graph neural networks integrates two
complementary sources of information, thus highlighting the advantages of methods that leverage both
NLP and graph learning.</p>
          <p>Figure 5: (a) Latent space embedding of characters obtained by applying the word embedding word2vec to the
whole text corpus; edges represent character co-occurrences in the text. (b) Graph Convolutional Network using word2vec character embeddings as additional node features.</p>
          <p>In Figure 6 we further demonstrate the ability of Graph Neural Networks to perform a
semi-supervised classification, i.e. their ability to accurately predict classes based on a very small
number of labelled examples. Figure 6(a) shows the training network with three randomly
coloured characters, one for each ground-truth class. Nodes in the test set are shown in grey
and node positions are chosen based on the latent space representation shown in Figure 5(b).
We use these three labelled characters to train a GCN with additional word2vec embeddings
as node features. We then use the trained model to predict the character classes in the test
set. The predicted classes are shown as coloured nodes in Figure 6(b), where the three training
nodes are shown in grey. A visual comparison of Figure 6(b) and 5(b) allows us to evaluate the
prediction against the ground truth. Despite the very sparse labelled examples, and thanks to
its use of the graph topology of the unlabelled nodes in the training set, the GCN model is able
to accurately predict character classes, reaching an f1-score of ≈ 79.7%, a precision of ≈ 78.6%
and a recall of ≈ 82.6%. Remarkably, this shows that the combination of a GCN model with
word2vec node features yields a higher f1-score and recall with only three labelled examples
than a word embedding alone, even when all characters in the training set are labelled.</p>
          <p>Figure 6: (a) Training network with three labelled characters Galadriel, Orcs, and Balin (shown as coloured nodes).
(b) Character classes predicted by a Graph Convolutional Network (GCN) using word2vec character
embeddings as additional node features.</p>
          <p>Predicting Character Interactions We finally address link prediction, which refers to the
task of predicting “missing” links in a graph, i.e. links that are either “missing” in incomplete
data or that will likely form in the future. Link prediction is a well-studied graph learning problem,
with important applications in social network analysis and recommender systems [29]. In the
context of character networks, it is relevant because it could be used to alleviate the low recall
of the rigid, sentence-based character network extraction that we employed in Section 3.</p>
          <p>Adopting a supervised approach, we split the edges of the Legendarium Graph into a training
and a test set, where we withhold 10% of the edges during training to test our model. We use
word2vec, Laplacian Eigenmaps and node2vec to generate embeddings x_v ∈ ℝ^d of characters.
For word2vec, embeddings are generated using the full text corpus. For the graph embedding
techniques Laplacian Eigenmaps and node2vec we only use links in the training set,
potentially putting them at a disadvantage in terms of training data. We use the resulting feature
vectors to calculate the element-wise (Hadamard) product x_v ∘ x_w ∈ ℝ^d for character pairs v, w,
which yields d-dimensional features for all character pairs. We then use the features of positive
instances (i.e. pairs v, w connected by a link (v, w) in the training set) and negative instances
(pairs v, w not connected by a link in the training set) to train a (binary) logistic regression
classifier and use the trained model to predict links in the test set. We use negative sampling
to mitigate the imbalance between negative and positive instances in the training set. For the
two GNN architectures we adopt the common approach of adding a decoding step that computes
the Hadamard product of node features after the last message passing layer. We use a Binary
Cross Entropy with Logits loss function and train the models for 15000 epochs using an Adam
optimizer with learning rate 0.001.</p>
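          <p>The edge-feature construction for the logistic regression baseline can be sketched as follows (random embeddings and toy link lists are hypothetical stand-ins; the Hadamard product of the endpoint embeddings yields the d-dimensional edge features):</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 50, 20
emb = rng.normal(size=(n, d))             # stand-in node embeddings x_v

# Positive instances: node pairs connected by a (toy) link; negative
# sampling draws an equal number of unconnected pairs.
pos = [(i, (i + 1) % n) for i in range(n)]
neg = [(i, (i + 7) % n) for i in range(n)]

# Hadamard product of the endpoint embeddings yields d-dim edge features.
feats = np.array([emb[u] * emb[v] for u, v in pos + neg])
labels = np.array([1] * len(pos) + [0] * len(neg))

clf = LogisticRegression(max_iter=1000).fit(feats, labels)
link_scores = clf.predict_proba(feats)[:, 1]   # link scores in [0, 1]
print(feats.shape, link_scores.shape)
```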
          <p>
            We use the Receiver-Operating Characteristic (ROC) to evaluate our models, i.e. we
compute ROC curves that give the true and false positive rates across all discrimination thresholds.
An example (and explanation) is included in Appendix E. We compute the Area Under Curve
(AUC) of ROC curves within the unit square, which ranges from 0 to 1. A value of 0.5 corresponds to the
performance of a random classifier, values &lt; 0.5 indicate worse and values &gt; 0.5 better than
random performance. We again evaluate all models using a 10-fold cross-validation. Table 4
reports average results and standard deviations of the AUC for all models. With the exception
of Laplacian Eigenmaps, we find that graph methods generally perform better than word2vec.
We further observe that GNNs perform considerably better than node2vec, where the best
performance is achieved when coupling GCN with Laplacian Eigenmaps or node2vec.
5. Conclusion
In summary, we used natural language processing techniques like named entity recognition
and coreference resolution to construct a single character network from a corpus of works
that constitute J.R.R. Tolkien’s Legendarium. Apart from characterising the network based on
social network analysis, we adopt state-of-the-art graph learning techniques to (i) generate
latent space embeddings of characters, (ii) automatically classify characters based on the work
to which they belong, and (iii) predict character co-occurrences. For all three tasks, we find
a significant advantage of Graph Neural Networks (GNNs) over a common word embedding
technique, and we find that a combination of both yields the best performance. Our approach
of constructing a single graph for multiple literary texts could be interesting for analyzing other
corpora of works with overlapping characters (e.g. mythology, historical novels, etc.). We
further believe that our results on the application of GNNs to address a link prediction task have
interesting implications for computational literary studies. Considering the difficulty of
coreference resolution, and the low coverage of the resulting character networks that we observed
in our experiments, we expect that link prediction could potentially be used as an approach
to address the low recall observed for the sentence-based co-occurrence networks. In future
work, we will further consider the modelling of character co-occurrences as temporal character
co-occurrence networks, where the temporal ordering of sentences determines the “time stamps”
of edges in the resulting dynamic graph. This promises the application of recently developed
higher-order graph modelling, visualization, and learning techniques [24, 3
            <xref ref-type="bibr" rid="ref25 ref36 ref46 ref5">5, 46, 25, 36</xref>
            ], which
capture patterns in the temporal ordering of edges and can thus provide insights beyond the
mere graph topology.
          </p>
          <p>
            Acknowledgements
Vincenzo Perri and Ingo Scholtes acknowledge support by the Swiss National Science
Foundation, grant 176938.
A. Alternative Approaches to Detect Character Co-Occurrences
As an alternative to the sentence-based co-occurrence approach described in Section 2, we also
evaluated two other approaches in initial experiments: detection based on (i) the parse tree
and (ii) a sliding text window. For (i), we marked all sentences as an “interaction” between
two characters if they had the same head word in the parse tree. This is the strictest version
of our character network construction, as it only captures explicit interactions (e.g., “Frodo
saw Sam”). Due to the small number of detected interactions, we discarded this strategy. On
the other hand, (ii) is more lenient than our final sentence-based approach, only requiring two
characters to be mentioned within a window of a fixed number of characters (letters). This
approach of extracting character co-occurrences within a sliding text window has been adopted
in a number of prior works [7,
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]. It allows capturing interactions between characters that
are not mentioned in the same sentence, but introduces the risk of detecting a large number
of spurious interactions. In our experiments, we chose a window size of 2000 characters, with
the additional restriction that chapter borders may not be crossed. Aiming for character
networks that maintain a balance between recall (i.e. detecting all meaningful character links) and
precision (i.e. limiting the number of spurious links), we decided to use the sentence-based
interaction detection.
          </p>
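          <p>The sliding-window alternative (ii) can be sketched as follows (a hypothetical simplification on exact string matches; chapter borders and coreference resolution are not handled here):</p>

```python
import itertools
import re

def window_cooccurrences(text, names, window=2000):
    """Sketch: collect character co-occurrences within a sliding text
    window of `window` letters (simplified; no chapter-border handling)."""
    # Record the character offsets of every mention of every name.
    mentions = []
    for name in names:
        for m in re.finditer(re.escape(name), text):
            mentions.append((m.start(), name))
    mentions.sort()
    pairs = set()
    for (p1, n1), (p2, n2) in itertools.combinations(mentions, 2):
        # Keep pairs of distinct characters at most `window` letters apart.
        if n1 != n2 and window - abs(p2 - p1) >= 0:
            pairs.add(tuple(sorted((n1, n2))))
    return pairs

text = "Frodo left the Shire. " * 50 + "Sam followed Frodo closely."
print(window_cooccurrences(text, ["Frodo", "Sam"], window=2000))
```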
          <p>B. Narrative Charts for The Hobbit and The Silmarillion
Complementing the results in Section 2, in Figure 7 we present two additional narrative charts
that we generated for The Hobbit and The Silmarillion.</p>
          <p>Figure 7: narrative charts for The Hobbit and The Silmarillion, showing the characters Balin, Bard, Bert, Bilbo, Dain, Dori, Elrond, Fili, Gandalf, Gollum, Kili, Smaug, Thorin, Tom, and William.</p>
          <p>
C. Visualisation of Additional Latent Space Embeddings
In Figure 8 and Figure 9 we include additional visual representations of latent space embeddings
of characters obtained by those methods that have not been included in the main text. For all
methods, we employed t-SNE [4
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] to reduce the dimensionality of the latent space embeddings
to two dimensions.
          </p>
          <p>D. Node Classification and Link Prediction with Weighted</p>
          <p>GNNs
Complementing the results discussed in the main article, in Tab5lwee include additional results
for which we applied Graph Neural Networks wtoeighted graphs, i.e. di昀erent from Table4
we consider a character network with link weighāts((Ā , ā )) that capture the number of
cooccurrences of two characterĀs, ā . Due to time constraints, we performed this analysis only
for the best performing Graph Neural Network, i.e. GCN. These additional results suggests
that, at least for our corpus, the inclusion of link weights does not signi昀椀cantly improve the
performance of models for character classi昀椀cation and link prediction.
E. Explanation of ROC/AUC Evaluation of Link Prediction
In Section 4 we use the Area Under Curve (AUC) of a Receiver-Operating Characteristic (ROC)
curve to evaluate the performance of our models in link prediction, which is a common
approach to evaluate the diagnostic quality of binary classifiers in information retrieval and
machine learning. A key advantage of this approach is that it enables us to evaluate the
performance of a binary classifier across all possible discrimination thresholds, which can be adapted
to tune the sensitivity/specificity of the prediction depending on application requirements. To
assist the reader in following this evaluation approach, below we explain one exemplary ROC
curve obtained for a link prediction in the Legendarium graph using a node2vec
embedding of characters and a logistic regression model. To generate this curve, we first consider the
prediction scores (i.e. in the case of logistic regression the positive class probability) assigned
to each node pair in the test set, where a link is predicted whenever the score is above a given
discrimination threshold θ. For each value of θ we can now calculate the true and false positive
rate (TPR and FPR), i.e. the fraction of those predicted links for which the prediction is
correct and the fraction of unconnected node pairs for which a link is predicted erroneously.</p>
          <p>(a) Graph Convolutional Networks with one-hot-encoding (OHE). (b) Graph Attention Networks with additional word2vec features. (b) Laplacian Eigenmap.</p>
          <p>A sweep over all possible discrimination thresholds θ now yields a ROC curve in the unit square.
In Figure 10 we show the ROC curve of a logistic regression model using node2vec features.
A classifier that perfectly classifies all instances in the data will assume initial values of FPR=0
and TPR=0 only for the maximal discrimination threshold of θ = 1, where all instances are
assigned to the negative class. For any θ smaller than the maximum and larger than the
minimum value of zero, a perfect classifier correctly predicts all instances, which yields TPR=1 and
FPR=0. For the minimum value of θ = 0, the classifier necessarily predicts the positive class
for all instances, which yields TPR=1 and FPR=1. We thus find that the ROC curve of a
perfect classifier follows the left and upper border of the unit square, which yields an Area Under
Curve (AUC) of one. Conversely, the ROC curve of a classifier that consistently predicts the
opposite of the true class follows the bottom and right border of the unit square, which yields
an Area Under Curve (AUC) of zero. For a classifier with no diagnostic ability, the FPR and
TPR are expected to increase equally as we lower the discrimination threshold θ, i.e. the ROC
curve follows the so-called diagonal of no-discrimination (see red dashed line in Figure 10) and
the Area Under Curve is expected to be close to 0.5.</p>
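          <p>The threshold sweep described above can be sketched with scikit-learn (random labels and scores serve as hypothetical stand-ins for the link-prediction output):</p>

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)   # stand-in link labels
scores = rng.random(200)                # stand-in prediction scores

# Sweeping the discrimination threshold yields the ROC curve; the AUC
# summarises it (0.5 corresponds to no diagnostic ability).
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(auc)  # typically close to 0.5 for random scores
```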
          <p>Figure 10: Predicting character interactions with node2vec (ROC curve; TPR vs. FPR).</p>
          <p>
F. Evaluation of Link Prediction for Individual Works
In the main text we present and discuss the performance of different techniques to predict
links in a single character network that spans Tolkien’s Legendarium. Apart from this analysis,
we additionally evaluated link prediction in character co-occurrence networks that have been
generated for the three works of our corpus, i.e. The Silmarillion, The Hobbit, and The Lord of
the Rings separately. For the sake of completeness, we include these results in Table 6 below.
Like in Appendix D, we applied the Graph Convolutional Network (GCN) model on a weighted graph.</p>
          <p>
            Table 6 rows: GCN (d = 20) with LE, node2vec (p = 1, q = 1), OHE, or word2vec features;
GAT (d = 20) with LE, node2vec (p = 1, q = 1), OHE, or word2vec features;
and the weighted variants of GCN (d = 20) with LE, node2vec (p = 1, q = 1), OHE, or word2vec features.
5 We resorted to an unweighted graph due to a suspected implementation error in the weighted GAT implementation
in version 2.0.4 of the graph learning library pytorch-geometric [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
          </p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Corvalan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jensen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Rambow</surname>
          </string-name>
          . “
          <article-title>Social network analysis of Alice in Wonderland”</article-title>
          .
          <source>In: Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature. 2012</source>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vijay</surname>
          </string-name>
          , et al. “
          <article-title>Genre Classification using Character Networks”</article-title>
          .
          <source>In: 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS)</source>
          .
          <source>IEEE</source>
          .
          <year>2021</year>
          , pp.
          <fpage>216</fpage>
          -
          <lpage>222</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>E. E. Auger.</surname>
          </string-name>
          “
          <article-title>The Lord of the Rings' interlace: Tolkien's narrative and Lee's illustrations”</article-title>
          .
          <source>In: Journal of the Fantastic in the Arts 19.1</source>
          (
          <issue>2008</issue>
          ), p.
          <fpage>70</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bamman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lewke</surname>
          </string-name>
          ,
          <article-title>and</article-title>
          <string-name>
            <surname>A. Mansoor. “</surname>
          </string-name>
          <article-title>An annotated dataset of coreference in English literature”</article-title>
          . In: arXiv preprint arXiv:
          <year>1912</year>
          .
          <volume>01140</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bamman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Popat</surname>
          </string-name>
          , and
          <string-name>
            <surname>S. Shen. “</surname>
          </string-name>
          <article-title>An annotated dataset of literary entities”. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</article-title>
          , Volume
          <volume>1</volume>
          (Long and Short Papers).
          <year>2019</year>
          , pp.
          <fpage>2138</fpage>
          -
          <lpage>2144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Belkin</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Niyogi</surname>
          </string-name>
          . “
          <article-title>Laplacian eigenmaps for dimensionality reduction and data representation”</article-title>
          .
          <source>In:Neural computation 15.6</source>
          (
          <issue>2003</issue>
          ), pp.
          <fpage>1373</fpage>
          -
          <lpage>1396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Beveridge</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Shan</surname>
          </string-name>
          . “
          <article-title>Network of thrones”</article-title>
          .
          <source>In: Math Horizons 23.4</source>
          (
          <issue>2016</issue>
          ), pp.
          <fpage>18</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bonato</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. R. D'Angelo</surname>
            ,
            <given-names>E. R.</given-names>
          </string-name>
          <string-name>
            <surname>Elenberg</surname>
            ,
            <given-names>D. F.</given-names>
          </string-name>
          <string-name>
            <surname>Gleich</surname>
            , and
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Hou</surname>
          </string-name>
          . “
          <article-title>Mining and modeling character networks”</article-title>
          . In: International workshop
          <article-title>on algorithms and models for the webgraph</article-title>
          . Springer.
          <year>2016</year>
          , pp.
          <fpage>100</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>M. M. Bronstein</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Bruna</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>LeCun</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Szlam</surname>
            , and
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Vandergheynst</surname>
          </string-name>
          . “
          <article-title>Geometric deep learning: going beyond euclidean data”</article-title>
          .
          <source>In: IEEE Signal Processing Magazine 34.4</source>
          (
          <issue>2017</issue>
          ), pp.
          <fpage>18</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bruna</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          . “
          <article-title>Community detection with graph neural networks”</article-title>
          .
          <source>In: stat</source>
          <volume>1050</volume>
          (
          <year>2017</year>
          ), p.
          <fpage>27</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Elson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>McKeown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and N. J.</given-names>
            <surname>Dames</surname>
          </string-name>
          . “
          <article-title>Extracting social networks from literary fiction”</article-title>
          . In: (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Feldman</surname>
          </string-name>
          . “
          <article-title>Techniques and applications for sentiment analysis”</article-title>
          .
          <source>In: Communications of the ACM 56.4</source>
          (
          <issue>2013</issue>
          ), pp.
          <fpage>82</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fey</surname>
          </string-name>
          and
          <string-name>
            <surname>J. E. Lenssen.</surname>
          </string-name>
          “
          <article-title>Fast graph representation learning with PyTorch Geometric”</article-title>
          . In: arXiv preprint arXiv:
          <year>1903</year>
          .
          <volume>02428</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Fruchterman</surname>
          </string-name>
          and
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Reingold</surname>
          </string-name>
          . “
          <article-title>Graph drawing by force-directed placement”</article-title>
          .
          <source>In: Software: Practice and Experience 21.11</source>
          (
          <year>1991</year>
          ), pp.
          <fpage>1129</fpage>
          -
          <lpage>1164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gallicchio</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Micheli</surname>
          </string-name>
          . “
          <article-title>Graph Echo State Networks”</article-title>
          .
          <source>In: The 2010 International Joint Conference on Neural Networks (IJCNN)</source>
          .
          <year>2010</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi: 10.1109/ijcnn.2010.5596796.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Monfardini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          . “
          <article-title>A new model for learning in graph domains”</article-title>
          .
          <source>In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks</source>
          . Vol.
          <volume>2</volume>
          (
          <year>2005</year>
          ), pp.
          <fpage>729</fpage>
          -
          <lpage>734</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Grover</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          . “
          <article-title>node2vec: Scalable feature learning for networks”</article-title>
          . In:
          <source>Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          .
          <year>2016</year>
          , pp.
          <fpage>855</fpage>
          -
          <lpage>864</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hackl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Scholtes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. V.</given-names>
            <surname>Petrović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Perri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Verginer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Gote</surname>
          </string-name>
          .
          <article-title>“Analysis and Visualisation of Time Series Data on Networks with Pathpy”</article-title>
          .
          <source>In: Companion Proceedings of the Web Conference 2021</source>
          . WWW '21. Ljubljana, Slovenia: Association for Computing Machinery,
          <year>2021</year>
          , pp.
          <fpage>530</fpage>
          -
          <lpage>532</lpage>
          . doi: 10.1145/3442442.3452052. url: https://doi.org/10.1145/3442442.3452052.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>W. L.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          .
          <article-title>Graph Representation Learning</article-title>
          . Vol.
          <volume>14</volume>
          . 3. Morgan &amp; Claypool Publishers,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Honnibal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Montani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Van Landeghem</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boyd</surname>
          </string-name>
          . “
          <article-title>spaCy: Industrial-strength Natural Language Processing in Python”</article-title>
          . In: (
          <year>2020</year>
          ). doi: 10.5281/zenodo.1212303.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Kipf</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Welling</surname>
          </string-name>
          . “
          <article-title>Semi-Supervised Classification with Graph Convolutional Networks”</article-title>
          .
          <source>In: Proceedings of the 5th International Conference on Learning Representations (ICLR)</source>
          .
          <source>ICLR '17. Palais des Congrès Neptune</source>
          , Toulon, France,
          <year>2017</year>
          . url: https://openreview.net/forum?id=SJU4ayYgl.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Krug</surname>
          </string-name>
          . “
          <article-title>Techniques for the Automatic Extraction of Character Networks in German Historic Novels”</article-title>
          .
          <source>Doctoral thesis. Universität Würzburg</source>
          ,
          <year>2020</year>
          . doi: 10.25972/opus-20918.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>V.</given-names>
            <surname>Labatut</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Bost</surname>
          </string-name>
          . “
          <article-title>Extraction and analysis of fictional character networks: A survey”</article-title>
          .
          <source>In: ACM Computing Surveys (CSUR) 52.5</source>
          (
          <year>2019</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Lambiotte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rosvall</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Scholtes</surname>
          </string-name>
          . “
          <article-title>From networks to optimal higher-order models of complex systems”</article-title>
          .
          <source>In: Nature Physics 15.4</source>
          (
          <year>2019</year>
          ), pp.
          <fpage>313</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>T.</given-names>
            <surname>LaRock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Scholtes</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Eliassi-Rad</surname>
          </string-name>
          . “
          <article-title>Sequential motifs in observed walks”</article-title>
          .
          <source>In: Journal of Complex Networks 10.5</source>
          (
          <year>2022</year>
          ),
          cnac036
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>J.</given-names>
            <surname>Leyerle</surname>
          </string-name>
          . “
          <article-title>The interlace structure of Beowulf”</article-title>
          .
          <source>In: University of Toronto Quarterly 37.1</source>
          (
          <year>1967</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          .
          <article-title>“Complex Networks of Characters in Fictional Novels”</article-title>
          .
          <source>In: 2019 IEEE/ACIS 18th International Conference on Computer and Information Science (ICIS)</source>
          .
          <year>2019</year>
          , pp.
          <fpage>417</fpage>
          -
          <lpage>420</lpage>
          . doi: 10.1109/icis46139.2019.8940174.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          .
          <article-title>“A survey on deep learning for named entity recognition”</article-title>
          .
          <source>In: IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>34</volume>
          .1 (
          <year>2020</year>
          ), pp.
          <fpage>50</fpage>
          -
          <lpage>70</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>L.</given-names>
            <surname>Lü</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhou</surname>
          </string-name>
          . “
          <article-title>Link prediction in complex networks: A survey”</article-title>
          .
          <source>In: Physica A: Statistical Mechanics and its Applications 390</source>
          .6 (
          <year>2011</year>
          ), pp.
          <fpage>1150</fpage>
          -
          <lpage>1170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Corrado</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          .
          <source>Efficient Estimation of Word Representations in Vector Space</source>
          .
          <year>2013</year>
          . url: http://arxiv.org/abs/1301.3781.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Corrado</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          .
          <source>Distributed Representations of Words and Phrases and their Compositionality</source>
          .
          <year>2013</year>
          . url: http://arxiv.org/abs/1310.4546.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>F.</given-names>
            <surname>Moretti</surname>
          </string-name>
          .
          <article-title>Network theory, plot analysis</article-title>
          .
          <source>Literary Lab Pamphlet 2</source>
          .
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Newman</surname>
          </string-name>
          . Networks. Oxford University Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>B.</given-names>
            <surname>Perozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Al-Rfou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Skiena</surname>
          </string-name>
          . “
          <article-title>DeepWalk: Online learning of social representations”</article-title>
          .
          <source>In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          .
          <year>2014</year>
          , pp.
          <fpage>701</fpage>
          -
          <lpage>710</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>V.</given-names>
            <surname>Perri</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Scholtes</surname>
          </string-name>
          . “
          <article-title>HOTVis: Higher-order time-aware visualisation of dynamic graphs”</article-title>
          .
          <source>In: International Symposium on Graph Drawing and Network Visualization</source>
          . Springer.
          <year>2020</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>L.</given-names>
            <surname>Qarkaxhija</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Perri</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Scholtes</surname>
          </string-name>
          . “
          <article-title>De Bruijn goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs”</article-title>
          .
          <source>In: arXiv preprint arXiv:2209.08311</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>J.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          . “
          <article-title>NetSMF: Large-scale network embedding as sparse matrix factorization”</article-title>
          .
          <source>In: The World Wide Web Conference</source>
          .
          <year>2019</year>
          , pp.
          <fpage>1509</fpage>
          -
          <lpage>1520</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vosgerau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Andruchiw</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinto</surname>
          </string-name>
          . “
          <article-title>The complex social network from The Lord of The Rings”</article-title>
          . In:
          <source>Revista Brasileira de Ensino de Física</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Tsoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagenbuchner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Monfardini</surname>
          </string-name>
          . “
          <article-title>The Graph Neural Network Model”</article-title>
          .
          <source>In: Trans. Neur. Netw. 20.1</source>
          (
          <year>2009</year>
          ), pp.
          <fpage>61</fpage>
          -
          <lpage>80</lpage>
          . doi: 10.1109/tnn.2008.2005605. url: https://doi.org/10.1109/TNN.2008.2005605.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>F.</given-names>
            <surname>Schröder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. O.</given-names>
            <surname>Hatzel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Biemann</surname>
          </string-name>
          . “
          <article-title>Neural end-to-end coreference resolution for German in different domains”</article-title>
          .
          <source>In: Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)</source>
          .
          <year>2021</year>
          , pp.
          <fpage>170</fpage>
          -
          <lpage>181</lpage>
          . url: https://aclanthology.org/2021.konvens-1.15/.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>T.</given-names>
            <surname>Shippey</surname>
          </string-name>
          .
          <article-title>The road to Middle-earth: how JRR Tolkien created a new mythology</article-title>
          .
          <source>HMH</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Sparavigna</surname>
          </string-name>
          . “
          <article-title>On social networks in plays and novels”</article-title>
          .
          <source>In: International Journal of Sciences 2.10</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sperduti</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Starita</surname>
          </string-name>
          . “
          <article-title>Supervised Neural Networks for the Classification of Structures”</article-title>
          .
          <source>In: Trans. Neur. Netw. 8</source>
          .
          <issue>3</issue>
          (
          <year>1997</year>
          ), pp.
          <fpage>714</fpage>
          -
          <lpage>735</lpage>
          . doi: 10.1109/72.572108. url: https://doi.org/10.1109/72.572108.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sukthanker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Poria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Thirunavukarasu</surname>
          </string-name>
          . “
          <article-title>Anaphora and coreference resolution: A review”</article-title>
          .
          <source>In: Information Fusion</source>
          <volume>59</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>139</fpage>
          -
          <lpage>162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sukthanker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Poria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Thirunavukarasu</surname>
          </string-name>
          . “
          <article-title>Anaphora and coreference resolution: A review”</article-title>
          .
          <source>In: Information Fusion</source>
          <volume>59</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>139</fpage>
          -
          <lpage>162</lpage>
          . doi: 10.1016/j.inffus.2020.01.010.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>L.</given-names>
            <surname>Torres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Blevins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bassett</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Eliassi-Rad</surname>
          </string-name>
          . “
          <article-title>The why, how, and when of representations for complex systems”</article-title>
          .
          <source>In: SIAM Review 63.3</source>
          (
          <year>2021</year>
          ), pp.
          <fpage>435</fpage>
          -
          <lpage>485</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>P.</given-names>
            <surname>Trilcke</surname>
          </string-name>
          . “
          <article-title>Social network analysis (SNA) als Methode einer textempirischen Literaturwissenschaft”</article-title>
          . In: Empirie in der Literaturwissenschaft. Brill mentis,
          <year>2013</year>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>247</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>L.</given-names>
            <surname>Van der Maaten</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Hinton</surname>
          </string-name>
          . “
          <article-title>Visualizing data using t-SNE”</article-title>
          .
          <source>In: Journal of Machine Learning Research 9</source>
          .11 (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>P.</given-names>
            <surname>Veličković</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cucurull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Casanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Liò</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          . “
          <article-title>Graph Attention Networks”</article-title>
          .
          <source>In: 6th International Conference on Learning Representations</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wasserman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Faust</surname>
          </string-name>
          , et al. “
          <article-title>Social network analysis: Methods and applications”</article-title>
          . In: (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>R. C.</given-names>
            <surname>West</surname>
          </string-name>
          . “
          <article-title>The Interlace Structure of The Lord of the Rings”</article-title>
          .
          <source>In: A Tolkien Compass</source>
          <volume>2</volume>
          (
          <year>1975</year>
          ), pp.
          <fpage>75</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>J.</given-names>
            <surname>Worsham</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kalita</surname>
          </string-name>
          . “
          <article-title>Genre Identification and the Compositional Effect of Genre in Literature”</article-title>
          .
          <source>In: Proceedings of the 27th International Conference on Computational Linguistics. Santa Fe</source>
          , New Mexico, USA: Association for Computational Linguistics,
          <year>2018</year>
          , pp.
          <fpage>1963</fpage>
          -
          <lpage>1973</lpage>
          . url: https://aclanthology.org/C18-1167.
        </mixed-citation>
      </ref>
      <ref id="ref52a">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Yavuz</surname>
          </string-name>
          . “
          <article-title>Analyses of character networks in dramatic works by using graphs”</article-title>
          .
          <source>In: 2020 7th International Conference on Behavioural and Social Computing (BESC)</source>
          .
          <source>IEEE</source>
          .
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zehe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Konle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. K.</given-names>
            <surname>Dümpelmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hotho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jannidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krug</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Puppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schreiber</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Wiedmer</surname>
          </string-name>
          . “
          <article-title>Detecting Scenes in Fiction: A new Segmentation Task”</article-title>
          .
          <source>In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume</source>
          . Online: Association for Computational Linguistics
          ,
          <year>2021</year>
          , pp.
          <fpage>3167</fpage>
          -
          <lpage>3177</lpage>
          .
          doi: 10.18653/v1/2021.eacl-main.276. url: https://aclanthology.org/2021.eacl-main.276
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          . “
          <article-title>Link prediction based on graph neural networks”</article-title>
          .
          <source>In: Advances in Neural Information Processing Systems</source>
          <volume>31</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neumann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          . “
          <article-title>An End-to-End Deep Learning Architecture for Graph Classification”</article-title>
          .
          <source>In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence</source>
          . AAAI'18/IAAI'18/EAAI'18. New Orleans, Louisiana, USA: AAAI Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sun</surname>
          </string-name>
          . “
          <article-title>Graph neural networks: A review of methods and applications”</article-title>
          .
          <source>In: AI Open</source>
          <volume>1</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>57</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>