<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Registering and Monitoring Fairness in Spoken Political and Journalistic Texts</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Sense and Sensitivity: Knowledge Graphs as Training Data for Processing Cognitive Bias, Context and Information Not Uttered in Spoken Interaction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christina Alexandris</string-name>
          <email>calexandris@gs.uoa.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National and Kapodistrian University of Athens</institution>
        </aff>
      </contrib-group>
      <fpage>47</fpage>
      <lpage>54</lpage>
      <abstract>
        <p>The processing of information not uttered in spoken interaction (subjective, perceived, context-related information), its conversion into “visible” information in knowledge graphs, and its subsequent use in vectors and other forms of training data contribute to registering and monitoring fairness in spoken interaction, to the enrichment of NLP models and to the refinement of HCI/HRI applications.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The present approach focuses on the processing of
information not uttered in spoken interaction and its conversion
into “visible” and processable information in the form of
knowledge graphs for its subsequent use in vectors and other
forms of training data
        <xref ref-type="bibr" rid="ref27 ref29 ref44 ref47">(Wang et al., 2021, Mountantonakis
and Tzitzikas, 2019, Tran and Takashu, 2019, Mittal et al.,
2017)</xref>
        . The knowledge graphs are intended, at least in the
present stage, as a dataset for training a neural network.
      </p>
      <p>Here, we describe the modelling of information not
uttered into knowledge graphs for their subsequent conversion
into training data for neural networks, which, in turn, are
targeted to learn this particular type of data.</p>
      <p>This subjective, perceived, context-related information is
directly linked to Cognitive Bias and to the monitoring of
(true) fairness in spoken interaction. Here, fairness is
understood in the sense that all voices-aspects-opinions are heard
clearly, that all participants are given a fair chance in the
interview or discussion and are not purposefully or
unconsciously repressed, oppressed, offended or even bullied. In
other words, the proposed graphs depict “sensitive”
information, the “Sensitivity” of the speakers-participants.</p>
      <p>A crucial element in achieving “visibility” of information
not uttered is causality, namely the registration and
processing of reactions triggered by that very information not
uttered - the multiple facets of the “Sense” of the words
and/or transcribed video and speech segments.</p>
      <p>
        The detection and registration of information not uttered and
its conversion into knowledge graphs are based on previously
presented research. Previous research involves an
interactive application allowing the monitoring of fairness in
interviews and discussions in spoken political and journalistic
texts, especially in respect to Cognitive Bias, namely
detecting Lexical Bias and avoiding Confidence Bias.
In our previous research
        <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris et al., 2021, Alexandris
et al., 2020, Alexandris, 2019, Alexandris, 2018)</xref>
        , a
processing and evaluation framework was proposed for the
generation of graphic representations and tags corresponding to
values and benchmarks depicting the degree of information
not uttered and non-neutral elements in Speaker behavior in
spoken text segments. The implemented processing and
evaluation framework allows the graphic representation to
be presented in conjunction with the parallel depiction of
speech signals and transcribed texts. Specifically, the
alignment of the generated graphic representation with the
respective segments of the spoken text enables a possible
integration in existing transcription tools.
      </p>
      <p>
        Although the concept of the generated graphic
representations originates from the Discourse Tree prototype
        <xref ref-type="bibr" rid="ref26">(Marcu, 1999)</xref>
        , the characteristics of spontaneous
turn-taking
        <xref ref-type="bibr" rid="ref50">(Wilson and Wilson, 2005)</xref>
        and short spoken speech
segments did not facilitate the implementation of typical
strategies based on Rhetorical Structure Theory (RST)
        <xref ref-type="bibr" rid="ref10 ref43 ref52">(Stede et
al., 2017, Zeldes, 2016, Carlson et al., 2001)</xref>
        .
      </p>
      <p>
        In particular, strategies typically employed in the
construction of most Spoken Dialog Systems (such as keyword
processing in the form of topic detection
        <xref ref-type="bibr" rid="ref21 ref31">(Jurafsky and
Martin, 2008, Nass and Brave, 2005)</xref>
        from which approaches
involving neural networks are developed
        <xref ref-type="bibr" rid="ref49">(Jurafsky and Martin,
2020, Williams et al., 2017)</xref>
        ) were adapted in an interactive
annotation tool designed to operate with most commercial
transcription tools
        <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris et al., 2021, Alexandris et al.,
2020, Mourouzidis et al., 2019)</xref>
        . The output provides the
User-Journalist with (i) the tracked indications of the topics
handled in the interview or discussion and (ii) the graphic
pattern of the discourse structure of the interview or
discussion. The output (i) and (ii) also included functions and
respective values reflecting the degree in which the
speakers-participants address or avoid the topics in the dialog
structure (“RELEVANCE” Module) as well as the degree of
tension in their interaction (“TENSION” Module).
      </p>
    </sec>
    <sec id="sec-2">
      <title>Sensitive Topics, Sensitive Participants: Previous Research</title>
      <p>
        The implemented “RELEVANCE” Module (Mourouzidis et
al., 2019), intended for the evaluation of short speech
segments, generates a visual representation from the user’s
interaction, tracking the corresponding sequence of topics
(topic-keywords) chosen by the user and the perceived
relations between them in the dialog flow. The generated visual
representations (not presented here) depict topics avoided,
introduced or repeatedly referred to by each
Speaker-Participant, and, in specific types of cases, may indicate the
existence of additional, “hidden” (Mourouzidis et al., 2019)
Illocutionary Acts
        <xref ref-type="bibr" rid="ref38 ref9">(Austin, 1962, Searle, 1969)</xref>
        other than
“Obtaining Information Asked” or “Providing Information
Asked” in a discussion or interview. In the “RELEVANCE”
Module (Mourouzidis et al., 2019), a high frequency of
Repetitions (value 1), Generalizations (value 3) and Topic
Switches (value -1) in comparison to the duration of the
spoken interaction is connected to the “(Topic) Relevance”
benchmarks with a value of “Relevance (X)”
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris,
2020, Alexandris, 2018)</xref>
        . These values were converted into
generated visual representations and were registered as
tuples or as triple tuples (Fig. 1):
(chemical weapons, military confrontation, 2)
(chemical weapons, military confrontation, 3)
chemical weapons -&gt; ASSOC -&gt; military confrontation
chemical weapons -&gt; GEN -&gt; military confrontation
Thus, the evaluation of Speaker-Participant behavior aims
to bypass Cognitive Bias, specifically, Confidence Bias
        <xref ref-type="bibr" rid="ref19">(Hilbert, 2012)</xref>
        of the user-evaluator, especially if multiple
users-evaluators may produce different forms of generated
visual representations for the same conversation and
interaction. The generated visual representations for the same
conversation and interaction may be compared to each other
and be integrated in a database currently under
development. In this case, chosen relations between topics may
describe Lexical Bias
        <xref ref-type="bibr" rid="ref45">(Trofimova, 2014)</xref>
        and may differ
according to political, socio-cultural and linguistic
characteristics of the user-evaluator, especially if international
speakers/users are concerned
        <xref ref-type="bibr" rid="ref11 ref25 ref32 ref33 ref51">(Du et al., 2017, Paltridge, 2012, Ma,
2010, Yu et al., 2010, Pan, 2000)</xref>
        due to lack of world
knowledge of the language community involved
        <xref ref-type="bibr" rid="ref16 ref48">(Hatim,
1997, Wardhaugh, 1992)</xref>
        .
      </p>
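      <p>The conversion described above, from registered tuples to labeled relations, can be sketched in a few lines of Python; the value-to-relation mapping (2 for ASSOC, 3 for GEN) is an assumption reconstructed from the Fig. 1 examples, not the implemented module:</p>

```python
# Minimal sketch: converting registered triple tuples (Fig. 1) into
# labeled relations for a knowledge graph. The value-to-relation
# mapping (2 -> ASSOC, 3 -> GEN) is an assumption reconstructed from
# the examples in the text; it is not the implemented module.
RELATION_LABELS = {2: "ASSOC", 3: "GEN"}

def tuple_to_relation(registered):
    """(topic_a, topic_b, value) -> (topic_a, RELATION, topic_b)."""
    topic_a, topic_b, value = registered
    return (topic_a, RELATION_LABELS[value], topic_b)

for registered in [("chemical weapons", "military confrontation", 2),
                   ("chemical weapons", "military confrontation", 3)]:
    topic_a, relation, topic_b = tuple_to_relation(registered)
    print(f"{topic_a} -> {relation} -> {topic_b}")
```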
      <p>The detection and processing of information not uttered
but perceived-sensed by speakers-participants allows the
integration of additional information content (meanings,
senses) in training data. This allows the enrichment of
data for understanding speaker-participant
psychology-mentality and sensitivities and the possible impact or
consequences of a spoken journalistic/political text or interview.
This also allows an additional approach to the registering of
cause-result relations on a discourse basis.</p>
      <p>The way sensitive topics and speakers-participant
sensitivity are purposefully or unconsciously treated and
managed contributes to registering and monitoring fairness in
spoken interaction, especially if non-native speakers and/or
an international community is concerned.</p>
      <p>
        The registration and integration of “invisible”
information in training data contributes to enriching models and
to refining various Natural Language Processing (NLP)
tasks such as Sentiment Analysis and Opinion Mining –
especially when videos and multimodal data are processed
        <xref ref-type="bibr" rid="ref35">(Poria et al., 2017)</xref>
        . This approach may serve as a source of (initial)
training and test sets for Speaker (User) behavior and
expectations in Human-Computer Interaction and even in
Human-Robot Interaction systems.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Creating Knowledge Graphs</title>
      <p>The complexity of the above-described type of spoken
interaction can be accurately depicted in knowledge graphs.
Knowledge graphs allow the multidimensional presentation
of information and the relations-links between information
(word-entities) within a dataset. The very nature and
structure of knowledge graphs allows the representation of
multiple facets of information – the multiple facets of the
“Sense” of the words and/or transcribed video speech
segments – although it is considered that there may exist some
types of information and/or some cases where there may not
be a 100% coverage by a knowledge graph.</p>
      <p>
        The possibility of converting knowledge graphs into
vectors and other types of data
        <xref ref-type="bibr" rid="ref27">(Mittal et al., 2017)</xref>
        for training
neural networks (or other types of approaches and models)
is presented in recent research, with Wang et al., 2021,
Mountantonakis and Tzitzikas, 2019, and Tran and Takashu,
2019 as characteristic examples applying to the approach
presented here.
      </p>
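      <p>As a minimal illustration of such a conversion (a naive stand-in for the cited approaches, not a reimplementation), entities and relations can be mapped to integer ids so that each graph edge becomes a fixed-length id triple:</p>

```python
# Minimal sketch of converting knowledge-graph triples into vector-like
# training data: entities and relations are mapped to integer ids so
# that each edge becomes a fixed-length id triple. This is a naive
# stand-in for the cited conversion approaches, not a reimplementation.
def index_triples(triples):
    entities, relations = {}, {}

    def get_id(name, table):
        # assign the next free integer id on first sight
        return table.setdefault(name, len(table))

    vectors = [(get_id(h, entities), get_id(r, relations), get_id(t, entities))
               for h, r, t in triples]
    return vectors, entities, relations

vectors, entities, relations = index_triples([
    ("chemical weapons", "ASSOC", "military confrontation"),
    ("chemical weapons", "GEN", "military confrontation"),
])
print(vectors)  # [(0, 0, 1), (0, 1, 1)]
```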
      <p>The conversion of knowledge graphs into training data
contributes to the integration and processing of complex
information and information not uttered in Natural Language
Processing (NLP) tasks, thus, contributing to the creation of
even more sophisticated systems. This possibility would not
exist without the above-cited research. Thus, the triple tuples
presented in the example illustrated in Fig. 1 may be converted
into the following form (Fig. 2): graph fragments in which the
node “chemical weapons” is linked by the ASSOC and the GEN
relation, respectively.</p>
      <p>
The knowledge graphs, generated by an interactive
application presented in related/previous research
        <xref ref-type="bibr" rid="ref1 ref2 ref3">(Alexandris et
al., 2022, Alexandris et al., 2021, Mourouzidis et al., 2019)</xref>
        ,
involve the depiction of two main categories of information
not uttered in spoken interaction.
      </p>
      <p>The first category (I) concerns additional perceived
information content and dimensions of, notably, very common
words, information not registered in language resources.
This additional information may concern context-specific
socio-cultural associations and Cognitive Bias. These words
may also constitute the perceived topic of a spoken utterance
or they may be perceived to play a crucial role in the content
of the spoken utterance. The perceived information is
language- and socio-culturally specific and is purposefully or
subconsciously conveyed or perceived-understood by
speakers-participants in the same language community.</p>
      <p>The second category (II) concerns perceived
paralinguistic elements influencing the information content of spoken
utterances.</p>
      <p>Both types of information not uttered are context-specific
and rely on whether they are perceived by the
communicating parties and on socio-cultural factors.</p>
      <p>The knowledge graphs can, subsequently, be converted
into vectors and other forms of training data which is
targeted to contain (a) “visible” and processable information
not uttered in spoken interaction and (b) multiple versions
and varieties of training data with perceived information
generated by the interactive application.</p>
      <p>Evaluation is based on the comparison of the
(interactively annotated) information in the original sequences of
tuples and triplets with the information depicted in the
created knowledge graphs. Therefore, there should be 100%
compatibility between the information of the original
sequences and the knowledge graphs.</p>
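      <p>This evaluation criterion can be sketched as a round-trip test; the adjacency representation and function names below are illustrative assumptions:</p>

```python
from collections import defaultdict

# Sketch of the evaluation criterion: the information in the original
# sequences must be fully recoverable from the knowledge graph
# (100% compatibility). The adjacency representation is illustrative.
def build_graph(triples):
    """Adjacency map: head -> list of (relation, tail) links."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return graph

def graph_to_triples(graph):
    """Flatten the graph back into (head, relation, tail) triples."""
    return {(h, r, t) for h, links in graph.items() for r, t in links}

original = {
    ("chemical weapons", "ASSOC", "military confrontation"),
    ("chemical weapons", "GEN", "military confrontation"),
    ("sanctions", "CONTEXT", "dignity"),
}
graph = build_graph(original)
assert graph_to_triples(graph) == original  # 100% compatibility
print("compatible")
```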
    </sec>
    <sec id="sec-5">
      <title>Integrating Cognitive Bias in Knowledge Graphs</title>
      <p>In the context of the spoken interaction concerned, namely
interviews and discussions-debates in spoken political and
journalistic texts, Cognitive Bias concerns association
relations and argumentation related to inherent yet subtle
socioculturally determined linguistic features in (notably)
commonly occurring words presented in previous research
(examples from the international community: (the) “people”,
(our) “sea”).</p>
      <p>
        These word types are detectable from the registered
reactions
        <xref ref-type="bibr" rid="ref2 ref3">(Alexandris, 2021)</xref>
        they trigger in the processed
dialog segment with two (or multiple) speakers-participants.
      </p>
      <p>
        Since these words are very common and do not contain
descriptive features, the subtlety of their content is often
unconsciously used or is perceived (mostly) by native speakers
and may contribute to the degree of formality or intensity of
conveyed information in a spoken utterance. Here, these
words concerning Cognitive Bias – Lexical Bias are referred
to as “Gravity” words
        <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021, Alexandris, 2020)</xref>
        .
      </p>
      <p>
        In other cases, these word types, although common
words, may contribute to a descriptive or emotional tone in
an utterance and they may play a remarkable role in
interactions involving persuasion and negotiations. Specifically, it
is considered that, according to
        <xref ref-type="bibr" rid="ref40">Rockledge et al., 2018</xref>
        , “the
more extremely positive the word, the greater the
probability individuals were to associate that word with persuasion”.
Here, these words concerning Cognitive Bias – Lexical Bias
are referred to as “Evocative” words
        <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021,
Alexandris, 2020)</xref>
        .
      </p>
      <p>
        The subtle impact of words is one of the tools typically
used in persuasion and negotiations
        <xref ref-type="bibr" rid="ref13 ref41">(Skonk, 2020, Evans
and Park, 2015)</xref>
        .
      </p>
      <p>In other words, information that is not uttered and
information that is perceived plays an essential role in
understanding the above-described types of spoken interaction.
The modeling and processing of information not uttered and
information perceived does not only allow access to the
complete content of spoken utterances and to registering and
monitoring fairness in spoken interaction, but also to predict
user-speaker behavior and reactions.</p>
    </sec>
    <sec id="sec-7">
      <title>The “Context” Relation: Visualizing and Linking Perception and Sensitivity</title>
      <p>In the knowledge graphs, this additional information of the
above-described categories (I) and (II) is linked as an
additional node to the spoken word with the proposed “Context”
relation. The term “Context” is chosen to signalize the
perceived context of additional information in the form of
co-occurring linguistic and/or paralinguistic features.</p>
      <p>The context of additional information perceived and
implied by the speaker or perceived by the recipient influences
the information content of the spoken utterance and its
impact in the spoken interaction and dialogue structure.</p>
      <p>The “Context” relation signalizes the perceived “Gravity”
or “Evocative” word and links it to the word-topic of the
utterance. In other words, both words in the utterance
(perceived word-topic and/or perceived “Gravity” or
“Evocative” word) may contribute to the type of response generated
by the other speaker-participant, possibly also to tension.
This case may be compared to multiple factors contributing
to the creation of a particular state or situation.</p>
      <p>The existence of a “Gravity” or an “Evocative” word is
signalized by the “Context” relation itself, however, the
word’s additional dimension and content and/or
interpretation (for example, “important” – for a “Gravity” word or
“heartfelt” for an “Evocative” word) is not signalized and
generated, at least not in the current stage of the present
research. This is because any additional content may not be
limited to a singular interpretation summarized by a
particular expression-keyword.</p>
      <p>We focus on the signalization and (cause-) effect of these
words during spoken interaction, as an additional factor in
the context.</p>
      <p>
        Generated graphical representations of perceived
word-topic relations and registered “Gravity” and “Evocative”
words (concerning Cognitive Bias – Lexical Bias) can be
converted into sequences for their subsequent conversion
into knowledge graphs or other forms of data for neural
networks and Machine Learning applications
        <xref ref-type="bibr" rid="ref27 ref29 ref44 ref47">(Wang et al.,
2021, Mountantonakis and Tzitzikas, 2019, Tran and
Takashu, 2019, Mittal et al., 2017)</xref>
        .
      </p>
      <p>
        As described in previous research
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris et al.,
2020)</xref>
        , registered “Gravity” and “Evocative” words are
appended as marked values with “&amp;” in the respective tuples
or triple tuples. In the sequences with the respective tuples
or triple tuples, the “&amp;” indication is converted into a
“CONTEXT” relation.
      </p>
      <p>For example, a “No” answer (-2) preceded by “sanctions”
as a perceived word-topic, accompanied by a perceived
“Gravity” word “dignity” (sanctions, -2, &amp;dignity), is
converted into the following sequences (Fig. 3):
(sanctions, -2, &amp;dignity):
sanctions -&gt; NO -&gt; SWITCH -&gt; [...]
sanctions -&gt; CONTEXT -&gt; dignity
with “dignity” contributing to the “No” answer and the subsequent topic
switch (SWITCH).</p>
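      <p>The conversion illustrated in Fig. 3 can be sketched as follows; the function name and the mapping of the value -2 to the “No” answer and topic switch are assumptions based on the example above:</p>

```python
# Sketch of converting a tuple with an "&"-marked "Gravity"/"Evocative"
# word into sequences, following the Fig. 3 example
# (sanctions, -2, &dignity). Function name and -2 -> NO/SWITCH mapping
# are illustrative assumptions based on the example in the text.
def expand_tuple(topic, value, marked_word=None):
    sequences = []
    if value == -2:  # a "No" answer followed by a topic switch
        sequences.append((topic, "NO", "SWITCH", "[...]"))
    if marked_word and marked_word.startswith("&"):
        # the "&" indication is converted into a CONTEXT relation
        sequences.append((topic, "CONTEXT", marked_word[1:]))
    return sequences

for seq in expand_tuple("sanctions", -2, "&dignity"):
    print(" -> ".join(seq))
```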
      <p>If the perceived word-topic also constitutes a perceived
“Gravity” or “Evocative” word, the “&amp;” indication is
converted into a “CONTEXT” relation with the same word.</p>
      <p>
        Furthermore, perceived word-topics and “Gravity” and
“Evocative” words may also trigger tension or other
reactions and can be depicted as sequences for their subsequent
modelling into knowledge graphs (Fig. 5, Fig. 6) or other
forms of data. Figure 4 depicts a speech segment with two
occurrences of a registered tension trigger from a speech
segment with detected “Tension”
        <xref ref-type="bibr" rid="ref3 ref4">(the “TENSION” Module
implemented in previous research, Alexandris et al., 2020,
Alexandris, 2019)</xref>
        .
(sanctions, -2, &amp;dignity),
(chemical weapons, military confrontation, 2, &amp;justice):
      </p>
      <p>TENSION {
sanctions -&gt; NO -&gt; SWITCH -&gt; [...]
sanctions -&gt; CONTEXT -&gt; dignity
chemical weapons -&gt; ASSOC -&gt; military confrontation
chemical weapons -&gt; CONTEXT -&gt; justice
} TENSION
Fig. 4. Conversion of triple tuples and tuples for the
generation of knowledge graphs from a speech segment with
detected “Tension”.</p>
      <p>The first occurrence is the “Gravity” word “dignity”
co-occurring within the same utterance with the word-topic
“sanctions”, to which there is a negative response (“No”). In other
words, within the detected “Tension” context, the negative
response is linked to the utterance with the perceived
word-topic “sanctions”, containing the “Gravity” word “dignity”.
The second occurrence of a registered tension trigger is the
“Gravity” word “justice” co-occurring with the word-topic
“chemical weapons” and linked to the word-topic “military
confrontation” with a perceived “Association” (ASSOC)
relation. Fragments of knowledge graphs for the perceived
and registered relations between topics of the speech
segment in Fig. 4 are depicted in Fig. 5 and Fig. 6.</p>
      <p>Fig. 5, Fig. 6. Fragments of knowledge graphs linking the
word-topic “sanctions” via the CONTEXT relation to “dignity”
and via NO to a topic SWITCH, in an utterance segment with
detected tension between speakers.</p>
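      <p>The wrapping of sequences in a TENSION block, as in Fig. 4, can be sketched as follows (the text format mirrors the figure; the function name is illustrative):</p>

```python
# Sketch: sequences falling inside a detected "Tension" segment are
# wrapped in a TENSION { ... } block before graph generation,
# mirroring the Fig. 4 notation. The function name is illustrative.
def wrap_in_tension(sequences):
    lines = ["TENSION {"]
    lines.extend(" -> ".join(seq) for seq in sequences)
    lines.append("} TENSION")
    return "\n".join(lines)

block = wrap_in_tension([
    ("sanctions", "NO", "SWITCH", "[...]"),
    ("sanctions", "CONTEXT", "dignity"),
    ("chemical weapons", "ASSOC", "military confrontation"),
    ("chemical weapons", "CONTEXT", "justice"),
])
print(block)
```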
    </sec>
    <sec id="sec-9">
      <title>On Registering Tension</title>
      <p>
        As presented in previous research
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris et al., 2020,
Alexandris, 2019)</xref>
        , multiple points of tension (“hot spots”,
consisting of a question-answer pair or a statement-response
pair (or any other type of relation) between speaker turns)
indicate a more argumentative than a collaborative
interaction, even if speakers-participants display a calm and
composed behavior
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris et al., 2020, Alexandris, 2019)</xref>
        .
      </p>
      <p>
        These points of tension (“hot spots”) involving, among
others, the registration of words and word-topics and the
reactions they provoke
        <xref ref-type="bibr" rid="ref3 ref4">(“tension-triggers” - Alexandris et al.,
2020, Alexandris, 2019)</xref>
        , can contribute to the detection and
identification of more subtle emotions, in the middle and
outer zones of the Plutchik Wheel of Emotions
        <xref ref-type="bibr" rid="ref34">(Plutchik,
1982)</xref>
        . Examples are subtle negative reactions in the
Plutchik Wheel of Emotions, namely “Apprehension”,
“Annoyance”, “Disapproval”, “Contempt” and “Aggressiveness”
        <xref ref-type="bibr" rid="ref34">(Plutchik, 1982)</xref>
        . These emotions are usually too subtle to
be easily extracted by sensor and/or speech signal data.
However, such subtle emotions may play a crucial role in
spoken interactions involving persuasion and negotiations,
although they are not always easily detectable or “visible”.
      </p>
      <p>
        Points of possible tension and/or conflict between
speakers-participants (“hot-spots”) are identified by a set of
criteria based on the Gricean Cooperative Principle
        <xref ref-type="bibr" rid="ref15">(Grice, 1989,
Grice, 1975)</xref>
        (including paralinguistic elements, as
presented in the following section) and signalized in generated
graphic representations of registered negotiations (or other
type of spoken interaction concerning persuasion), with
special emphasis on words and topics triggering tension and
non-collaborative speaker-participant behavior
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris
et al., 2020, Alexandris, 2019)</xref>
        . The detection of “hot spots”
- points of tension implemented in previous research and
integrated in knowledge graphs facilitates the detection of
words and word-topics associated with Persuasion and/or
Tension, according to the factor of perception, subjectivity,
socio-cultural factors and the current state-of-affairs.
      </p>
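      <p>A minimal sketch of hot-spot detection over pairs of speaker turns, assuming a simple word-level representation of turns and a given set of tension triggers (both illustrative, not the implemented criteria):</p>

```python
import re

# Sketch: a "hot spot" is a pair of speaker turns (question-answer or
# statement-response) containing a registered tension trigger. The
# turn representation and trigger set are illustrative assumptions.
def find_hot_spots(turn_pairs, tension_triggers):
    """Return the indices of turn pairs containing a tension trigger."""
    hot_spots = []
    for index, (turn_a, turn_b) in enumerate(turn_pairs):
        words = set(re.findall(r"\w+", (turn_a + " " + turn_b).lower()))
        if words & tension_triggers:
            hot_spots.append(index)
    return hot_spots

pairs = [
    ("Will you lift the sanctions?", "No. It is a matter of dignity."),
    ("Shall we proceed to the next point?", "Yes, certainly."),
]
print(find_hot_spots(pairs, {"dignity", "justice"}))  # [0]
```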
    </sec>
    <sec id="sec-10">
      <title>Paralinguistic Features: Sense and Sensitivity</title>
      <p>
        Paralinguistic features constituting information that is not
uttered may often contribute to the correct detection and
identification of subtle emotions, complementing or
intensifying the information content of the word or utterance.
There are also cases where the semantic content of a spoken
utterance may be contradicted by a gesture, facial
expression or movement. However, as described in previous
research
        <xref ref-type="bibr" rid="ref3 ref4">(Alexandris et al., 2020, Alexandris, 2019)</xref>
        , the use
of linguistic information with or without a link to
paralinguistic features is proposed as a more reliable source of a
speaker’s attitude, behavior and intentions than stand-alone
paralinguistic features, especially if international speakers
and/or an international public are concerned.
      </p>
      <p>
        The Gricean Cooperative Principle is violated if the
information conveyed is perceived as not complete (Violation of
Quantity or Manner) or even contradicted by paralinguistic
features (Violation of Quality)
        <xref ref-type="bibr" rid="ref15">(Grice, 1989, Grice, 1975)</xref>
        .
      </p>
      <p>Paralinguistic features may often contribute to the correct
detection and identification of subtle emotions,
complementing or intensifying the information content of the word
or word-topic, however, they are not always reliable,
especially if international speakers and/or an international public
are involved.</p>
      <p>
        Paralinguistic features constituting information that is not
uttered are also problematic in Data Mining and Sentiment
Analysis-Opinion Mining applications. These applications
mostly rely on word groups, word sequences and/or
sentiment lexica
        <xref ref-type="bibr" rid="ref24">(Liu, 2012)</xref>
        , including recent approaches with
the use of neural networks
        <xref ref-type="bibr" rid="ref17 ref39 ref8">(Hedderich and Klakow, 2018,
Shah et al., 2018, Arockiaraj, 2013)</xref>
        , especially if Sentiment
Analysis from videos (text, audio and video) is concerned.
However, even if context dependent multimodal utterance
features are extracted, as proposed in relatively recent
research
        <xref ref-type="bibr" rid="ref35">(Poria et al., 2017)</xref>
        , the semantic content of a spoken
utterance may be either complemented or contradicted by a
gesture, facial expression or movement.
      </p>
      <p>As in the above-presented cases of “Gravity” and
“Evocative” words, for paralinguistic features, the additional
information in the form of a linked node and respective
word-entity with the “Context” relation allows the “visibility” and,
subsequently, the processing of information not uttered.</p>
    </sec>
    <sec id="sec-11">
      <title>The “Context” Relation: Visualizing and Linking Information Not Uttered</title>
      <p>As in the case of perceived “Gravity” and “Evocative”
words, paralinguistic elements can be similarly annotated as
appended messages and processed with a “CONTEXT”
relation for their subsequent modelling into knowledge graphs
or other forms of data. As described above, the
“CONTEXT” relation enables the conversion of knowledge
graphs into vectors or other forms of data for neural
networks and Machine Learning applications (Wang et al.,
2021, Mountantonakis and Tzitzikas, 2019, Tran and
Takashu, 2019, Mittal et al., 2017).</p>
      <p>We note that the “CONTEXT” relation may link both a
“Gravity”/ “Evocative” word and a paralinguistic element to
the word-topic of a spoken utterance.</p>
      <p>Figure 7 and Figure 8 depict examples of registered
paralinguistic elements and their respective messages from
speech segments.</p>
      <sec id="sec-12-1">
        <title>CONTEXT</title>
        <p>Fig. 7. Fragment of knowledge graph linking the topic
“sanctions” with the CONTEXT relation to the messages
“indeed” and “important”.</p>
        <p>
          In the case of paralinguistic elements, the “Context”
relation links an additional expression – a word-entity, to the
word uttered, for example, a modifier
          <xref ref-type="bibr" rid="ref7">(Alexandris, 2010)</xref>
          ,
completing its perceived content. This practice is typical of
professional translators and interpreters when correctness
and precision are targeted (Koller, 2000), as research and
reports demonstrate.
        </p>
        <p>
          Therefore, expert knowledge, concerning a finite set of
expressions-keywords, is integrated into the knowledge
graphs
          <xref ref-type="bibr" rid="ref1">(with the interactive application presented in related
research, Alexandris et al., 2022)</xref>
          . The additional
information in the form of a linked node and respective
word-entity allows the “visibility” and, subsequently, the
processing of information not uttered.
        </p>
        <p>
          As described in previous research
          <xref ref-type="bibr" rid="ref3 ref4">(Alexandris, 2020)</xref>
          , the
interactive annotation of paralinguistic features is proposed,
depicting information complementing the information
content of the spoken utterance (for example, “[+ facial-expr:
eyebrow-raise]” and “[+ gesture: low-hand-raise]”) or
constituting “stand-alone” information
          <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021,
Alexandris, 2020)</xref>
          . In the latter case, information was
interactively annotated with the insertion of a separate message or
response [Message/Response].
        </p>
        <p>
          For example, the raising of eyebrows with the
interpretation “I am surprised” [and / but this surprises me]
          <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021, Alexandris, 2020)</xref>
          was indicated as [I am
surprised] (a), either as a pointer to information content or
as a substitute of spoken information, a “stand-alone”
paralinguistic feature [Message /Response: I am surprised]
          <xref ref-type="bibr" rid="ref3 ref4">(Alexandris, 2020)</xref>
          .
        </p>
        <p>
          The alternative interpretations of the paralinguistic
feature (namely, “I am listening very carefully” (b), “What I
am saying is important” (c) or “I have no intention of doing
otherwise” (d), Alexandris, 2021, Alexandris, 2020) were
indicated with the respective annotations “[I am listening],
[Please pay attention], [No]” - “[Message /Response: I am
listening], [Message /Response: Please pay attention],
[Message /Response: No]”. The insertion of the respective type of
annotation for the paralinguistic features was according to
the parameters of the language(s) and the speaker(s)
concerned
          <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021, Alexandris, 2020)</xref>
          .
        </p>
        <p>The “CONTEXT” relation connects the chosen
word-topic from the speech segment with a word-expression
emphasizing / complementing the spoken content, such as
“indeed”, or a respective word summarizing the message. For
example, for the paralinguistic element [eyebrow-raise],
possible options are: word-topic -&gt; CONTEXT -&gt; indeed,
word-topic -&gt; CONTEXT -&gt; surprised, word-topic -&gt;
CONTEXT -&gt; important, or word-topic -&gt; CONTEXT -&gt;
No.</p>
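        <p>Such a graph fragment may be sketched as subject-relation-object triples; the tuple encoding and function names below are assumptions for illustration only.</p>
        <preformat>
```python
# Illustrative sketch of the "CONTEXT" relation as subject-relation-object
# triples linking a chosen word-topic to the perceived meanings of a
# paralinguistic feature (the encoding is an assumption for demonstration).

EYEBROW_RAISE_READINGS = ["indeed", "surprised", "important", "No"]

def context_triples(word_topic, readings):
    """One triple per possible perceived meaning of the feature."""
    return [(word_topic, "CONTEXT", reading) for reading in readings]

for triple in context_triples("sanctions", EYEBROW_RAISE_READINGS):
    print(triple)  # e.g. ('sanctions', 'CONTEXT', 'indeed')
```
        </preformat>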
        <p>[…]</p>
      </sec>
      <sec id="sec-12-2">
        <p>Fig. 8. Fragment of knowledge graph for perceived
meaning of eyebrow-raise (“important”) co-occurring with topic
“sanctions” and perceived “Gravity” word (“dignity”) in
utterance.</p>
        <p>
          For paralinguistic features depicting information
contradicting the information content of the spoken utterance,
the additional signalization “!” was proposed in previous
research
          <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021, Alexandris, 2020)</xref>
          , for example,
“[! facial-expr: eye-roll]” and “[! gesture: clenched-fist]”
          <xref ref-type="bibr" rid="ref2 ref3 ref4">(Alexandris, 2021, Alexandris, 2020)</xref>
          or even a smile. In this
case, the “CONTEXT” relation connects the chosen
word-topic from the speech segment with a word-expression
contradicting the spoken content, using the expression “not
really” as a special indication (Fig. 9 and Fig. 10).
        </p>
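        <p>The contradictory case may be sketched analogously; the encoding below, including the tuple form and the helper name, is an illustrative assumption rather than the authors' format.</p>
        <preformat>
```python
# Illustrative sketch (assumed encoding) of the "!" signalization: a
# contradictory paralinguistic feature links the word-topic to the special
# indication "not really" instead of a complementing interpretation.

def context_edge(word_topic, feature, interpretation=None, contradictory=False):
    """Return one annotated CONTEXT edge of the knowledge-graph fragment."""
    marker = "!" if contradictory else "+"
    target = "not really" if contradictory else interpretation
    return (word_topic, "CONTEXT", target, f"[{marker} {feature}]")

edge = context_edge("sanctions", "facial-expr: eye-roll", contradictory=True)
print(edge)  # ('sanctions', 'CONTEXT', 'not really', '[! facial-expr: eye-roll]')
```
        </preformat>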
      </sec>
      <sec id="sec-12-3">
        <p>Fig. 9. Fragment of knowledge graph for perceived
contradictory meaning of eye-roll (“not really”) co-occurring
with topic “sanctions” in utterance.</p>
      </sec>
    </sec>
    <sec id="sec-13">
      <title>Conclusions and Further Research</title>
      <p>The processing of (subjective) perceived information,
information concerning Cognitive Bias and information not
uttered, and its integration in training data, contributes to a
better understanding of spoken interaction, the registration of
cause-result relations on a discourse basis, and a fair
evaluation of all parties concerned, especially if non-native
speakers and an international community are taken into account.
Furthermore, apart from contributing to enriching models
and refining NLP tasks such as Sentiment Analysis and
Opinion Mining, the integration of “invisible” information
in training data may serve as training and test sets for
Human-Computer Interaction and Human-Robot Interaction
applications.</p>
      <p>Expert knowledge and world knowledge are, therefore,
integrated in training data using knowledge graphs. This
possibility contributes to the enrichment of models for NLP,
HCI and HRI applications, allowing the processing of
information not uttered, as well as multiple varieties and versions
of socio-linguistically related and user/speaker-specific
implied and perceived information.</p>
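        <p>As a minimal illustration of this conversion, knowledge-graph triples can be indexed into numeric tuples usable as raw training data; the toy encoding below is an assumption and stands in for an actual knowledge-graph embedding pipeline.</p>
        <preformat>
```python
# Illustrative sketch of converting knowledge-graph triples into numeric
# training tuples via a toy index encoding; a real pipeline would replace
# the indexing step with learned knowledge-graph embeddings.

triples = [("sanctions", "CONTEXT", "important"),
           ("sanctions", "CONTEXT", "not really"),
           ("sanctions", "GRAVITY", "dignity")]

# Index every entity and relation so each triple becomes a tuple of ids.
entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
relations = sorted({t[1] for t in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

training_data = [(e_idx[s], r_idx[r], e_idx[o]) for s, r, o in triples]
print(training_data)
```
        </preformat>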
      <p>The next stages of research concern the application of the
training and test sets converted from the proposed
knowledge graphs in Human-Computer Interaction
and/or Human-Robot Interaction systems, for evaluating the
effectiveness of the proposed knowledge graphs and for
their further upgrading and improvement. This includes
evaluating the behavior and output of the neural networks
and the data learnt, especially if multiple datasets of
different registered versions of the (subjective) perceived
information are concerned. Further research is geared towards
the extensive implementation, evaluation and improvement
of the training data created by the knowledge graphs,
especially with respect to a wider range of languages and speakers
–and possibly, to other types of information not uttered
related to Cognitive Bias and affecting Fairness.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Floros</surname>
            <given-names>V.</given-names>
          </string-name>
          <year>2022</year>
          .
          <article-title>Forthcoming. Visualizing and Processing Information Not Uttered in Spoken Political and Journalistic Data: From Graphical Representations to Knowledge Graphs in an Interactive Application</article-title>
          . Lecture Notes in Computer Science Heidelberg, Germany: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Floros</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mourouzidis</surname>
            <given-names>D.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Graphic Representations of Spoken Interactions from Journalistic Data: Persuasion and Negotiations</article-title>
          .
          <source>In Human-Computer Interaction. Design and User Experience Case Studies, Lecture Notes in Computer Science</source>
          , vol
          <volume>12764</volume>
          ,
          <string-name>
            <surname>edited by M. Kurosu</surname>
          </string-name>
          ,
          <volume>3</volume>
          -
          <fpage>17</fpage>
          . Cham, Switzerland: Springer. doi.org/10.1007/978-3-030-78468-3_1.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Registering the impact of Words in Spoken Political and Journalistic Texts</article-title>
          .
          <source>Journal of Human Language, Rights and Security</source>
          (
          <volume>1</volume>
          ):
          <fpage>26</fpage>
          -
          <lpage>48</lpage>
          . doi.org/10.22363/2713-0614-2021-1-1-26-48. Alexandris,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <year>2020</year>
          .
          <article-title>Issues in Multilingual Information Processing of Spoken Political and Journalistic Texts in the Media and Broadcast News</article-title>
          .
          <source>Newcastle upon Tyne</source>
          , UK: Cambridge Scholars.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C</given-names>
          </string-name>
          , Mourouzidis,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Floros</surname>
          </string-name>
          ,
          <string-name>
            <surname>V.</surname>
          </string-name>
          <year>2020</year>
          .
          <article-title>Generating Graphic Representations of Spoken Interactions Revisited: The Tension Factor and Information Not Uttered in Journalistic Data</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          In Human-Computer Interaction.
          <source>Design and User Experience, Lecture Notes in Computer Science</source>
          , vol
          <volume>12181</volume>
          , edited by M.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Kurosu</surname>
          </string-name>
          ,
          <fpage>523</fpage>
          -
          <lpage>537</lpage>
          . Cham, Switzerland: Springer Nature. doi.org/10.1007/978-3-030-49059-1_39 Alexandris,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <year>2019</year>
          .
          <article-title>Evaluating Cognitive Bias in Two-Party and Multi-Party Spoken Interactions</article-title>
          .
          <source>In Proceedings of Interpretable AI</source>
          for
          <article-title>Well-being: Understanding Cognitive Bias and Social Embeddedness (IAW 2019) in conjunction with AAAI Spring Symposium</article-title>
          (SS-19-03), Stanford University, Palo Alto, CA. ceurws.org/Vol-2448
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Measuring Cognitive Bias in Spoken Interaction and Conversation: Generating Visual Representations</article-title>
          .
          <source>In Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity</source>
          for Well-Being AI:
          <article-title>Papers from the 2018 AAAI Spring Symposium</article-title>
          .
          <source>Technical Report SS-18-03</source>
          , Palo Alto, CA: AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2010</year>
          . English, German and the International “
          <article-title>Semi-professional” Translator: A Morphological Approach to Implied Connotative Features</article-title>
          .
          <source>Journal of Language and Translation</source>
          , Sejong University, Korea,
          <year>September 2010</year>
          , vol.
          <volume>11</volume>
          (
          <issue>2</issue>
          ):
          <fpage>7</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Arockiaraj</surname>
            ,
            <given-names>C. M.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Applications of Neural Networks In Data Mining</article-title>
          .
          <source>International Journal Of Engineering And Science</source>
          , vol.
          <volume>3</volume>
          , Issue 1 (May
          <year>2013</year>
          ):
          <fpage>8</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Austin J. L.</surname>
          </string-name>
          <year>1962</year>
          .
          <article-title>How to Do Things with Words</article-title>
          .
          <source>2nd edition</source>
          <year>1976</year>
          , edited by J.O. Urmson and M. Sbisà
          . Oxford, UK: Oxford University Press, Oxford Paperbacks.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Carlson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Okurowski</surname>
            ,
            <given-names>M. E.</given-names>
          </string-name>
          <year>2001</year>
          .
          <article-title>Building a Discourse-Tagged Corpus in the Framework of Rhetorical Structure Theory</article-title>
          .
          <source>In Proceedings of the 2nd SIGDIAL Workshop on Discourse and Dialogue, Eurospeech</source>
          <year>2001</year>
          , Denmark.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alexandris</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mourouzidis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Floros</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iliakis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <article-title>Controlling Interaction in Multilingual Conversation Revisited: A Perspective for Services and Interviews in Mandarin Chinese</article-title>
          .
          <source>In Lecture Notes in Computer Science</source>
          , vol
          <volume>10271</volume>
          ,
          <string-name>
            <surname>edited by M. Kurosu</surname>
          </string-name>
          ,
          <volume>573</volume>
          -
          <fpage>583</fpage>
          . Heidelberg, Germany: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>N. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Rethinking the Persuasion Knowledge Model: Schematic Antecedents and Associative Outcomes of Persuasion Knowledge Activation for Covert Advertising</article-title>
          , Journal of Current Issues &amp; Research in Advertising,
          <volume>36</volume>
          (
          <issue>2</issue>
          ):
          <fpage>157</fpage>
          -
          <lpage>176</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          doi.org/10.1080/10641734.2015.1023873 Grice,
          <string-name>
            <surname>H. P.</surname>
          </string-name>
          <year>1989</year>
          .
          <article-title>Studies in the Way of Words</article-title>
          . Cambridge, MA: Harvard University Press.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Grice</surname>
            ,
            <given-names>H.P.</given-names>
          </string-name>
          <year>1975</year>
          .
          <article-title>Logic and conversation</article-title>
          .
          <source>In Syntax and Semantics</source>
          , vol.
          <volume>3</volume>
          ,
          edited by P. Cole and J. Morgan
          . New York: Academic Press.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Hatim</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>1997</year>
          .
          <article-title>Communication Across Cultures: Translation Theory and Contrastive Text Linguistics</article-title>
          . Exeter, UK: University of Exeter Press.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Hedderich</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klakow</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <source>In Proceedings of the Workshop on Deep Learning</source>
          Approaches for Low-Resource NLP
          , Melbourne, Australia,
          <fpage>12</fpage>
          -
          <lpage>18</lpage>
          .
          <article-title>Association for Computational Linguistics-ACL.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Hilbert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Toward a Synthesis of Cognitive Biases: How Noisy Information Processing Can Bias Human Decision Making.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>Psychological Bulletin</source>
          <volume>138</volume>
          (
          <issue>2</issue>
          ):
          <fpage>211</fpage>
          -
          <lpage>237</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Jurafsky</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Speech and Language Processing, an Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition</article-title>
          . 2nd ed.
          <source>Prentice Hall series in Artificial Intelligence</source>
          . Upper Saddle River, NJ: Pearson Education.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Jurafsky</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          Forthcoming
          .
          <article-title>Speech and Language Processing, an Introduction to Natural Language Processing</article-title>
          ,
          <source>Computational Linguistics and Speech Recognition. 3rd edition.</source>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          (
          <year>2020</year>
          ) Draft: https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf Koller,
          <string-name>
            <surname>W.</surname>
          </string-name>
          <year>2000</year>
          .
          <article-title>Der Begriff der Äquivalenz in der Übersetzungswissenschaft</article-title>
          . In Übertragung, Annäherung, Angleichung, edited by C. Fabricius-Hansen and J. Østbø
          ,
          <volume>11</volume>
          -
          <fpage>30</fpage>
          . Frankfurt, Germany: Peter Lang.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Sentiment Analysis and Opinion Mining</article-title>
          . San Rafael, CA: Morgan &amp; Claypool.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Ma J.</surname>
          </string-name>
          <year>2010</year>
          .
          <article-title>A comparative analysis of the ambiguity resolution of two English-Chinese MT approaches: RBMT and SMT</article-title>
          . Dalian University of
          <source>Technology Journal</source>
          <volume>31</volume>
          (
          <issue>3</issue>
          ):
          <fpage>114</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>Discourse trees are good indicators of importance in text</article-title>
          .
          <source>In Advances in Automatic Text Summarization</source>
          ,
          <fpage>123</fpage>
          -
          <lpage>136</lpage>
          , edited by I. Mani and
          <string-name>
            <given-names>M.</given-names>
            <surname>Maybury</surname>
          </string-name>
          . Cambridge, MA: The MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Mittal</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joshi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finin</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Thinking, Fast and Slow: Combining Vector Spaces and Knowledge Graphs</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <source>arXiv:1708.03310v2 [cs.AI]</source>
          . Ithaca, NY: Cornell University Library.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Mountantonakis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tzitzikas</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Knowledge Graph Embeddings over Hundreds of Linked Datasets</article-title>
          .
          <source>In Metadata and Semantic Research</source>
          ,
          <source>Communications in Computer and Information Science</source>
          , vol
          <volume>1057</volume>
          . (
          <year>2019</year>
          ), edited by E.
          <string-name>
            <surname>Garoufallou</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Fallucchi</surname>
          </string-name>
          and E.W. De Luca. Cham, Switzerland: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          doi.org/10.1007/978-3-030-36599-8_13 Mourouzidis,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Floros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Alexandris</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <year>2019</year>
          .
          <article-title>Generating Graphic Representations of Spoken Interactions from Journalistic Data</article-title>
          .
          <source>In Lecture Notes in Computer Science</source>
          , vol.
          <volume>11566</volume>
          ,
          <string-name>
            <surname>edited by M. Kurosu</surname>
          </string-name>
          ,
          <volume>559</volume>
          -
          <fpage>570</fpage>
          . Basel, Switzerland: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>Nass</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brave</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship</article-title>
          . Cambridge, MA: The ΜΙΤ Press.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <surname>Paltridge</surname>
            <given-names>B.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Discourse Analysis: An Introduction</article-title>
          . London: Bloomsbury Publishing.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>Politeness in Chinese Face-to-Face Interaction</article-title>
          .
          <source>Advances in Discourse Processes Series</source>
          vol.
          <volume>67</volume>
          .
          Stamford, CT, USA: Ablex Publishing Corporation.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>Plutchik</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>1982</year>
          .
          <article-title>A psychoevolutionary theory of emotions</article-title>
          .
          <source>Social Science Information</source>
          . (
          <volume>21</volume>
          ):
          <fpage>529</fpage>
          -
          <lpage>553</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          doi.org/10.1177/053901882021004003 Poria,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Cambria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Hazarika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Mazumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Zadeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Morency</surname>
          </string-name>
          , L-P.
          <year>2017</year>
          .
          <article-title>Context-Dependent Sentiment Analysis in User-Generated Videos</article-title>
          .
          <source>In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</source>
          , Vancouver, Canada,
          <source>July 30 - August 4</source>
          ,
          <year>2017</year>
          ,
          <fpage>873</fpage>
          -
          <lpage>883</lpage>
          . Association for Computational Linguistics - ACL. doi.org/10.18653/v1/P17-1081.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name>
            <surname>Rocklage</surname>
          </string-name>
          , M.D,
          <string-name>
            <surname>Rucker D.D.</surname>
          </string-name>
          ,
          <string-name>
            <surname>Nordgren</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          <year>2018</year>
          .
          <source>Psychological Science</source>
          <year>2018</year>
          , vol.
          <volume>29</volume>
          (
          <issue>5</issue>
          ):
          <fpage>749</fpage>
          -
          <lpage>760</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>doi.org/10.1177/0956797617744797.</mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name>
            <surname>Searle</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          <year>1969</year>
          .
          <article-title>Speech Acts: An Essay in the Philosophy of Language</article-title>
          . Cambridge, MA: Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kopru</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruvini</surname>
            ,
            <given-names>J-D.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Neural Network based Extreme Classification and Similarity Models for Product Matching</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <source>In Proceedings of NAACL-HLT</source>
          <year>2018</year>
          , New Orleans, Louisiana, June 1 - 6,
          <year>2018</year>
          ,
          <fpage>8</fpage>
          -
          <lpage>15</lpage>
          .
          Association for Computational Linguistics - ACL.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <string-name>
            <surname>Shonk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>5 Types of Negotiation Skills</article-title>
          . Program on Negotiation Daily Blog, Harvard Law School, May 14,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          https://www.pon.harvard.edu/daily/negotiation-skills-daily/typesof-negotiation-skills/ Accessed: 2021-12-30.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name>
            <surname>Stede</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taboada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Annotation Guidelines for Rhetorical Structure</article-title>
          . Manuscript. University of Potsdam and Simon Fraser University,
          <year>March 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>H. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Takasu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Analyzing Knowledge Graph Embedding Methods from a Multi-Embedding Interaction Perspective</article-title>
          .
          <source>In Proceedings of the 1st International Workshop on Data Science for Industry 4.0 (DSI4) at EDBT/ICDT 2019 Joint Conference</source>
          . arxiv.org/abs/1903.11406 [cs.AI]. Ithaca, NY: Cornell University Library.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name>
            <surname>Trofimova</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Observer Bias: An Interaction of Temperament Traits with Biases in the Semantic Perception of Lexical Material</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <source>PLoS ONE</source>
          <volume>9</volume>
          (
          <issue>1</issue>
          ):
          <fpage>e85677</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qiu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>A Survey on Knowledge Graph Embeddings for Link Prediction</article-title>
          .
          <source>Symmetry</source>
          <year>2021</year>
          ,
          <volume>13</volume>
          :
          <fpage>485</fpage>
          . doi.org/10.3390/sym13030485
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          <string-name>
            <surname>Wardhaugh</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>1992</year>
          .
          <article-title>An Introduction to Sociolinguistics, 2nd edition</article-title>
          . Oxford, UK: Blackwell.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asadi</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zweig</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning</article-title>
          .
          <source>In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</source>
          , Vancouver, Canada, July 30 - August 4,
          <year>2017</year>
          ,
          <fpage>665</fpage>
          -
          <lpage>677</lpage>
          , Association for Computational Linguistics-ACL.
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>T.P.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>An oscillator model of the timing of turn taking</article-title>
          .
          <source>Psychonomic Bulletin and Review</source>
          <year>2005</year>
          :
          <volume>12</volume>
          (
          <issue>6</issue>
          ):
          <fpage>957</fpage>
          -
          <lpage>968</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aoyama</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ozeki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nakamura</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Capture, Recognition, and Visualization of Human Semantic Interactions in Meetings</article-title>
          .
          <source>In Proceedings of PerCom</source>
          , Mannheim, Germany,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name>
            <surname>Zeldes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>rstWeb - A Browser-based Annotation Interface for Rhetorical Structure Theory and Discourse Relations</article-title>
          .
          <source>In Proceedings of NAACL-HLT 2016 System Demonstrations</source>
          . San Diego, CA,
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . aclweb.org/anthology/N/N16/N16-3001.pdf
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>