<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Processing Information Unspoken: New Insights from Crowd-Sourced Data for Sentiment Analysis and Spoken Interaction Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christina Alexandris</string-name>
          <email>calexandris@gs.uoa.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AAAI 2023 Spring Symposia</institution>
          ,
          <addr-line>Socially Responsible AI for Well-being</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Crowd-sourced data offers new insights into the processing of information not uttered in spoken interaction. This subjective, perceived, context-related information, and its conversion into “visible” information in knowledge graphs for use in vectors and other forms of training data, contributes to registering complex emotions in Sentiment Analysis, to monitoring fairness in spoken interaction and to data enrichment in HCI/HRI applications. Additionally, insights from crowd-sourced data allow a differentiation between circumstantial factors/evidence and socio-culturally-biased factors/evidence in data analysis and training data.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graphs</kwd>
        <kwd>Crowd-Sourced Data</kwd>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Cognitive Bias</kwd>
        <kwd>Plutchik Wheel of Emotions</kwd>
        <kwd>Human-Computer Interaction</kwd>
        <kwd>Spoken Dialog Systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Fairness, Unspoken Information and Bias</title>
      <p>
        Crowd-sourced data offers new insights into the
analysis and processing of information not uttered
in spoken interaction, and into the conversion of this
information into “visible” and processable
information in the form of knowledge graphs. The
knowledge graph data, with subsequent use in
vectors and other forms of training data [1] [
        <xref ref-type="bibr" rid="ref1">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref2">3</xref>
        ]
[
        <xref ref-type="bibr" rid="ref3">4</xref>
        ], are intended, at least at the present stage, as a
dataset for training a neural network, with the
possibility of conversion into Graph Neural
Networks [
        <xref ref-type="bibr" rid="ref4">5</xref>
        ]. The conversion of knowledge
graphs into training data contributes to the
integration and processing of complex
information and of information not uttered in
Natural Language Processing (NLP) tasks, thus
contributing to the creation of even more
sophisticated systems. This possibility would not
be available without the above-stated research work.
      </p>
      <p>The very nature and structure of knowledge
graphs allows the representation of multiple facets
of information – the multiple facets of the “Sense”
of the words and/or transcribed video speech
segments – although it is considered that some
types of information or some cases may not be
fully covered by a knowledge graph.</p>
      <p>
        The detecting and processing of information
not uttered but perceived-sensed by
speakers-participants allows the integration of additional
information content – meanings/senses – in
training data. This allows the enrichment of data
and a deeper understanding of speaker-participant
psychology-mentality and sensitivities,
contributing to a deeper understanding of the
possible impact or consequences of a spoken
journalistic/political text or interview or a video in
Social Media (a). This also allows an additional
approach to registering of cause-result relations
on a discourse basis, including the monitoring of
Fairness, namely that all voices-aspects-opinions
are heard clearly – that all participants are given a
fair chance in the interview or discussion and are
not purposefully or unconsciously repressed,
oppressed, offended or even bullied (b). The way
sensitive topics and speaker-participant
sensitivity are purposefully or unconsciously
treated and managed contributes to registering
and monitoring fairness in spoken interaction,
avoiding Confidence Bias [
        <xref ref-type="bibr" rid="ref5">6</xref>
        ]. In particular, a
crucial element in achieving “visibility” and,
subsequently, “processability” of information not
uttered is causality, namely the registration and
processing of reactions triggered by that very
information not uttered - the multiple facets of the
“Sense” of the words in transcribed video and
speech segments and in Social Media.
      </p>
      <p>
        These reactions include subtle negative
reactions in the Plutchik Wheel of Emotions,
namely “Apprehension”, “Annoyance”,
“Disapproval”, “Contempt”, “Aggressiveness”
[
        <xref ref-type="bibr" rid="ref6">7</xref>
        ] - emotions usually too subtle to be easily
extracted by sensor and/or speech signal data [
        <xref ref-type="bibr" rid="ref7">8</xref>
        ]
[
        <xref ref-type="bibr" rid="ref8">9</xref>
        ] [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]. Additionally, the detecting and
processing of information not uttered (often
emotionally “sensitive” information) contributes
to Sentiment Analysis (and Opinion Mining)
applications where spoken data and/or videos are
processed. However, crowd-sourced input
indicates that information not uttered, along with
subtle emotions – occurring in the outer circles
of the Plutchik Wheel of Emotions – may be (1)
differently (or falsely) perceived – especially by
non-native speakers of a natural language, (2)
may be highly dependent on random and/or
circumstantial or individual-specific factors and
(3) may concern specific domains and related
discourse. For Sentiment Analysis (and Opinion
Mining) applications, (1), (2) and (3) are equally
important. The present approach concerns insights
from crowd-sourced data and their integration into knowledge
graphs with subsequent use in training data for
neural networks. The main focus is on the data
preparation stage for subsequent extensive
implementation and quantitative evaluation.
      </p>
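<p>The subtle reactions listed above can be illustrated with a minimal sketch (the label set and function name are illustrative assumptions, not the paper’s implementation): a simple lookup that separates outer-circle Plutchik emotions, which sensor and speech-signal pipelines tend to miss, from the rest of the labels.</p>
<preformat>
```python
# Illustrative sketch: separate crowd-sourced reaction labels in the outer
# (subtle) circle of the Plutchik Wheel of Emotions from other labels.
# The label set follows the emotions named in the text; function and set
# names are assumptions.

PLUTCHIK_OUTER = {"apprehension", "annoyance", "disapproval",
                  "contempt", "aggressiveness"}

def subtle_reactions(reaction_labels):
    """Return reactions usually too subtle for sensor/speech-signal data."""
    return [r for r in reaction_labels if r.lower() in PLUTCHIK_OUTER]

print(subtle_reactions(["Annoyance", "rage", "Contempt"]))
# prints ['Annoyance', 'Contempt']
```
</preformat>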
    </sec>
    <sec id="sec-5">
      <title>2. Processing Unspoken Information and Knowledge Graphs- the “Context” Relation</title>
      <p>
        The knowledge graphs, generated by an
interactive application presented in
related/previous research [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ] [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ], involve
the depiction of two main categories of
information not uttered in spoken interaction:
(I) additional perceived information content
and dimensions of (notably) very common words
– information not registered in language
resources, which may concern context-specific
socio-cultural associations and Cognitive Bias, in
particular Lexical Bias [
        <xref ref-type="bibr" rid="ref13">14</xref>
        ]; (II) perceived
paralinguistic elements influencing the
information content of spoken utterances. Both
types of perceived information are language- and
socio-culturally specific and are purposefully or
subconsciously conveyed or perceived-understood
by speakers-participants in the same
language community.
      </p>
      <p>
        In the knowledge graphs, this additional
information of the above-described categories (I)
and (II) is linked as an additional node to the
spoken word with the proposed “Context”
relation. The knowledge graphs can,
subsequently, be converted into vectors and other
forms of training data which is targeted to contain
(a) “visible” and processable information not
uttered in spoken interaction and (b) multiple
versions and varieties of training data with
perceived information generated by the
implemented interactive application [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ] [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ].
      </p>
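<p>This linking can be sketched as plain triples (a minimal illustration; only the “CONTEXT” relation name comes from the text, and the example values are hypothetical):</p>
<preformat>
```python
# Minimal sketch of the proposed "Context" relation: perceived information
# not uttered is linked as an additional node to the spoken word.
# The triple representation and example values are assumptions; only the
# "CONTEXT" relation name comes from the text.

def add_context(graph, spoken_word, perceived_info):
    """Attach unspoken, perceived information to a spoken word node."""
    graph.append((spoken_word, "CONTEXT", perceived_info))

graph = []
add_context(graph, "sanctions", "important")  # hypothetical example values
print(graph)  # prints [('sanctions', 'CONTEXT', 'important')]
```
</preformat>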
      <p>
        In our previous research [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ] [
        <xref ref-type="bibr" rid="ref14">15</xref>
        ] [
        <xref ref-type="bibr" rid="ref15">16</xref>
        ], a
processing and evaluation framework was
proposed for the generation of graphic
representations and tags corresponding to values
and benchmarks depicting the degree of
information not uttered and non-neutral elements
in Speaker behavior in spoken text segments. The
implemented processing and evaluation
framework allows the graphic representation to be
presented in conjunction with the parallel
depiction of speech signals and transcribed texts.
Specifically, the alignment of the generated
graphic representation with the respective
segments of the spoken text enables a possible
integration in existing transcription tools.
      </p>
      <sec id="sec-5-1">
        <p>
          Although the concept of the generated graphic representations originates from the Discourse
Tree prototype [
          <xref ref-type="bibr" rid="ref16">17</xref>
          ], the characteristics of
spontaneous turn-taking [
          <xref ref-type="bibr" rid="ref17">18</xref>
          ] and short spoken
speech segments did not facilitate the
implementation of typical strategies based on
Rhetorical Structure Theory (RST) [
          <xref ref-type="bibr" rid="ref18">19</xref>
          ] [
          <xref ref-type="bibr" rid="ref19">20</xref>
          ] [
          <xref ref-type="bibr" rid="ref20">21</xref>
          ].
        </p>
        <p>
          In particular, strategies typically employed in
the construction of most Spoken Dialog Systems
were adapted in an interactive annotation tool
designed to operate with most commercial
transcription tools [
          <xref ref-type="bibr" rid="ref11">12</xref>
          ] [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ] [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ]. These strategies
include keyword processing in the form of topic
detection from which approaches involving neural
networks are developed [
          <xref ref-type="bibr" rid="ref21">22</xref>
          ] [
          <xref ref-type="bibr" rid="ref22">23</xref>
          ]. The output
provides the User-Journalist with (i) the tracked
indications of the topics handled in the interview
or discussion and (ii) the graphic pattern of the
discourse structure of the interview or discussion.
The output (i) and (ii) also include functions and
respective values reflecting the degree to which
the speakers-participants address or avoid the
topics in the dialog structure (“RELEVANCE”
Module) [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ] as well as the degree of tension in
their interaction (“TENSION” Module). These
features are identified by a set of criteria based on
the Gricean Cooperative Principle [
          <xref ref-type="bibr" rid="ref23">24</xref>
          ] [
          <xref ref-type="bibr" rid="ref24">25</xref>
          ]
(including paralinguistic elements). The
implemented “RELEVANCE” Module [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ] is
intended for the evaluation of short speech
segments and generates a visual representation
from the user’s interaction, tracking the
corresponding sequence of topics
(topic-keywords) chosen by the user and the perceived
relations between them in the dialog flow. This
concerns topics avoided, introduced or repeatedly
referred to by each Speaker-Participant
(Repetitions, Associations, Generalizations and
Topic Switches). The assigned respective values
of each relation (“Relevance (X)” benchmark,
[
          <xref ref-type="bibr" rid="ref15">16</xref>
          ]) were converted into generated visual
representations and were registered as tuples or as
triple tuples and, subsequently, converted into
knowledge graphs.
        </p>
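<p>The conversion of registered relations into tuples and knowledge-graph triples, as described above, can be sketched as follows (the relation names follow the text – Repetitions, Associations, Generalizations, Topic Switches – while the topic names and Relevance values are hypothetical):</p>
<preformat>
```python
# Illustrative sketch: topic relations registered by the "RELEVANCE" Module
# as triple tuples with a "Relevance (X)" value, converted into
# knowledge-graph edges. Topic names and values below are hypothetical.

registered = [
    ("economy", "TOPIC-SWITCH", "energy", 0.4),
    ("energy", "REPETITION", "energy", 1.0),
]

def to_graph(rows):
    """Keep the Relevance value as an edge attribute of each triple."""
    return [{"triple": (s, rel, o), "relevance": x} for (s, rel, o, x) in rows]

edges = to_graph(registered)
print(edges[0]["triple"])  # prints ('economy', 'TOPIC-SWITCH', 'energy')
```
</preformat>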
        <p>
          In the context of spoken interaction, Cognitive
Bias may concern “Association” (or other)
relations and argumentation related to inherent yet
subtle socio-culturally determined linguistic
features in (notably) commonly occurring words
presented in previous research (examples from the
international community: (the) “people”, (our)
“sea”). These word types are detectable from the
registered reactions [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ] they trigger in the
processed dialog segment with two (or multiple)
speakers-participants. Since these words are very
common and do not contain descriptive features,
the subtlety of their content is often unconsciously
used or is perceived (mostly) by native speakers
and may contribute to the degree of formality or
intensity of conveyed information in a spoken
utterance. Here, these words concerning
Cognitive Bias – Lexical Bias are referred to as
“Gravity” words [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ]. In other cases, these word
types, although common words, may contribute to
a descriptive or emotional tone in an utterance and
they may play a remarkable role in interactions
involving persuasion and negotiations.
Specifically, it is considered that, according to
Rocklage et al., 2018 [
          <xref ref-type="bibr" rid="ref26">27</xref>
          ], “the more extremely
positive the word, the greater the probability
individuals were to associate that word with
persuasion”. Here, these words concerning
Cognitive Bias – Lexical Bias are referred to as
“Evocative” words [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ]. The subtle impact of
words is one of the tools typically used in
persuasion and negotiations [
          <xref ref-type="bibr" rid="ref27">28</xref>
          ] [
          <xref ref-type="bibr" rid="ref28">29</xref>
          ].
        </p>
        <p>
          Generated graphical representations of
perceived word-topic relations and registered
“Gravity” and “Evocative” words (concerning
Cognitive Bias – Lexical Bias) can be converted
into sequences for their subsequent conversion
into knowledge graphs or other forms of data for
neural networks and Machine Learning
applications [1] [
          <xref ref-type="bibr" rid="ref1">2</xref>
          ] [
          <xref ref-type="bibr" rid="ref2">3</xref>
          ] [
          <xref ref-type="bibr" rid="ref3">4</xref>
          ]. As described in
previous research [
          <xref ref-type="bibr" rid="ref14">15</xref>
          ], registered “Gravity” and
“Evocative” words are appended as marked
values with “&amp;” in the respective tuples or triple
tuples. In the sequences with the respective tuples
or triple tuples, the “&amp;” indication is converted
into a “CONTEXT” relation. In the knowledge
graphs, this additional information is linked as an
additional node to the spoken word with the
proposed “Context” relation. The term “Context”
is chosen to signal the perceived context of
additional information in the form of co-occurring
linguistic and/or paralinguistic features,
influencing the information content of the spoken
utterance and its impact in the spoken interaction
and dialogue structure.
        </p>
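<p>A minimal sketch of this conversion, assuming a simple tuple layout (the layout and the “RELATED” relation name are assumptions; only the ampersand marker and the “CONTEXT” relation come from the text):</p>
<preformat>
```python
# Sketch of the conversion described above: "Gravity"/"Evocative" words are
# appended to a tuple as values marked with the ampersand character, and
# each marked value is converted into a "CONTEXT" relation. The tuple
# layout and the "RELATED" relation name are assumptions.
MARK = chr(38)  # the ampersand marker (written as chr(38) for XML safety)

def tuple_to_triples(registered_tuple):
    """Split a registered tuple into topic triples plus CONTEXT relations."""
    topic = registered_tuple[0]
    triples = []
    for value in registered_tuple[1:]:
        if value.startswith(MARK):
            triples.append((topic, "CONTEXT", value[1:]))
        else:
            triples.append((topic, "RELATED", value))
    return triples

print(tuple_to_triples(("sanctions", MARK + "dignity")))
# prints [('sanctions', 'CONTEXT', 'dignity')]
```
</preformat>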
        <p>
          In the case of paralinguistic elements, the
“Context” relation links an additional expression
– a word-entity, to the word uttered, for example,
a modifier [
          <xref ref-type="bibr" rid="ref29">30</xref>
          ], completing its perceived content.
This practice is typical of professional translators
and interpreters when correctness and precision are
targeted [
          <xref ref-type="bibr" rid="ref30">31</xref>
          ], as research and reports demonstrate.
The “CONTEXT” relation connects the chosen
word-topic from the speech segment with a
word-expression emphasizing/complementing the
spoken content, such as “important”, or a
respective word summarizing the message. We
note that the “CONTEXT” relation may link both
a “Gravity”/ “Evocative” word and a
paralinguistic element to the word-topic of a
spoken utterance (Fig. 2).
        </p>
        <p>For paralinguistic features conveying
information that contradicts the information
content of the spoken utterance, the “CONTEXT”
relation connects the chosen word-topic from the
speech segment with a word-expression
contradicting the spoken content, with the
expression “not really” as a special indication.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>3. New Insights for Knowledge Graphs and the Information (Atmo) “Sphere” of Spoken Words</title>
      <sec id="sec-7-1">
        <p>
          The generated knowledge graphs from the interactively created visual representations for the
same conversation and interaction may be
compared to each other and be integrated in a
database currently under development. Chosen
relations between topics may describe Lexical
Bias [
          <xref ref-type="bibr" rid="ref13">14</xref>
          ] and may differ according to political,
socio-cultural and linguistic characteristics of the
user-evaluator. This especially applies for
international speakers/users [
          <xref ref-type="bibr" rid="ref31">32</xref>
          ] [
          <xref ref-type="bibr" rid="ref32">33</xref>
          ] [
          <xref ref-type="bibr" rid="ref33">34</xref>
          ] [
          <xref ref-type="bibr" rid="ref34">35</xref>
          ]
[
          <xref ref-type="bibr" rid="ref35">36</xref>
          ], due to lack of world knowledge of the
language community involved [
          <xref ref-type="bibr" rid="ref36">37</xref>
          ] [
          <xref ref-type="bibr" rid="ref37">38</xref>
          ]. In this
case, it is considered that the registration of
spoken interaction depends on the user’s
perception, linguistic parameters and socio-cultural
norms. This allows a finite set of data
to be pre-defined for evaluation and comparison
and/or used as seed data for the enrichment of
existing data sets. However, with the extended
integration of crowd-sourced input, the use of
seed data for the enrichment of existing data sets
does not apply in all cases. In particular,
crowd-sourced input indicates that:
        </p>
        <p>Unspoken information may be differently (or
falsely) perceived – especially by non-native
speakers of a natural language – particularly
when subtle emotions in the Plutchik Wheel of
Emotions are concerned (1).</p>
        <p>Another important factor is that the perception
of information not uttered may be highly
dependent on random and/or circumstantial or
individual-specific factors (2) or the perception of
unspoken information may concern only specific
domains and related discourse (3).</p>
        <p>User-specific and crowd-sourced data may be
problematic due to a number of factors concerning
users’ perception but also users’ experience and
time and effort invested in providing quality data
– especially when very subtle linguistic and
paralinguistic features are concerned. Therefore,
it is necessary for the above-described
problematic aspects of user-specific and
crowd-sourced data to be minimized and/or controlled.</p>
        <p>These observations from crowd-sourced data
call for a differentiation between perceived
unspoken information compatible with
language-specific and socio-cultural norms and perceived
unspoken information that is either strictly
circumstantial or strictly domain/context
dependent. Context-specific unspoken additional
dimensions of individual spoken words may be
described as an information (atmo) “sphere”
surrounding the word, with the semantic content
of the word in its nucleus, its context-specific and
language-specific dimensions in the inner layer of
the sphere (A) and its context-specific and
non-language-specific dimensions in the outer layer of
the “sphere” (B).</p>
        <p>In other words, the actual semantic content of
the word as defined in dictionaries and lexica (and
hence, retrievable and processable) constitutes the
center-nucleus of the “sphere” and is
context-independent. The perceived unspoken
context-specific dimensions of the word that are
dependent on the above-described linguistic
parameters and socio-cultural norms (such as
“Gravity” and “Evocative” words and distinctive
meanings of paralinguistic features) constitute the
inner layer of the “sphere” (A). As previously
mentioned, this information can constitute a finite
set of pre-defined (seed) data for the enrichment
of existing data sets, according to the type(s) of
natural language(s) involved. This information
may not be perceived, or may be incorrectly perceived,
by non-native speakers-participants or by
inexperienced speakers-participants due to age or
training/background (e.g. crowd-sourced data
from teenagers or from users not familiar with
sophisticated political speech) (i).</p>
        <p>The perceived unspoken context-specific and
non-language-specific dimensions of the word
constitute the outer layer of the “sphere” (B).
These non-language-specific dimensions account
for information perceived by an individual as an
isolated case or due to random and/or
circumstantial factors of the current context (i).</p>
        <p>The differentiation between context-specific
dimensions of a spoken word that are
language-specific and non-language-specific allows a
differentiation between circumstantial
factors/evidence and socio-culturally-biased
factors/evidence in data analysis and training data.</p>
        <p>
          The outer layer of the “sphere” also accounts
for unspoken and non-language-specific
dimensions of a word that are, however,
domain-specific and/or related to a domain-specific
discourse. For example, the word “follower” may
be linked to different associations and subsequent
dimensions of meanings and responses within a
social media domain or within a geopolitical – war
domain (ii). Furthermore, a word not expressing
sentiment/emotion may be related to
domain-specific positive or negative statements as
observed in Sentiment Analysis and Opinion
Mining applications [
          <xref ref-type="bibr" rid="ref21">22</xref>
          ]. A typical case is words
that do not express sentiment but are connected to
positive or negative statements as registered in
Sentiment Analysis and Opinion Mining. For
example, in restaurant reviews, the word “waiter”
often occurs in negative statements [
          <xref ref-type="bibr" rid="ref21">22</xref>
          ].
[Figure: the information (atmo) “sphere” of a spoken word: the semantic content at the nucleus; context- and language-specific unspoken information (A) linked via a W/P-LANG CONTEXT node; context- and non-language-specific (circumstantial/domain-specific) unspoken information (B) linked via a W/P CONTEXT node.]
        </p>
        <p>The context-specific and
language-specific “CONTEXT” relations are
henceforth referred to as “W-LANG” CONTEXT
relations for linguistic information not uttered,
such as “Gravity” and “Evocative” words, and as
“P-LANG” CONTEXT relations for paralinguistic
information not uttered, such as the above-described
perceived meaning of a facial expression
(“eyebrow-raise”) related to
language-specific and socio-cultural norms.</p>
        <p>The non-language-specific/domain-specific
(B) “CONTEXT” relations are henceforth
referred to as “W” CONTEXT relations for
linguistic information not uttered and “P”
CONTEXT relations for non-language-specific/
domain-specific paralinguistic information.</p>
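<p>The four relation types defined in this and the previous section can be summarized as a simple lookup (only the relation names come from the text; the code itself is an illustrative sketch):</p>
<preformat>
```python
# The four CONTEXT relation types as a simple lookup. Only the relation
# names come from the text; the function and key layout are assumptions.

CONTEXT_RELATIONS = {
    ("linguistic", True): "W-LANG CONTEXT",      # "Gravity"/"Evocative" words
    ("paralinguistic", True): "P-LANG CONTEXT",  # e.g. "eyebrow-raise" meaning
    ("linguistic", False): "W CONTEXT",          # circumstantial/domain-specific
    ("paralinguistic", False): "P CONTEXT",
}

def relation_name(kind, language_specific):
    return CONTEXT_RELATIONS[(kind, language_specific)]

print(relation_name("paralinguistic", True))  # prints P-LANG CONTEXT
```
</preformat>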
      </sec>
    </sec>
    <sec id="sec-8">
      <title>4. Language-/Socio-culturally Specific Unspoken Information</title>
      <sec id="sec-9-1">
        <p>In the proposed knowledge graphs (Fig. 4),
language-specific dimensions in the inner layer of
the sphere (A) include “Gravity” or “Evocative”
words perceived by native speakers of a natural
language that can be expressed with the W-LANG
CONTEXT relation. The standard types of
messages and information (and their variants)
conveyed by paralinguistic features perceived by
native speakers of a natural language can be
expressed with the P-LANG CONTEXT relation
(Fig. 5).</p>
        <p>[Figure 5: perceived paralinguistic feature
(“important”) co-occurring with topic “sanctions”
and perceived “Gravity” word (“dignity”) in an
utterance: context-specific and language-specific
“CONTEXT: W-LANG” relation for linguistic
information and context-specific and
language-specific “CONTEXT: P-LANG” relation for
paralinguistic information.]</p>
        <p>
          In regard to the language and culture-specific
(standard) types of messages and information
(and their variants) conveyed by paralinguistic
features, examples of (interactively) annotated
paralinguistic features depicting information
complementing the information content of the
spoken utterance are the following [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ]:
“[+ facial-expr: eyebrow-raise]” and “[+
gesture: low-hand-raise]”, or constituting
“standalone” information [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ]. In the latter case,
information was interactively annotated with the
insertion of a separate message or response
[Message/Response]. For example, the raising of
eyebrows with the interpretation “I am surprised”
[and / but this surprises me] [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ] was indicated as
[I am surprised] (a), either as a pointer to
information content or as a substitute for
spoken information, a “stand-alone”
paralinguistic feature [Message /Response: I am
surprised] [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ]. Alternative interpretations of the
paralinguistic feature are “I am listening very
carefully” (b), “What I am saying is
important”(c), “I have no intention of doing
otherwise” (d) [
          <xref ref-type="bibr" rid="ref25">26</xref>
          ], indicated with the respective
annotations according to the parameters of the
language(s) and the speaker(s) concerned.
        </p>
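<p>The annotation formats quoted above can be parsed with a short sketch (the returned dictionary structure is an assumption, not the paper’s implementation):</p>
<preformat>
```python
import re

# Illustrative parser for the annotation formats quoted above
# ("[+ facial-expr: eyebrow-raise]", "[Message/Response: I am surprised]").
# The returned dictionary structure is an assumption.

def parse_annotation(text):
    m = re.match(r"\[\+ (\S+): (.+)\]", text)
    if m:  # paralinguistic feature complementing the spoken content
        return {"type": m.group(1), "value": m.group(2), "standalone": False}
    m = re.match(r"\[Message/Response: (.+)\]", text)
    if m:  # "stand-alone" paralinguistic information
        return {"type": "message", "value": m.group(1), "standalone": True}
    return None

print(parse_annotation("[+ facial-expr: eyebrow-raise]"))
```
</preformat>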
        <p>
          This type of (annotated) data for paralinguistic
features constituting unspoken information may
contribute to the management of problematic
input in typical Data Mining and Sentiment
Analysis-Opinion Mining applications, especially
if the semantic content of a spoken utterance is
complemented or contradicted by a gesture, facial
expression, movement – or even by tone of voice.
Typical Data Mining and Sentiment
Analysis-Opinion Mining applications mostly rely on word
groups, word sequences and/or sentiment lexica
[
          <xref ref-type="bibr" rid="ref38">39</xref>
          ], including recent approaches with the use of
neural networks [
          <xref ref-type="bibr" rid="ref39">40</xref>
          ] [
          <xref ref-type="bibr" rid="ref40">41</xref>
          ] [
          <xref ref-type="bibr" rid="ref41">42</xref>
          ].
        </p>
        <p>As previously mentioned, with the present
approach, this type of language-specific data –
linguistic features and paralinguistic features- can
be used as seed data for Sentiment Analysis and
related applications. It can also be used as a
baseline for comparison and evaluation of multiple
user inputs, especially if the quality of the
crowd-sourced data is not guaranteed. The
language-specific (seed) data can also be integrated in HCI
applications intended for native or near-native
speakers of a particular natural language or for a
defined pair or set of languages.</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>5. Unspoken Non-Language Specific and Domain-Specific Information</title>
      <p>
        In the case of non-language-specific
information that is, however, domain-specific (B),
the data can be integrated in domain-specific
applications. For example, in Sentiment Analysis
applications for restaurant reviews, the
emotionally neutral words “bill” or “waiter” are
connected with the dimension-meaning of a
negative statement [
        <xref ref-type="bibr" rid="ref21">22</xref>
        ] with the “CONTEXT:
W” relation (Fig.6). In other words, a positive or
negative dimension may be automatically related
to a word, depending on context – a feature of
crucial importance in Sentiment Analysis.
Similarly, non-language-specific paralinguistic
features may influence the information content of a
spoken utterance in a broad range of interaction
types. These interaction types range from
task-specific dialogue and question-answer
interactions to interviews, political discussions
and spoken interaction concerning negotiation and
persuasion and/or expression of opinion.
      </p>
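<p>A minimal sketch of this domain-dependent lookup (the lexicon layout and function name are illustrative assumptions; the restaurant-review entries follow the cited observation):</p>
<preformat>
```python
# Sketch of the domain-specific "CONTEXT: W" dimension described above:
# emotionally neutral words acquire a positive or negative dimension
# depending on the domain. The restaurant-review entries follow the cited
# observation; the lexicon layout and function name are assumptions.

DOMAIN_CONTEXT = {
    "restaurant-reviews": {
        "waiter": "negative-statement",
        "bill": "negative-statement",
    },
}

def context_dimension(word, domain):
    """Return the domain-specific dimension linked via CONTEXT: W, if any."""
    return DOMAIN_CONTEXT.get(domain, {}).get(word)

print(context_dimension("waiter", "restaurant-reviews"))
# prints negative-statement
```
</preformat>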
      <p>
        Prosodic emphasis, change of tone of voice and
speaker/individual-specific paralinguistic features
can be inserted as additional information with the
“Context” relation, as in the case of
language-specific paralinguistic features presented in
previous research. The context-specific and
language-specific “W-LANG” CONTEXT and
“P-LANG” CONTEXT relations for linguistic
and paralinguistic information not uttered can be
integrated with non-language-specific/
domain-specific “W” CONTEXT and “P” CONTEXT
relations for linguistic and paralinguistic
information within a knowledge graph (Fig. 8).
All types of linguistic and/or paralinguistic
CONTEXT relations may co-occur within the
same speech segment, although this is not
considered common. Related approaches may
allow the automatic execution of such processes [
        <xref ref-type="bibr" rid="ref42">43</xref>
        ]; however, further research is required.
      </p>
    </sec>
    <sec id="sec-11">
      <title>6. Conclusions and Further Research</title>
      <p>
Crowd-sourced data resulted in new insights in
the analysis and processing of information not
uttered in spoken interaction [
        <xref ref-type="bibr" rid="ref43">44</xref>
        ] and its
integration in knowledge graphs, with its
subsequent use in vectors and other forms of
training data as dataset for training a neural
network for Natural Language Processing (NLP)
tasks. Insights from crowd-sourced data enabled a
differentiation between perceived linguistic and
paralinguistic information not uttered compatible
with language-specific and socio-cultural norms and
unspoken perceived information that is either
strictly circumstantial or strictly domain/context
dependent. This enables a differentiation between
circumstantial factors/evidence
(individual/context-specific or domain specific
for Sentiment Analysis/HCI) and
socio-culturally-biased factors/evidence in data analysis
and training data and its integration in knowledge
graphs (1). In the latter case,
language/socio-culturally-specific factors are more likely to
account for speaker-participant
psychology-mentality and sensitivities and for cases of
intended or unintended offense or bullying,
differentiating them from any random
occurrences /individual-specific peculiarities
(especially for paralinguistic features), thus,
contributing to “Socially Responsible AI”.
      </p>
      <p>As proposed, context-specific additional
dimensions of individual spoken words may be
described as a context-specific information
(atmo)“sphere” surrounding the spoken word. The
concrete meaning, i.e. the actual semantic content
of the word (retrievable and processable in Natural
Language Processing, NLP), is surrounded by two
context-specific layers, with its context-specific
and language-specific dimensions in the inner
layer of the “sphere” (A) and its context-specific
and non-language-specific dimensions in the outer
layer of the “sphere” (B). The outer layers of the
word (atmo)“sphere” demonstrate similarities to
the outer circles of the Plutchik Wheel of
Emotions, which contain complex emotions
recognizable within a (socio-culturally
determined) context, such as “contempt” and
“disapproval”. In contrast, concretely identifiable
emotions, including intense and universally
recognizable emotions such as “rage” and “grief”,
are located in the inner circles of the Plutchik
Wheel of Emotions and are typically easily
detected and processed by current practices in
Sentiment Analysis and Opinion Mining. In other
words, the proposed information (atmo)“sphere”
surrounding the spoken word mirrors the overall
shape and very general, basic features of the
Plutchik Wheel of Emotions (2).</p>
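      <p>As an illustration only, the layered word
(atmo)“sphere” described above can be sketched as a
simple data structure; the following minimal Python
sketch is hypothetical (class and field names are
assumptions, not part of the proposed framework):</p>
      <preformat>
```python
from dataclasses import dataclass, field

@dataclass
class WordSphere:
    """A spoken word surrounded by two context-specific layers.

    The split mirrors the Plutchik Wheel of Emotions: concrete core
    meaning (inner circles, e.g. "rage", "grief") versus complex,
    context-recognizable emotions (outer circles, e.g. "contempt").
    """
    word: str          # surface form, processable by standard NLP
    core_meaning: str  # concrete semantic content of the word
    inner_layer: dict = field(default_factory=dict)  # (A) context- and language-specific dimensions
    outer_layer: dict = field(default_factory=dict)  # (B) context-specific, non-language-specific dimensions

# Invented example: the outer layer carries a complex, outer-circle emotion
w = WordSphere(
    word="bravo",
    core_meaning="expression of approval",
    inner_layer={"register": "ironic in this speech community"},
    outer_layer={"complex_emotion": "contempt"},
)
print(w.outer_layer["complex_emotion"])  # contempt
```
      </preformat>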
      <p>The distinct types of integration of the
“Context” factor and related information in
knowledge graphs, as provided by crowd-sourced
data, outline the distinct types of implementation
for enriching models and refining NLP
tasks, especially when videos and multimodal
data are processed (3). In addition to their
integration in knowledge graphs, the pre-defined
words can also be used in an enhanced
“Bag-of-Words” approach (Seed Data) in strategies
and applications such as spoken Dialog Systems. In
the case of Dialog Systems and related HCI/HRI
applications, with the proposed processing
strategy, the mere utterance of a single word may
imply a complete phrase/sentence with
domain-specific (alternative types of) information (4).</p>
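      <p>The Seed-Data idea can be sketched under the
assumption of a hypothetical lexicon that maps a
single uttered word to the complete domain-specific
phrase it implies; all entries below are invented
for illustration:</p>
      <preformat>
```python
# Hypothetical seed lexicon for a travel-booking Dialog System:
# one uttered word stands in for a complete domain-specific request.
SEED_PHRASES = {
    "window": "The caller requests a window seat on the booked flight.",
    "vegetarian": "The caller requests a vegetarian meal option.",
    "refund": "The caller asks to cancel the booking and receive a refund.",
}

def expand(utterance: str) -> str:
    """Expand a single-word utterance into the implied full sentence."""
    return SEED_PHRASES.get(utterance.lower().strip(),
                            "No domain-specific expansion available.")

print(expand("Window"))  # The caller requests a window seat on the booked flight.
```
      </preformat>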
      <p>Since the present approach focuses on the data
preparation stage, targeting its contribution to
“Socially Responsible AI”, further research is
geared towards the extensive implementation,
evaluation (with quantitative evaluation
measurements) and improvement of the training
data created from the knowledge graphs, especially
for a wider range of languages and speakers.</p>
    </sec>
    <sec id="sec-12">
      <title>7. References</title>
      <p>[1] S. Mittal, A. Joshi, T. Finin, Thinking, Fast
and Slow: Combining Vector Spaces and
Knowledge Graphs (2017) URL:
arXiv:1708.03310v2 [cs.AI]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[2] M. Mountantonakis, Y. Tzitzikas, Knowledge Graph Embeddings over Hundreds of Linked Datasets, in: E. Garoufallou, F. Fallucchi, E. William De Luca (Eds.), Metadata and Semantic Research MTSR 2019, volume 1057 of Communications in Computer and Information Science, Springer, Cham, 2019, pp. 150-162. doi: 10.1007/978-3-030-36599-8_13</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[3] H. N. Tran, A. Takasu, Analyzing Knowledge Graph Embedding Methods from a Multi-Embedding Interaction Perspective, in: Proceedings of the 1st International Workshop on Data Science for Industry 4.0 (DSI4) at the EDBT/ICDT 2019 Joint Conference, 2019. URL: https://arxiv.org/abs/1903.11406</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[4] M. Wang, L. Qiu, A Survey on Knowledge Graph Embeddings for Link Prediction, Symmetry 13, 485 (2021). doi: 10.3390/sym13030485</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[5] Z. Ye, Y. J. Kumar, G. O. Sing, F. Song, J. Wang, A Comprehensive Survey of Graph Neural Networks for Knowledge Graphs, IEEE Access 10 (2022) 75729-75741. doi: 10.1109/ACCESS.2022.3191784</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[6] M. Hilbert, Toward a Synthesis of Cognitive Biases: How Noisy Information Processing Can Bias Human Decision Making, Psychological Bulletin 138(2) (2012) 211-237.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[7] R. Plutchik, A psychoevolutionary theory of emotions, Social Science Information 21 (1982) 529-553. doi: 10.1177/053901882021004003</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Basu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Soraghan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. Di</given-names>
            <surname>Caterina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Petropoulakis</surname>
          </string-name>
          ,
          <article-title>Human emotion recognition in video using subtraction preprocessing</article-title>
          ,
          <source>in: Proceedings of the 2019 11th International Conference on Machine Learning and Computing, Zhuhai China</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>374</fpage>
          -
          <lpage>379</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[9] S. Poria, E. Cambria, D. Hazarika, N. Mazumder, A. Zadeh, L.-P. Morency, Context-Dependent Sentiment Analysis in User-Generated Videos, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada, July 30 - August 4, 2017, pp. 873-883. doi: 10.18653/v1/P17-1081</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[10] A. Yakaew, M. Dailey, T. Racharak, Multimodal Sentiment Analysis on Video Streams using Lightweight Deep Neural Networks, in: Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2021), 2021, pp. 442-451. doi: 10.5220/0010304404420451</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[11] C. Alexandris, J. Du, V. Floros, Visualizing and Processing Information Not Uttered in Spoken Political and Journalistic Data: From Graphical Representations to Knowledge Graphs in an Interactive Application, in: M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies, volume 13303 of Lecture Notes in Computer Science, Springer, Cham, 2022, pp. 211-226. doi: 10.1007/978-3-031-05409-9_16</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[12] C. Alexandris, V. Floros, D. Mourouzidis, Graphic Representations of Spoken Interactions from Journalistic Data: Persuasion and Negotiations, in: M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies, volume 12764 of Lecture Notes in Computer Science, Springer, Cham, 2021, pp. 3-1. doi: 10.1007/978-3-030-78468-3_1</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mourouzidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Floros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Alexandris</surname>
          </string-name>
          ,
          <article-title>Generating Graphic Representations of Spoken Interactions from Journalistic Data</article-title>
          , in: M. Kurosu, (Ed.), volume
          <volume>11566</volume>
          of Lecture Notes in Computer Science LNCS, Springer, Basel,
          <year>2019</year>
          , pp.
          <fpage>559</fpage>
          -
          <lpage>570</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[14] I. Trofimova, Observer Bias: An Interaction of Temperament Traits with Biases in the Semantic Perception of Lexical Material, PLoS ONE 9(1): e85677 (2014).</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[15] C. Alexandris, D. Mourouzidis, V. Floros, Generating Graphic Representations of Spoken Interactions Revisited: The Tension Factor and Information Not Uttered in Journalistic Data, in: M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience, volume 12181 of Lecture Notes in Computer Science, Springer Nature, Switzerland, 2020, pp. 523-537. doi: 10.1007/978-3-030-49059-1_39</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[16] C. Alexandris, Measuring Cognitive Bias in Spoken Interaction and Conversation: Generating Visual Representations, in: Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI, Proceedings from the AAAI Spring Symposium, Stanford University, Technical Report SS-18-03, AAAI Press, Palo Alto, CA, 2018, pp. 204-206.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Marcu</surname>
          </string-name>
          ,
          <article-title>Discourse trees are good indicators of importance in text</article-title>
          , in: I. Mani, M. Maybury (Eds.),
          <source>Advances in Automatic Text Summarization</source>
          , The MIT Press, Cambridge, MA,
          <year>1999</year>
          , pp.
          <fpage>123</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.P.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <article-title>An oscillator model of the timing of turn taking</article-title>
          ,
          <source>Psychonomic Bulletin and Review</source>
          <volume>12</volume>
          (
          <issue>6</issue>
          ) (
          <year>2005</year>
          )
          <fpage>957</fpage>
          -
          <lpage>968</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L.</given-names>
            <surname>Carlson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marcu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Okurowski</surname>
          </string-name>
          ,
          <article-title>Building a Discourse-Tagged Corpus in the Framework of Rhetorical Structure Theory</article-title>
          ,
          <source>in: Proceedings of the 2nd SIGDIAL Workshop on Discourse and Dialogue, Eurospeech</source>
          <year>2001</year>
          , Denmark,
          <year>2001</year>
          . URL: https://aclanthology.org/W01-1605.pdf
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taboada</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. Das</surname>
          </string-name>
          ,
          <article-title>Annotation Guidelines for Rhetorical Structure</article-title>
          . Manuscript. University of Potsdam and Simon Fraser University,
          <year>March 2017</year>
          . URL: https://www.sfu.ca/~mtaboada/docs/research/RST_Annotation_Guidelines.pdf
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zeldes</surname>
          </string-name>
          , rstWeb
          <article-title>- A Browser-based Annotation Interface for Rhetorical Structure Theory and Discourse Relations, in: Proceedings of NAACL-HLT 2016 System Demonstrations</article-title>
          . San Diego, CA 2016, pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . URL: http://aclweb.org/anthology/N/N16/N16-3001.pdf
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Speech and Language Processing, an Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition</article-title>
          , 3rd ed. Draft. URL: https://web.stanford.edu/~jurafsky/slp3/ed3book_jan122022.pdf
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.D.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Asadi</surname>
          </string-name>
          , G. Zweig,
          <article-title>Hybrid Code Networks: practical and efficient endto-end dialog control with supervised and reinforcement learning</article-title>
          ,
          <source>in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</source>
          , Vancouver, Canada,
          <source>July 30 - August 4</source>
          ,
          <year>2017</year>
          ,
          <article-title>Association for Computational Linguistics (ACL</article-title>
          ),
          <year>2017</year>
          , pp.
          <fpage>665</fpage>
          -
          <lpage>677</lpage>
          . URL: https://aclanthology.org/P17-1062/
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Grice</surname>
          </string-name>
          ,
          <article-title>Studies in the Way of Words</article-title>
          . Harvard University Press, Cambridge, MA 1989.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H.P.</given-names>
            <surname>Grice</surname>
          </string-name>
          ,
          <article-title>Logic and conversation</article-title>
          , in: P. Cole, J. Morgan, (Eds.),
          <source>Syntax and Semantics</source>
          , volume
          <volume>3</volume>
          , Academic Press, New York (
          <year>1975</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>C.</given-names>
            <surname>Alexandris</surname>
          </string-name>
          ,
          <source>Issues in Multilingual Information Processing of Spoken Political and Journalistic Texts in the Media and Broadcast News</source>
          , Cambridge Scholars, Newcastle upon Tyne, UK,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[27] M. D. Rocklage, D. D. Rucker, L. F. Nordgren, Psychological Science 29(5) (2018) 749-760. doi: 10.1177/0956797617744797</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[28] N. J. Evans, D. Park, Rethinking the Persuasion Knowledge Model: Schematic Antecedents and Associative Outcomes of Persuasion Knowledge Activation for Covert Advertising, Journal of Current Issues &amp; Research in Advertising 36(2) (2015) 157-176. doi: 10.1080/10641734.2015.1023873</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[29] K. Shonk, 5 Types of Negotiation Skills, Program on Negotiation Daily Blog, Harvard Law School, May 14, 2020. URL: https://www.pon.harvard.edu/daily/negotiation-skills-daily/types-of-negotiation-skills/, last accessed 2022/12/11.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[30] C. Alexandris, English, German and the International “Semi-professional” Translator: A Morphological Approach to Implied Connotative Features, Journal of Language and Translation, Sejong University, Korea, 11(2) (2010) 7-46.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koller</surname>
          </string-name>
          , Der Begriff der Äquivalenz in der Übersetzungswissenschaft, in: C. FabriciusHansen, J. Ostbo (Eds.), Übertragung, Annährung, Angleichung,
          <source>Sieben Beiträge zu Theorie und Praxis des Übersetzens</source>
          , Peter Lang,
          <source>Frankfurt am Main</source>
          ,
          <year>2000</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Alexandris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mourouzidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Floros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Iliakis</surname>
          </string-name>
          ,
          <article-title>Controlling Interaction in Multilingual Conversation Revisited: A Perspective for Services and Interviews in Mandarin Chinese</article-title>
          , in: M.
          <string-name>
            <surname>Kurosu</surname>
          </string-name>
          (Ed.), volume
          <volume>10271</volume>
          of Lecture Notes in Computer Science LNCS, Springer-Verlag, Heidelberg, Germany,
          <year>2017</year>
          , pp.
          <fpage>573</fpage>
          -
          <lpage>583</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <article-title>A comparative analysis of the ambiguity resolution of two English-Chinese MT approaches: RBMT and SMT</article-title>
          , Dalian University of Technology Journal,
          <volume>31</volume>
          (
          <issue>3</issue>
          ) (
          <year>2010</year>
          )
          <fpage>114</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>B.</given-names>
            <surname>Paltridge</surname>
          </string-name>
          ,
          <source>Discourse Analysis: An Introduction</source>
          , Bloomsbury Publishing, London,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <article-title>Politeness in Chinese Face-to-Face Interaction</article-title>
          , in: volume
          <volume>67</volume>
          of Advances in Discourse Processes series, Elsevier Science, Amsterdam,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Z. W.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Aoyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ozeki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nakamura</surname>
          </string-name>
          ,
          <article-title>Capture, Recognition, and Visualization of Human Semantic Interactions in Meetings</article-title>
          ,
          <source>in: Proceedings of PerCom</source>
          , Mannheim, Germany,
          <year>2010</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hatim</surname>
          </string-name>
          ,
          <source>Communication Across Cultures: Translation Theory and Contrastive Text Linguistics</source>
          , University of Exeter Press, Exeter, UK,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wardhaugh</surname>
          </string-name>
          ,
          <source>An Introduction to Sociolinguistics</source>
          , 2nd ed., Blackwell, Oxford, UK,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <source>Sentiment Analysis and Opinion Mining</source>
          , Morgan &amp; Claypool, San Rafael, CA,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Arockiaraj</surname>
          </string-name>
          ,
          <article-title>Applications of Neural Networks in Data Mining</article-title>
          ,
          <source>International Journal of Engineering and Science</source>
          , volume
          <volume>3</volume>
          , Issue 1 (May
          <year>2013</year>
          )
          <fpage>8</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Hedderich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Klakow</surname>
          </string-name>
          ,
          <article-title>Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data</article-title>
          ,
          <source>in: Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP</source>
          , Melbourne, Australia, Association for Computational Linguistics-ACL,
          <year>2018</year>
          , pp.
          <fpage>12</fpage>
          -
          <lpage>18</lpage>
          . https://aclanthology.org/W18-3402/
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kopru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-D.</given-names>
            <surname>Ruvini</surname>
          </string-name>
          ,
          <article-title>Neural Network based Extreme Classification and Similarity Models for Product Matching</article-title>
          ,
          <source>in: Proceedings of NAACL-HLT</source>
          <year>2018</year>
          , New Orleans, Louisiana, June 1-6, 2018, Association for Computational Linguistics-ACL
          , pp.
          <fpage>8</fpage>
          -
          <lpage>15</lpage>
          . URL: https://aclanthology.org/N18-3002/
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>N.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. L.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Faloutsos</surname>
          </string-name>
          ,
          <article-title>Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks</article-title>
          ,
          <source>in: Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '19), August 4-8</source>
          ,
          <year>2019</year>
          , Anchorage, AK, USA, ACM, New York, NY, USA,
          <year>2019</year>
          . doi: 10.1145/3292500.3330855
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>C.</given-names>
            <surname>Alexandris</surname>
          </string-name>
          ,
          <article-title>Evaluating Cognitive Bias in Two-Party and Multi-Party Spoken Interactions</article-title>
          ,
          <source>in: Proceedings of Interpretable AI for Well-being: Understanding Cognitive Bias and Social Embeddedness (IAW 2019) in conjunction with AAAI Spring Symposium (SS-19-03)</source>
          , Stanford University, Palo Alto, CA,
          <year>2019</year>
          . URL: http://ceur-ws.org/Vol-2448
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>