<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The SEEMPAD Dataset for Emphatic and Persuasive Argumentation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elena Cabrio</string-name>
          <email>elena.cabrio@unice.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serena Villata</string-name>
          <email>villata@i3s.unice.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Université Côte d'Azur</institution>
          ,
          <addr-line>Inria, CNRS, I3S</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Emotions play an important role in argumentation, as humans mix rational and emotional attitudes when they argue with each other to take decisions. The SEEMPAD project aims at investigating the role of emotions in human argumentation. In this paper, we present a resource resulting from two field experiments involving humans in debates, where the arguments put forward during such debates are annotated with the emotions felt by the participants. In addition, in the second experiment, one of the debaters plays the role of the persuader, aiming to convince the other participants of the goodness of her viewpoint by applying different persuasion strategies. To the best of our knowledge, this is the first dataset of arguments annotated with the emotions collected from the participants of a debate using facial emotion recognition tools.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Emotions play an important role in argumentation, as human beings combine rational attitudes with purely emotional ones when they discuss with each other in order to make decisions. The SEEMPAD project aims to study the role of emotions in human argumentation. In this paper, we present a resource obtained through two empirical experiments involving people in debates. The arguments put forward during these debates are annotated with the emotions felt by the participants at the moment the argument is proposed in the discussion. Moreover, during the second experiment, one of the participants plays the role of persuader, in order to convince the other participants of the goodness of her point of view by applying different persuasion strategies. This resource is unique of its kind, and it is the only one containing arguments annotated with the emotions felt by the participants during a debate (emotions recorded through automatic facial emotion recognition tools).</p>
    </sec>
    <sec id="sec-2">
      <title>1 Introduction</title>
      <p>
        Argumentation in Artificial Intelligence (AI) is
defined as a formal framework to support decision
making
        <xref ref-type="bibr" rid="ref2 ref8">(Rahwan and Simari, 2009; Atkinson et
al., 2017)</xref>
        . In this context, argumentation is used to achieve so-called critical thinking. However, humans have been shown to behave differently, as they mix rational and emotional attitudes.
      </p>
      <p>
        In order to study the role emotions play in
argumentation, we proposed an empirical evaluation of
the connection between argumentation and
emotions in online debate interactions
        <xref ref-type="bibr" rid="ref10 ref11 ref4">(Villata et al.,
2017; Villata et al., 2018)</xref>
        . In particular, in the context of the SEEMPAD project (https://project.inria.fr/seempad/), we designed a field experiment
        <xref ref-type="bibr" rid="ref10">(Villata et al., 2017)</xref>
        with
human participants which investigates the
correspondences between the arguments and their relations
(i.e., support and attack) put forward during a
debate, and the emotions detected by facial
emotion recognition systems in the debaters. In
addition, given the importance of persuasion in
argumentation, we also designed a second field
experiment
        <xref ref-type="bibr" rid="ref11 ref4">(Villata et al., 2018)</xref>
        to study the correlation between the arguments, the relations between them, the emotions detected on the participants, and one of the classical persuasion strategies proposed by Aristotle in rhetoric (i.e., logos, ethos, and pathos), played by some participants in the debate to convince the others. In our studies, we selected a behavioral method to extract the emotional manifestations. We used a set of webcams (one for each participant in the discussion) whose recordings were analyzed with the FaceReader software (https://www.noldus.com/human-behavior-research/products/facereader) to detect a set of discrete emotions from facial expressions (i.e., happiness, anger, fear, sadness, disgust, and surprise). Participants were placed far from each other, and they debated through a purely text-based online chat (IRC). As a post-processing phase, we aligned the textual arguments the debaters proposed in the chat with the emotions the debaters were feeling while these arguments were being put forward in the debate.
      </p>
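      <p>The timestamp alignment described above can be sketched as follows. This is a minimal illustration in Python; the field names and the shape of the emotion streams are hypothetical, not the actual chat-log or FaceReader export formats.</p>

```python
from bisect import bisect_right

# Illustrative sketch of the post-processing alignment: for each chat
# argument (with an "HH:MM" timestamp), look up the dominant emotion each
# participant was showing at that moment. The emotion streams below are a
# hypothetical representation, not the actual FaceReader output format.

def to_minutes(hhmm: str) -> int:
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def emotion_at(stream, minute):
    """stream: list of (minute, emotion) pairs sorted by minute."""
    times = [t for t, _ in stream]
    i = bisect_right(times, minute) - 1
    return stream[i][1] if i >= 0 else "neutral"

def align(arguments, emotion_streams):
    """Attach every participant's current emotion to each argument."""
    annotated = []
    for arg in arguments:
        minute = to_minutes(arg["time-from"])
        emotions = {p: emotion_at(s, minute) for p, s in emotion_streams.items()}
        annotated.append({**arg, "emotions": emotions})
    return annotated

args = [{"id": 31, "participant": 1, "time-from": "20:43", "text": "..."}]
streams = {1: [(0, "neutral"), (to_minutes("20:40"), "angry")],
           2: [(0, "neutral")]}
print(align(args, streams)[0]["emotions"])  # {1: 'angry', 2: 'neutral'}
```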
      <p>
        In this paper, we describe the two annotated
resources resulting from this post-processing of the
data we collected in our two field experiments.
Our resource, called the SEEMPAD resource, is
composed of two different annotated datasets, one
for each of these experiments (available at http://project.inria.fr/seempad/datasets/). The datasets
collect all the arguments put forward during the
debates. These arguments have been paired by
attack and support relations, as in standard
Argument Mining annotations
        <xref ref-type="bibr" rid="ref1 ref11 ref4 ref5 ref6 ref7">(Cabrio and Villata,
2018; Lippi and Torroni, 2016)</xref>
        . Moreover,
arguments are annotated with the source of the
argument, and the emotional status captured from all
the participants, when the arguments are put
forward in the debate.
      </p>
      <p>To the best of our knowledge, this is the first
argumentation dataset annotated with the emotions
captured from the output of facial emotion
recognition tools. In addition, this resource can be
used both for argument mining tasks (e.g., relation
prediction) and for emotion classification in text,
where instances of text annotated with the
emotions detected on the participants are usually not
available. Finally, text-based emotion
classification would benefit from the different annotation
layers that are present in our dataset.</p>
      <p>In the remainder of the paper, Sections 2 and 3
describe the datasets resulting from the two field
experiments. Conclusions end the paper.</p>
    </sec>
    <sec id="sec-3">
      <title>2 Dataset of argument pairs associated with the speaker’s emotional status</title>
      <p>
        This section describes the dataset of textual
arguments we have created from the debates among the
participants in Experiment 1
        <xref ref-type="bibr" rid="ref10">(Villata et al., 2017)</xref>
        .
The dataset is composed of four main layers: (i)
the basic annotation of the arguments proposed in
each debate (i.e., the annotation in xml of the
debate flow downloaded from the debate platform);
(ii) the annotation of the relations of support and
attack among the arguments; (iii) starting from the
basic annotation of the arguments, the annotation
of each argument with the emotions felt by each
participant involved in the debate; and (iv) starting
from the basic annotation, the opinion of each
participant about the debated topic at the beginning,
in the middle, and at the end of the debate is extracted
and annotated with its polarity.
      </p>
      <p>The basic dataset is composed of 598 different
arguments proposed by the participants in 12
different debates. The debated issues and the number
of arguments for each debate are reported in
Table 1. We selected the topics of the debates among
the set of popular discussions addressed in online
debate platforms like iDebate (http://idebate.org/) and DebateGraph (http://www.debategraph.org/).</p>
      <p>
        In the dataset, each argument proposed in the
debate is annotated with an id, the participant
putting this argument on the table, and the time
interval in which the argument has been proposed.
(When the argument was put forward by the debater
in one single utterance, the two time instants, i.e.,
time-from and time-to, coincide; we used a time interval
only when the argument was composed of several separate
utterances put forward in the chat across some minutes.)
Note that we annotated as an argument each utterance
proposed by the participants in the debate; we therefore
did not need to define guidelines to distinguish
arguments or their components in the debate, as is
usually done in the Argument Mining field
        <xref ref-type="bibr" rid="ref11 ref4 ref5">(Cabrio and Villata, 2018)</xref>
        . Then,
argument pairs have been annotated with the
relation holding between them, i.e., support or attack.
For each debate we apply the following procedure,
validated in
        <xref ref-type="bibr" rid="ref3">(Cabrio and Villata, 2013)</xref>
        :
1. the main issue (i.e., the issue of the debate
proposed by the moderator) is considered as
the starting argument;
2. each opinion is extracted and considered as
an argument;
3. since attack and support are binary relations,
the arguments are coupled with:
(a) the starting argument, or
(b) other arguments in the same discussion
to which the most recent argument refers
(e.g., when an argument proposed by a
certain user supports or attacks an argument
previously expressed by another user);
4. the resulting pairs of arguments are then
tagged with the appropriate relation, i.e.,
attack or support.</p>
      <p>To show a step-by-step application of the
procedure, let us consider the debated issue Ban
Animal Testing. At step 1, we consider the issue
of the debate proposed by the moderator as the
starting argument (a):
(a) The topic of the first debate is that animal
testing should be banned.</p>
      <p>Then, at step 2, we extract all the users’ opinions
concerning this issue (both pro and con), e.g., (b),
(c), (d) and (e):
(b) I don’t think the animal testing should be
banned, but researchers should reduce the pain to
the animal.
(c) I totally agree with that.
(d) I think that using animals for different kind of
experience is the only way to test the accuracy of
the method or drugs. I cannot see any difference
between using animals for this kind of purpose
and eating their meat.
(e) Animals are not able to express the result of
the medical treatment but humans can.</p>
      <p>At step 3a we couple the arguments (b) and
(d) with the starting issue since they are directly
linked with it, and at step 3b we couple argument
(c) with argument (b), and argument (e) with
argument (d) since they follow one another in the
discussion. At step 4, the resulting pairs of arguments
are then tagged by one annotator with the
appropriate relation, i.e.: (b) attacks (a), (d) attacks (a),
(c) supports (b) and (e) attacks (d). The reader
may argue about the existence of a relation (i.e., a
support) between (c) and (d), where (d) supports
(c). However, in this case, no relation holds as
argument (d) does not really support argument (c),
which basically shares the same semantic content
of argument (b). Therefore, as no relation holds
between (b) and (d), no relation holds either
between (c) and (d). We decided to not annotate the
supports/attacks between arguments proposed by
the same participant (e.g., situations where
participants are contradicting themselves). Note that this
does not mean that we assume that such situations
do not arise: no restriction was imposed to the
participants of the debates, so situations where a
participant attacked/supported her own arguments are
represented in our dataset. The same annotation
task has also been independently carried out by a
second annotator on a sample of 100 pairs
(randomly extracted), obtaining an IAA of κ = 0.82.
The IAA is computed on the assignment of the
label “support” or “attack” to the same set of pairs
provided to the two annotators.</p>
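      <p>A minimal sketch of such an IAA computation, assuming the standard Cohen's kappa formulation over the two annotators' support/attack labels (the labels below are toy data, not the actual 100-pair sample):</p>

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance, given each annotator's label distribution.

def cohen_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: the annotators disagree on one pair out of five.
a = ["support", "attack", "attack", "support", "attack"]
b = ["support", "attack", "support", "support", "attack"]
print(round(cohen_kappa(a, b), 3))  # 0.615
```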
      <p>[Table 1: number of arguments and argument pairs for each debated topic. Topics: BAN ANIMAL TESTING; GO NUCLEAR; HOUSEWIVES SHOULD BE PAID; RELIGION DOES MORE HARM THAN GOOD; ADVERTISING IS HARMFUL; BULLIES ARE LEGALLY RESPONSIBLE; DISTRIBUTE CONDOMS IN SCHOOLS; ENCOURAGE FEWER PEOPLE TO GO TO THE UNIVERSITY; FEAR GOVERNMENT POWER OVER INTERNET; BAN PARTIAL BIRTH ABORTIONS; USE RACIAL PROFILING FOR AIRPORT SECURITY; CANNABIS SHOULD BE LEGALIZED; TOTAL.]</p>
      <p>Table 1 reports on the number of arguments and
pairs we extracted applying the methodology
described before. In total, our dataset contains 598
different arguments and 263 argument pairs (127
expressing the support relation and 136 the attack
relation among the involved arguments).</p>
      <p>
        The dataset resulting from such annotation adds
to all previously annotated information (i.e.,
argument id, the argument’s owner, argument’s
relations with the other arguments (attack, support)),
the dominant emotion detected using the
FaceReader system for each participant in the debate.
We investigate the correlation between arguments
and emotions in the debates, and a data analysis
has been performed to determine the proportions
of emotions for all participants. For more details
about the correlation between emotions and
arguments, we refer the interested reader to
        <xref ref-type="bibr" rid="ref10">(Villata et
al., 2017)</xref>
        .
      </p>
      <p>An example, from the debate about the topic
“Religion does more harm than good”, where
arguments are annotated with emotions, is as follows:
&lt;argument id="30" debate_id="4" participant="4" time-from="20:43" time-to="20:43" emotion_p1="neutral" emotion_p2="neutral" emotion_p3="neutral" emotion_p4="neutral"&gt;Indeed but there exist some advocates of the devil like Bernard Levi who is decomposing arabic countries.&lt;/argument&gt;
&lt;argument id="31" debate_id="4" participant="1" time-from="20:43" time-to="20:43" emotion_p1="angry" emotion_p2="neutral" emotion_p3="angry" emotion_p4="disgusted"&gt;I don’t totally agree with you Participant2: science and religion don’t explain each other, they tend to explain the world but in two different ways.&lt;/argument&gt;</p>
      <p>In this example, the argument “I don’t totally
agree with you Participant2: science and religion
don’t explain each other, they tend to explain the
world but in two different ways.” is proposed by
Participant 1 in the debate, and the emotions
detected when this argument was put forward in the
chat are anger for Participant 1 and Participant 3,
neutrality for Participant 2, and disgust for
Participant 4.</p>
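      <p>Records like the one above can be read with a standard XML parser. The sketch below mirrors the example, collecting the emotion_p* attributes into a dictionary keyed by participant:</p>

```python
import xml.etree.ElementTree as ET

# Parse one emotion-annotated argument; the snippet mirrors the example
# from the "Religion does more harm than good" debate.

xml = '''<argument id="31" debate_id="4" participant="1"
  time-from="20:43" time-to="20:43"
  emotion_p1="angry" emotion_p2="neutral"
  emotion_p3="angry" emotion_p4="disgusted">I don't totally agree with
you Participant2: science and religion don't explain each other, they
tend to explain the world but in two different ways.</argument>'''

arg = ET.fromstring(xml)
# Keep only the emotion_p* attributes, stripping the prefix.
emotions = {k.replace("emotion_", ""): v
            for k, v in arg.attrib.items() if k.startswith("emotion_")}
print(arg.attrib["participant"], emotions)
# 1 {'p1': 'angry', 'p2': 'neutral', 'p3': 'angry', 'p4': 'disgusted'}
```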
      <p>Finally, as an additional annotation layer, for
each participant we have selected one argument
at the beginning of the debate, one argument in
the middle of the discussion, and one argument at
the end of the debate. These arguments are then
annotated by the annotators with their sentiment
classification with respect to the issue of the
debate: negative, positive, or undecided. The
negative sentiment is assigned to an argument when the
opinion expressed in such argument is against the
debated topic, while the positive sentiment label is
assigned when the argument expresses a viewpoint
that is in favor of the debated issue. The undecided
sentiment is assigned when the argument does not
express a precise opinion in favor or against the
debated topic. Selected arguments are evaluated
as the most representative arguments proposed by
each participant to convey her own opinion, in the
three distinct moments of the debate. The
rationale is that this annotation allows one to easily detect
when a participant has changed her mind with
respect to the debated topic. An example is provided
below, where Participant 4 starts the debate
undecided and then becomes positive about
banning partial birth abortions in the middle and at the
end of the debate:
&lt;arg id="5" participant="4" time-from=
"20:36" time-to="20:36"
polarity="undecided"&gt;Description’s gruesome but does the
fetus fully lives at that point and
therefore, conscious of something ? Hard to
answer. If yes, I might have an
hesitation to accept it. If not, the woman is
probably mature enough to judge.
&lt;/argument&gt;
&lt;arg id="24" participant="4" time-from=
"20:46" time-to="20:46"
polarity="positive"&gt;In the animal world, malformed or
sick babies are systematically abandoned.
&lt;/argument&gt;
&lt;arg id="38" participant="4" time-from=
"20:52" time-to="20:52"
polarity="positive"&gt;Abortion is legal and it doesn’t
matter much when and how. It’s an individual
choice for whatever reason it might be.
&lt;/argument&gt;</p>
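      <p>A minimal sketch of how this three-point polarity layer supports mind-change detection; the helper below and its decision rule are illustrative, not part of the resource:</p>

```python
# Compare a participant's polarity at the beginning, middle, and end of
# the debate (labels as in the dataset: "positive", "negative",
# "undecided"). The rule used here is an illustrative assumption: a mind
# change is either a switch between decided stances, or a move from
# undecided to a decided stance.

def mind_changed(trajectory):
    """trajectory: [begin, middle, end] polarity labels for one participant."""
    decided = [p for p in trajectory if p != "undecided"]
    return len(set(decided)) > 1 or (
        trajectory[0] == "undecided" and len(decided) > 0)

# Participant 4 above: undecided -> positive -> positive.
print(mind_changed(["undecided", "positive", "positive"]))  # True
```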
    </sec>
    <sec id="sec-4">
      <title>3 Dataset of arguments biased by persuasive strategies</title>
      <p>
        We now describe the corpus of textual
arguments, about other discussion topics, collected
during Experiment 2
        <xref ref-type="bibr" rid="ref11 ref4">(Villata et al., 2018)</xref>
        , in
which, together with the participants of the
experiment, a persuader (PP) was involved to convince
the other participants about the goodness of her
viewpoint, applying different persuasion
strategies. Since Aristotle, three kinds of argumentative
persuasion have been distinguished: Ethos, Logos, and Pathos
        <xref ref-type="bibr" rid="ref1 ref12 ref9">(Ross
and Roberts, 2010; Walton, 2007; Allwood, 2016)</xref>
        .
      </p>
      <p>Ethos deals with the character of the speaker,
whose intent is to appear credible. The main
influencing factors for Ethos encompass elements such
as vocabulary, and social aspects like rank or
popularity. Additionally, speakers can use
statements to position themselves and to reveal social
hierarchies. Logos is the appeal to logical reason: the
speaker wants to present an argument that appears
sound to the audience. For the
argumentation, the focus of interest is on the arguments,
the argument schemes, the different forms of proof,
and the reasoning. Pathos encompasses the
emotional influence on the audience. If the goal of
argumentation is to persuade the audience, then it
is necessary to put the audience in the appropriate
emotional state. The public speaker has several
possibilities to awaken emotions in the audience,
like techniques and presentation styles (e.g.,
storytelling), reducing the ability of the audience to
be critical or to reason (for more details, refer to
the work of K. Budzynska). It is worth noticing that
the persuasive strategies are not always mutually
exclusive in real-world scenarios; however, for the
sake of simplicity, we consider in this paper that
when one of the strategies is applied the others do
not hold. In addition to a persuasion strategy, the
persuader participated in the debate with a
precise stance (pro or con) with respect to the debated
issue. Such stance does not change during the
debate.</p>
      <p>Each argument is annotated with the following
elements: debate identifier, argument identifier,
participant, and time in which it has been
published. For each debate, pairs have been created
following the same methodology described in
Section 2, and all the relations of attack and support
between the arguments proposed by the persuader
and those of the other participants are annotated.</p>
      <p>In this way, we are able to investigate the reactions
to the PP’s strategy by tracking the proposed arguments
in the debate and the mental engagement index of
the other participants. An example of the Ethos
strategy used against gun rights is the following:
&lt;arg id="16" debate_id="8" participant="5" time="19:46:41"&gt;I’ve been working in the educational field in USA, and there is nothing worse than a kid talking about the gun of his father. As you cannot say "the right to carry guns is for people without a kid only". The gun is no right at all.&lt;/argument&gt;</p>
      <p>Table 2 describes this second dataset. Ten topics
of debate were selected from highly debated ones
in the mentioned online debate platforms, to avoid
proposing topics of no interest for the participants.</p>
      <p>In total, 791 arguments and 162 argument pairs
(74 linked by an attack relation and 88 by a
support one) were collected and annotated. The
number of proposed arguments varies a lot depending
on the participants (some were more active, others
proposed very few arguments even if solicited), as
does the number of attacks/supports between the
arguments. We computed the IAA for the relation
annotation task on 1/3 of the pairs of the dataset
(54 randomly extracted pairs), obtaining κ = 0.83.</p>
    </sec>
    <sec id="sec-5">
      <title>4 Conclusions</title>
      <p>
        This paper presented the SEEMPAD resource for
emphatic and persuasive argumentation. These
datasets have been built on the data resulting from
two field experiments on humans to assess the
impact of emotions during argumentation in
online debates. Several Natural Language
Processing tasks can be envisaged on this dataset.
First of all, given that the dataset resulting from
Experiment 1 is a gold standard of arguments
annotated with emotions, systems for emotion
classification can use it as a benchmark for
evaluation. In addition, a comparison of systems’
performances on this data with standard
datasets for emotion classification would be
interesting, given that in SEEMPAD emotions have not
been manually annotated but have been captured
from the participants’ facial expressions. Second,
the dataset from Experiment 2 can be used to
address a new task in argument mining, namely
persuasive strategy detection, in line with
the work of
        <xref ref-type="bibr" rid="ref4 ref5">(Duthie and Budzynska, 2018)</xref>
        and
        <xref ref-type="bibr" rid="ref1 ref6 ref7">(Habernal and Gurevych, 2016)</xref>
        .
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>Jens</given-names>
            <surname>Allwood</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Argumentation, activity and culture</article-title>
          .
          <source>In Proceedings of COMMA</source>
          <year>2016</year>
          ,
          <article-title>page 3</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Katie</given-names>
            <surname>Atkinson</surname>
          </string-name>
          , Pietro Baroni, Massimiliano Giacomin, Anthony Hunter, Henry Prakken, Chris Reed, Guillermo Simari, Matthias Thimm, and
          <string-name>
            <given-names>Serena</given-names>
            <surname>Villata</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Towards artificial argumentation</article-title>
          .
          <source>AI Magazine</source>
          ,
          <volume>38</volume>
          (
          <issue>3</issue>
          ):
          <fpage>25</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Elena</given-names>
            <surname>Cabrio</surname>
          </string-name>
          and
          <string-name>
            <given-names>Serena</given-names>
            <surname>Villata</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>A natural language bipolar argumentation approach to support users in online debate interactions</article-title>
          .
          <source>Argument &amp; Computation</source>
          ,
          <volume>4</volume>
          (
          <issue>3</issue>
          ):
          <fpage>209</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Elena</given-names>
            <surname>Cabrio</surname>
          </string-name>
          and
          <string-name>
            <given-names>Serena</given-names>
            <surname>Villata</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Five years of argument mining: a data-driven analysis</article-title>
          .
          <source>In Proc. of IJCAI</source>
          <year>2018</year>
          , pages
          <fpage>5427</fpage>
          -
          <lpage>5433</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Rory</given-names>
            <surname>Duthie</surname>
          </string-name>
          and
          <string-name>
            <given-names>Katarzyna</given-names>
            <surname>Budzynska</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>A deep modular RNN approach for ethos mining</article-title>
          .
          <source>In Proc. of IJCAI</source>
          <year>2018</year>
          , pages
          <fpage>4041</fpage>
          -
          <lpage>4047</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Ivan</given-names>
            <surname>Habernal</surname>
          </string-name>
          and
          <string-name>
            <given-names>Iryna</given-names>
            <surname>Gurevych</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional LSTM</article-title>
          .
          <source>In Proc. of ACL</source>
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Marco</given-names>
            <surname>Lippi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Torroni</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Argumentation mining: State of the art and emerging trends</article-title>
          .
          <source>ACM Trans. Internet Techn</source>
          .,
          <volume>16</volume>
          (
          <issue>2</issue>
          ):
          <volume>10</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          :
          <fpage>25</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Iyad</given-names>
            <surname>Rahwan</surname>
          </string-name>
          and
          <string-name>
            <given-names>Guillermo R.</given-names>
            <surname>Simari</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <source>Argumentation in Artificial Intelligence</source>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>W.D.</given-names>
            <surname>Ross</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.R.</given-names>
            <surname>Roberts</surname>
          </string-name>
          .
          <year>2010</year>
          . Rhetoric - Aristotle. Cosimo Classics Philosophy.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Serena</given-names>
            <surname>Villata</surname>
          </string-name>
          , Elena Cabrio, Imène Jraidi, Sahbi Benlamine, Maher Chaouachi, Claude Frasson, and
          <string-name>
            <given-names>Fabien</given-names>
            <surname>Gandon</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Emotions and personality traits in argumentation: An empirical evaluation</article-title>
          .
          <source>Argument &amp; Computation</source>
          ,
          <volume>8</volume>
          (
          <issue>1</issue>
          ):
          <fpage>61</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>Serena</given-names>
            <surname>Villata</surname>
          </string-name>
          , Sahbi Benlamine, Elena Cabrio, Claude Frasson, and
          <string-name>
            <given-names>Fabien</given-names>
            <surname>Gandon</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Assessing persuasion in argumentation through emotions and mental states</article-title>
          .
          <source>In Proc. of FLAIRS</source>
          <year>2018</year>
          , pages
          <fpage>134</fpage>
          -
          <lpage>139</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Douglas N.</given-names>
            <surname>Walton</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Media argumentation - dialect, persuasion and rhetoric</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>