<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Natural Language Generation of Formal Argumentation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Federico Cerutti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alice Toniolo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Timothy J. Norman</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Brescia, Dipartimento di Ingegneria dell'Informazione, Brescia</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Southampton Department of Electronics and Computer Science Southampton</institution>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of St. Andrews School of Computer Science St. Andrews</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <fpage>15</fpage>
      <lpage>29</lpage>
      <abstract>
        <p>In this paper we provide a first analysis of the research questions that arise when dealing with the problem of communicating pieces of formal argumentation through natural language interfaces. It is a generally held opinion that formal models of argumentation naturally capture human argument, and some preliminary studies have focused on justifying this view. Unfortunately, the results are not only inconclusive, but seem to suggest that explaining formal argumentation to humans is a rather complex task. Graphical models for expressing argumentation-based reasoning are appealing, but often humans require significant training to use these tools effectively. We claim that natural language interfaces to formal argumentation systems offer a real alternative, and may be the way forward for systems that capture human argument.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Our aim here is to explore the challenges that we need to face when thinking about
natural language interfaces to formal argumentation. Dung [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] states that formal
argumentation “captures naturally the way humans argue to justify their solutions to many
social problems.” This is one of the most common claims used to support research in
formal argumentation. More recently there have been a number of empirical studies to
investigate this claim [
        <xref ref-type="bibr" rid="ref10 ref24 ref26 ref27">24, 10, 26, 27</xref>
        ]. The results, however, have been far from
conclusive.
      </p>
      <p>
        The use of graphical models to represent arguments is the most common approach
used in the formal argumentation community to capture argument structures. This has
been successfully applied in a number of real world situations: Laronge [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] (an
American barrister and researcher), for example, describes how he used argumentation graphs
during trials. Despite this, our claim is that producing and consuming a graphical
representation of a structure of arguments requires significant levels of training.
      </p>
      <p>Instead of training users on another (graphical) language for representing argument
structures, we can leverage our societal model, through which we are trained in reading
and writing; that is, using natural language. We claim that natural language
representations of formal argumentation are the way forward to develop formal models that
capture human argumentation. In this paper we investigate one aspect of natural
language interfaces to formal argumentation: moving from formal arguments to natural
language text by exploiting Natural Language Generation (NLG) systems.</p>
      <p>
        In CISpaces.org [
        <xref ref-type="bibr" rid="ref8 ref9">9, 8</xref>
        ] we followed a rather pragmatic approach. Indeed, we
implemented: (1) a template-based NLG system; (2) a greedy, heuristics-based approach for
chaining together premises and conclusion of arguments; (3) an assert-justify writing
style suitable for speed reading.
      </p>
      <p>
        In this paper, we go beyond the current simple template-based NLG system
implemented in CISpaces.org, and we ground our investigation on an existing example
(Section 2) from collaboration between the BBC and the Dundee argumentation group
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], namely an excerpt of the BBC Radio 4 Moral Maze programme from 22nd June
2012. In this way the reader can always relate to the original piece of text from where
our investigation started. The excerpt has been already formalised into an argument
network (i.e. a graph linking together different pieces of information together to display the
web of arguments exchanged). In Figure 1 and in Section 3 we review all the necessary
elements for our investigation: the notion of argumentation schemes; how to represent
argument networks in the Argument Interchange Format (AIF); a (simple) approach to
structured argumentation to build arguments and approximate arguments from an AIF
argument network; Dung’s theory of argumentation; and basic elements of NLG.
      </p>
      <p>In NLG one of the most difficult tasks is to determine the communicative goal; i.e.,
deciding what we would like to communicate. Therefore, Section 4 is entirely dedicated
to investigating relevant communicative goals in the context of formal argumentation.</p>
      <p>The result we wish to achieve in this paper is a blueprint that outlines the
complex research questions and their dense interconnections that our community needs to
address in order to identify models that naturally capture human argumentation.
</p>
    </sec>
    <sec id="sec-2">
      <title>Running Example</title>
      <p>On 22nd June 2012, in the middle of the European debt crisis, the Moral Maze program
on BBC Radio 4 addressed the topic of individual and national debt. Among others,
Nick Dearden, director of the Jubilee Debt Campaign, and Claire Fox, from the Institute
of Ideas, were ‘witnesses’ (contributors offering a specific point of view in the debate)
during the program.</p>
      <p>
        What follows is an excerpt that has been analysed to identify the argument network
in the dialogue and made available at http://aifdb.org/argview/1724 [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
In this paper we focus on the sub-part of the argument network depicted in Figure 1
and we highlight in the text the elements that contribute to the generation of such an
argument network.
      </p>
      <p>CLAIRE FOX: I understand that. I suppose my concern is just this: I want the
freedom to be able to write off debts but — I’m sure you recognise this — there is
this sort of sense amongst a lot of young people, who just think, “I want that,
so I’ll have that now. Thank you.” And so, if you want the moral hazard,
instead of kind of just going on about the bankers, is there not a danger that if we
just said we’d write off debt, that it actually isn’t very helpful for our side, for
ordinary people, to actually have that? [T1] There’s no discipline there [T2]. In
some ways you need that discipline, don’t you, to be a saver, to think, “I won’t
get into debt?” [T3]
NICK DEARDEN: In some ways I agree with you. If you want the economy to run
smoothly, you have to incentivise certain types of behaviour. [T4] So, for
example in South Korea, in terms of how South Korean grew, it did incentivise
saving, at certain times, by certain economic policies [T5]. On the other hand, I
think what people don’t realise, or only half realise, is the fact that we have
actually written off massive amounts of debt [T6]. But it certainly isn’t the debts
of the people who most need it in society [T7].
</p>
    </sec>
    <sec id="sec-3">
      <title>Background</title>
      <sec id="sec-3-1">
        <title>Argumentation Schemes</title>
        <p>
          Argumentation schemes [
          <xref ref-type="bibr" rid="ref33 ref34">33, 34</xref>
          ] are abstract reasoning patterns commonly used in
everyday conversational argumentation, legal, scientific argumentation, etc. Schemes have
been derived from empirical studies of human argument and debate. They can capture
traditional deductive and inductive approaches as well as plausible reasoning. Each
scheme has a set of critical questions that represents standard ways of critically probing
into an argument to find aspects of it that are open to criticism.
        </p>
        <p>For instance, in the dialogue reported in Section 2, part of Nick’s position can be
mapped into an argument from example that has the following structure:
Premise: In this particular case, individual a has property F and also property G.
Conclusion: Therefore, generally, if x has property F, then it also has property G.</p>
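        <p>For illustration, such a scheme can be encoded as data and instantiated through templates. The following Python sketch (the encoding, the function names, and the example bindings are our own, purely illustrative) does so for the argument from example and one of its critical questions:</p>

```python
# A sketch (our own encoding) of an argumentation scheme as data: the
# argument-from-example pattern together with one critical question.
SCHEME = {
    "name": "Argument from Example",
    "premise": "In this particular case, {a} has property {F} and also property {G}.",
    "conclusion": "Therefore, generally, if {x} has property {F}, then it also has property {G}.",
    "critical_questions": [
        "Is the proposition claimed in the premise in fact true?",
    ],
}

def instantiate(scheme, **bindings):
    """Fill the premise and conclusion templates with concrete bindings."""
    return (scheme["premise"].format(**bindings),
            scheme["conclusion"].format(**bindings))

# Hypothetical bindings, loosely inspired by Nick's South Korea example:
prem, concl = instantiate(SCHEME, a="South Korea", F="incentivised saving",
                          G="a smoothly running economy", x="a country")
```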
        <p>One of the critical questions here is: “Is the proposition claimed in the premise in
fact true?”
</p>
      </sec>
      <sec id="sec-3-2">
        <title>Representing an Argument Network</title>
        <p>
          The Argument Interchange Format (AIF) [
          <xref ref-type="bibr" rid="ref11 ref24">11, 24</xref>
          ] is the current proposal for a standard
notation for argument structures. It is based on a graph that specifies two types of nodes:
information nodes (or I-nodes) and scheme nodes (or S-nodes). These are represented
by two disjoint sets, N_I ∪ N_S = N and N_I ∩ N_S = ∅, where information nodes
represent claims, premises, data, etc., and scheme nodes capture the application of
patterns of reasoning belonging to a set S = S_R ∪ S_C ∪ S_P, with S_R ∩ S_C = S_C ∩ S_P =
S_P ∩ S_R = ∅. Reasoning patterns can be of three types: rules of inference S_R; criteria
of preference S_P; and criteria of conflict S_C.
        </p>
        <p>The relation fulfils ⊆ N_S × S expresses that a scheme node instantiates a particular
scheme. Scheme nodes, moreover, can be one of three types: rule of inference
application nodes N_SRA; preference application nodes N_SPA; or conflict application nodes
N_SCA, with N_S = N_SRA ∪ N_SPA ∪ N_SCA, and N_SRA ∩ N_SPA = N_SPA ∩ N_SCA =
N_SCA ∩ N_SRA = ∅.</p>
        <p>Fig. 1. The AIF argument network of the running example: information nodes [T1]–[T7] connected through scheme nodes labelled Example, EstablishedRule, and SignFromOtherEvents.</p>
        <p>Nodes are connected by edges whose semantics is implicitly defined by their use.
For instance, an information node connected to a RA scheme node, with the arrow
terminating in the latter, would suggest that the information node serves as a premise
for an inference rule. Figure 1 shows an AIF representation of the arguments exchanged
in the dialogue introduced in Section 2. Rectangular nodes represent information nodes,
while rhombic ones represent scheme nodes: green for RA nodes, and red for CA nodes.
</p>
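        <p>To make the AIF structure concrete, the following Python sketch encodes the argument network of Figure 1 as a typed graph. This is our own minimal encoding, not the AIF standard serialisation, and the attachment of scheme labels to specific inferences follows our reading of Figure 1:</p>

```python
# A sketch of an AIF-style argument network: I-nodes carry information,
# S-nodes apply reasoning patterns, and edges give them their roles.
I_NODES = {f"T{i}" for i in range(1, 8)}
S_NODES = {              # scheme node -> (node type, fulfilled scheme)
    "ra1": ("RA", "Example"),
    "ra2": ("RA", "EstablishedRule"),
    "ra3": ("RA", "SignFromOtherEvents"),
    "ca1": ("CA", "Conflict"),
}
# Edges: premise -> scheme node, and scheme node -> conclusion.
EDGES = [("T5", "ra1"), ("ra1", "T4"),
         ("T4", "ra2"), ("ra2", "T3"),
         ("T2", "ra3"), ("T3", "ra3"), ("ra3", "T1"),
         ("T6", "ca1"), ("T7", "ca1"), ("ca1", "T3")]

def premises(s_node):
    """I-nodes whose edges terminate in the given scheme node."""
    return {x for (x, y) in EDGES if y == s_node}

def conclusion(s_node):
    """The I-node the given scheme node points to."""
    return next(y for (x, y) in EDGES if x == s_node)
```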
      </sec>
      <sec id="sec-3-3">
        <title>Deductive Argumentation</title>
        <p>
          Using deductive argumentation means that each argument is defined using a logic, and
in the following we adopt the simple, but elegant logic proposed by Besnard &amp; Hunter
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Thus, we let ℒ be a logical language. If α is an atom in ℒ, then α is a positive literal
in ℒ, and ¬α is a negative literal in ℒ. For a literal β, the complement of the positive
literal β = α is ¬α (resp., if β = ¬α is a negative literal, its complement is the
positive literal α).
        </p>
        <p>A simple rule is of the form α_1 ∧ … ∧ α_k → β where α_1, …, α_k, β are literals.
A simple logical knowledge base Δ is a set of literals and a set of simple rules. Given a
simple logic knowledge base Δ, the simple consequence relation ⊢_s is defined such
that Δ ⊢_s β if and only if there is a rule α_1 ∧ … ∧ α_n → β ∈ Δ and ∀i either α_i ∈ Δ
or Δ ⊢_s α_i. Now, given Φ ⊆ Δ and a literal α, ⟨Φ, α⟩ is a simple argument if and only
if Φ ⊢_s α and ∄ Φ′ ⊊ Φ such that Φ′ ⊢_s α. Φ is the support (or premises, assumptions)
of the argument, and α is the claim (or conclusion) of the argument. Given an argument
a = ⟨Φ, α⟩, the function Support(a) returns Φ, and Claim(a) returns α.</p>
        <p>For simple arguments a and b we consider the following types of simple attack:
– a is a simple undercut of b if there is a simple rule α_1 ∧ … ∧ α_k → β in Support(b)
and there is an α_i ∈ {α_1, …, α_k} such that Claim(a) is the complement of α_i;
– a is a simple rebut of b if Claim(a) is the complement of Claim(b).</p>
        <p>
          Following Black &amp; Hunter [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], an approximate argument is a pair ⟨Φ, α⟩. If Φ ⊢_s α,
then ⟨Φ, α⟩ is also valid; if Φ ⊬_s ⊥, then ⟨Φ, α⟩ is also consistent; if Φ ⊢_s α and there
is no Φ′ ⊊ Φ such that Φ′ ⊢_s α, then ⟨Φ, α⟩ is also minimal; if Φ ⊢_s α and Φ ⊬_s ⊥,
then ⟨Φ, α⟩ is also expansive (i.e. it is valid and consistent, but it may have unnecessary
premises).
        </p>
        <p>Building on top of Figure 1 and transforming each N_SRA node into a simple rule, a
simple knowledge base for our running example is:
Δ_m = { [T1], [T2], [T3], [T4], [T5], [T6], [T7],
[T5] → [T4], [T4] → [T3],
[T2] ∧ [T3] → [T1], [T6] ∧ [T7] → ¬[T3] }
Therefore, the following are the simple arguments that can be built from Δ_m:
A_m = { a = ⟨{[T5], [T5] → [T4]}, [T4]⟩,
b = ⟨{[T4], [T4] → [T3]}, [T3]⟩,
c = ⟨{[T2], [T3], [T2] ∧ [T3] → [T1]}, [T1]⟩,
d = ⟨{[T6], [T7], [T6] ∧ [T7] → ¬[T3]}, ¬[T3]⟩ }
with d rebutting b, and d undercutting c.</p>
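        <p>The definitions above can be checked mechanically. The following illustrative Python sketch (the encoding of literals as strings with a “-” prefix for negation, and all function names, are our own, not part of any existing implementation) computes the simple consequence relation over Δ_m and the two kinds of attack:</p>

```python
# A sketch of the simple logic applied to the running example; the rules
# come from the RA- and CA-nodes of Figure 1.
FACTS = {"T1", "T2", "T3", "T4", "T5", "T6", "T7"}
RULES = [({"T5"}, "T4"), ({"T4"}, "T3"),
         ({"T2", "T3"}, "T1"), ({"T6", "T7"}, "-T3")]

def entails(lits, rules, alpha):
    """Phi |-s alpha: some rule for alpha fires, each antecedent being in Phi
    or itself derivable (assumes an acyclic rule set, as here)."""
    return any(head == alpha and
               all(a in lits or entails(lits, rules, a) for a in body)
               for body, head in rules)

def complement(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

# The four simple arguments, encoded as (support literals, support rules, claim):
a = ({"T5"}, [({"T5"}, "T4")], "T4")
b = ({"T4"}, [({"T4"}, "T3")], "T3")
c = ({"T2", "T3"}, [({"T2", "T3"}, "T1")], "T1")
d = ({"T6", "T7"}, [({"T6", "T7"}, "-T3")], "-T3")

def rebuts(x, y):
    """Claim of x is the complement of the claim of y."""
    return x[2] == complement(y[2])

def undercuts(x, y):
    """Claim of x is the complement of an antecedent of a rule in y's support."""
    return any(x[2] == complement(ai) for body, _ in y[1] for ai in body)
```

Running `rebuts(d, b)` and `undercuts(d, c)` confirms the attacks stated above.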
        <p>
          However, there are many more approximate arguments. For instance, c1 =
⟨{[T5], [T4], [T3], [T2], [T5] → [T4], [T4] → [T3], [T2] ∧ [T3] → [T1]}, [T1]⟩ is
an approximate argument in favour of [T1] taking into consideration all the inferences
that might help concluding it. Conversely, c2 = ⟨∅, [T1]⟩ is the minimal (invalid)
approximate argument in favour of [T1].
An argumentation framework [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] consists of a set of arguments and a binary attack
relation between them.
        </p>
        <p>Definition 1. An argumentation framework (AF) is a pair Γ = ⟨A, R⟩ where A is a
set of arguments and R ⊆ A × A. We say that b attacks a iff ⟨b, a⟩ ∈ R, also denoted
as b → a. The set of attackers of an argument a will be denoted as a⁻ ≜ {b : b → a},
and the set of arguments attacked by a will be denoted as a⁺ ≜ {b : a → b}. We also
extend these notations to sets of arguments, i.e. given E, S ⊆ A, E → a iff ∃ b ∈ E
s.t. b → a; a → E iff ∃ b ∈ E s.t. a → b; E → S iff ∃ b ∈ E, a ∈ S s.t. b → a;
E⁻ ≜ {b | ∃ a ∈ E, b → a} and E⁺ ≜ {b | ∃ a ∈ E, a → b}.</p>
        <p>Each argumentation framework, therefore, has an associated directed graph where
the vertices are the arguments, and the edges are the attacks.</p>
        <p>The basic properties of conflict-freeness, acceptability, and admissibility of a set of
arguments are fundamental for the definition of argumentation semantics.</p>
        <p>Definition 2. Given an AF Γ = ⟨A, R⟩:
– a set S ⊆ A is a conflict-free set of Γ if ∄ a, b ∈ S s.t. a → b;
– an argument a ∈ A is acceptable with respect to a set S ⊆ A of Γ if ∀ b ∈ A s.t.
b → a, ∃ c ∈ S s.t. c → b;
– a set S ⊆ A is an admissible set of Γ if S is a conflict-free set of Γ and every
element of S is acceptable with respect to S, i.e. S ⊆ F(S).</p>
        <p>An argumentation semantics σ prescribes for any AF Γ a set of extensions, denoted
as E_σ(Γ), namely a set of sets of arguments satisfying the conditions dictated by σ. For
instance, here is the definition of the preferred (denoted as PR) semantics.
Definition 3. Given an AF Γ = ⟨A, R⟩, a set S ⊆ A is a preferred extension of Γ,
i.e. S ∈ E_PR(Γ), iff S is a maximal (w.r.t. set inclusion) admissible set of Γ.</p>
        <p>Given a semantics σ, an argument a is said to be credulously accepted w.r.t. σ if a
belongs to at least one σ-extension; a is skeptically accepted w.r.t. σ if it belongs to all
the σ-extensions.</p>
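        <p>Definitions 2 and 3 lend themselves to a direct, if naive, implementation. The following brute-force Python sketch (for illustration only; real solvers are far more efficient) enumerates all subsets of A, keeps the admissible ones, and returns the maximal ones:</p>

```python
# A brute-force sketch of preferred semantics (Definitions 2-3).
from itertools import combinations

def preferred_extensions(A, R):
    attacks = set(R)
    def conflict_free(S):
        return not any((x, y) in attacks for x in S for y in S)
    def acceptable(a, S):
        # every attacker of a is counter-attacked by some member of S
        return all(any((c, b) in attacks for c in S)
                   for b in A if (b, a) in attacks)
    def admissible(S):
        return conflict_free(S) and all(acceptable(a, S) for a in S)
    subsets = [set(c) for n in range(len(A) + 1) for c in combinations(A, n)]
    adm = [S for S in subsets if admissible(S)]
    # maximal admissible sets w.r.t. set inclusion
    return [S for S in adm
            if not any(S != T and S.issubset(T) for T in adm)]

# Gamma_m from the running example: d rebuts b (mutual attack), d undercuts c.
A_m = ["a", "b", "c", "d"]
R_m = [("d", "b"), ("b", "d"), ("d", "c")]
```

On Γ_m this returns the two preferred extensions {a, b, c} and {a, d} discussed below.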
        <p>
          It can be noted that each complete extension S implicitly defines a three-valued
labelling function Lab on the set of arguments: an argument a is labelled in iff a ∈ S;
it is labelled out iff ∃ b ∈ S s.t. b → a; and it is labelled undec if neither of the above
conditions holds. In the light of this correspondence, argumentation semantics can be
equivalently defined in terms of labellings rather than of extensions [
          <xref ref-type="bibr" rid="ref2 ref7">7, 2</xref>
          ].
        </p>
        <p>Fig. 2. The AF Γ_m for Figure 1 interpreted using deductive argumentation.</p>
        <p>
          We can now introduce the concept of the issues of an argumentation framework
whose status is enough to determine the status of all the arguments in the framework
[
          <xref ref-type="bibr" rid="ref14 ref5 ref6">14, 5, 6</xref>
          ].
        </p>
        <p>Definition 4. Given an AF Γ = ⟨A, R⟩ and L the set of all complete labellings of Γ,
for any a, b ∈ A, a ≡ b iff ∀ Lab ∈ L, Lab(a) = Lab(b); or ∀ Lab ∈ L, (Lab(a) =
in ⇒ Lab(b) = out) ∧ (Lab(a) = out ⇒ Lab(b) = in).</p>
        <p>The set of sets of arguments in the equivalence classes of ≡ is the set of issues of Γ:
E_issues(Γ) = { S ⊆ A | ∀ ⟨a, b⟩ ∈ S × S, a ≡ b; and ∀ S′ ⊋ S, ∃ ⟨c, d⟩ ∈ S′ × S′, ¬(c ≡ d) }</p>
        <p>Continuing with our running example, Figure 2 depicts the argumentation
framework from Section 3.3 applying deductive argumentation on the argument network of
Figure 1: Γ_m = ⟨A_m, R_m⟩ = ⟨{a, b, c, d}, {d → b, b → d, d → c}⟩.</p>
        <p>There are two preferred extensions: E_PR(Γ_m) = {{a, b, c}, {a, d}}. Moreover, {b, d} ∈
E_issues(Γ_m), and if Lab(b) = in, then Lab(d) = out (and vice versa).</p>
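        <p>The labelling correspondence and the equivalence relation of Definition 4 can also be checked mechanically on Γ_m. A brute-force Python sketch (illustrative only; the encoding is our own):</p>

```python
# Complete labellings of Gamma_m and the equivalence of Definition 4.
from itertools import combinations

A = ["a", "b", "c", "d"]
R = {("d", "b"), ("b", "d"), ("d", "c")}

def label(S):
    """Three-valued labelling induced by an extension S."""
    def lab(x):
        if x in S:
            return "in"
        if any(y in S for (y, z) in R if z == x):
            return "out"
        return "undec"
    return {x: lab(x) for x in A}

def complete_extensions():
    """Brute force: conflict-free sets containing exactly what they defend."""
    def conflict_free(S):
        return not any((x, y) in R for x in S for y in S)
    def defended(x, S):
        return all(any((c, b) in R for c in S) for b in A if (b, x) in R)
    subsets = [set(c) for n in range(len(A) + 1) for c in combinations(A, n)]
    return [S for S in subsets
            if conflict_free(S) and S == {x for x in A if defended(x, S)}]

labellings = [label(S) for S in complete_extensions()]

def equivalent(x, y):
    """x and y are either labelled alike, or in/out-swapped, everywhere."""
    same = all(L[x] == L[y] for L in labellings)
    swapped = all((L[x] != "in" or L[y] == "out") and
                  (L[x] != "out" or L[y] == "in") for L in labellings)
    return same or swapped
```

On Γ_m there are three complete labellings, and `equivalent("b", "d")` holds, matching the claim that the status of b and d varies in lockstep.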
      </sec>
      <sec id="sec-3-4">
        <title>Natural Language Generation</title>
        <p>
          A Natural Language Generation (NLG) system requires [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]:
– A knowledge source to be used;
– A communicative goal to be achieved;
– A user model; and
– A discourse history.
        </p>
        <p>In general, the knowledge source is the information about the domain, while the
communicative goal describes the purpose of the text to be generated. The user model
is a characterisation of the intended audience, and the discourse history is a model of
what has been said so far.</p>
        <p>
          An NLG system divides processing into a pipeline [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] composed of the three stages
described in Table 1. First it determines the content and structure of a document
(document planning); then it looks at the syntactic structures and choices of words that need to
be used to communicate the content chosen during document planning (microplanning).
Finally, it maps the output of the microplanner into text (realisation).
        </p>
        <p>Each stage includes tasks that can be primarily concerned with either content or
structure. Document planning requires content determination — deciding what
information should be communicated in the output document — and document structuring
— how to order the information to be conveyed.</p>
        <p>Microplanning requires (1) lexicalisation — deciding what syntactic constructions
and words our NLG system should use; (2) referring expression generation — deciding how to
refer to entities; and (3) aggregation — how to map the structures created by document
planning onto linguistic structures such as sentences and paragraphs.</p>
        <p>Document planning and microplanning are the most strategic and complex modules
in this pipeline. They focus on identifying the communicative goal and how it relates to
the user model, thus producing the blueprint of the document that will be generated.</p>
        <p>
          It is the responsibility of the document planning module, in particular of the
document structuring task, to consider the rhetorical relations (or discourse relations) that
hold between messages or groups of messages. For instance, Rhetorical Structure
Theory (RST) [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] stresses the idea that the coherence of a text depends on the
relationships between pairs of text spans (uninterrupted linear intervals of text): a nucleus and
a satellite. In [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] a variety of relationships are provided, but for the purpose of this
work, we focus on:
Evidence The nucleus contains a claim, while the satellite(s) contain(s) evidence
supporting such a claim.
        </p>
        <p>Justify The nucleus contains a claim, while the satellite(s) contain(s) justification for
such a claim.</p>
        <p>Antithesis Nucleus and satellite are in contrast.</p>
        <p>
          Finally, once the more strategic tasks are performed, there is a need for linguistic
realisation — from abstract representations of sentences to actual text — and structure
realisation — converting abstract structures such as paragraphs and sections into the
mark-up symbols chosen for the document. The realisation module is the most
algorithmic in this pipeline, and there are already several implementations for supporting it,
for instance SimpleNLG [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>In the next section we highlight the cases of document planning and microplanning
we believe are most interesting from an argumentation perspective.
</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Generating Natural Language Interfaces to Formal Argumentation</title>
      <p>Let us now return to our running example to illustrate four relevant communicative
goals. For the moment we are not making any assumptions about the user model, and
we will assume no pre-existing discourse history. Therefore, given that the knowledge
source is fixed, we envisage the following four communicative goals, each of which
raises interesting and challenging questions:</p>
      <sec id="sec-5-1">
        <p>1. Presenting a single argument or an approximate argument;</p>
        <p>2. Presenting an entire argument network;
3. Explaining the acceptability status of a single argument or an approximate
argument; and
4. Explaining the extensions, given some semantics.</p>
        <p>In the following, we discuss elements of content determination for each of these
goals; i.e. deciding what messages should be included in the document to be generated.
Examples will be provided based on our running example. In parts, the generated texts
will sound a little awkward because we deliberately chose not to modify the content of
the arguments. We will elaborate on this in Section 5.
</p>
        <sec id="sec-5-1-1">
          <title>Presenting a Simple or an Approximate Argument</title>
          <p>Simple and approximate arguments are composed of premises and a claim. There are
two main strategies traditionally adopted to represent such a construct:
– forward writing, from premises to claim;
– backward writing,4 from claim to premises.</p>
          <p>Let us consider the argument b = ⟨{[T4], [T4] → [T3]}, [T3]⟩. In the case of
forward writing, we can write something like:</p>
          <p>If you want the economy to run smoothly, you have to incentivise certain
types of behaviour. [T4] In some ways you need that discipline, don’t you, to be
a saver, to think, “I won’t get into debt?” [T3]
An explicit signal such as Therefore can be used to highlight the Justify relation:</p>
          <p>If you want the economy to run smoothly, you have to incentivise certain
types of behaviour. [T4]
Therefore In some ways you need that discipline, don’t you, to be a saver, to
think, “I won’t get into debt?” [T3]</p>
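          <p>The two writing styles can be sketched as simple templates. The following Python fragment (our own illustrative encoding, not the CISpaces.org implementation) realises argument b in either style, with an optional discourse signal:</p>

```python
# A template-based realisation sketch for forward and backward writing.
TEXT = {
    "T3": "In some ways you need that discipline, don't you, to be a saver, "
          "to think, \"I won't get into debt?\"",
    "T4": "If you want the economy to run smoothly, you have to incentivise "
          "certain types of behaviour.",
}

def forward(premises, claim, signal="Therefore"):
    """Premises first, then the claim, introduced by an explicit signal."""
    return " ".join(TEXT[p] for p in premises) + f" {signal} {TEXT[claim]}"

def backward(premises, claim, signal="Indeed"):
    """Claim first (assert-justify), then the premises."""
    return TEXT[claim] + f" {signal} " + " ".join(TEXT[p] for p in premises)

# Argument b = <{[T4], [T4] -> [T3]}, [T3]> in backward style:
print(backward(["T4"], "T3"))
```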
        </sec>
      </sec>
      <sec id="sec-5-2">
        <p>Similarly, in the case of backward writing, we may write:</p>
        <p>In some ways you need that discipline, don’t you, to be a saver, to think, “I
won’t get into debt?” [T3]
Indeed If you want the economy to run smoothly, you have to incentivise
certain types of behaviour. [T4]</p>
      </sec>
      <sec id="sec-5-3">
        <p>4 Often also named assert-justify.</p>
        <p>
          More interesting is the case of considering an approximate argument. Assuming that
the communicative goal here is not to confuse the reader, it is probably reasonable not
to include irrelevant elements in the presentation. Relevance theory may be seen as an
attempt to analyse an essential feature of most human communication: the expression
and recognition of intentions [
          <xref ref-type="bibr" rid="ref17 ref29">17, 29</xref>
          ].
        </p>
        <p>
          For instance, the First or Cognitive Principle of Relevance proposed by Sperber
and Wilson [
          <xref ref-type="bibr" rid="ref28 ref29">28, 29</xref>
          ] states: Human cognition tends to be geared to the maximisation of
relevance. Therefore, it would be evidence of poor judgment to expand the argument
by considering irrelevant elements. Unfortunately, defining the concept of relevance in
formal argumentation is a non-trivial task [
          <xref ref-type="bibr" rid="ref23 ref32">32, 23</xref>
          ]. To take a reasonable starting point,
let us adapt the definition of relevant evidence from the Rule 401 of the Federal Rules
of Evidence5 to our context. Therefore, an approximate argument is relevant for an
argument if:
1. Its premises have any tendency to make the argument’s conclusion more or less
probable than it would be without them; and
2. It provides additional information that might advance the debate.
        </p>
        <p>Given this (loose) definition of relevance, we could argue that the approximate
argument b1 = ⟨{[T5], [T5] → [T4], [T4] → [T3]}, [T3]⟩ is relevant for b. As
before, there are different possibilities for writing such a chain of inferences. In
addition, it raises questions related to microplanning: perhaps we desire to merge
different pieces, as in the following example using the backward writing style.</p>
        <p>In some ways you need that discipline, don’t you, to be a saver, to think, “I
won’t get into debt?” [T3]
Indeed If you want the economy to run smoothly, you have to incentivise
certain types of behaviour. [T4], e.g. in South Korea, in terms of how South
Korean grew, it did incentivise saving, at certain times, by certain economic
policies [T5]
[T4] and [T5] are merged by a comma and an “e.g.”. In this way, we highlight the Evidence
relation and identify the connection as an argument from example instead of forming
two independent sentences, which is an aspect traditionally related to microplanning.</p>
        <p>Finally, if we have knowledge that an inference rule fulfils an argumentation scheme,
then we can also map the elements of an argument into such a scheme. For instance,
argument b is classified as an argument by established rule, which is represented by the
following scheme:
Major Premise: If carrying out types of actions including the state of affairs A is the
established rule for x, then (unless the case is an exception), x must
carry out A.</p>
        <p>Minor Premise: Carrying out types of actions including state of affairs A is the
established rule for a.</p>
        <p>Conclusion: Therefore a must carry out A.</p>
        <p>5 https://www.law.cornell.edu/rules/fre/rule_401 (on 13/05/2017)</p>
        <p>It is interesting to note that the minor premise is left implicit in the formalisation
of our example, and it might be an element that should be highlighted to the user. How
to report an implicit premise may depend on the communicative goal. For example, if
the system is intended for users to improve and make their arguments more explicit we
may generate the text:</p>
        <p>In some ways you need that discipline, don’t you, to be a saver, to think, “I
won’t get into debt?” [T3] Indeed If you want the economy to run smoothly,
you have to incentivise certain types of behaviour. [T4]
although we have no evidence that this is the established rule.</p>
        <p>
          In this case, the fact that the minor premise is missing is added to the generated
text. On the other hand, if the system is intended to report an analysis of a conversation,
we need to take into account that the premise may be left implicit because it is already known
by the participants. Hence, the system could assume that the premise holds, and
generate an explicit sentence such as and we assume that this is the established rule. Further
research is, however, needed to understand how to generate text in the case of unstated
premises (e.g., in the case of enthymemes [
          <xref ref-type="bibr" rid="ref33 ref4">33, 4</xref>
          ]). With similar considerations, we could
also include critical questions that have either already been answered, or that, if
answered, could strengthen the argument.
        </p>
        <sec id="sec-5-3-1">
          <title>Presenting an Argument Network</title>
          <p>One of the strengths of formal argumentation is its ability to handle conflicts. In the
previous section we focused on how to represent a single argument. However, even in
our small running example, we can note that (1) there is more than one argument; and
(2) there are conflicts among them (Figures 1 and 2).</p>
          <p>The simplest strategy for representing an argument network is just to enumerate all
the arguments and to list the conflicts among them, possibly linking these with
critical questions if the information is available (e.g. This counterargument answers
the critical question that states . . . ).</p>
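          <p>This enumeration strategy is trivially implementable. A Python sketch (our own naming and abridged renderings, purely illustrative):</p>

```python
# A sketch of the simplest strategy: list the arguments, then the conflicts.
def enumerate_network(args, attacks):
    """args: name -> short rendering; attacks: (attacker, attacked) pairs."""
    lines = [f"Argument {n}: {text}" for n, text in args.items()]
    lines += [f"Argument {x} is in conflict with argument {y}."
              for x, y in attacks]
    return "\n".join(lines)

# Abridged renderings of two arguments from the running example:
args_m = {"b": "you need discipline to be a saver [T3]",
          "d": "we have written off massive debts, but not for those who "
               "most need it [T6, T7]"}
print(enumerate_network(args_m, [("d", "b")]))
```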
          <p>
            However, we argue that there is a better approach, motivated by the need to satisfy
Grice’s maxim of relation and relevance [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ]; i.e. that in a conversation one needs to
be relevant and say things that are pertinent to the discussion. Let us consider Figure
3, which depicts the Γ_m argumentation framework (cf. Fig. 2) annotated with a second,
bipolar-inspired [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ],6 binary relation between arguments. Such a relation is grounded in
the original argument network, Figure 1.
          </p>
          <p>Once such an annotated graph is obtained, the lines of reasoning annotated together
(e.g. ⟨a, b, c⟩ in Figure 3) could be merged in order to provide a single
approximate argument. Once all the lines of reasoning annotated together are identified and
merged, we then need a strategy to order their presentation; e.g. ordering by the length of
each line of reasoning versus ordering by the number of attacks received. Appropriate
signals for antithesis relations also need to be used. For instance:
if you want the moral hazard, instead of kind of just going on about the
bankers, is there not a danger that if we just said we’d write off debt, that it
actually isn’t very helpful for our side, for ordinary people, to actually have that?
[T1] Indeed There’s no discipline there [T2] and In some ways you need
that discipline, don’t you, to be a saver, to think, “I won’t get into debt”?. [T3]
Indeed If you want the economy to run smoothly, you have to incentivise
certain type of behaviour. [T4] , e.g. in South Korea, in terms of how South
Korean grew, it did incentivise saving, at certain times, by certain economic
policies [T5] However, people don’t realise, or only half realise, is the fact
that we have actually written off massive amounts of debt [T6] and But it
certainly isn’t the debts of the people who most need it in society [T7].
</p>
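          <p>A minimal sketch of this merging-and-ordering step, under assumed representations (arguments as identifiers, the bipolar-inspired relation as pairs): lines of reasoning are taken as the connected groups under that relation, then ordered, e.g. by attacks received and by length. This is an illustration, not a prescribed algorithm:</p>
          <preformat>
```python
# Hedged sketch: group arguments linked by the bipolar-inspired relation
# into lines of reasoning, then order the lines for presentation.
from collections import defaultdict

def lines_of_reasoning(args, linked):
    """Connected components of `args` under the pairs in `linked`
    (union-find with path halving)."""
    parent = {a: a for a in args}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for x, y in linked:
        parent[find(x)] = find(y)
    groups = defaultdict(list)
    for a in args:
        groups[find(a)].append(a)
    return [sorted(g) for g in groups.values()]

def order_for_presentation(lines, attacks):
    """Order by number of attacks received (descending), then by length."""
    def received(line):
        members = set(line)
        return sum(1 for _, target in attacks if target in members)
    return sorted(lines, key=lambda l: (-received(l), -len(l)))
```
          </preformat>
          <p>On the running example, ⟨a, b, c⟩ merges into one line of reasoning and d remains a singleton; the longer line is presented first.</p>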
        </sec>
        <sec id="sec-5-3-3">
          <title>Explaining the Acceptability Status of a Single Argument or an Approximate Argument</title>
          <p>
            Two of the traditional decision problems in abstract argumentation [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] are checking
whether an argument is credulously or skeptically accepted (or not). An interested user
might select one of the arguments (or one of the propositions in the argument network)
and ask whether or not it is credulously or skeptically accepted.
          </p>
          <p>6By bipolar-inspired we mean that such a relation does not represent conflicts. At the same
time, however, we are not in a position to claim that it is a relation of support.</p>
          <p>
            To answer such a query, we can exploit research in proof theory and argument games
[
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]. For instance, to prove that c (in Figure 2) is credulously accepted according to
preferred semantics, we can (1) compute the extensions; and (2) compute the dispute
tree [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] needed to prove it. The dispute tree is depicted in Figure 4, which suggests that
only the argument network comprising c, d, and b needs to be considered.
          </p>
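          <p>Since an argument is credulously accepted under preferred semantics iff it belongs to some admissible set, the decision problem behind such a query can be illustrated by a naive brute-force check; this sketch is adequate only for toy frameworks such as the running example, whereas argument games provide the principled procedure:</p>
          <preformat>
```python
# Naive sketch of credulous acceptance under preferred semantics:
# brute-force search for an admissible set containing the target.
from itertools import chain, combinations

def credulously_accepted(args, attacks, target):
    att = set(attacks)
    def conflict_free(S):
        return not any((x, y) in att for x in S for y in S)
    def defends(S, a):
        # every attacker of a is counter-attacked by some member of S
        return all(any((s, b) in att for s in S)
                   for b in args if (b, a) in att)
    def admissible(S):
        return conflict_free(S) and all(defends(S, a) for a in S)
    subsets = chain.from_iterable(
        combinations(args, r) for r in range(len(args) + 1))
    return any(target in S and admissible(set(S)) for S in subsets)
```
          </preformat>
          <p>For instance, with the mutual attack between c and d, the set {c} is admissible, so c is credulously accepted.</p>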
        </sec>
        <sec id="sec-5-3-4">
          <title>Explaining Extensions</title>
          <p>Explaining multiple extensions is analogous to presenting different argumentation
(sub-)networks that are linked together by attacks. Therefore, the solution for this
communication goal builds upon the procedures envisaged in Sections 4.2 and 4.3.</p>
          <p>It is worth mentioning that a possible way to identify the attack connections between
those extensions is to consider a set of arguments belonging to a set of issues S ∈ E_issues
such that |S| = |E| for the chosen semantics.7 Those arguments
could provide foci of attention: for instance, {b, d} ∈ E_issues. Therefore, the text
presented in Section 4.2 could be adapted in a straightforward manner to communicate
the existence of two preferred extensions that gravitate around the issue {b, d}.</p>
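          <p>For small frameworks such as the running example, the preferred extensions discussed above can be enumerated by brute force as the maximal admissible sets; a hedged sketch, for illustration only:</p>
          <preformat>
```python
# Naive sketch: preferred extensions as the admissible sets that are
# maximal with respect to set inclusion, found by exhaustive search.
from itertools import chain, combinations

def preferred_extensions(args, attacks):
    att = set(attacks)
    def conflict_free(S):
        return not any((x, y) in att for x in S for y in S)
    def admissible(S):
        def defended(a):
            return all(any((s, b) in att for s in S)
                       for b in args if (b, a) in att)
        return conflict_free(S) and all(defended(a) for a in S)
    adm = [set(S) for S in chain.from_iterable(
        combinations(args, r) for r in range(len(args) + 1))
        if admissible(set(S))]
    return [S for S in adm if not any(S < T for T in adm)]
```
          </preformat>
          <p>A mutual attack between b and d yields two preferred extensions, one around each argument, matching the two extensions that gravitate around the issue {b, d}.</p>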
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>In this paper we have provided a blueprint for a complex set of research questions
that arise when considering how to generate natural language representations of formal
argumentation structures.</p>
      <p>First of all, as already noted at the beginning of Section 4, we assume neither a
model of the user nor pre-existing contexts that need to be referenced in the generated
text. These elements will need to be addressed in future research.</p>
      <p>Moreover, some pieces of generated text sound quite awkward. To address this
issue, each piece of information inserted in an argument network should represent a
single text-agnostic normal form proposition. This is clearly a constraint that might be
unnatural if an untrained user tries to generate an argument network, and this raises
an interesting research question concerning how to enable untrained users to formalise
their lines of reasoning.</p>
      <p>
        It is worth noting that the research questions highlighted in this paper have been
grounded in a piece of argumentation formalised from a discussion between humans.
We claim neither that our list is exhaustive nor that it is applicable to all possible
argument networks. For instance, providing a natural language interface to an argument
network built by an expert might lead to different, unforeseen communicative goals.
Indeed, related research enabling the scrutiny of autonomous systems by allowing
agents to generate plans through argument and dialogue [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] had the specific
communication goal of justifying the purpose of each step within a plan.
      </p>
      <p>
        Moreover, we support the effort of extending AIF to include additional
information: for instance, the AIFdb [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] project extends the AIF model to include schemes: of rephrasing; of locution,
describing communicative intentions which speakers use to introduce propositional
contents; and of interaction or protocol, describing relations between locutions.
Dealing with those pieces of information raises further research questions. Indeed,
as we saw in Section 3.3, arguments a, b, and c form a conflict-free set, and in
Section 4 we often considered them altogether in an approximate argument. However,
while argument c was put forward by Claire Fox in the original dialogue (Section 2),
arguments a and b belong to Nick Dearden. The question here is how to include
the sources of those arguments in the generated natural language text.
      </p>
      <p>7Assuming that the semantics can be related to the notion of a complete labelling (Def. 4).</p>
      <p>
        Finally, we will investigate the potential of applying NLG to existing systems using
formal argumentation in real-world applications, such as CISpaces [
        <xref ref-type="bibr" rid="ref31 ref8 ref9">31, 9, 8</xref>
        ] — a system
for collaborative intelligence analysis — and ArgMed [
        <xref ref-type="bibr" rid="ref18 ref35">18, 35</xref>
        ] — a system for reasoning
about the results of clinical trials.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Amgoud</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cayrol</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lagasquie-Schiex</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Livet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On bipolarity in argumentation frameworks</article-title>
          .
          <source>International Journal of Intelligent Systems</source>
          <volume>23</volume>
          ,
          <fpage>1062</fpage>
          -
          <lpage>1093</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baroni</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caminada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacomin</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>An introduction to argumentation semantics</article-title>
          .
          <source>Knowledge Engineering Review</source>
          <volume>26</volume>
          (
          <issue>4</issue>
          ),
          <fpage>365</fpage>
          -
          <lpage>410</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Besnard</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Constructing argument graphs with deductive arguments: a tutorial</article-title>
          .
          <source>Argument &amp; Computation</source>
          <volume>5</volume>
          (
          <issue>1</issue>
          ),
          <fpage>5</fpage>
          -
          <lpage>30</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Black</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A Relevance-theoretic Framework for Constructing and Deconstructing Enthymemes</article-title>
          .
          <source>Journal of Logic and Computation</source>
          <volume>22</volume>
          (
          <issue>1</issue>
          ),
          <fpage>55</fpage>
          -
          <lpage>78</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Booth</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caminada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Podlaszewski</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahwan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Quantifying Disagreement in Argument-based Reasoning</article-title>
          .
          <source>In: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS</source>
          <year>2012</year>
          )
          <article-title>(</article-title>
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Booth</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caminada</surname>
            ,
            <given-names>M.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dunne</surname>
            ,
            <given-names>P.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Podlaszewski</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahwan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Complexity Properties of Critical Sets of Arguments</article-title>
          .
          <source>In: Computational Models of Argument: Proceedings of COMMA 2014</source>
          . pp.
          <fpage>173</fpage>
          -
          <lpage>184</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Caminada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>On the Issue of Reinstatement in Argumentation</article-title>
          .
          <source>In: Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA</source>
          <year>2006</year>
          ). pp.
          <fpage>111</fpage>
          -
          <lpage>123</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toniolo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A tool to highlight weaknesses and strengthen cases: Cispaces.org</article-title>
          .
          <source>In: Legal Knowledge and Information Systems - JURIX</source>
          <year>2018</year>
          :
          <article-title>The Thirtyfirst Annual Conference</article-title>
          , Groningen, The Netherlands,
          <fpage>12</fpage>
          -
          <lpage>14</lpage>
          December
          <year>2018</year>
          . pp.
          <fpage>186</fpage>
          -
          <lpage>189</lpage>
          (
          <year>2018</year>
          ), https://doi.org/10.3233/978-1-
          <fpage>61499</fpage>
          -935-5-186
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toniolo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Middleton</surname>
            ,
            <given-names>S.E.</given-names>
          </string-name>
          :
          <article-title>Cispaces.org: From fact extraction to report generation</article-title>
          .
          <source>In: Computational Models of Argument - Proceedings of COMMA</source>
          <year>2018</year>
          , Warsaw, Poland,
          <fpage>12</fpage>
          -
          <issue>14</issue>
          <year>September 2018</year>
          . pp.
          <fpage>269</fpage>
          -
          <lpage>280</lpage>
          (
          <year>2018</year>
          ), https://doi.org/10. 3233/978-1-
          <fpage>61499</fpage>
          -906-5-269
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tintarev</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oren</surname>
          </string-name>
          , N.:
          <article-title>Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation</article-title>
          .
          <source>In: 21st European Conference on Artificial Intelligence</source>
          . pp.
          <fpage>207</fpage>
          -
          <lpage>212</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Chesnevar</surname>
            ,
            <given-names>C.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGinnis</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Modgil</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahwan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simari</surname>
            ,
            <given-names>G.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>South</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vreeswijk</surname>
            ,
            <given-names>G.A.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Willmott</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Towards an argument interchange format</article-title>
          .
          <source>The Knowledge Engineering Review</source>
          <volume>21</volume>
          (
          <issue>04</issue>
          ),
          <fpage>293</fpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Dung</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          :
          <article-title>On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>77</volume>
          (
          <issue>2</issue>
          ),
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Dunne</surname>
            ,
            <given-names>P.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Complexity of abstract argumentation</article-title>
          . In: Rahwan,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Simari</surname>
          </string-name>
          ,
          <string-name>
            <surname>G</surname>
          </string-name>
          . (eds.) Argumentation in AI,
          <source>chap. 5</source>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>104</lpage>
          . Springer-Verlag (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gabbay</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Fibring Argumentation Frames</article-title>
          .
          <source>Studia Logica</source>
          <volume>93</volume>
          (
          <issue>2</issue>
          /3),
          <fpage>231</fpage>
          -
          <lpage>295</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gatt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reiter</surname>
          </string-name>
          , E.:
          <article-title>SimpleNLG: A realisation engine for practical applications</article-title>
          .
          <source>In: Proceedings of the 12th European Workshop on Natural Language Generation</source>
          . pp.
          <fpage>90</fpage>
          -
          <lpage>93</lpage>
          . Association for Computational Linguistics (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Grice</surname>
            ,
            <given-names>H.P.</given-names>
          </string-name>
          :
          <article-title>Logic and Conversation</article-title>
          . In: Cole, P., Morgan, J. (eds.)
          <source>Syntax and Semantics</source>
          <volume>3</volume>
          : Speech Acts
          , pp.
          <fpage>41</fpage>
          -
          <lpage>57</lpage>
          (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Grice</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Studies in the Way of Words</article-title>
          , vol.
          <volume>65</volume>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Aggregating evidence about the positive and negative effects of treatments</article-title>
          .
          <source>Artificial intelligence in medicine 56(3)</source>
          ,
          <fpage>173</fpage>
          -
          <lpage>90</lpage>
          (nov
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Laronge</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>Evaluating universal sufficiency of a single logical form for inference in court</article-title>
          .
          <source>Law, Probability and Risk</source>
          <volume>11</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>159</fpage>
          -
          <lpage>196</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Lawrence</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bex</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Snaith</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>AIFdb: Infrastructure for the Argument Web</article-title>
          . In: COMMA. pp.
          <fpage>515</fpage>
          -
          <lpage>516</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Mann</surname>
            ,
            <given-names>W.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Rhetorical Structure Theory: Toward a functional theory of text organisation</article-title>
          .
          <source>Text</source>
          <volume>8</volume>
          ,
          <fpage>243</fpage>
          -
          <lpage>281</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Modgil</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caminada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Proof Theories and Algorithms for Abstract Argumentation Frameworks</article-title>
          . In: Simari,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Rahwan</surname>
          </string-name>
          , I. (eds.) Argumentation in Artificial Intelligence, chap.
          <source>Proof Theo</source>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>129</lpage>
          . Springer US, Boston, MA (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Paglieri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castelfranchi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Trust, relevance, and arguments</article-title>
          .
          <source>Argument &amp; Computation</source>
          <volume>5</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>216</fpage>
          -
          <lpage>236</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Rahwan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The Argument Interchange Format</article-title>
          .
          <source>In: Argumentation in Artificial Intelligence</source>
          , pp.
          <fpage>383</fpage>
          -
          <lpage>402</lpage>
          . Springer US, Boston, MA (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Reiter</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dale</surname>
          </string-name>
          , R.:
          <source>Building Natural Language Generation Systems</source>
          . Cambridge University Press (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Rosenfeld</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Providing arguments in discussions on the basis of the prediction of human argumentative behavior</article-title>
          .
          <source>ACM Transactions on Interactive Intelligent Systems (TiiS) 6</source>
          (
          <issue>4</issue>
          ),
          <volume>30</volume>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Rosenfeld</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Strategical argumentative agent for human persuasion</article-title>
          .
          <source>In: Proceedings of the 22nd European Conference on Artificial Intelligence (ECAI)</source>
          . vol.
          <volume>285</volume>
          , p.
          <fpage>320</fpage>
          . IOS Press (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Sperber</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Wilson, D.: Relevance: Communication and Cognition, 2nd Ed. Blackwell Publishers (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Sperber</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Wilson, D.:
          <article-title>Relevance Theory</article-title>
          .
          <source>In: The Handbook of Pragmatics</source>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Tintarev</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kutlak</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oren</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deemter</surname>
            ,
            <given-names>K.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Green</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masthoff</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasconcelos</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>SAsSy - Scrutable Autonomous Systems</article-title>
          .
          <source>In: Do-Form: Enabling Domain Experts to use Formalised Reasoning</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Toniolo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Etuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ouyang</surname>
            ,
            <given-names>R.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srivastava</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oren</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dropps</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sullivan</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Agent Support to Reasoning with Different Types of Evidence in Intelligence Analysis</article-title>
          .
          <source>In: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS</source>
          <year>2015</year>
          ). pp.
          <fpage>781</fpage>
          -
          <lpage>789</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>van Eemeren</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grootendorst</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>A Systematic Theory of Argumentation</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Walton</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macagno</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <source>Argumentation Schemes</source>
          . Cambridge University Press, NY (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Walton</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Argumentation Theory: A Very Short Introduction</article-title>
          .
          <source>In: Argumentation in Artificial Intelligence</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          . Springer US, Boston, MA (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macbeth</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>An updated systematic review of lung chemo-radiotherapy using a new evidence aggregation method</article-title>
          .
          <source>Lung cancer (Amsterdam, Netherlands)</source>
          <volume>87</volume>
          (
          <issue>3</issue>
          ),
          <fpage>290</fpage>
          -
          <lpage>295</lpage>
          (Mar
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>