<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Mining Fine-grained Argument Elements</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Adam Wyner</string-name>
          <email>azwyner@abdn.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computing Science, University of Aberdeen, Meston Building, Meston Walk, Aberdeen</institution>
          ,
          <addr-line>AB24 3UK, Scotland</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper discusses the architecture and development of an Argument Workbench, which supports an analyst in reconstructing arguments from across textual sources. The workbench takes a semi-automated, interactive approach searching in a corpus for fine-grained argument elements, which are concepts and conceptual patterns in expressions that are associated with argumentation schemes. The expressions can then be extracted from a corpus and reconstituted into instantiated argumentation schemes for and against a given conclusion. Such arguments can then be input to an argument evaluation tool.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>We have large corpora of unstructured textual
information, such as in consumer websites
(Amazon), newspapers (the BBC’s “Have Your Say”), or in
policy responses to public consultations. The
information is complex, high volume, fragmentary,
and either linearly (Amazon or the BBC) or alinearly
(policy responses) presented as a series of
comments or statements. Given the lack of structure of
the corpora, the cumulative argumentative
meaning of the texts is obscurely distributed across
texts. In order to make coherent sense of the
information, the content must be extracted, analysed,
and restructured into a form suitable for further
formal and automated reasoning (e.g.
ASPARTIX (Egly et al., 2008) that is grounded in
Argumentation Frameworks (Dung, 1995)). There
remains a significant knowledge acquisition
bottleneck (Forsythe and Buchanan, 1993) between the
textual source and formal representation.</p>
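      <p>As a concrete illustration of the formal target of such reconstruction, the following sketch (plain Python, not ASPARTIX; the argument names are invented) computes the grounded extension of a Dung-style argumentation framework as the least fixpoint of the characteristic function.</p>
      <preformat>
```python
# A minimal sketch (plain Python, not ASPARTIX): computing the grounded
# extension of a Dung argumentation framework as the least fixpoint of the
# characteristic function F(S) = the set of arguments defended by S.

def grounded_extension(arguments, attacks):
    """arguments: a set of names; attacks: a set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # An argument is defended by s if each of its attackers is
        # attacked by some member of s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# A attacks B and B attacks C: A is unattacked and defends C,
# so the grounded extension is {A, C}.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```
      </preformat>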
      <p>Argumentation text is rich, multi-dimensional,
and fine-grained, consisting of (among others): a
range of (explicit and implicit) discourse relations
between statements in the corpus, including
indicators for conclusions and premises; speech acts
and propositional attitudes; contrasting sentiment
terminology; and domain terminology that is
represented in the verbs, nouns, and modifiers of
sentences. Moreover, linguistic expression is various,
given alternative syntactic or lexical forms for
related semantic meaning. It is difficult for people to
reconstruct arguments from text, and even more so
for a computer.</p>
      <p>
        Yet, the presentation of argumentation in text is
not a random or arbitrary combination of such
elements, but is somewhat structured into reasoning
patterns, e.g. defeasible argumentation schemes
        <xref ref-type="bibr" rid="ref12">(Walton, 1996)</xref>
        . Furthermore, the scope of
linguistic variation is not unlimited, nor unconstrained:
diathesis alternations (related syntactic forms)
appear in systematic patterns
        <xref ref-type="bibr" rid="ref1">(Levin, 1993)</xref>
        ; a
thesaurus is a finite compendium of lexical
semantic relationships (Fellbaum, 1998); discourse
relations
        <xref ref-type="bibr" rid="ref13">(Webber et al., 2011)</xref>
        and speech acts
        <xref ref-type="bibr" rid="ref10">(Searle
and Vanderveken, 1985)</xref>
        (by and large) signal
systematic semantic relations between sentences or
between sentences and contexts; and the
expressivity of contrast and sentiment is scoped
        <xref ref-type="bibr" rid="ref7">(Horn,
2001; Pang and Lee, 2008)</xref>
        . A more open-ended
aspect of argumentation in text is domain
knowledge that appears as terminology. Yet here too,
in a given corpus on a selected topic,
discussants demonstrate a high degree of topical
coherence, signalling that similar or related
conceptual domain models are being deployed. Though
argumentation text is complex and coherence is
obscured, taken together it is also underlyingly
highly organised; after all, people do argue, which
is meaningful only where there is some
understanding about what is being argued about and
how the meaning of their arguments is
linguistically conveyed. Without such underlying
organisation, we could not successfully
reconstruct and evaluate arguments from source
materials, which is contrary to what is accomplished in
argument analysis.
      </p>
      <p>The paper proposes that the elements and
structures of the lexicon, syntax, discourse,
argumentation, and domain terminology can be deployed
to support the identification and extraction of
relevant fine-grained textual passages from across
complex, distributed texts. The passages can then
be reconstituted into instantiated argumentation
schemes. It discusses an argument workbench that
takes a semi-automated, interactive approach,
using a text mining development environment, to
flexibly query for concepts (i.e. semantically
annotated) and patterns of concepts within sentences,
where the concepts and patterns are associated
with argumentation schemes. The concepts and
patterns are based on the linguistic and domain
information. The results of the queries are
extracted from a corpus and interactively
reconstituted into instantiated argumentation schemes for
and against a given conclusion. Such arguments
can then be input to an argument evaluation tool.
From such an approach, a “grammar” for
arguments can be developed and resources (e.g. gold
corpora) provided.</p>
      <p>
        The paper presents a sample use case, elements
and structures, tool components, and outputs of
queries. Broadly, the approach builds on
        <xref ref-type="bibr" rid="ref16 ref17 ref15">(Wyner
et al., 2013; Wyner et al., 2014; Wyner et al.,
2012)</xref>
        . The approach is contrasted with
statistical/machine learning approaches, high-level
approaches that specify a grammar, and tasks to
annotate single passages of argument.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 Tool Development and Use</title>
      <p>In this section, some of the main elements of the
tool and how it is used are briefly outlined.
</p>
      <sec id="sec-2-1">
        <title>2.1 Use Case and Materials</title>
        <p>The sample use case is based on Amazon
consumer reviews about purchasing a camera.
Consumer reviews can be construed as presenting
arguments concerning a decision about what to buy
based on various factors. Consumers argue in such
reviews about what features a camera has, the
relative advantages, experiences, and sources of
misinformation. These are qualitative, linguistically
expressed arguments.
</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 Components of Analysis</title>
        <p>
          The analysis has several subcomponents: a
consumer argumentation scheme, discourse
indicators, sentiment terminology, and a domain model.
The consumer argumentation scheme (CAS) is
derived from the value-based practical reasoning
argumentation scheme (Atkinson and Bench-Capon,
2007); it represents the arguments for or against
buying the consumer item relative to preferences
and values. A range of explicit discourse
indicators
          <xref ref-type="bibr" rid="ref13">(Webber et al., 2011)</xref>
          are automatically
annotated, such as those signalling premise, e.g.
because, conclusion e.g. therefore, or contrast and
exception, e.g. not, except. Sentiment
terminology
          <xref ref-type="bibr" rid="ref5">(Nielsen, 2011)</xref>
          is signalled by lexical
semantic contrast: The flash worked poorly is the
semantic negation of The flash worked flawlessly,
where poorly is a negative sentiment and
flawlessly is a positive sentiment. Contrast indicators
can similarly be used. Domain terminology
specifies the objects and properties that are relevant to
the users. To some extent the terminology can be
automatically acquired (term frequency) or
manually derived and structured into an ontology, e.g.
from consumer report magazines or available
ontologies. Given the modular nature of the
analysis as well as the tool, auxiliary components can
be added, such as speech act verbs, propositional
attitude verbs, sentence conjunctions to split
sentences, etc. Each such component adds a further
dimension to the analysis of the corpus.
        </p>
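        <p>To make the gazetteer mechanism concrete, the following sketch (plain Python; the miniature term lists are invented examples, not the tool’s actual resources) shows how matching tokens receive annotations such as PremiseIndicator or Positive sentiment.</p>
        <preformat>
```python
# An illustrative sketch of gazetteer-style lookup (plain Python; the
# miniature term lists are invented examples, not the tool's resources):
# each list maps matching tokens to an annotation, as GATE gazetteers do.

GAZETTEERS = {
    "PremiseIndicator": {"because", "since"},
    "ConclusionIndicator": {"therefore", "so"},
    "Positive": {"flawlessly", "excellent"},
    "Negative": {"poorly", "blurry"},
    "CameraProperty": {"flash", "zoom", "pictures"},
}

def annotate(sentence):
    """Attach the set of matching annotations to each lowercased token."""
    out = []
    for word in sentence.lower().split():
        anns = {label for label, terms in GAZETTEERS.items() if word in terms}
        out.append((word, anns))
    return out
```
        </preformat>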
      </sec>
      <sec id="sec-2-3">
        <title>2.3 Components of the Tool</title>
        <p>
          To recognise the textual elements of Section 2.2,
we use the GATE framework (Cunningham et al.,
2002) for language engineering applications. It is
an open source desktop application written in Java
that provides a user interface for professional
linguists and text engineers to bring together a wide
variety of natural language processing tools in a
pipeline and apply them to a set of documents.
Our approach to GATE tool development follows
          <xref ref-type="bibr" rid="ref14">(Wyner and Peters, 2011)</xref>
          . Once a GATE pipeline
has been applied to a corpus, we can view the
annotations of a text either in situ or extracted using
GATE’s ANNIC (ANNotations In Context) corpus
indexing and querying tool.
        </p>
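        <p>Schematically, a pipeline in the GATE style is an ordered sequence of processing resources applied to each document; the sketch below (plain Python, not the GATE Java API) shows the control flow only, with trivial string-normalising stages standing in for real processing resources.</p>
        <preformat>
```python
# A schematic sketch of pipeline control flow in the GATE style (plain
# Python, not the GATE Java API): processing resources run in order over
# each document, and later stages see the output of earlier ones.

def run_pipeline(documents, stages):
    """Apply each stage (a function from document to document) in order."""
    results = []
    for doc in documents:
        for stage in stages:
            doc = stage(doc)
        results.append(doc)
    return results

# e.g. normalising whitespace and case ahead of gazetteer lookup
print(run_pipeline(["  The Flash Worked Flawlessly  "], [str.strip, str.lower]))
```
        </preformat>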
        <p>In GATE, the gazetteers associate textual
passages in the corpus that match terms on the lists
with an annotation. The annotations introduced by
gazetteers are used by JAPE rules, creating
annotations that are visible as highlighted text, can be
reused to construct higher level annotations, and
are easily searchable in ANNIC. Querying for an
annotation or a pattern of annotations, we retrieve
all the terms with the annotation.
The ANNIC tool indexes the annotated text and
supports semantic querying. Searching in the
corpus for single or complex patterns of annotations
returns all those strings that are annotated with the
pattern, along with their context and source
document. A
query and a sample result appear in Figure 1,
where the query finds all sequences in which the
first string is annotated with PremiseIndicator,
followed by some tokens, then a string annotated with
Positive sentiment, some tokens, and finally
a string that is annotated as
CameraProperty. The search returned a range of candidate
structures that can be further scrutinised; the query
can be iteratively refined to zero in on other
relevant passages. The example can be taken as part
of a positive justification for buying the camera.
The query language (the language of the
annotations) facilitates complex search for any of the
annotations in the corpus, enabling exploration of the
statements in the corpus.</p>
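        <p>The kind of query described above can be sketched as follows (plain Python, not the ANNIC query language; the annotated sentence is an invented example): the pattern names annotations that must occur in order, separated by a bounded number of arbitrary tokens.</p>
        <preformat>
```python
# An illustrative sketch (plain Python, not the ANNIC query language):
# match an ANNIC-style pattern such as {PremiseIndicator} ... {Positive}
# ... {CameraProperty} over annotated tokens, allowing a bounded gap of
# arbitrary tokens between the matched ones.

def find_pattern(tokens, labels, max_gap=5):
    """tokens: list of (word, annotation-set) pairs; labels: annotations
    that must occur in order. Returns the matched words, or None."""
    for start in range(len(tokens)):
        matched, pos, gap = [], 0, 0
        for word, anns in tokens[start:]:
            if labels[pos] in anns:
                matched.append(word)
                pos += 1
                gap = 0
                if pos == len(labels):
                    return matched
            else:
                gap += 1
                if gap > max_gap:
                    break
    return None

toks = [("because", {"PremiseIndicator"}), ("it", set()), ("takes", set()),
        ("excellent", {"Positive"}), ("pictures", {"CameraProperty"})]
# prints ['because', 'excellent', 'pictures']
print(find_pattern(toks, ["PremiseIndicator", "Positive", "CameraProperty"]))
```
        </preformat>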
      </sec>
      <sec id="sec-2-5">
        <title>2.5 Analysis of Arguments and their Evaluation</title>
        <p>The objective of the tool is to find specific
patterns of terminology in the text that can be used
to instantiate the CAS argumentation scheme both
for and against purchase of a particular model of
camera. We iteratively search the corpus for
properties, instantiate the argumentation scheme, and
identify attacks. Once we have instantiated
arguments in attack relations, we may evaluate the
argumentation framework. Our focus in this paper
is the identification of arguments and attacks from
the source material rather than evaluation. It is
important to emphasise that we provide an analyst’s
support tool, so some degree of judgement is
required.</p>
        <p>From the results of queries on the corpus, we
have identified the following premises bearing on
image quality, where we paraphrase the source
and infer the values from context. Agents are also
left implicit, assuming that a single agent does not
make contradictory statements. The premises
instantiate the CAS in a positive form, where A1 is
an argument for buying the camera; similarly, we
can identify statements and instantiated
argumentation schemes against buying the camera.</p>
        <sec id="sec-2-5-1">
          <title>Argument A1</title>
          <p>P1: The pictures are perfectly exposed.</p>
          <p>P2: The pictures are well-focused.</p>
          <p>V1: These properties promote image quality.
C1: Therefore, you (the reader) should buy
the Canon SX220.</p>
          <p>Searching in the corpus we can find statements
contrary to the premises in A1, constituting an
attack on A1. To defeat these attacks and maintain
A1, we would have to search further in the corpus
for contraries to the attacks. Searching for such
statements and counterstatements is facilitated by
the query tool.
</p>
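          <p>A minimal sketch of this attack-finding step (plain Python; the contrary pairs and statements are invented examples, not output of the tool): a statement attacks A1 if it asserts the sentiment-contrary of one of A1’s premises.</p>
          <preformat>
```python
# A minimal sketch of attack finding (plain Python; the contrary pairs and
# statements are invented examples, not output of the tool): a statement
# attacks an argument if it asserts the sentiment-contrary of a premise.

CONTRARY = {"perfectly exposed": "poorly exposed",
            "well-focused": "blurry"}

A1 = {"premises": ["The pictures are perfectly exposed.",
                   "The pictures are well-focused."],
      "conclusion": "You should buy the Canon SX220."}

def attacks(statement, argument):
    """True if the statement contains the contrary of a premise phrase."""
    for premise in argument["premises"]:
        for phrase, contrary in CONTRARY.items():
            if phrase in premise and contrary in statement:
                return True
    return False
```
          </preformat>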
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 Discussion</title>
      <p>
        The paper presents an outline of an implemented,
semi-automatic, interactive rule-based text
analytic tool to support analysts in identifying
fine-grained, relevant textual passages that can be
reconstructed into argumentation schemes and
attacks. As such, it is not evaluated with respect
to recall and precision
        <xref ref-type="bibr" rid="ref3">(Mitkov, 2003)</xref>
        in
comparison to a gold standard, but with respect to
user facilitation (i.e. analysts’ qualitative
evaluation of using the tool or not), work that
remains to be done. The tool is an advance over
graphically-based argument extraction tools that
rely on the analysts’ unstructured, implicit,
non-operationalised knowledge of discourse indicators
and content
        <xref ref-type="bibr" rid="ref11 ref8 ref2">(van Gelder, 2007; Rowe and Reed,
2008; Liddo and Shum, 2010; Bex et al., 2014)</xref>
        .
      </p>
      <p>
        There are logic programming approaches that
automatically annotate argumentative texts:
        <xref ref-type="bibr" rid="ref6">(Pallotta
and Delmonte, 2011)</xref>
        classify statements
according to rhetorical roles using full sentence parsing
and semantic translation;
        <xref ref-type="bibr" rid="ref9">(Saint-Dizier, 2012)</xref>
        provides a rule-oriented approach to process specific,
highly structured argumentative texts.
        <xref ref-type="bibr" rid="ref4">(Moens et
al., 2007)</xref>
        manually annotates legal texts then
constructs a grammar that is tailored to automatically
annotate the passages. Such rule-oriented
approaches do not use argumentation schemes or
domain models; they do not straightforwardly
provide for complex annotation querying; and they
are stand-alone tools that are not integrated with
other NLP tools.
      </p>
      <p>
        The development of the tool can proceed
modularly, adding argumentation schemes, developing
more articulated domain models, disambiguating
discourse indicators
        <xref ref-type="bibr" rid="ref13">(Webber et al., 2011)</xref>
        ,
introducing auxiliary linguistic indicators such as other
verb classes, and other parts of speech that
distinguish sentence components. The tool will be
applied to more extensive corpora and have output
that is associated with argument graphing tools.
      </p>
      <p>More elaborate query patterns could be executed
to derive more specific results. In general, the
openness and flexibility of the tool provide a
platform for future, detailed solutions to a range of
argumentation related issues.</p>
      <p>The interactive, incremental, semi-automatic
approach taken here is in contrast to
statistical/machine learning approaches. Such
approaches rely on prior creation of gold standard
corpora that are annotated manually and
adjudicated (considering interannotator agreement). The
gold standard corpora are then used to induce a
model that (if successful) annotates corpora
comparably well to the human annotation. For
example, where sentences in a corpus are annotated as
premise or conclusion, the model ought also to
annotate the sentences similarly; in effect, what a
person uses to classify a sentence as premise or
conclusion can be acquired by the computer.
Statistical approaches yield a probability that some
element is classified one way or the other; the
justification, such as found in a rule-based system,
for the classification cannot be given. Moreover,
refinement of results in statistical approaches relies
on enlarging the training data. Importantly, the
rule-based approach outlined here could be used
to support the creation of gold standard corpora
on which statistical models can be trained. Finally,
we are not aware of statistical models that support
the extraction of the fine-grained information that
appears to be required for extracting argument
elements.</p>
      <p>We should emphasise an important aspect of this
tool in relation to its intended use on corpora.
The tool is designed to reconstruct or
construct arguments that are identified in complex,
high volume, fragmentary, and alinearly presented
comments or statements. This is in contrast to
many approaches that, by and large, follow the
structure of arguments within a particular (large
and complex) document, e.g. the BBC’s Moral
Maze (Bex et al., 2014), manuals
        <xref ref-type="bibr" rid="ref9">(Saint-Dizier,
2012)</xref>
        , and legal texts
        <xref ref-type="bibr" rid="ref4">(Moens et al., 2007)</xref>
        . In
addition, the main focus of our tool is not just
the premise-claim relationship, but rich conceptual
patterns that indicate the content of expressions
and are essential in instantiating argumentation
schemes.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref18">
        <mixed-citation>[Atkinson and Bench-Capon2007] Katie Atkinson and Trevor Bench-Capon. 2007. Practical reasoning as presumptive argumentation using action based alternating transition systems. Artificial Intelligence, 171(10-15):855-874.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[Bex et al.2014] Floris Bex, Mark Snaith, John Lawrence, and Chris Reed. 2014. Argublogging: An application for the argument web. J. Web Sem., 25:9-15.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[Cunningham et al.2002] Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02), pages 168-175.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[Dung1995] Phan Minh Dung. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321-358.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[Egly et al.2008] Uwe Egly, Sarah Alice Gaggl, and Stefan Woltran. 2008. Answer-set programming encodings for argumentation frameworks. Argument and Computation, 1(2):147-177.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[Fellbaum1998] Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[Forsythe and Buchanan1993] Diana E. Forsythe and Bruce G. Buchanan. 1993. Knowledge acquisition for expert systems: some pitfalls and suggestions. In Readings in knowledge acquisition and learning: automating the construction and improvement of expert systems, pages 117-124. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[Horn2001] Laurence Horn. 2001. A Natural History of Negation. CSLI Publications.</mixed-citation>
      </ref>
      <ref id="ref1">
        <mixed-citation>
          [Levin1993]
          <string-name>
            <given-names>Beth</given-names>
            <surname>Levin</surname>
          </string-name>
          .
          <year>1993</year>
          .
          <article-title>English Verb Classes and Alternations: A Preliminary Investigation</article-title>
          . University of Chicago Press.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <source>[Liddo and Shum2010] Anna De Liddo and Simon Buckingham Shum</source>
          .
          <year>2010</year>
          .
          <article-title>Cohere: A prototype for contested collective intelligence</article-title>
          .
          <source>In ACM Computer Supported Cooperative Work (CSCW</source>
          <year>2010</year>
          ) - Workshop: Collective Intelligence In Organizations - Toward a Research Agenda, Savannah, Georgia, USA, February.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Mitkov2003] Ruslan Mitkov, editor.
          <source>2003. The Oxford Handbook of Computational Linguistics</source>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Moens et al.2007]
          <string-name>
            <given-names>Marie-Francine</given-names>
            <surname>Moens</surname>
          </string-name>
          , Erik Boiy, Raquel Mochales-Palau, and
          <string-name>
            <given-names>Chris</given-names>
            <surname>Reed</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Automatic detection of arguments in legal texts</article-title>
          .
          <source>In Proceedings of the 11th International Conference on Artificial Intelligence and Law (ICAIL '07)</source>
          , pages
          <fpage>225</fpage>
          -
          <lpage>230</lpage>
          , New York, NY, USA. ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Nielsen2011]
          <string-name>
            <given-names>Finn Årup</given-names>
            <surname>Nielsen</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>A new ANEW: Evaluation of a word list for sentiment analysis in microblogs</article-title>
          .
          <source>CoRR, abs/1103.2903</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Pallotta and Delmonte2011] Vincenzo Pallotta and Rodolfo Delmonte</source>
          .
          <year>2011</year>
          .
          <article-title>Automatic argumentative analysis for interaction mining</article-title>
          .
          <source>Argument and Computation</source>
          ,
          <volume>2</volume>
          (
          <issue>2</issue>
          -3):
          <fpage>77</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Pang and Lee2008]
          <string-name>
            <given-names>Bo</given-names>
            <surname>Pang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Lillian</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Opinion mining and sentiment analysis</article-title>
          .
          <source>Foundations and Trends in Information Retrieval</source>
          ,
          <volume>2</volume>
          (
          <issue>1</issue>
          -2):
          <fpage>1</fpage>
          -
          <lpage>135</lpage>
          , January.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Rowe and Reed2008] Glenn Rowe and Chris Reed</source>
          .
          <year>2008</year>
          .
          <article-title>Argument diagramming: The Araucaria Project</article-title>
          . In Alexandra Okada, Simon Buckingham Shum, and Tony Sherborne, editors,
          <source>Knowledge Cartography: Software Tools and Mapping Techniques</source>
          , pages
          <fpage>163</fpage>
          -
          <lpage>181</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Saint-Dizier2012]
          <string-name>
            <given-names>Patrick</given-names>
            <surname>Saint-Dizier</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Processing natural language arguments with the &lt;TextCoop&gt; platform</article-title>
          .
          <source>Argument &amp; Computation</source>
          ,
          <volume>3</volume>
          (
          <issue>1</issue>
          ):
          <fpage>49</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[Searle and Vanderveken1985] John Searle and Daniel Vanderveken</source>
          .
          <year>1985</year>
          .
          <article-title>Foundations of Illocutionary Logic</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [van Gelder2007]
          <string-name>
            <surname>Tim van Gelder</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>The rationale for Rationale</article-title>
          .
          <source>Law, Probability and Risk</source>
          ,
          <volume>6</volume>
          (
          <issue>1</issue>
          -4):
          <fpage>23</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Walton1996]
          <string-name>
            <given-names>Douglas</given-names>
            <surname>Walton</surname>
          </string-name>
          .
          <year>1996</year>
          .
          <article-title>Argumentation Schemes for Presumptive Reasoning</article-title>
          . Erlbaum, Mahwah, N.J.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Webber et al.2011]
          <string-name>
            <given-names>Bonnie</given-names>
            <surname>Webber</surname>
          </string-name>
          , Markus Egg, and
          <string-name>
            <given-names>Valia</given-names>
            <surname>Kordoni</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Discourse structure and language technology</article-title>
          .
          <source>Natural Language Engineering</source>
          , December. Online first.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>[Wyner and Peters2011] Adam Wyner and Wim Peters</source>
          .
          <year>2011</year>
          .
          <article-title>On rule extraction from regulations</article-title>
          . In Katie Atkinson, editor,
          <source>Legal Knowledge and Information Systems - JURIX 2011: The Twenty-Fourth Annual Conference</source>
          , pages
          <fpage>113</fpage>
          -
          <lpage>122</lpage>
          . IOS Press.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Wyner et al.2012]
          <string-name>
            <given-names>Adam</given-names>
            <surname>Wyner</surname>
          </string-name>
          , Jodi Schneider,
          <string-name>
            <given-names>Katie</given-names>
            <surname>Atkinson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Trevor</given-names>
            <surname>Bench-Capon</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Semiautomated argumentative analysis of online product reviews</article-title>
          .
          <source>In Proceedings of the 4th International Conference on Computational Models of Argument (COMMA</source>
          <year>2012</year>
          ), pages
          <fpage>43</fpage>
          -
          <lpage>50</lpage>
          . IOS Press.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Wyner et al.2013]
          <string-name>
            <given-names>Adam</given-names>
            <surname>Wyner</surname>
          </string-name>
          , Tom van Engers, and
          <string-name>
            <given-names>Anthony</given-names>
            <surname>Hunter</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Working on the argument pipeline: Through flow issues between natural language argument, instantiated arguments, and argumentation frameworks</article-title>
          . In ??, editor,
          <source>Proceedings of the Workshop on Computational Models of Natural Argument</source>
          , volume LNCS, pages ?
          <fpage>?</fpage>
          -?? Springer. To appear.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Wyner et al.2014]
          <string-name>
            <given-names>Adam</given-names>
            <surname>Wyner</surname>
          </string-name>
          , Katie Atkinson, and
          <string-name>
            <given-names>Trevor</given-names>
            <surname>Bench-Capon</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>A functional perspective on argumentation schemes</article-title>
          .
          <source>In Peter McBurney</source>
          ,
          <string-name>
            <given-names>Simon</given-names>
            <surname>Parsons</surname>
          </string-name>
          , and Iyad Rahwan, editors,
          <source>Post-Proceedings of the 9th International Workshop on Argumentation in Multi-Agent Systems (ArgMAS</source>
          <year>2013</year>
          ), pages ?
          <fpage>?</fpage>
          -?? To appear.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>