<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Reinforcement Learning for Argumentation: Describing a PhD research</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sultan</forename><surname>Alahmari</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of York</orgName>
								<address>
									<addrLine>Deramore Lane</addrLine>
									<postCode>YO10 5GH</postCode>
									<settlement>Heslington</settlement>
									<region>York</region>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tommy</forename><surname>Yuan</surname></persName>
							<email>tommy.yuan@york.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of York</orgName>
								<address>
									<addrLine>Deramore Lane</addrLine>
									<postCode>YO10 5GH</postCode>
									<settlement>Heslington</settlement>
									<region>York</region>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Daniel</forename><surname>Kudenko</surname></persName>
							<email>daniel.kudenko@york.ac.uk</email>
							<affiliation key="aff2">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of York</orgName>
								<address>
									<addrLine>Deramore Lane</addrLine>
									<postCode>YO10 5GH</postCode>
									<settlement>Heslington</settlement>
									<region>York</region>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Reinforcement Learning for Argumentation: Describing a PhD research</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">98A97734CA08943D29EC90581B449CA3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T04:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">OVERVIEW</head><p>Artificial intelligence (AI) is increasingly studied in many fields, such as philosophy, law and decision making. One approach to AI is the use of agent and multi-agent systems. Agents are a key element for building complex, large-scale distributed systems <ref type="bibr" target="#b8">[9]</ref>. In a multi-agent system, each agent interacts with the environment and communicates with other agents in order to achieve its designated goal. Communication allows agents to share and exchange information, and to cooperate and coordinate with one another towards a common goal.</p><p>Argumentation is a form of communication between agents: a process that attempts to reach agreement about what to believe. Research in argumentation and dialogue systems has grown over the past decade <ref type="bibr" target="#b22">[23]</ref>. An agent participating in a dialogue needs sophisticated dialogue strategies in order to make high-quality dialogue contributions. A review of the state-of-the-art literature on computerised dialogue systems (e.g. <ref type="bibr" target="#b20">[21]</ref>; <ref type="bibr" target="#b21">[22]</ref>) shows that their dialogue strategies (i.e. strategic heuristics) are hardwired into the computational agent. One of the main issues with this is that an agent may be incapable of dealing with new dialogue situations that have not been coded for, and anticipating every situation is impossible given the dynamic nature of argumentation. It would be ideal for an agent to search for an optimal strategy by itself, e.g. via trial and error, so that the agent with the best strategy wins the argument <ref type="bibr" target="#b7">[8]</ref>.</p><p>Machine learning has an important role to play in meeting these challenges. Rather than relying on hard-coded strategies, agents can learn dialogue strategies through exploration (trial and error). 
It is believed that learning can make agents more flexible in adapting to new environments and new dialogue situations. One popular machine learning approach for learning agents is reinforcement learning (RL).</p><p>Reinforcement learning is concerned with how to map an action to each state by interacting with the environment and observing the state change <ref type="bibr" target="#b14">[15]</ref>. Sutton and Barto <ref type="bibr" target="#b14">[15]</ref> define reinforcement learning as an agent learning what to do, i.e. how to connect each situation with an action, so as to maximise the cumulative reward. The learner is not told which action to take; rather, it must discover a policy that yields the maximum cumulative reward by trying actions out. In reinforcement learning, the agent interacts with the environment by taking an action and receiving a reward for the action taken, as shown in Figure <ref type="figure">1</ref> (reinforcement learning agent-environment interaction). To make an agent learn to argue, there is a need to identify the states, actions, environment and rewards. In this research, abstract argumentation systems (AAS) <ref type="bibr" target="#b4">[5]</ref> are initially used to represent the argumentation, for the following reasons:</p><p>(1) They can represent informal human reasoning in a way that a computer can calculate with; in this way, argumentation bridges the gap between human and machine reasoning <ref type="bibr" target="#b10">[11]</ref>. (2) They make it easier to compute acceptable arguments in order to evaluate various argumentation semantics, e.g. the grounded extension. (3) They provide a great opportunity for the agent to explore the relationships between arguments. (4) They are a powerful method for solving problems, since they can easily be implemented in logic programming <ref type="bibr" target="#b4">[5]</ref>. 
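To make the grounded extension concrete, the following sketch (our illustration, not part of Argumento+; names are ours) computes it for a Dung-style framework as the least fixed point of the characteristic function:

```python
# Illustrative sketch: grounded extension of a Dung abstract argumentation
# framework, computed as the least fixed point of the characteristic
# function F(S) = {a | every attacker of a is attacked by some member of S}.
def grounded_extension(arguments, attacks):
    """arguments: iterable of labels; attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(in_set):
        # a is acceptable w.r.t. in_set if in_set counter-attacks every
        # attacker of a (trivially true when a has no attackers at all).
        return {
            a for a in arguments
            if all(any((d, b) in attacks for d in in_set)
                   for b in attackers_of[a])
        }

    current = set()
    while True:                 # iterate F starting from the empty set
        nxt = defended(current)
        if nxt == current:      # fixed point reached
            return current
        current = nxt

# a attacks b, b attacks c: a is unattacked, and a defends c against b.
print(sorted(grounded_extension({"a", "b", "c"},
                                {("a", "b"), ("b", "c")})))  # ['a', 'c']
```

The iteration needs at most one round per argument, so the sketch terminates on any finite framework; for mutual attacks with no unattacked argument it returns the empty set.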
The classical state representation in the literature (e.g. <ref type="bibr" target="#b18">[19]</ref>; <ref type="bibr" target="#b3">[4]</ref>) represents states as nodes in the argumentation graph and actions as the attack relations between arguments.</p><p>The main objective of our research is to investigate whether a reinforcement learning agent can be used to create an argumentation AI with improved performance and efficiency, comparable to state-of-the-art systems. Performance relates to how well the agent learns over time; it can be measured, for instance, by whether argument games are won or lost, or by how many of the learning agent's arguments are accepted when arguing against agents with heuristic strategies. Efficiency relates to whether the agent can learn within limited time; the aim is to find out whether the agent can learn rapidly. It should also be ensured that the agent obtains enough knowledge from the environment to find, by an efficient method, an optimal decision for each state <ref type="bibr" target="#b16">[17]</ref>.</p><p>In light of this hypothesis, the following steps will be taken:</p><p>(1) Initially, a basic abstract argument game model is used because of the simplicity with which arguments can be implemented in it; this makes it possible to investigate how reinforcement learning can be applied to a simple dialogue scenario. (2) Evaluation of an argumentation setting against a human or another AI agent by observing learning performance over time. (3) Investigation of suitable means for reinforcement learning in a complicated dialogue scenario, studying the results in order to generalise the RL method. A complicated dialogue scenario involves more move types, e.g. 
questions, challenges, assertions and withdrawals, and moves from the abstract argument level to the propositional level.</p><p>This work will also investigate other scenarios such as backtracking ( <ref type="bibr" target="#b13">[14]</ref>; <ref type="bibr" target="#b15">[16]</ref>), argument content and the weights of individual arguments, amongst others <ref type="bibr" target="#b5">[6]</ref>. Additionally, challenging issues such as state representation <ref type="bibr" target="#b0">[1]</ref>, as well as the reward function, will also be explored.</p><p>To test the hypothesis, we have built argumentation software that facilitates experiments in which a reinforcement learning agent argues against different agents. A software testbed, Argumento+, named after its predecessor Argumento as reported in <ref type="bibr" target="#b23">[24]</ref>, has been built in the Java programming language. Argumento+ contains the RL agent as well as three other agents, namely a random agent, a maximum probability utility agent and a minimum probability utility agent, for the sake of evaluation. The agents play abstract argument games. The RL agent plays games against them to maximise its cumulative reward by winning more games. If the RL agent wins a game, it receives a reward based on the number of acceptable arguments, i.e. the grounded extension. We consider the grounded extension because it contains arguments that are beyond doubt in comparison with other arguments <ref type="bibr" target="#b18">[19]</ref>, and which are consequently more acceptable.</p><p>We have performed an initial experiment to investigate whether the RL agent learns to argue against baseline agents <ref type="bibr" target="#b0">[1]</ref>. The RL agent adopts a commonly used RL method, namely the Q-learning algorithm. The aim of Q-learning is to allow an agent to learn through experience and to map each state to an action by choosing the action with the maximum value in the Q-table, which is updated after each episode. 
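As an illustration of the tabular Q-learning scheme just described (a Python sketch rather than the Java of Argumento+; the hyperparameter values and the state/action encoding are our assumptions, not taken from the experiments):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # assumed learning hyperparameters

q_table = defaultdict(float)             # (state, action) -> estimated value

def choose_action(state, legal_moves):
    """Epsilon-greedy selection over the legal moves in the argument game."""
    if random.random() < EPSILON:
        return random.choice(legal_moves)                        # explore
    return max(legal_moves, key=lambda m: q_table[(state, m)])   # exploit

def q_update(state, action, reward, next_state, next_moves):
    """One Q-learning backup: move the estimate toward
    reward + gamma * (best value reachable from the next state)."""
    best_next = max((q_table[(next_state, m)] for m in next_moves), default=0.0)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

# On a terminal win, the delayed reward could be the size of the grounded
# extension, as described in the text (states/moves here are hypothetical).
q_update("s0", "attack_b", reward=2.0, next_state="terminal", next_moves=[])
```

With an empty table, this single backup sets the entry for `("s0", "attack_b")` to ALPHA * reward = 0.2; repeated episodes propagate the delayed win reward back through earlier moves.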
The initial experiment and its evaluation generally encourage the adoption of a reinforcement learning agent in argumentation with a long-term delayed reward, here given by grounded extensions <ref type="bibr" target="#b0">[1]</ref>.</p><p>In the future, this work will attempt to improve the RL agent's performance by building on the initial experimental results. The state representation of the arguments still needs to be more sophisticated in order to make each state unique <ref type="bibr" target="#b0">[1]</ref>, which would make it easier for the agent to distinguish between states. Although an initial suggestion was to make the state a combination of the current state and the previous state, this still does not identify each state uniquely. To resolve this issue, it will be worth investigating representing each state as: (levelOfTree, agentID, currentState, previousState)</p><p>Backtracking ( <ref type="bibr" target="#b13">[14]</ref>; <ref type="bibr" target="#b15">[16]</ref>) will also be considered, to improve the simple argument game by developing the game rules in <ref type="bibr" target="#b17">[18]</ref>. Moreover, to make the game more competitive and effective, it is important for the agent to take the opponent's strategy into account <ref type="bibr" target="#b6">[7]</ref>. Hence, the learning agent needs to consider how to learn to argue with the opponent by expanding its knowledge base with new arguments. In addition, in complex argumentation scenarios, we need to consider moving from the high-level abstraction to the argument contents, using propositional logic. Weighted arguments will also be considered in this research, since some arguments are more important than others. We will choose a suitable argument model for the complicated scenario. 
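The four-component state (levelOfTree, agentID, currentState, previousState) discussed for future work could be encoded as a simple immutable record; the sketch below is our illustration, with assumed field types:

```python
from typing import NamedTuple

class DialogueState(NamedTuple):
    level_of_tree: int    # depth of the current node in the dialogue tree
    agent_id: int         # identifier of the agent that moved last
    current_arg: str      # argument put forward at the current state
    previous_arg: str     # argument put forward at the previous state

# The same (current, previous) argument pair occurring at different tree
# depths now yields distinct, hashable states, so each indexes its own
# Q-table entry, whereas a bare (current, previous) pair would collide.
s1 = DialogueState(2, 0, "b", "a")
s2 = DialogueState(4, 0, "b", "a")
print(s1 != s2)  # True
```

Because `NamedTuple` instances are hashable, such states can be used directly as Q-table keys in a tabular learner.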
There are many models; for instance, Prakken's dialogue game of persuasion with dispute <ref type="bibr" target="#b12">[13]</ref>, Bench-Capon's TDG dialogue game ( <ref type="bibr" target="#b1">[2]</ref>; <ref type="bibr" target="#b2">[3]</ref>), DC by Mackenzie <ref type="bibr" target="#b9">[10]</ref>, the dialogue game utilised by Moore in <ref type="bibr" target="#b11">[12]</ref> and the DE system (Yuan et al. <ref type="bibr" target="#b19">[20]</ref>); all of these models will be critically reviewed.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="18" xml:id="foot_0">18th Workshop on Computational Models of Natural Argument, Floris Bex, Floriana Grasso, Nancy Green (eds), 16th July 2017, London, UK</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Reinforcement learning for abstract argumentation: Q-learning approach</title>
		<author>
			<persName><forename type="first">Sultan</forename><surname>Alahmari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tommy</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Kudenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adaptive and Learning Agents workshop</title>
				<meeting><address><addrLine>AAMAS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Specification and implementation of Toulmin dialogue game</title>
		<author>
			<persName><forename type="first">Trevor</forename><forename type="middle">J M</forename><surname>Bench-Capon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of JURIX</title>
				<meeting>JURIX</meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="5" to="20" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A method for the computational modelling of dialectical argument with dialogue games</title>
		<author>
			<persName><forename type="first">Trevor</forename><forename type="middle">J M</forename><surname>Bench-Capon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Geldard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paul</forename><forename type="middle">H</forename><surname>Leng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence and Law</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="233" to="254" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Strategic dialogue management via deep reinforcement learning</title>
		<author>
			<persName><forename type="first">Heriberto</forename><surname>Cuayáhuitl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simon</forename><surname>Keizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oliver</forename><surname>Lemon</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1511.08099</idno>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</title>
		<author>
			<persName><forename type="first">Phan</forename><forename type="middle">Minh</forename><surname>Dung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial intelligence</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="321" to="357" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Weighted argument systems: Basic definitions, algorithms, and complexity results</title>
		<author>
			<persName><forename type="first">Paul</forename><forename type="middle">E</forename><surname>Dunne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anthony</forename><surname>Hunter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>McBurney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simon</forename><surname>Parsons</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Wooldridge</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">175</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="457" to="486" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Learning Opponent Strategies through First Order Induction</title>
		<author>
			<persName><forename type="first">Katie</forename><surname>Genter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Santiago</forename><surname>Ontañón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ashwin</forename><surname>Ram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">FLAIRS Conference</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1" to="2" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A platform for the evaluation of automated argumentation strategies</title>
		<author>
			<persName><forename type="first">Piotr</forename><forename type="middle">S</forename><surname>Kośmicki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Rough Sets and Current Trends in Computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="494" to="503" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">Ryszard</forename><surname>Kowalczyk</surname></persName>
		</author>
		<ptr target="https://www.swinburne.edu.au/ict/success/research-projects-and-grants/intelligent-agent/" />
		<title level="m">Intelligent Agent Technology Research</title>
				<imprint>
			<date type="published" when="2014-06">2014 [accessed 06-April-2017]</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Question-begging in non-cumulative systems</title>
		<author>
			<persName><forename type="first">Jim</forename><forename type="middle">D</forename><surname>Mackenzie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of philosophical logic</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="117" to="133" />
			<date type="published" when="1979">1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The added value of argumentation</title>
		<author>
			<persName><forename type="first">Sanjay</forename><surname>Modgil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesca</forename><surname>Toni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Floris</forename><surname>Bex</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ivan</forename><surname>Bratko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carlos</forename><forename type="middle">I</forename><surname>Chesnevar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wolfgang</forename><surname>Dvořák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marcelo</forename><forename type="middle">A</forename><surname>Falappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiuyi</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sarah</forename><forename type="middle">Alice</forename><surname>Gaggl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alejandro</forename><forename type="middle">J</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Agreement technologies</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="357" to="403" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Dialogue game theory for intelligent tutoring systems</title>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">John</forename><surname>Moore</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1993">1993</date>
		</imprint>
		<respStmt>
			<orgName>Leeds Metropolitan University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. Dissertation</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Relating protocols for dynamic dispute with logics for defeasible argumentation</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthese</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="187" to="219" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
		<ptr target="http://www.staff.science.uu.nl/∼prakk101/al/chongqing10.html" />
		<title level="m">Argumentation Logics: Games for abstract argumentation</title>
				<imprint>
			<date type="published" when="2010-01">2010 [accessed 01-April-2017]</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Reinforcement learning: An introduction</title>
		<author>
			<persName><forename type="first">Richard</forename><forename type="middle">S</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><forename type="middle">G</forename><surname>Barto</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>MIT press Cambridge</publisher>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Credulous and sceptical argument games for preferred semantics</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Gerard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henry</forename><surname>Vreeswik</surname></persName>
		</author>
		<author>
			<persName><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Workshop on Logics in Artificial Intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="239" to="253" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Efficient Exploration for Reinforcement Learning</title>
		<author>
			<persName><forename type="first">Eric</forename><surname>Wiewiora</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Citeseer</publisher>
		</imprint>
	</monogr>
	<note type="report_type">Ph.D. Dissertation</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">An introduction to multiagent systems</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">An introduction to multiagent systems</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Computational Agents as a Test-Bed to Study the Philosophical Dialogue Model &quot;DE&quot;: A Development of Mackenzie&apos;s DC</title>
		<author>
			<persName><forename type="first">Tangming</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alec</forename><surname>Grierson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Informal Logic</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page">3</biblScope>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A human-computer debating system prototype and its dialogue strategies</title>
		<author>
			<persName><forename type="first">Tangming</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alec</forename><surname>Grierson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="133" to="156" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A human-computer dialogue system for educational debate: A computational dialectics approach</title>
		<author>
			<persName><forename type="first">Tangming</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alec</forename><surname>Grierson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Artificial Intelligence in Education</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="26" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Towards an arguing agents competition: Building on argumento</title>
		<author>
			<persName><forename type="first">Tangming</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jenny</forename><surname>Schulze</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><surname>Devereux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chris</forename><surname>Reed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IJCAI2008 Workshop on Computational Models of Natural Argument</title>
				<meeting>IJCAI2008 Workshop on Computational Models of Natural Argument</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">A computer game for abstract argumentation</title>
		<author>
			<persName><forename type="first">Tangming</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viðar</forename><surname>Svansson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alec</forename><surname>Grierson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th Workshop on Computational Models of Natural Argument (CMNA07)</title>
				<meeting>the 7th Workshop on Computational Models of Natural Argument (CMNA07)</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
