<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Extracting Argumentative Dialogues from the Neural Network that Computes the Dungean Argumentation Semantics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yoshiaki</forename><surname>Gotou</surname></persName>
							<email>gotou@cs.ie.niigata-u.ac.jp</email>
							<affiliation key="aff0">
								<orgName type="institution">Niigata University</orgName>
								<address>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Takeshi</forename><surname>Hagiwara</surname></persName>
							<email>hagiwara@ie.niigata-u.ac.jp</email>
							<affiliation key="aff1">
								<orgName type="institution">Niigata University</orgName>
								<address>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hajime</forename><surname>Sawamura</surname></persName>
							<email>sawamura@ie.niigata-u.ac.jp</email>
							<affiliation key="aff2">
								<orgName type="institution">Niigata University</orgName>
								<address>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Extracting Argumentative Dialogues from the Neural Network that Computes the Dungean Argumentation Semantics</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9CACBFAD8ED53943B523DA879D8D5246</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T22:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Argumentation is a leading principle, both foundationally and functionally, for agent-oriented computing, where reasoning accompanied by communication plays an essential role in agent interaction. We constructed a simple but versatile neural network for neural network argumentation that can decide which argumentation semantics (admissible, stable, semi-stable, preferred, complete, or grounded semantics) a given set of arguments falls into, and can compute argumentation semantics via this checking. In this paper, we are concerned with the opposite direction, from neural network computation to symbolic argumentation/dialogue. We deal with the question of how the various argumentation semantics can have dialectical proof theories, and describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under the various argumentation semantics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Much attention and effort have been devoted to symbolic argumentation <ref type="bibr" target="#b15">[Rahwan and Simari, 2009]</ref><ref type="bibr" target="#b14">[Prakken and Vreeswijk, 2002]</ref><ref type="bibr" target="#b0">[Besnard and Doutre, 2004]</ref> and to its application to agent-oriented computing. We think that argumentation can be a leading principle, both foundationally and functionally, for agent-oriented computing, where reasoning accompanied by communication plays an essential role in agent interaction. Dung's abstract argumentation framework and argumentation semantics <ref type="bibr" target="#b2">[Dung, 1995]</ref> have been among the most influential works in the area and community of computational argumentation as well as logic programming and non-monotonic reasoning.</p><p>In 2005, A. Garcez et al. proposed a novel approach to argumentation, called neural network argumentation <ref type="bibr" target="#b5">[d'Avila Garcez et al., 2005]</ref>. In the papers <ref type="bibr" target="#b11">[Makiguchi and Sawamura, 2007a]</ref><ref type="bibr" target="#b12">[Makiguchi and Sawamura, 2007b]</ref>, we substantially developed their initial ideas on neural network argumentation in various directions and in a mathematically more rigorous manner. More specifically, we addressed the following questions, which they overlooked in their paper but which deserve much attention, since they are beneficial for understanding and characterizing the computational power and outcome of neural network argumentation from the perspective of the interplay between neural network argumentation and symbolic argumentation.</p><p>1. Can the neural network argumentation algorithm deal with self-defeating or other pathological arguments?</p><p>2. Can the argument statuses of the neural network argumentation correspond to the well-known statuses in symbolic argumentation frameworks such as that of <ref type="bibr" target="#b14">[Prakken and Vreeswijk, 2002]</ref>?</p><p>3. Can the neural network argumentation compute the fixpoint semantics for argumentation?</p><p>4. Can symbolic argumentative dialogues be extracted from the neural network argumentation?</p><p>The positive solutions to these questions helped us understand the relationship between symbolic and neural network argumentation in depth, and further promoted the syncretic approach of symbolism and connectionism in the field of computational argumentation <ref type="bibr" target="#b11">[Makiguchi and Sawamura, 2007a]</ref><ref type="bibr" target="#b12">[Makiguchi and Sawamura, 2007b]</ref>. Those works, however, paid attention only to the grounded semantics in examining the relationship between symbolic and neural network argumentation.</p><p>Subsequently, we constructed a simple but versatile neural network for neural network argumentation that can decide which argumentation semantics (admissible, stable, semi-stable, preferred, complete, and grounded semantics) <ref type="bibr" target="#b2">[Dung, 1995]</ref><ref type="bibr" target="#b4">[Caminada, 2006]</ref> a given set of arguments falls into, and can compute argumentation semantics via this checking <ref type="bibr" target="#b7">[Gotou, 2010]</ref>. In this paper, we are concerned with the opposite direction, from neural network computation to symbolic argumentation/dialogue. We deal with the question of how the various argumentation semantics can have dialectical proof theories, and describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under the various argumentation semantics.</p><p>The results illustrate that there can exist an equal bidirectional relationship between connectionism and symbolism in the area of computational argumentation. They also point toward a fusion or hybridization of neural network computation and symbolic computation <ref type="bibr" target="#b6">[d'Avila Garcez et al., 2009]</ref><ref type="bibr" target="#b10">[Levine and Aparicio, 1994]</ref><ref type="bibr" target="#b9">[Jagota et al., 1999]</ref>.</p><p>The paper is organized as follows. In the next section, we explicate our basic ideas on neural network checking of argumentation semantics by tracing an illustrative example. In Section 3, with our new construction of a neural network for argumentation, we develop a dialectical proof theory induced by the neural network argumentation for each argumentation semantics by Dung <ref type="bibr" target="#b2">[Dung, 1995]</ref>. In Section 4, we describe related work, although little work is closely related to ours apart from Garcez et al.'s original study and our own earlier papers. The final section discusses the major contributions of the paper and some future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Basic Ideas on the Neural Argumentation</head><p>Due to space limitations, we will not describe the technical details of constructing a neural network for argumentation and its computing method in this paper (see <ref type="bibr" target="#b7">[Gotou, 2010]</ref> for them). Instead, we illustrate our basic ideas by using a simple argumentation example and following a neural network computation trace for it. We assume readers are familiar with the Dungean semantics, that is, the admissible, stable, semi-stable, preferred, complete, and grounded semantics <ref type="bibr" target="#b2">[Dung, 1995]</ref><ref type="bibr" target="#b4">[Caminada, 2006]</ref>.</p><p>Let us consider the argumentation network on the left side of Figure <ref type="figure" target="#fig_0">1</ref>, which is a graphic presentation of the argumentation framework AF =&lt; AR, attacks &gt;, where AR = {i, k, j} and attacks = {(i, k), (k, i), (j, k)}. According to the Dungean semantics <ref type="bibr" target="#b2">[Dung, 1995]</ref><ref type="bibr" target="#b4">[Caminada, 2006]</ref>, the argumentation semantics for AF are determined as follows: Admissible sets = {∅, {i}, {j}, {i, j}}, Complete extensions = {{i, j}}, Preferred extensions = {{i, j}}, Semi-stable extensions = {{i, j}}, Stable extensions = {{i, j}}, and Grounded extension = {{i, j}}.</p></div>
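<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a cross-check, these extensions can be reproduced directly from Dung's definitions by brute-force enumeration. The following sketch is illustrative Python of our own; it works from the symbolic definitions, not from the neural network, and all function names are ours:</p><p><code xml:space="preserve">
from itertools import combinations

AR = {"i", "k", "j"}
attacks = {("i", "k"), ("k", "i"), ("j", "k")}

def atta(S, X):                     # S attacks X
    return any((Y, X) in attacks for Y in S)

def conflict_free(S):
    return not any((X, Y) in attacks for X in S for Y in S)

def defends(S, X):                  # X is acceptable w.r.t. S
    return all(atta(S, Y) for (Y, Z) in attacks if Z == X)

def F(S):                           # Dung's characteristic function
    return {X for X in AR if defends(S, X)}

def plus(S):                        # S+ = the arguments attacked by S
    return {X for X in AR if atta(S, X)}

subsets = [set(c) for r in range(len(AR) + 1)
           for c in combinations(sorted(AR), r)]
admissible = [S for S in subsets if conflict_free(S) and S &lt;= F(S)]
complete = [S for S in admissible if S == F(S)]
preferred = [S for S in admissible if not any(S &lt; T for T in admissible)]
stable = [S for S in subsets if conflict_free(S)
          and all(atta(S, X) for X in AR - S)]
semistable = [S for S in complete
              if not any(S | plus(S) &lt; T | plus(T) for T in complete)]
grounded = set()
while F(grounded) != grounded:      # iterate F from the empty set
    grounded = F(grounded)

print(sorted(map(sorted, admissible)))    # [[], ['i'], ['i', 'j'], ['j']]
print(complete, preferred, stable)        # each is [{'i', 'j'}]
print(sorted(grounded))                   # ['i', 'j']
</code></p></div>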
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Neural network architecture for argumentation</head><p>In the Dungean semantics, the notions of 'attack', 'defend (acceptable)' and 'conflict-free' play the most important role in constructing the various argumentation semantics. This is true in our neural network argumentation as well. Let AF =&lt; AR, attacks &gt; be as above, and let S be the subset of AR to be examined. The argumentation network on the left side of Figure <ref type="figure" target="#fig_0">1</ref> is first translated into the neural network on the right side of Figure <ref type="figure" target="#fig_0">1</ref>. The network architecture then consists of the following constituents:</p><p>• A double hidden layer network: It has the following four layers: input layer, first hidden layer, second hidden layer and output layer, with one neuron per argument in each layer, such as α i , α h1 , α h2 and α o for the argument α.</p><p>• A recurrent neural network (for judging the grounded extension): The double hidden layer network on the right side of Figure <ref type="figure" target="#fig_0">1</ref> is piled up until the input and output layers converge (stable state), as in Figure <ref type="figure" target="#fig_2">2</ref>. The symbol τ represents the pile number (τ ≥ 0), which amounts to the turn number of the input-output cycles of the neural network. In the stable state, we set τ = converging. Then, S τ=n stands for the set of arguments at τ = n.</p><p>• A feedforward neural network (for judging everything except the grounded extension): When we compute any argumentation semantics except the grounded extension with the recurrent neural network, it surely converges at τ = 1. Hence, the first output vector equals the second output vector, and we judge the argumentation semantics by using only the first input vector and the converged output vector. As a result, we can regard the recurrent neural network as a feedforward neural network except when judging the grounded extension.</p><p>• The vectors of the neural network: The initial input vector for the neural network is a list consisting of the values 0 and a, which represents the membership of the arguments in the set to be examined. For example, it is [a, 0, 0] for S = S τ=0 = {i} ⊆ AR. The output vectors from each layer take only the values "-a", "0", "a" or "-b".<ref type="foot" target="#foot_0">1</ref> Their intuitive meanings are as follows:</p><p>Output layer - "a" in the output vector from the output layer represents membership in S′ τ = {X ∈ AR | defends(S τ , X)}<ref type="foot" target="#foot_1">2</ref>, where the argument is moreover not attacked by S′ τ . - "-a" in the output vector from the output layer represents membership in S′+ τ .<ref type="foot" target="#foot_2">3</ref> - "0" in the output vector from the output layer represents that the argument belongs to neither S′ τ nor S′+ τ .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Second hidden layer</head><p>- "a" in the output vector from the second hidden layer represents membership in S′ τ , where the argument is not attacked by S′ τ .</p><p>- "0" in the output vector from the second hidden layer represents that the argument is not in S′ τ , or is attacked by S′ τ .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>First hidden layer</head><p>- "a" in the output vector from the first hidden layer represents membership in S τ , where the argument is not attacked by S τ .</p><p>- "-b" in the output vector from the first hidden layer represents membership in S+ τ .</p><p>- "0" in the output vector from the first hidden layer represents the remaining cases.</p><p>Input layer - "a" in the output vector from the input layer represents membership in S τ . - "0" in the output vector from the input layer represents that the argument does not belong to S.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A trace of the neural network</head><p>Stage 1. Operation of the input layer at τ = 0</p><p>S τ=0 = S = {i}. Hence, [a, 0, 0] is given to the input layer of the neural network in Figure <ref type="figure" target="#fig_0">1</ref>. Each input neuron computes its output value by its activation function (see the graph of the activation function, an identity function, on the right side of the input layer in Figure <ref type="figure" target="#fig_2">2</ref>). The activation function makes each input neuron output its input value unchanged.</p><p>In this computation, the input layer judges S τ=0 = {i} and inputs a² to i h1 through the connection between i i and i h1 , whose weight is a. At the same time, the input layer inputs −ab to k h1 through the connection between i i and k h1 , whose weight is −b, so as to make the first hidden layer know that i ∈ S τ=0 attacks k (in symbols, attacks(i, k)). Since the output values of k i and j i are 0, they input 0 to the other first hidden neurons.</p><p>In summary, after the input layer receives the input vector [a, 0, 0], it gives the first hidden layer the vector [a · a + 0 · (−b), a · (−b) + 0 · a + 0 · (−b), 0 · a] = [a², −ab, 0].</p><p>Stage 2. Operation of the first hidden layer at τ = 0</p><p>Now, the first hidden layer receives the vector [a², −ab, 0] from the input layer. Each activation function of i h1 , k h1 and j h1 is a step function, as shown on the right side of the first hidden layer in Figure <ref type="figure" target="#fig_2">2</ref>. The activation function categorizes the values of the vector received from the input layer into three values, as if it understood the state of each argument. Now, the following inequalities hold: a² ≥ a², −ab ≤ −b, and −b ≤ 0 ≤ a². According to the activation function, the first hidden layer therefore outputs the vector [a, −b, 0].</p><p>Next, the first hidden layer inputs a² + b into the second hidden neuron i h2 through the connections between i h1 and i h2 , whose weight is a, and between k h1 and i h2 , whose weight is −1, so that the second hidden layer can know attacks(k, i) with i ∈ S τ=0 . 
At the same time, the first hidden layer inputs −a − ab into k h2 through the connections between i h1 and k h2 , whose weight is −1, and between k h1 and k h2 , whose weight is a, so that the second hidden layer can know attacks(i, k) with k ∈ S+ τ=0 ; and it inputs 0 into j h2 so that the second hidden layer can know that the argument j is not attacked by any argument, with j ∉ S τ=0 .</p><p>In summary, after the first hidden layer receives the vector [a², −ab, 0], it passes the vector [a² + b, −a − ab, 0] to the second hidden neurons.</p><p>Stage 3. Operation of the second hidden layer at τ = 0</p><p>The second hidden layer receives the vector [a² + b, −a − ab, 0] from the first hidden layer. Each activation function of i h2 , k h2 and j h2 is a step function, as shown on the right side of the second hidden layer in Figure <ref type="figure" target="#fig_2">2</ref>, with thresholds θ i = a² + b, θ k = a² + 2b and θ j = 0, respectively. These thresholds are determined by the ways the arguments are attacked, as follows:</p><p>• If an argument X can defend X by itself alone (in Figure <ref type="figure" target="#fig_0">1</ref>, such an X is i, since defends({i}, i)), then the threshold of X h2 (θ X ) is a² + tb, where t is the number of arguments bilaterally attacking X.</p><p>• If an argument X cannot defend X by itself alone and is both bilaterally and unilaterally attacked by other arguments (in Figure <ref type="figure" target="#fig_0">1</ref>, such an X is k, since ¬defends({k}, k), attacks(j, k) and attacks(i, k)), then the threshold of X h2 (θ X ) is a² + b(s + t), where s (resp. t) is the number of arguments unilaterally (resp. bilaterally) attacking X. Note that s = t = 1 for the argument k in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>• If an argument X is not attacked by any other argument (in Figure <ref type="figure" target="#fig_0">1</ref>, such an X is j), then the threshold of X h2 (θ X ) is 0.</p><p>• If an argument X cannot defend X by itself alone and is only unilaterally attacked by other arguments, then the threshold of X h2 (θ X ) is bs, where s is the number of arguments unilaterally attacking X.</p><p>By these thresholds and their activation functions (step functions), if S defends X then X h2 outputs a; otherwise, X h2 outputs 0 in the second hidden layer. As a result, the second hidden layer judges either X ∈ S′ τ or X ∉ S′ τ by the two output values (a and 0). In this way, the output vector of the second hidden layer is [a, 0, a]. This vector means that the second hidden layer judges that the arguments i and j are defended by S τ=0 , resulting in S′ τ=0 = {i, j}. Next, the second hidden layer inputs a² into the output neurons i o and j o through the connections between i h2 and i o and between j h2 and j o , whose weights are a, so that the output layer can know i, j ∈ S′ τ=0 . At the same time, the second hidden layer inputs −2a into k o through the connections between i h2 and k o and between j h2 and k o , whose weights are −1, so that the output layer can know attacks(i, k) and attacks(j, k), with k ∈ S′+ τ=0 . Furthermore, it should be noted that another role of the second hidden layer lies in guaranteeing that S′ τ is conflict-free<ref type="foot" target="#foot_3">4</ref> . This actually holds, since the activation function of the second hidden layer makes X h2 output 0 for any argument X attacked by S τ . 
The conflict-freeness is important since it is another notion for characterizing the Dungean semantics.</p><p>In summary, after the second hidden layer receives the vector [a² + b, −a − ab, 0], it passes the vector [a², −2a, a²] to the output neurons.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stage 4. Operation of the output layer at τ = 0</head><p>The output layer now receives the vector [a², −2a, a²] from the second hidden layer. Each neuron in the output layer has an activation function, as shown on the right side of the output layer in Figure <ref type="figure" target="#fig_2">2</ref>.</p><p>This activation function makes the output layer interpret any positive sum of input values into the output neuron X o as X ∈ S′ τ , any negative sum as X ∈ S′+ τ , and the value 0 as X ∉ S′ τ and X ∉ S′+ τ . As a result, the output layer outputs the vector [a, −a, a].</p><p>Summarizing the computation at τ = 0, the neural network received the vector [a, 0, 0] at the input layer and output [a, −a, a] from the output layer. This output vector means that the second hidden layer judged S′ τ=0 = {i, j} and guaranteed its conflict-freeness. With this information passed from the second hidden layer, the output layer judged S′+ τ=0 = {k}.</p><p>Stage 5. Inputting the output vector at τ = 0 to the input layer at τ = 1 (shift from τ = 0 to τ = 1)</p><p>At τ = 0, the neural network computed S′ τ=0 = {i, j} and S′+ τ=0 = {k}. We continue the computation recurrently by connecting the output layer to the input layer of the same neural network, setting the first output vector as the second input vector. Thus, at τ = 1, the input layer starts its operation with the input vector [a, −a, a]. We omit the remaining operations from here on, since they proceed in a similar manner.</p></div>
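<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the trace concrete, here is a small Python sketch that simulates one input-output cycle of the network. This is our own reconstruction: the weight scheme and activation functions are read off the trace in Figure <ref type="figure" target="#fig_2">2</ref>, the thresholds follow the rules of Stage 3, and the concrete values a = 2 and b = 5 are our own choice (they satisfy √b &gt; a &gt; 0 of footnote 1, and also a ≥ 1, so that −ab ≤ −b as asserted in Stage 2):</p><p><code xml:space="preserve">
AR = ["i", "k", "j"]                          # neuron order [i, k, j], as in Figure 2
attacks = {("i", "k"), ("k", "i"), ("j", "k")}
a, b = 2.0, 5.0                               # our choice of the constants of footnote 1

def attackers(X):
    return {Y for (Y, Z) in attacks if Z == X}

def w1(src, dst):                             # input layer -> first hidden layer
    if src == dst:
        return a
    return -b if (src, dst) in attacks else 0.0

def w23(src, dst):                            # h1 -> h2 and h2 -> output share one scheme
    if src == dst:
        return a
    return -1.0 if (src, dst) in attacks else 0.0

def theta(X):                                 # second-hidden-layer thresholds (Stage 3 rules)
    att = attackers(X)
    bil = {Y for Y in att if (X, Y) in attacks}   # bilateral attackers of X
    uni = att - bil                               # unilateral attackers of X
    s, t = len(uni), len(bil)
    if not att:                               # X is not attacked at all
        return 0.0
    if not uni:                               # X defends itself by itself
        return a * a + t * b
    if bil:                                   # both kinds of attackers
        return a * a + b * (s + t)
    return b * s                              # only unilateral attackers

def act_h1(v, X):                             # step function of the first hidden layer
    return a if v &gt;= a * a else (-b if v &lt;= -b else 0.0)

def act_h2(v, X):                             # step function with threshold theta(X)
    return a if v &gt;= theta(X) else 0.0

def act_out(v, X):                            # sign-like function of the output layer
    return a if v &gt; 0 else (-a if v &lt; 0 else 0.0)

def layer(vec, w, act):                       # weighted sums into the next layer, then activation
    n = len(AR)
    return [act(sum(vec[s] * w(AR[s], AR[d]) for s in range(n)), AR[d])
            for d in range(n)]

def forward(vec):                             # one input-output cycle; the input layer is the identity
    h1 = layer(vec, w1, act_h1)
    h2 = layer(h1, w23, act_h2)
    return layer(h2, w23, act_out)

print(forward([a, 0, 0]))                     # [2.0, -2.0, 2.0] = [a, -a, a]: S' = {i, j}, S'+ = {k}
</code></p></div>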
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stage 6. Convergence to a stable state</head><p>We stop the computation immediately after the time round τ = 1, since the input vector to the neural network at τ = 1 coincides with the output vector at τ = 1. This means that the neural network has computed a fixed point (here, in fact, the least fixed point) of the characteristic function that Dung defined via the acceptability of arguments <ref type="bibr" target="#b2">[Dung, 1995]</ref>.</p></div>
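<div xmlns="http://www.tei-c.org/ns/1.0"><p>Continuing the sketch above, the recurrence of Stages 5 and 6 just feeds each output vector back into the input layer until the vector reproduces itself; decoding the converged vector then recovers S′ and S′+ (run and decode are our own helper names):</p><p><code xml:space="preserve">
def run(S):                                   # Stages 5-6: recur until input == output
    vec = [a if X in S else 0.0 for X in AR]
    while True:
        out = forward(vec)
        if out == vec:                        # stable state reached
            return out
        vec = out                             # output at tau becomes input at tau + 1

def decode(vec):                              # a -> member of S', -a -> member of S'+
    Sp = {X for X, v in zip(AR, vec) if v == a}
    Spp = {X for X, v in zip(AR, vec) if v == -a}
    return Sp, Spp

print(decode(run({"i"})))                     # ({'i', 'j'}, {'k'}): converged right after tau = 1
</code></p></div>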
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stage 7. Judging admissible sets, complete extensions and stable extensions</head><p>Through the above neural network computation, we have obtained S′ τ=0 = {i, j} and S′+ τ=0 = {k} for S τ=0 = {i}, and S′ τ=1 = {i, j} and S′+ τ=1 = {k} for S τ=1 = {i, j}. Moreover, we have also obtained the result that both sets {i} and {i, j} are conflict-free.</p><p>The condition for an admissible set says that a set of arguments S is conflict-free and satisfies ∀X ∈ AR(X ∈ S → X ∈ S′). Therefore, the neural network can know that the sets {i} and {i, j} are admissible, since it confirmed the condition at the time rounds τ = 0 and τ = 1, respectively.</p><p>The condition for a complete extension says that a set of arguments S is conflict-free and satisfies ∀X ∈ AR(X ∈ S ↔ X ∈ S′). Therefore, the neural network can know that the set {i, j} satisfies the condition, since it has been obtained at τ = converging. Incidentally, the neural network knows that the set {i} is not a complete extension, since it does not appear at the output neurons at τ = converging.</p><p>The condition for a stable extension says that a set of arguments S satisfies ∀X ∈ AR(X ∉ S → X ∈ S′+). The neural network can know that {i, j} is a stable extension, since it confirmed the condition from the facts that S τ=1 = {i, j}, S′ τ=1 = {i, j} and S′+ τ=1 = {k}.</p></div>
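<div xmlns="http://www.tei-c.org/ns/1.0"><p>The judgments of Stage 7 then reduce to set comparisons between the input set S and the decoded pair (S′, S′+). A sketch reusing run and decode from above (judge is our own name):</p><p><code xml:space="preserve">
def judge(S):
    S = set(S)
    Sp, Spp = decode(run(S))                  # converged S' and S'+
    cf = not any((X, Y) in attacks for X in S for Y in S)   # conflict-freeness
    admissible = cf and S &lt;= Sp            # S is a subset of S'
    complete = cf and S == Sp                 # S equals S' at tau = converging
    stable = all(X in Spp for X in set(AR) - S)
    return admissible, complete, stable

print(judge({"i"}))                           # (True, False, False)
print(judge({"i", "j"}))                      # (True, True, True)
</code></p></div>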
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stage 8. Judging preferred extensions, semi-stable extensions and the grounded extension</head><p>By invoking the neural network computation described in Stages 1-7 above with every subset of AR (including AR itself) as the input set S, the neural network can identify all admissible sets of AF, and hence it can also identify the preferred extensions of AF by picking out the maximal ones w.r.t. set inclusion. In addition, the neural network can identify the semi-stable extensions by picking out the complete extensions S of AF with maximal S ∪ S+. This is possible since the neural network has already computed S+. For the grounded extension, the neural network can know that the grounded extension of AF is S′ τ=converging when the computation started with S τ=0 = ∅ stops. This is due to the fact that the grounded extension is obtained by the iterative computation of the characteristic function starting from ∅ <ref type="bibr" target="#b14">[Prakken and Vreeswijk, 2002]</ref>.</p><p>Readers should refer to the paper <ref type="bibr" target="#b7">[Gotou, 2010]</ref> for the soundness theorem of the neural network computation illustrated so far.</p></div>
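<div xmlns="http://www.tei-c.org/ns/1.0"><p>Stage 8 thus amounts to an enumeration loop around the network, with the network runs (rather than the symbolic definitions of Section 2) acting as the oracle; a sketch, continuing with the helpers above:</p><p><code xml:space="preserve">
from itertools import combinations

subsets = [set(c) for r in range(len(AR) + 1) for c in combinations(AR, r)]
adm = [S for S in subsets if judge(S)[0]]     # all admissible sets, via the network
comp = [S for S in subsets if judge(S)[1]]    # all complete extensions, via the network

preferred = [S for S in adm if not any(S &lt; T for T in adm)]   # maximal admissible sets

def s_plus(S):                                # for a complete extension, S+ = S'+
    return decode(run(S))[1]

semistable = [S for S in comp
              if not any((S | s_plus(S)) &lt; (T | s_plus(T)) for T in comp)]

grounded = decode(run(set()))[0]              # start the recurrence from the empty set

print([sorted(S) for S in preferred])         # [['i', 'j']]
print([sorted(S) for S in semistable])        # [['i', 'j']]
print(sorted(grounded))                       # ['i', 'j']
</code></p></div>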
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Extracting Symbolic Dialogues from the Neural Network</head><p>In this section, we address the question of whether symbolic argumentative dialogues can be extracted from the neural network argumentation. A symbolic presentation of arguments is preferable, since it makes the neural network argumentation process verbally understandable. A notorious criticism of the neural network as a computing machine is that connectionism usually lacks explanatory reasoning capability. We would say our attempt here is one that can counter such criticism in the area of argumentative reasoning.</p><p>In our former paper <ref type="bibr" target="#b12">[Makiguchi and Sawamura, 2007b]</ref>, we gave a method to extract symbolic dialogues from the neural network computation under the grounded semantics, and showed its coincidence with the dialectical proof theory for the grounded semantics. In this paper, we are concerned with the question of how the other argumentation semantics can have dialectical proof theories. We describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under these other, more complicated argumentation semantics. We would say this is a great success brought by our neural network approach to argumentation, since dialectical proof theories for the various Dungean argumentation semantics have not been known so far, apart from a few works (e.g., <ref type="bibr" target="#b16">[Vreeswijk and Prakken, 2000]</ref>, <ref type="bibr" target="#b1">[Dung et al., 2006]</ref>).</p><p>First of all, we summarize the trace of the neural network computation seen in Section 2 as in Table <ref type="table" target="#tab_0">1</ref>, in order to make it easy to extract symbolic dialogues from our neural network. Therein, S PRO,τ=k and S OPP,τ=k denote the following, respectively: at time round τ = k (k ≥ 0) of the neural network computation, S PRO,τ=k = S′ τ=k and S OPP,τ=k = S′+ τ=k (see Section 2 for the notations). For example, Table <ref type="table" target="#tab_1">2</ref> is the table for S = {i}, summarized from the neural network computation in Fig. <ref type="figure" target="#fig_2">2</ref>.</p><p>We assume dialogue games are performed by proponents (PRO) and opponents (OPP), who have their own sets of arguments that are updated in the dialogue process. Prior to the dialogue, the proponents have S(= S τ=0 ) as the initial set S PRO,τ=0 , and the opponents have the empty set {} as the initial set S OPP,τ=0 .</p><p>We illustrate how to extract dialogues from the summary table by showing a concrete extraction process of dialogue moves from Table <ref type="table" target="#tab_1">2</ref>. Intuitively, the condition of Definition 1 says that every argument in S PRO,τ=0 is retained until the stable state, as can be seen in Table <ref type="table" target="#tab_1">2</ref>. It should be noted that the condition reflects the definition of 'admissible set' in <ref type="bibr" target="#b2">[Dung, 1995]</ref>. Intuitively, the second condition of Definition 2 says that any argument that does not belong to S PRO,τ=0 does not enter into S PRO,τ=t at any time round t up to a stable one k. 
Those conditions reflect the definition of 'complete extension' in <ref type="bibr" target="#b2">[Dung, 1995]</ref>.</p><p>Definition 3 (Stably dialectical proof theory) The stably dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following conditions, where S PRO,τ=0 is the input set at τ = 0:</p><p>1. S PRO,τ=0 satisfies the conditions of Definition 2.</p><p>2. AR = S PRO,τ=n ∪ S OPP,τ=n , where AF =&lt; AR, attacks &gt; and n denotes a stable time round.</p><p>Intuitively, the second condition says that PRO and OPP cover AR exclusively and exhaustively. These conditions reflect the definition of 'stable extension' in <ref type="bibr" target="#b2">[Dung, 1995]</ref>.</p><p>The dialectical proof theories for the preferred <ref type="bibr" target="#b2">[Dung, 1995]</ref> and semi-stable semantics <ref type="bibr" target="#b4">[Caminada, 2006]</ref> can be defined similarly by taking the respective maximality conditions into account, so we omit them in this paper.</p><p>On the whole, the dialogues in any of the dialectical proof theories above are best classified as persuasion dialogues in the dialogue classification by Walton <ref type="bibr" target="#b3">[Walton, 1998]</ref>.</p></div>
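<div xmlns="http://www.tei-c.org/ns/1.0"><p>The three extraction conditions can likewise be checked mechanically over the summary table, recorded as a list of (S PRO , S OPP ) rows as in Table <ref type="table" target="#tab_1">2</ref>. A sketch, where summary_table and the predicate names are ours, and forward and decode come from the sketches in Section 2:</p><p><code xml:space="preserve">
def summary_table(S):                         # one (S_PRO, S_OPP) row per table line
    vec = [a if X in S else 0.0 for X in AR]
    rows = [(set(S), set())]                  # the input row at tau = 0: (S, {})
    while True:
        out = forward(vec)
        rows.append(decode(out))              # the output row: (S'_tau, S'+_tau)
        if out == vec:                        # the stable time round
            return rows
        vec = out

def admissibly_justified(S):                  # Definition 1
    return all(set(S) &lt;= pro for pro, opp in summary_table(S))

def completely_justified(S):                  # Definition 2
    rows = summary_table(S)
    return (admissibly_justified(S)
            and all(pro &lt;= set(S) for pro, opp in rows))

def stably_justified(S):                      # Definition 3
    pro, opp = summary_table(S)[-1]           # PRO and OPP at the stable time round
    return completely_justified(S) and pro | opp == set(AR)

print(admissibly_justified({"i"}))            # True
print(completely_justified({"i"}))            # False: j enters S_PRO although j is not in S
print(stably_justified({"i", "j"}))           # True
</code></p></div>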
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Related Work</head><p>Garcez et al. initiated a novel approach to argumentation, called neural network argumentation <ref type="bibr" target="#b5">[d'Avila Garcez et al., 2005]</ref>. However, a semantic analysis of it is missing there; that is, it is not clear what their neural network argumentation actually computes. Besnard et al. proposed three symbolic approaches to checking the acceptability of a set of arguments <ref type="bibr" target="#b0">[Besnard and Doutre, 2004]</ref>, which cannot deal with all of the Dungean semantics. So it may be fair to say that our approach with the neural network is more powerful than Besnard et al.'s methods.</p><p>Vreeswijk and Prakken proposed a dialectical proof theory for the preferred semantics <ref type="bibr" target="#b16">[Vreeswijk and Prakken, 2000]</ref>. It is similar to that for the grounded semantics <ref type="bibr" target="#b13">[Prakken and Sartor, 1997]</ref>, and hence can be simulated in our neural network as well.</p><p>In relation to the neural network construction and computation of neural-symbolic systems, Hölldobler and his colleagues' network <ref type="bibr" target="#b8">[Hölldobler and Kalinke, 1994]</ref> is a similar 3-layer recurrent network, but our neural network computes not only the least fixed point (grounded semantics) but also the other fixed points (complete extensions). This is the most significant difference from their work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Concluding Remarks</head><p>It has been a long time since connectionism appeared as an alternative movement in cognitive science and computing science that hopes to explain human intelligence and soft information processing. It has been a matter of hot debate how and to what extent the connectionist paradigm constitutes a challenge to classicism or symbolic AI. In this paper, we showed that symbolic dialectical proof theories can be obtained from the neural network computing the various argumentation semantics; they allow us to extract or generate symbolic dialogues from the neural network computation under the various argumentation semantics. The results illustrate that there can exist an equal bidirectional relationship between connectionism and symbolism in the area of computational argumentation. On the other hand, much effort has been devoted to a fusion or hybridization of neural network computation and symbolic computation <ref type="bibr" target="#b6">[d'Avila Garcez et al., 2009]</ref><ref type="bibr" target="#b10">[Levine and Aparicio, 1994]</ref><ref type="bibr" target="#b9">[Jagota et al., 1999]</ref>. The result of this paper, together with our former results on hybrid argumentation <ref type="bibr" target="#b11">[Makiguchi and Sawamura, 2007a]</ref><ref type="bibr" target="#b12">[Makiguchi and Sawamura, 2007b]</ref>, yields strong evidence that such a symbolic cognitive phenomenon as human argumentation can be captured within an artificial neural network.</p><p>The simplicity and efficiency of our neural network may be favorable to our future plans, such as introducing a learning mechanism into the neural network argumentation and implementing a neural network engine for argumentation that can be used in argumentation-based agent systems. Specifically, it might be possible to take into account the so-called core method developed in <ref type="bibr" target="#b8">[Hölldobler and Kalinke, 1994]</ref> and CLIP in <ref type="bibr" target="#b6">[d'Avila Garcez et al., 2009]</ref>, although our neural-symbolic system for argumentation is much more complicated due to the complexities and varieties of the argumentation semantics.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Graphic representation of AF (left) and the neural network translated from the AF (right)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Let us examine to which semantics S = {i} belongs in the AF on the left side of Figure 1 by tracing the neural network computation. The overall visual computation flow is shown in Figure 2.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: A trace of the neural network for argumentation with S = {i} and activation functions</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Summary table of the neural network computation</figDesc><table><row><cell></cell><cell></cell><cell>S PRO,τ=k</cell><cell>S OPP,τ=k</cell></row><row><cell>τ = 0</cell><cell>input</cell><cell>S</cell><cell>{}</cell></row><row><cell></cell><cell>output</cell><cell>. . .</cell><cell>. . .</cell></row><row><cell>τ = 1</cell><cell>input</cell><cell>. . .</cell><cell>. . .</cell></row><row><cell></cell><cell>output</cell><cell>. . .</cell><cell>. . .</cell></row><row><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell></row></table><head>Table 2 :</head><figDesc>Summary table of the neural network computation in Fig. 2</figDesc><table><row><cell></cell><cell></cell><cell>S PRO,τ=k</cell><cell>S OPP,τ=k</cell></row><row><cell>τ = 0</cell><cell>input</cell><cell>{i}</cell><cell>{}</cell></row><row><cell></cell><cell>output</cell><cell>{i, j}</cell><cell>{k}</cell></row><row><cell>τ = 1</cell><cell>input</cell><cell>{i, j}</cell><cell>{k}</cell></row><row><cell></cell><cell>output</cell><cell>{i, j}</cell><cell>{k}</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Dialogue moves extracted from Table 2:</figDesc><table><row><cell>1. P(roponent, speaker): PRO declares a topic as a set of beliefs by saying {i} at τ = 0. OPP just hears it, with no response {} for the moment. (dialogue extraction from the first row of Table 2)</cell></row><row><cell>2. P(roponent, or speaker): PRO further asserts the incremented belief {i, j}, because the former beliefs defend j, and at the same time states that the belief {i, j} conflicts with {k} at τ = 0. (dialogue extraction from the second row of Table 2)</cell></row><row><cell>3. O(pponent, listener or audience): OPP knows that its belief {k} conflicts with PRO's belief {i, j} at τ = 0. (dialogue extraction from the second row of Table 2)</cell></row><row><cell>4. No further dialogue moves can be promoted at τ = 1, resulting in a stable state. (dialogue termination by the third and fourth rows of Table 2)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>Thus, we can view P(roponent, speaker)'s initial belief {i} as a justified one, in the sense that it could have persuaded O(pponent, listener or audience) under an appropriate Dungean argumentation semantics. Actually, we would say it is admissibly justified under the admissibly dialectical proof theory below. Formally, we introduce the following dialectical proof theories, according to the respective argumentation semantics. Definition 1 (Admissibly dialectical proof theory) The admissibly dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following condition: ∀A ∈ S PRO,τ=0 ∀k ≥ 0(A ∈ S PRO,τ=k ), where S PRO,τ=0 is the input set at τ = 0.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head></head><label></label><figDesc>Definition 2 (Completely dialectical proof theory) The completely dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following conditions, where S PRO,τ=0 is the input set at τ = 0: 1. S PRO,τ=0 satisfies the condition of Definition 1. 2. ∀A ∉ S PRO,τ=0 ∀k(A ∉ S PRO,τ=k ).</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Let a and b be positive real numbers satisfying √b &gt; a &gt; 0. (For the inequality −ab ≤ −b asserted in the trace to hold, a ≥ 1 is additionally required.)</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Let S⊆AR and A∈AR. defends(S, A) iff ∀B ∈ AR(attacks(B, A) → attacks(S, B)).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Let S ⊆ AR. S + = {X ∈ AR | attacks(S, X)}.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">A set S of arguments is said to be conflict-free if there are no arguments A and B in S such that A attacks B.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Checking the acceptability of a set of arguments</title>
		<author>
			<persName><forename type="first">Philippe</forename><surname>Besnard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sylvie</forename><surname>Doutre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Workshop on Non-Monotonic Reasoning (NMR 2004)</title>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="59" to="64" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Dialectic proof procedures for assumption-based, admissible argumentation</title>
		<author>
			<persName><forename type="first">Phan</forename><forename type="middle">Minh</forename><surname>Dung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">A</forename><surname>Kowalski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesca</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">170</biblScope>
			<biblScope unit="page" from="114" to="159" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</title>
		<author>
			<persName><forename type="first">Phan</forename><forename type="middle">Minh</forename><surname>Dung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="1995">1995</date>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="page" from="321" to="357" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">The New Dialectic: Conversational Contexts of Argument</title>
		<author>
			<persName><forename type="first">Douglas</forename><surname>Walton</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>University of Toronto Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Semi-stable semantics</title>
		<author>
			<persName><forename type="first">Martin</forename><surname>Caminada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Models of Argument: Proceedings of COMMA 2006</title>
		<editor>
			<persName><forename type="first">Paul</forename><forename type="middle">E</forename><surname>Dunne</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Trevor</forename><forename type="middle">J M</forename><surname>Bench-Capon</surname></persName>
		</editor>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">144</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Value-based argumentation frameworks as neural-symbolic learning systems</title>
		<author>
			<persName><forename type="first">Artur</forename><forename type="middle">S</forename><surname>d'Avila Garcez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dov</forename><forename type="middle">M</forename><surname>Gabbay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luís</forename><forename type="middle">C</forename><surname>Lamb</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">6</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Neural-Symbolic Cognitive Reasoning</title>
		<author>
			<persName><forename type="first">Artur</forename><forename type="middle">S</forename><surname>d'Avila Garcez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luís</forename><forename type="middle">C</forename><surname>Lamb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dov</forename><forename type="middle">M</forename><surname>Gabbay</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Master&apos;s thesis</title>
		<author>
			<persName><forename type="first">Yoshiaki</forename><surname>Gotou</surname></persName>
		</author>
		<ptr target="http://www.cs.ie.niigata-u.ac.jp/Paper/Storage/graguation_thesis_gotou.pdf" />
		<imprint>
			<date type="published" when="2010-12">December 2010</date>
		</imprint>
		<respStmt>
			<orgName>Graduate School of Science and Technology, Niigata University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Master&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Towards a new massively parallel computational model for logic programming</title>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Hölldobler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yvonne</forename><surname>Kalinke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ECAI&apos;94 Workshop on Combining Symbolic and Connectionist Processing</title>
		<imprint>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Connectionist symbol processing: Dead or alive?</title>
		<author>
			<persName><forename type="first">Arun</forename><surname>Jagota</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computing Surveys</title>
		<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="volume">2</biblScope>
		</imprint>
	</monogr>
	<note>With contributions by multiple authors</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Neural Networks for Knowledge Representation and Inference</title>
		<editor>
			<persName><forename type="first">Daniel</forename><forename type="middle">S</forename><surname>Levine</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Manuel</forename><surname>Aparicio</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="1994">1994</date>
			<publisher>Lawrence Erlbaum Associates</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A hybrid argumentation of symbolic and neural net argumentation (Part I)</title>
		<author>
			<persName><forename type="first">Wataru</forename><surname>Makiguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hajime</forename><surname>Sawamura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Revised Selected and Invited Papers</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">4946</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A hybrid argumentation of symbolic and neural net argumentation (Part II)</title>
		<author>
			<persName><forename type="first">Wataru</forename><surname>Makiguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hajime</forename><surname>Sawamura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Revised Selected and Invited Papers</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">4946</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Argument-based extended logic programming with defeasible priorities</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Sartor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Non-Classical Logics</title>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="volume">7</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Logics for defeasible argumentation</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gerard</forename><forename type="middle">A W</forename><surname>Vreeswijk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Philosophical Logic, second edition</title>
		<editor>
			<persName><forename type="first">Dov</forename><forename type="middle">M</forename><surname>Gabbay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Franz</forename><surname>Guenthner</surname></persName>
		</editor>
		<imprint>
			<publisher>Kluwer Academic Publishers</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">4</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Argumentation in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">Iyad</forename><surname>Rahwan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Guillermo</forename><forename type="middle">R</forename><surname>Simari</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Credulous and sceptical argument games for preferred semantics</title>
		<author>
			<persName><forename type="first">Gerard</forename><forename type="middle">A W</forename><surname>Vreeswijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logics in Artificial Intelligence, European Workshop, JELIA 2000</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">1919</biblScope>
			<biblScope unit="page" from="239" to="253" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
