   Extracting Argumentative Dialogues from the Neural Network that Computes the
                       Dungean Argumentation Semantics

            Yoshiaki Gotou                            Takeshi Hagiwara                           Hajime Sawamura
        Niigata University, Japan                  Niigata University, Japan                   Niigata University, Japan
       gotou@cs.ie.niigata-u.ac.jp                hagiwara@ie.niigata-u.ac.jp                sawamura@ie.niigata-u.ac.jp


                           Abstract

Argumentation is a leading principle, both foundationally and functionally, for agent-oriented computing, where reasoning accompanied by communication plays an essential role in agent interaction. We constructed a simple but versatile neural network for neural network argumentation, so that it can decide which argumentation semantics (admissible, stable, semi-stable, preferred, complete, and grounded) a given set of arguments falls into, and compute argumentation semantics via checking. In this paper, we are concerned with the opposite direction, from neural network computation to symbolic argumentation/dialogue. We deal with the question of how various argumentation semantics can have dialectical proof theories, and describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under various argumentation semantics.

1 Introduction

Much attention and effort have been devoted to symbolic argumentation so far [Rahwan and Simari, 2009][Prakken and Vreeswijk, 2002][Besnard and Doutre, 2004], and to its application to agent-oriented computing. We think that argumentation can be a leading principle, both foundationally and functionally, for agent-oriented computing, where reasoning accompanied by communication plays an essential role in agent interaction. Dung's abstract argumentation framework and argumentation semantics [Dung, 1995] have been one of the most influential works in the area and community of computational argumentation, as well as of logic programming and non-monotonic reasoning.

   In 2005, A. Garcez et al. proposed a novel approach to argumentation, called the neural network argumentation [d'Avila Garcez et al., 2005]. In the papers [Makiguchi and Sawamura, 2007a][Makiguchi and Sawamura, 2007b], we dramatically developed their initial ideas on the neural network argumentation in various directions in a more mathematically convincing manner. More specifically, we illuminated the following questions, which they overlooked in their paper but which deserve much attention, since they are beneficial for understanding or characterizing the computational power and outcome of the neural network argumentation from the perspective of the interplay between neural network argumentation and symbolic argumentation.

  1. Can the neural network argumentation algorithm deal with self-defeating or other pathological arguments?

  2. Can the argument status of the neural network argumentation correspond to the well-known statuses in symbolic argumentation frameworks such as in [Prakken and Vreeswijk, 2002]?

  3. Can the neural network argumentation compute the fixpoint semantics for argumentation?

  4. Can symbolic argumentative dialogues be extracted from the neural network argumentation?

   The positive solutions to them helped us deeply understand the relationship between symbolic and neural network argumentation, and further promote the syncretic approach of symbolism and connectionism in the field of computational argumentation [Makiguchi and Sawamura, 2007a][Makiguchi and Sawamura, 2007b]. These works, however, paid attention only to the grounded semantics for argumentation in examining the relationship between symbolic and neural network argumentation.

   Continuing this line of work, we constructed a simple but versatile neural network for neural network argumentation, so that it can decide which argumentation semantics (admissible, stable, semi-stable, preferred, complete, and grounded) [Dung, 1995][Caminada, 2006] a given set of arguments falls into, and compute argumentation semantics via checking [Gotou, 2010]. In this paper, we are concerned with the opposite direction, from neural network computation to symbolic argumentation/dialogue. We deal with the question of how various argumentation semantics can have dialectical proof theories, and describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under various argumentation semantics.

   The results illustrate that there can exist an equal bidirectional relationship between connectionism and symbolism in the area of computational argumentation. They also lead to a fusion or hybridization of neural network computation and symbolic computation [d'Avila Garcez et al., 2009][Levine and Aparicio, 1994][Jagota et al., 1999].

   The paper is organized as follows. In the next section, we explicate our basic ideas on the neural network checking argumentation semantics by tracing an illustrative example. In Section 3, with our new construction of a neural network for argumentation, we develop a dialectical proof theory induced by the neural network argumentation for each argumentation semantics by Dung [Dung, 1995]. In Section 4, we describe some related work, although there is little work really related to ours except for Garcez et al.'s original one and our own previous work. The final section discusses the major contribution of the paper and some future work.




2 Basic Ideas on the Neural Network Argumentation

Due to the space limitation, we will not describe the technical details of constructing a neural network for argumentation and of its computing method in this paper (see [Gotou, 2010] for them). Instead, we illustrate our basic ideas by using a simple argumentation example and following a neural network computation trace for it. We assume readers are familiar with the Dungean semantics such as admissible, stable, semi-stable, preferred, complete, and grounded semantics [Dung, 1995][Caminada, 2006].

   Let us consider the argumentation network on the left side of Figure 1, which is a graphic presentation of the argumentation framework AF = <AR, attacks>, where AR = {i, k, j} and attacks = {(i, k), (k, i), (j, k)}.

[Figure 1 shows, on the left, the attack graph over {i, k, j} and, on the right, the translated network with the neurons Xi, Xh1, Xh2 and Xo for each argument X; the connection weights are a, -b, and -1.]

Figure 1: Graphic representation of AF (left) and Neural network translated from the AF (right)

   According to the Dungean semantics [Dung, 1995][Caminada, 2006], the argumentation semantics for AF is determined as follows: Admissible sets = {∅, {i}, {j}, {i, j}}, Complete extension = {{i, j}}, Preferred extension = {{i, j}}, Semi-stable extension = {{i, j}}, Stable extension = {{i, j}}, and Grounded extension = {{i, j}}.
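   Since the example is small, this listing can be verified mechanically. The following Python sketch (our illustration; it brute-forces the standard definitions rather than using the network) recomputes the semantics of AF:

    # A brute-force sketch (ours, for verification only) of the Dungean
    # semantics of this example AF, independent of the neural network.
    from itertools import combinations

    AR = {"i", "k", "j"}
    attacks = {("i", "k"), ("k", "i"), ("j", "k")}

    def conflict_free(S):
        return not any((x, y) in attacks for x in S for y in S)

    def F(S):   # characteristic function: F(S) = {X | defends(S, X)}
        return {x for x in AR
                if all(any((z, y) in attacks for z in S)
                       for (y, t) in attacks if t == x)}

    def plus(S):   # S+ = {X | attacks(S, X)}
        return {x for x in AR if any((y, x) in attacks for y in S)}

    subsets = [set(c) for r in range(len(AR) + 1)
               for c in combinations(sorted(AR), r)]
    admissible = [S for S in subsets if conflict_free(S) and S <= F(S)]
    complete = [S for S in admissible if S == F(S)]
    preferred = [S for S in admissible if not any(S < T for T in admissible)]
    stable = [S for S in subsets if conflict_free(S) and AR - S <= plus(S)]
    semi_stable = [S for S in complete
                   if not any((S | plus(S)) < (T | plus(T)) for T in complete)]
    grounded = set()
    while F(grounded) != grounded:   # least fixed point of F, from the empty set
        grounded = F(grounded)

    print(admissible)   # [set(), {'i'}, {'j'}, {'i', 'j'}]
    print(complete, preferred, stable, semi_stable, grounded)   # all {i, j}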
Neural network architecture for argumentation

In the Dungean semantics, the notions of 'attack', 'defend (acceptable)' and 'conflict-free' play the most important roles in constructing the various argumentation semantics. This is true in our neural network argumentation as well. Let AF = <AR, attacks> be as above, and let S be the subset of AR to be examined. The argumentation network on the left side of Figure 1 is first translated into the neural network on the right side of Figure 1. The network architecture then consists of the following constituents (a code sketch of this translation is given at the end of this subsection):

   • A double hidden layer network: It has the following four layers: input layer, first hidden layer, second hidden layer and output layer, which have the ramified neurons for each argument, such as αi, αh1, αh2 and αo for the argument α.

   • A recurrent neural network (for judging the grounded extension): The double hidden layer network on the right side of Figure 1 is piled up until the input and output layers converge (stable state), as in Figure 2. The symbol τ represents the pile number (τ ≥ 0), which amounts to the turning number of the input-output cycles of the neural network. In the stable state, we set τ = converging. Then, Sτ=n stands for the set of arguments at τ = n.

   • A feedforward neural network (except for judging the grounded extension): When we compute argumentation semantics other than the grounded extension with a recurrent neural network, it surely converges at τ = 1. Hence, the first output vector equals the second output vector. We judge argumentation semantics by using only the first input vector and the converged output vector. As a result, we can regard the recurrent neural network as a feedforward neural network, except for judging the grounded extension.

   • The vectors of the neural network: The initial input vector for the neural network is a list consisting of 0 and a that represents the membership of the set of arguments to be examined. For example, it is [a, 0, 0] for S = Sτ=0 = {i} ⊆ AR. The output vectors from each layer take only the values "-a", "0", "a" or "-b".¹ Their intuitive meanings for each output vector are as follows:

     Output layer
        – "a" in the output vector from the output layer represents membership in S′τ = {X ∈ AR | defends(Sτ, X)}², where the argument is not attacked by S′τ.
        – "-a" in the output vector from the output layer represents membership in S′τ⁺.³
        – "0" in the output vector from the output layer represents that the argument belongs to neither S′τ nor S′τ⁺.
     Second hidden layer
        – "a" in the output vector from the second hidden layer represents membership in S′τ, where the argument is not attacked by S′τ.
        – "0" in the output vector from the second hidden layer represents non-membership in S′τ, or that the argument is attacked by S′τ.
     First hidden layer
        – "a" in the output vector from the first hidden layer represents membership in Sτ, where the argument is not attacked by Sτ.
        – "-b" in the output vector from the first hidden layer represents membership in Sτ⁺.
        – "0" in the output vector from the first hidden layer represents the other cases.
     Input layer
        – "a" in the output vector from the input layer represents membership in Sτ.
        – "0" in the output vector from the input layer represents that the argument does not belong to S.

   ¹ Let a and b be positive real numbers satisfying b > a > 0.
   ² Let S ⊆ AR and A ∈ AR. defends(S, A) iff ∀B ∈ AR (attacks(B, A) → attacks(S, B)).
   ³ Let S ⊆ AR. S⁺ = {X ∈ AR | attacks(S, X)}.
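   The translation from AF to the network can be summarized by the weight scheme visible in Figure 1: self connections of weight a, and, for each attack (Y, X), a connection of weight -b from Yi to Xh1 and of weight -1 from Yh1 to Xh2 and from Yh2 to Xo. A minimal sketch of this wiring (ours; the values a = 1, b = 2 are merely illustrative choices satisfying footnote 1):

    # Sketch (ours) of the AF-to-network translation read off from Figure 1.
    a, b = 1.0, 2.0                      # illustrative values with b > a > 0
    AR = ["i", "k", "j"]
    attacks = {("i", "k"), ("k", "i"), ("j", "k")}
    attackers = {x: [y for (y, z) in attacks if z == x] for x in AR}

    def weighted_sum(v, w_self, w_attack):
        # v maps each argument to the previous layer's output value
        return {x: w_self * v[x] + sum(w_attack * v[y] for y in attackers[x])
                for x in AR}

    v0 = {"i": a, "k": 0.0, "j": 0.0}    # input vector [a, 0, 0] for S = {i}
    print(weighted_sum(v0, a, -b))       # {'i': a^2, 'k': -ab, 'j': 0}, cf. Stage 1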
A trace of the neural network

Let us examine to which semantics S = {i} belongs in the AF on the left side of Figure 1, by tracing the neural network computation. The overall visual computation flow is shown in Figure 2.

Stage 1. Operation of input layer at τ = 0

Sτ=0 = S = {i}. Hence, [a, 0, 0] is given to the input layer of the neural network in Figure 1. Each input neuron computes its output value by its activation function (see the graph of the activation function, an identity function, on the right side of the input layer of Figure 2).




[Figure 2 traces the computation for S = {i}: the first input vector [a, 0, 0] flows through the input, first hidden, second hidden and output layers, giving Sτ=0 = {i}, Sτ=0⁺ = {k}, S′τ=0 = {i, j} and S′τ=0⁺ = {k} with the first output vector [a, -a, a]; this vector is fed back as the second input vector and reproduced as the second output vector, giving Sτ=1 = {i, j} and S′τ=1 = {i, j}. The right-hand panels plot the activation functions of each layer: an identity function at the input layer, step functions at the hidden layers with the thresholds θi = a² + b, θk = a² + 2b, θj = 0 at the second hidden layer, and a sign-like function at the output layer.]

Figure 2: A trace of the neural network for argumentation with S = {i} and activation functions


The activation function makes the input layer simply pass the value on to the first hidden layer. The input layer thus outputs the vector [a, 0, 0].

   In this computation, the input layer judges Sτ=0 = {i} and inputs a² to ih1 through the connection between ii and ih1, whose weight is a. At the same time, the input layer inputs −ab to kh1 through the connection between ii and kh1, whose weight is −b, so as to make the first hidden layer know that i ∈ Sτ=0 attacks k (in symbols, attacks(i, k)). Since the output values of ki and ji are 0, they input 0 to the other first hidden neurons.

   In summary, after the input layer receives the input vector [a, 0, 0], it gives the first hidden layer the vector [a·a + 0·(−b), a·(−b) + 0·a + 0·(−b), 0·a] = [a², −ab, 0].

Stage 2. Operation of first hidden layer at τ = 0

The first hidden layer now receives the vector [a², −ab, 0] from the input layer. Each activation function of ih1, kh1 and jh1 is a step function, as put on the right side of the first hidden layer in Figure 2. The activation function categorizes the values received from the input layer into three values, as if the function understood each argument's state. Now, the following inequalities hold: a² ≥ a², −ab ≤ −b, and −b ≤ 0 ≤ a². According to the activation function, the first hidden layer outputs the vector [a, −b, 0].

   Next, the first hidden layer inputs a² + b into the second hidden neuron ih2 through the connections between ih1 and ih2 (whose weight is a) and between kh1 and ih2 (whose weight is −1), so that the second hidden layer can know attacks(k, i) with i ∈ Sτ=0. At the same time, the first hidden layer inputs −a − ab into kh2 through the connections between ih1 and kh2 (whose weight is −1) and between kh1 and kh2 (whose weight is a), so that the second hidden layer can know attacks(i, k) with k ∈ Sτ=0⁺, and inputs 0 into jh2, so that the second hidden layer can know that the argument j is not attacked by any argument, with j ∉ Sτ=0.

   In summary, after the first hidden layer receives the vector [a², −ab, 0], it passes the output vector [a² + b, −a − ab, 0] to the second hidden neurons.
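   As a cross-check of Stages 1 and 2, the step function and the weighting just described can be stated in a few lines (ours, again with the illustrative values a = 1, b = 2):

    # Sketch (ours) of the first hidden layer's step function (Figure 2).
    a, b = 1.0, 2.0

    def act_h1(v):
        # "a": in S_tau and not attacked by S_tau; "-b": in S_tau+; "0": otherwise
        if v >= a * a:
            return a
        if v <= -b:
            return -b
        return 0.0

    into_h1 = [a * a, -a * b, 0.0]        # [a^2, -ab, 0] from Stage 1
    out_h1 = [act_h1(v) for v in into_h1]
    print(out_h1)                          # [a, -b, 0] = [1.0, -2.0, 0.0]

    # weighting out_h1 into the second hidden layer, as described above:
    i_h2 = a * out_h1[0] + (-1) * out_h1[1]                      # a^2 + b = 3.0
    k_h2 = (-1) * out_h1[0] + a * out_h1[1] + (-1) * out_h1[2]   # -a - ab = -3.0
    j_h2 = a * out_h1[2]                                          # 0.0
    print([i_h2, k_h2, j_h2])              # [a^2 + b, -a - ab, 0]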
Stage 3. Operation of second hidden layer at τ = 0

The second hidden layer receives the vector [a² + b, −a − ab, 0] from the first hidden layer. Each activation function of ih2, kh2 and jh2 is a step function, as put on the right side of the second hidden layer in Figure 2, with the thresholds θi = a² + b, θk = a² + 2b and θj = 0, respectively.

   These thresholds are determined by the ways in which an argument is attacked, as follows (a code sketch of this rule follows the list):

   • If an argument X can defend X by itself alone (in Figure 1, such an X is i, since defends({i}, i)), then the threshold of Xh2 (θX) is a² + tb, where t is the number of arguments bilaterally attacking X.

   • If an argument X cannot defend X by itself alone and is both bilaterally and unilaterally attacked by other arguments (in Figure 1, such an X is k, since ¬defends({k}, k) & attacks(j, k) & attacks(i, k)), then the threshold of Xh2 (θX) is a² + b(s + t), where s (t) is the number of arguments unilaterally (bilaterally) attacking X. Note that s = t = 1 for the argument k in Figure 1.

   • If an argument X is not attacked by any other argument (in Figure 1, such an X is j), then the threshold of Xh2 (θX) is 0.

   • If an argument X cannot defend X by itself alone and is just unilaterally attacked by some other arguments, then the threshold of Xh2 (θX) is bs, where s is the number of arguments unilaterally attacking X.
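   Under the same illustrative values, these four cases can be read off the attack relation as follows (our reading of the rules; the s == 0 branch stands for "X counterattacks all of its attackers"):

    # Sketch (ours) of the threshold assignment for the second hidden layer.
    a, b = 1.0, 2.0
    AR = ["i", "k", "j"]
    attacks = {("i", "k"), ("k", "i"), ("j", "k")}

    def threshold(x):
        xs_attackers = {y for (y, z) in attacks if z == x}
        # bilateral attackers: y attacks x and x attacks y back
        t = sum(1 for y in xs_attackers if (x, y) in attacks)
        s = len(xs_attackers) - t            # unilateral attackers
        if not xs_attackers:
            return 0.0                       # unattacked argument, e.g. j
        if s == 0:
            return a * a + t * b             # X defends itself alone, e.g. i
        if t == 0:
            return b * s                     # only unilaterally attacked
        return a * a + b * (s + t)           # mixed case, e.g. k

    print({x: threshold(x) for x in AR})     # {'i': 3.0, 'k': 5.0, 'j': 0.0}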




By these thresholds and their activation functions (step functions), if S defends X, then Xh2 outputs a; otherwise, Xh2 outputs 0 in the second hidden layer. As a result, the second hidden layer judges either X ∈ S′τ or X ∉ S′τ by the two output values (a and 0). In this way, the output vector of the second hidden layer is [a, 0, a]. This vector means that the second hidden layer judges that the arguments i and j are defended by Sτ=0, resulting in S′τ=0 = {i, j}.

   Next, the second hidden layer inputs a² into the output neurons io and jo through the connections between ih2 and io and between jh2 and jo, whose weights are a, so that the output layer can know i, j ∈ S′τ=0. At the same time, the second hidden layer inputs −2a into ko through the connections between ih2 and ko and between jh2 and ko, whose weights are −1, so that the output layer can know attacks(i, k) and attacks(j, k) with k ∈ S′τ=0⁺.

   Furthermore, it should be noted that another role of the second hidden layer lies in guaranteeing that S′τ is conflict-free.⁴ This actually holds, since the activation function of the second hidden layer makes Xh2 output 0 for any argument X attacked by Sτ. The conflict-freeness is important since it is another notion for characterizing the Dungean semantics.

   ⁴ A set S of arguments is said to be conflict-free if there are no arguments A and B in S such that A attacks B.

   In summary, after the second hidden layer receives the vector [a² + b, −a − ab, 0], it passes the output vector [a², −2a, a²] to the output neurons.

Stage 4. Operation of output layer at τ = 0

The output layer now receives the vector [a², −2a, a²] from the second hidden layer. Each neuron in the output layer has an activation function as put on the right side of the output layer in Figure 2.

   This activation function makes the output layer interpret any positive sum of input values into the output neuron Xo as X ∈ S′τ, any negative sum as X ∈ S′τ⁺, and the value 0 as X ∉ S′τ and X ∉ S′τ⁺. As a result, the output layer outputs the vector [a, −a, a].

   Summarizing the computation at τ = 0, the neural network received the vector [a, 0, 0] in the input layer and output [a, −a, a] from the output layer. This output vector means that the second hidden layer judged S′τ=0 = {i, j} and guaranteed its conflict-freeness. With this information passed from the hidden layers to the output layer, the output layer judged S′τ=0⁺ = {k}.

Stage 5. Inputting the output vector at τ = 0 to the input layer at τ = 1 (shift from τ = 0 to τ = 1)

At τ = 0, the neural network computed S′τ=0 = {i, j} and S′τ=0⁺ = {k}. We continue the computation recurrently by connecting the output layer to the input layer of the same neural network, setting the first output vector as the second input vector. Thus, at τ = 1, the input layer starts its operation with the input vector [a, −a, a]. We omit the remaining part of the operations starting from here, since they are done in a similar manner.

Stage 6. Convergence to a stable state

We stop the computation immediately after the time round τ = 1, since the input vector to the neural network at τ = 1 coincides with the output vector at τ = 1. This means that the neural network amounts to having computed a least fixed point of the characteristic function defined with the acceptability of arguments by Dung [Dung, 1995].
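   Putting the layers together, the whole recurrent run for S = {i} can be sketched end to end (ours, with a = 1, b = 2; it reproduces the two rounds of Figure 2 and the convergence test of Stage 6):

    # End-to-end sketch (ours) of the recurrent computation for S = {i},
    # composing the layers described in Stages 1-4.
    a, b = 1.0, 2.0
    AR = ["i", "k", "j"]
    attacks = {("i", "k"), ("k", "i"), ("j", "k")}
    attackers = {x: [y for (y, z) in attacks if z == x] for x in AR}
    theta = {"i": a*a + b, "k": a*a + 2*b, "j": 0.0}   # Stage 3 thresholds

    def one_pass(v):
        # input layer: identity; v maps arguments to input values
        h1, h2, out = {}, {}, {}
        for x in AR:                                    # first hidden layer
            s = a * v[x] + sum(-b * v[y] for y in attackers[x])
            h1[x] = a if s >= a*a else (-b if s <= -b else 0.0)
        for x in AR:                                    # second hidden layer
            s = a * h1[x] + sum(-h1[y] for y in attackers[x])
            h2[x] = a if s >= theta[x] else 0.0
        for x in AR:                                    # output layer
            s = a * h2[x] + sum(-h2[y] for y in attackers[x])
            out[x] = a if s > 0 else (-a if s < 0 else 0.0)
        return out

    v = {"i": a, "k": 0.0, "j": 0.0}     # S_{tau=0} = {i}
    tau = 0
    while True:
        out = one_pass(v)
        print(tau, out)       # both rounds give {'i': a, 'k': -a, 'j': a}
        if out == v:          # stable state: output vector equals input vector
            break
        v, tau = out, tau + 1 # feed the output back as the next input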
Stage 7. Judging admissible set, complete extension and stable extension

Through the above neural network computation, we have obtained S′τ=0 = {i, j} and S′τ=0⁺ = {k} for Sτ=0 = {i}, and S′τ=1 = {i, j} and S′τ=1⁺ = {k} for Sτ=1 = {i, j}. Moreover, we also have the result that both the sets {i} and {i, j} are conflict-free.

   The condition for an admissible set says that a set of arguments S satisfies conflict-freeness and ∀X ∈ AR(X ∈ S → X ∈ S′). Therefore, the neural network can know that the sets {i} and {i, j} are admissible, since it confirmed the condition at the time rounds τ = 0 and τ = 1, respectively.

   The condition for a complete extension says that a set of arguments S satisfies conflict-freeness and ∀X ∈ AR(X ∈ S ↔ X ∈ S′). Therefore, the neural network can know that the set {i, j} satisfies the condition, since it has been obtained at τ = converging. Incidentally, the neural network knows that the set {i} is not a complete extension, since it does not appear in the output neurons at τ = converging.

   The condition for a stable extension says that a set of arguments S satisfies ∀X ∈ AR(X ∉ S → X ∈ S′⁺). The neural network can know that {i, j} is a stable extension, since it confirmed the condition from the facts that Sτ=1 = {i, j}, S′τ=1 = {i, j} and S′τ=1⁺ = {k}.
Stage 8. Judging preferred extension, semi-stable extension and grounded extension

By invoking the neural network computation stated in Stages 1-7 above for every subset of AR, and for AR itself, as the input set S, the network can know all the admissible sets of AF, and hence it can also know the preferred extensions of AF by picking out the maximal ones w.r.t. set inclusion from them. In addition, the neural network can know the semi-stable extensions by picking out a maximal S ∪ S⁺, where S is a complete extension of AF. This is possible since the neural network has already computed S⁺.

   For the grounded extension, the neural network can know that the grounded extension of AF is S′τ=converging when the computation, started with Sτ=0 = ∅, has stopped. This is due to the fact that the grounded extension is obtained by the iterative computation of the characteristic function that starts from ∅ [Prakken and Vreeswijk, 2002].

   Readers should refer to the paper [Gotou, 2010] for the soundness theorem of the neural network computation illustrated so far.
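   The post-processing of Stage 8 is again a few set operations; a sketch (ours), with the per-subset results of the network written down as data rather than recomputed:

    # Sketch (ours) of the Stage 8 post-processing over the network's results.
    admissible = [set(), {"i"}, {"j"}, {"i", "j"}]   # all admissible sets of AF
    complete = [{"i", "j"}]                          # all complete extensions
    plus = {frozenset({"i", "j"}): {"k"}}            # S+ as computed by the network

    # preferred extensions: the maximal admissible sets w.r.t. set inclusion
    preferred = [S for S in admissible if not any(S < T for T in admissible)]

    # semi-stable extensions: complete extensions S with maximal S union S+
    semi_stable = [S for S in complete
                   if not any((S | plus[frozenset(S)]) < (T | plus[frozenset(T)])
                              for T in complete)]

    print(preferred, semi_stable)    # [{'i', 'j'}] [{'i', 'j'}]
    # The grounded extension is S'_{tau=converging} of the run started from
    # the empty set; for this AF the iteration {} -> {j} -> {i, j} stops
    # at {i, j}.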
                                                                           neural network computation under the grounded semantics, and
3 Extracting Symbolic Dialogues from the Neural Network

In this section, we address the question of whether symbolic argumentative dialogues can be extracted from the neural network argumentation. A symbolic presentation of arguments would be much better for us, since it makes the neural network argumentation process verbally understandable. The notorious criticism of the neural network as a computing machine is that connectionism usually does not have an explanatory reasoning capability. We would say our attempt here is one that can counter such criticism in the area of argumentative reasoning.

   In our former paper [Makiguchi and Sawamura, 2007b], we gave a method to extract symbolic dialogues from the neural network computation under the grounded semantics, and showed its coincidence with the dialectical proof theory for the grounded semantics. In this paper, we are concerned with the question of how the other argumentation semantics can have dialectical proof theories.




We describe a possible answer to it by extracting or generating symbolic dialogues from the neural network computation under the other, more complicated argumentation semantics. We would say this is a great success brought about by our neural network approach to argumentation, since dialectical proof theories for the various Dungean argumentation semantics have not been known so far, except for some works (e.g., [Vreeswijk and Prakken, 2000], [Dung et al., 2006]).

   First of all, we summarize the trace of the neural network computation seen in Section 2 as in Table 1, in order to make it easy to extract symbolic dialogues from our neural network. Therein, SPRO,τ=k and SOPP,τ=k denote the following, respectively: at time round τ = k (k ≥ 0) in the neural network computation, SPRO,τ=k = S′τ=k and SOPP,τ=k = S′τ=k⁺ (see Section 2 for the notations).

Table 1: Summary table of the neural network computation

                     SPRO,τ=k    SOPP,τ=k
    τ = 0   input        S           {}
            output      ...          ...
    τ = 1   input       ...          ...
            output      ...          ...
      ...               ...          ...

Table 2: Summary table of the neural network computation in Fig. 2

                     SPRO,τ=k    SOPP,τ=k
    τ = 0   input       {i}          {}
            output     {i, j}       {k}
    τ = 1   input      {i, j}       {k}
            output     {i, j}       {k}
                                                                     satisfies the following conditions: let SP RO,τ =0 be the input set
   For example, Table 2 is the table for S = {i}, summarized from the neural network computation in Fig. 2.

   We assume dialogue games are performed by proponents (PRO) and opponents (OPP), who have their own sets of arguments that are updated in the dialogue process. In advance of the dialogue, the proponents have S (= Sτ=0) as the initial set SPRO,τ=0, and the opponents have the empty set {} as the initial set SOPP,τ=0.

   We illustrate how to extract dialogues from the summary table by showing a concrete extraction process of the dialogue moves in Table 2:

  1. P(roponent, speaker): PRO declares a topic as a set of beliefs by saying {i} at τ = 0. OPP just hears it, with no response {} for the moment. (dialogue extraction from the first row of Table 2)

  2. P(roponent, speaker): PRO further asserts the incremented belief {i, j}, because the former beliefs defend j, and at the same time states that the belief {i, j} conflicts with {k} at τ = 0. (dialogue extraction from the second row of Table 2)

  3. O(pponent, listener or audience): OPP knows that its belief {k} conflicts with PRO's belief {i, j} at τ = 0. (dialogue extraction from the second row of Table 2)

  4. No further dialogue moves can be promoted at τ = 1, resulting in a stable state. (dialogue termination by the third and fourth rows of Table 2)

   Thus, we can view P(roponent, speaker)'s initial belief {i} as a justified one, in the sense that it could have persuaded O(pponent, listener or audience) under an appropriate Dungean argumentation semantics. Actually, we would say it is admissibly justified under the admissibly dialectical proof theory below. Formally, we introduce the following dialectical proof theories, according to the respective argumentation semantics.

Definition 1 (Admissibly dialectical proof theory) The admissibly dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following condition: ∀A ∈ SPRO,τ=0 ∀k ≥ 0 (A ∈ SPRO,τ=k), where SPRO,τ=0 is the input set at τ = 0.

Intuitively, the condition says that every argument in SPRO,τ=0 is retained until the stable state, as can be seen in Table 2. It should be noted that the condition reflects the definition of 'admissible extension' in [Dung, 1995].

Definition 2 (Completely dialectical proof theory) The completely dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following conditions, where SPRO,τ=0 is the input set at τ = 0:

  1. SPRO,τ=0 satisfies the condition of Definition 1.

  2. ∀A ∉ SPRO,τ=0 ∀k (A ∉ SPRO,τ=k)

Intuitively, the second condition says that any argument that does not belong to SPRO,τ=0 does not enter into SPRO,τ=t at any time round t up to a stable one k. These conditions reflect the definition of 'complete extension' in [Dung, 1995].

Definition 3 (Stably dialectical proof theory) The stably dialectical proof theory is the dialogue extraction process in which the summary table generated by the neural network computation satisfies the following conditions, where SPRO,τ=0 is the input set at τ = 0:

  1. SPRO,τ=0 satisfies the conditions of Definition 2.

  2. AR = SPRO,τ=n ∪ SOPP,τ=n, where AF = <AR, attacks> and n denotes a stable time round.

Intuitively, the second condition says that PRO and OPP cover AR exclusively and exhaustively. These conditions reflect the definition of 'stable extension' in [Dung, 1995].

   For the dialectical proof theories for the preferred [Dung, 1995] and semi-stable semantics [Caminada, 2006], we can define them similarly, taking into account a maximality condition, so we omit them in this paper.

   As a whole, the dialogues in any of the dialectical proof theories above would best be classified as persuasive dialogues, since they are closest to the persuasive dialogue in the dialogue classification by Walton [Walton, 1998].
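   The conditions of Definitions 1-3 are decidable by inspecting the summary table alone; a sketch (ours), with a table represented as the initial input set plus the per-round (SPRO, SOPP) output rows of Table 2:

    # Sketch (ours): checking Definitions 1-3 against a summary table. The
    # table is the initial input set S_PRO,tau=0 plus the (S_PRO, S_OPP)
    # output row of each time round, as in Table 2.
    AR = {"i", "k", "j"}
    S_init = {"i"}                    # S_PRO,tau=0, the input set at tau = 0
    rows = [({"i", "j"}, {"k"}),      # tau = 0 output row
            ({"i", "j"}, {"k"})]      # tau = 1 output row (stable state)

    def admissibly(S_init, rows):
        # Definition 1: every argument of S_PRO,tau=0 is retained in every round
        return all(S_init <= pro for pro, _ in rows)

    def completely(S_init, rows):
        # Definition 2: Definition 1, plus no outside argument ever enters S_PRO
        return admissibly(S_init, rows) and all(pro <= S_init for pro, _ in rows)

    def stably(S_init, rows):
        # Definition 3: Definition 2, plus PRO and OPP exhaust AR when stable
        pro_n, opp_n = rows[-1]
        return completely(S_init, rows) and pro_n | opp_n == AR

    print(admissibly(S_init, rows))    # True: {i} is admissibly justified
    print(completely(S_init, rows))    # False: j enters S_PRO in later rounds
    # For the input {i, j}, the network produces the same output rows (Stage 7):
    print(completely({"i", "j"}, rows), stably({"i", "j"}, rows))   # True True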




4 Related Work

Garcez et al. initiated a novel approach to argumentation, called the neural network argumentation [d'Avila Garcez et al., 2005]. However, the semantic analysis for it is missing there; that is, it is not clear what they calculate by their neural network argumentation. Besnard and Doutre proposed three symbolic approaches to checking the acceptability of a set of arguments [Besnard and Doutre, 2004], in which not all of the Dungean semantics can be dealt with. So it may be fair to say that our approach with the neural network is more powerful than Besnard and Doutre's methods.

   Vreeswijk and Prakken proposed a dialectical proof theory for the preferred semantics [Vreeswijk and Prakken, 2000]. It is similar to that for the grounded semantics [Prakken and Sartor, 1997], and hence can be simulated in our neural network as well.

   In relation to the neural network construction and computation for neural-symbolic systems, the structure of our neural network is a similar 3-layer recurrent network, but our neural network computes not only the least fixed point (grounded semantics) but also the other fixed points (complete extensions). This is the most distinctive difference from Hölldobler and his colleagues' work [Hölldobler and Kalinke, 1994].

5 Concluding Remarks

It is a long time since connectionism appeared as an alternative movement in cognitive science and computing science that hopes to explain human intelligence and soft information processing. It has been a matter of hot debate how and to what extent the connectionist paradigm constitutes a challenge to classicism or symbolic AI. In this paper, we showed that symbolic dialectical proof theories can be obtained from the neural network computing the various argumentation semantics, which allows us to extract or generate symbolic dialogues from the neural network computation under the various argumentation semantics. The results illustrate that there can exist an equal bidirectional relationship between connectionism and symbolism in the area of computational argumentation. On the other hand, much effort has been devoted to a fusion or hybridization of neural network computation and symbolic computation [d'Avila Garcez et al., 2009][Levine and Aparicio, 1994][Jagota et al., 1999]. The result of this paper, as well as our former results on hybrid argumentation [Makiguchi and Sawamura, 2007a][Makiguchi and Sawamura, 2007b], yields strong evidence that such a symbolic cognitive phenomenon as human argumentation can be captured within an artificial neural network.

   The simplicity and efficiency of our neural network may be favorable to our future plans, such as introducing a learning mechanism into the neural network argumentation and implementing a neural network engine for argumentation that can be used in argumentation-based agent systems. Specifically, it might be possible to take into account the so-called core method developed in [Hölldobler and Kalinke, 1994] and CLIP in [d'Avila Garcez et al., 2009], although our neural-symbolic system for argumentation is much more complicated due to the complexities and varieties of the argumentation semantics.

References

[Besnard and Doutre, 2004] Philippe Besnard and Sylvie Doutre. Checking the acceptability of a set of arguments. In 10th International Workshop on Non-Monotonic Reasoning (NMR 2004), pages 59–64, 2004.

[Caminada, 2006] Martin Caminada. Semi-stable semantics. In Paul E. Dunne and Trevor J. M. Bench-Capon, editors, Computational Models of Argument: Proceedings of COMMA 2006, volume 144 of Frontiers in Artificial Intelligence and Applications, pages 121–130. IOS Press, 2006.

[d'Avila Garcez et al., 2005] Artur S. d'Avila Garcez, Dov M. Gabbay, and Luís C. Lamb. Value-based argumentation frameworks as neural-symbolic learning systems. Journal of Logic and Computation, 15(6):1041–1058, 2005.

[d'Avila Garcez et al., 2009] Artur S. d'Avila Garcez, Luís C. Lamb, and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Springer, 2009.

[Dung et al., 2006] P. M. Dung, R. A. Kowalski, and F. Toni. Dialectic proof procedures for assumption-based, admissible argumentation. Artificial Intelligence, 170:114–159, 2006.

[Dung, 1995] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321–357, 1995.

[Gotou, 2010] Yoshiaki Gotou. Neural Networks Calculating Dung's Argumentation Semantics. Master's thesis, Graduate School of Science and Technology, Niigata University, Niigata, Japan, December 2010. http://www.cs.ie.niigata-u.ac.jp/Paper/Storage/graguation_thesis_gotou.pdf.

[Hölldobler and Kalinke, 1994] Steffen Hölldobler and Yvonne Kalinke. Toward a new massively parallel computational model for logic programming. In Proc. of the Workshop on Combining Symbolic and Connectionist Processing, ECAI 1994, pages 68–77, 1994.

[Jagota et al., 1999] Arun Jagota, Tony Plate, Lokendra Shastri, and Ron Sun. Connectionist symbol processing: Dead or alive? Neural Computing Surveys, 2:1–40, 1999.

[Levine and Aparicio, 1994] Daniel Levine and Manuel Aparicio. Neural Networks for Knowledge Representation and Inference. LEA, 1994.

[Makiguchi and Sawamura, 2007a] Wataru Makiguchi and Hajime Sawamura. A Hybrid Argumentation of Symbolic and Neural Net Argumentation (Part I). In Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Revised Selected and Invited Papers, volume 4946 of Lecture Notes in Computer Science, pages 197–215. Springer, 2007.

[Makiguchi and Sawamura, 2007b] Wataru Makiguchi and Hajime Sawamura. A Hybrid Argumentation of Symbolic and Neural Net Argumentation (Part II). In Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Revised Selected and Invited Papers, volume 4946 of Lecture Notes in Computer Science, pages 216–233. Springer, 2007.

[Prakken and Sartor, 1997] H. Prakken and G. Sartor. Argument-based extended logic programming with defeasible priorities. J. of Applied Non-Classical Logics, 7(1):25–75, 1997.

[Prakken and Vreeswijk, 2002] H. Prakken and G. Vreeswijk. Logical systems for defeasible argumentation. In D. Gabbay and F. Guenther, editors, Handbook of Philosophical Logic, pages 219–318. Kluwer, 2002.

[Rahwan and Simari, 2009] Iyad Rahwan and Guillermo R. Simari, editors. Argumentation in Artificial Intelligence. Springer, 2009.

[Vreeswijk and Prakken, 2000] Gerard A. W. Vreeswijk and Henry Prakken. Credulous and sceptical argument games for preferred semantics. Lecture Notes in Computer Science, 1919:239–??, 2000.

[Walton, 1998] D. Walton. The New Dialectic: Conversational Contexts of Argument. Univ. of Toronto Press, 1998.



