<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta />
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>[Figure 1: Overview of the BBGP model. Motivating, assessment, cost/incompatibility/preference, and pre-conditions/means-end beliefs feed the activation, evaluation, deliberation, and checking stages, which move a goal from sleeping to active, pursuable, chosen, and finally executive status.]</p>
      <p>Against this background, the aim of this article is to formalize the BBGP model from the activation stage to the checking stage and to endow BBGP-based agents with explainability abilities. Thus, the research questions addressed in this article are: (i) how can the BBGP model be formalized by integrating the activation, evaluation, deliberation, and checking stages? and (ii) how can BBGP-based agents be endowed with the ability to generate explanations?</p>
      <p>In addressing the first question, we use argumentation, which is a process of constructing and comparing arguments, considering the conflicts (called attacks) among them. The output of the argumentation process is a set (or sets) of arguments, called extensions, which are internally consistent [Dun95]. In the intention formation process, or goal processing, arguments can represent reasons for a goal to change (or not) its status. Thus, one can see the intention formation process as a decision-making process, where an agent has to decide which goals pass a given stage and which do not. Adopting an argumentation-based approach in a decision-making problem has some benefits for explainability. For example, a (human) user will obtain a "good" choice along with the reasons underlying this recommendation. Besides, argumentation-based decision making is closer to the way humans deliberate and finally make a choice [OMT10]. Regarding the second question, we endow agents with a structure that saves the reasoning path, based on which explanations can be generated.</p>
      <p>Figure 2 shows an overview of our approach; concretely, it shows all the possible transitions of a goal from its sleeping status until it becomes executive, which depend on the arguments generated in each stage. We also consider the status cancelled, which arises under the circumstances briefly mentioned in the legend.</p>
      <p>The next section presents the work that has been done so far, including the formalization of argumentation-based agents and our proposal for generating partial and complete explanations. Section 3 enumerates the main future directions of our research. Section 4 presents the main related work. Finally, Section 5 is devoted to conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>What has been done so far</title>
      <p>In this section, we present the work that has been done so far. Thus, we introduce (i) the building blocks for both the argumentation-based formalization and the mechanism for generating explanations, (ii) the argumentation process for goal processing, and (iii) the mechanism for the generation of complete and partial explanations3.
1 Hereafter, these notations are used to differentiate the stages and the statuses of goals.
2 Sleeping is a status of a goal proposed in [Cas08] to refer to goals that have not been activated yet.
3 The first two points were presented in [MEPPGT19] and the last point was presented in [MEPT19].</p>
      <p>[Figure 2: Life-cycle of a goal. Acceptable activation, evaluation, deliberation, and checking arguments move a goal from sleeping to active, to pursuable, to chosen, and finally to executive; goals may also move backwards or become cancelled. Legend: (1) the goal is deactivated, (2) the goal becomes impossible to be achieved, (3) the maximum number of cycles is reached, (4) the goal becomes incompatible and not preferred. Conditions may either guard or trigger a transition.]</p>
      <p>In this paper, BBGP-based agents use rule-based systems4 as their basic reasoning model. The underlying logical language, denoted by L, consists of a set of literals5 in a first-order logical language. We represent non-ground formulae with Greek letters (φ, ψ, ...), variables with Roman letters (x, y, ...), and we name rules r1, r2, .... Strict rules are of the form r = φ1, ..., φn → ψ, and defeasible rules are of the form r = φ1, ..., φn ⇒ ψ. Thus, a theory is a triple T = ⟨F, S, D⟩ where F ⊆ L is a set of facts, S is a set of strict rules, and D is a set of defeasible rules. New information is produced from a given theory by applying the following concept, which was given in [AB13].
4 Rule-based systems distinguish between facts, strict rules, and defeasible rules. A strict rule encodes strict information that has no exception, whereas a defeasible rule expresses general information that may have exceptions.
5 Literals are defined as positive or negative atoms, where an atom is an n-ary predicate.</p>
      <p>Definition 1. (Derivation schema) Let T = ⟨F, S, D⟩ be a theory and ψ ∈ L. A derivation schema for ψ from T is a finite sequence T = {(φ1, r1), ..., (φn, rn)} such that:
- φn = ψ
- for i = 1...n, φi ∈ F and ri = ∅, or ri ∈ S ∪ D
Based on a derivation schema T, the following sets can be defined: SEQ(T) = {φ1, ..., φn}, FACTS(T) = {φi | i ∈ {1, ..., n}, ri = ∅}, STRICT(T) = {ri | i ∈ {1, ..., n}, ri ∈ S}, and DEFE(T) = {ri | i ∈ {1, ..., n}, ri ∈ D}.</p>
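      <p>To make these sets concrete, the following minimal Python sketch (our own illustration, not part of the formalization; the tuple-based rule encoding and all names are assumptions) represents derivation schemas and the extractors SEQ, FACTS, STRICT, and DEFE:</p>
      <preformat>
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str        # e.g. "r1"
    premises: tuple  # premise literals
    conclusion: str  # conclusion literal
    strict: bool     # True for a strict rule, False for a defeasible one

# A derivation schema T is a sequence of (literal, rule) pairs; the rule is
# None when the literal is taken directly from the facts (ri = ∅).

def seq(T):
    return {phi for phi, _ in T}

def facts(T):
    return {phi for phi, r in T if r is None}

def strict(T):
    return {r.name for _, r in T if r is not None and r.strict}

def defe(T):
    return {r.name for _, r in T if r is not None and not r.strict}

# Tiny example: derive "take_hospital" from the fact "injured" and one rule.
r1 = Rule("r1", ("injured",), "take_hospital", True)
T = [("injured", None), ("take_hospital", r1)]
assert facts(T) == {"injured"} and strict(T) == {"r1"}
      </preformat>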
      <sec id="sec-2-1">
        <title>Building Blocks</title>
        <p>From L, we distinguish the following finite sets:
- F, the set of facts of the agent, and
- G, the set of goals of the agent.</p>
        <p>F and G are subsets of ground literals from the language L and are pairwise disjoint. Besides, G = Gac ∪ Gpu ∪ Gch ∪ Gex, where Gac (resp. Gpu, Gch, Gex) stands for the set of active (resp. pursuable, chosen, executive) goals. It holds that Gx ∩ Gy = ∅, for x, y ∈ {ac, pu, ch, ex} with x ≠ y.</p>
        <p>Other important structures are rules, which express the relation between beliefs and goals. Rules can be classified into standard and non-standard rules (activation, evaluation, deliberation, and checking rules). The former are made up of beliefs in both their premises and their conclusions, and the latter are made up of beliefs in their premises and goals, or beliefs about goals, in their conclusions. Both standard and non-standard rules can be strict or defeasible. Standard rules can be used in any stage of goal processing, whereas non-standard rules are distinct for each stage. Thus, we have:
- Standard rules (rst): ⋀φi → φ (or ⋀φi ⇒ φ).
- Activation rules (rac): ⋀φi → ψ (or ⋀φi ⇒ ψ).
- Evaluation rules (rev): ⋀φi → ¬ψ (or ⋀φi ⇒ ¬ψ).
- Deliberation rules: rde1 = ¬has_incompatibility(g) → chosen(g) and rde2 = most_valuable(g) → chosen(g).
- Checking rule: rck = has_plans_for(g) ∧ satisfied_context_for(g) → executive(g).</p>
        <p>Here, φi and ψ denote non-ground literals that represent beliefs and goals, respectively, and g denotes a ground literal that represents a goal6. Notice that standard, activation, and evaluation rules are designed and entered by the programmer of the agent, and their content is domain-dependent. In contrast, the rules of the deliberation and checking stages are pre-defined and domain-independent. Finally, let Rst, Rac, Rev, Rde, and Rck denote the sets of standard, activation, evaluation, deliberation, and checking rules, respectively. Next, we define the theory of a BBGP-based agent.
6 In any of the statuses of goal processing, a goal is represented by a ground atom. However, before a goal becomes active, it has the form of a non-ground atom; in this case, we call it a sleeping goal. Thus, ψ is a sleeping goal and g a goal in some status.</p>
        <p>Definition 2. (BBGP-based Agent Theory) A theory is a triple T = ⟨F, S, D⟩ such that: (i) F is the set of beliefs of the agent, (ii) S = Rst^S ∪ Rac^S ∪ Rev^S ∪ Rde^S ∪ Rck^S is the set of strict rules, and (iii) D = Rst^D ∪ Rac^D ∪ Rev^D ∪ Rde^D ∪ Rck^D is the set of defeasible rules, where Rx = Rx^S ∪ Rx^D (for x ∈ {st, ac, ev, de, ck}). It holds that Rx^S ∩ Rx^D = ∅.</p>
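        <p>As an illustration of Definition 2, a BBGP-based agent theory could be encoded as follows; this extends the hypothetical Rule type above with a stage tag and is a sketch, not a prescribed implementation:</p>
        <preformat>
from dataclasses import dataclass

@dataclass(frozen=True)
class StageRule:
    name: str
    premises: tuple
    conclusion: str
    strict: bool
    stage: str  # one of "st", "ac", "ev", "de", "ck"

@dataclass(frozen=True)
class BBGPTheory:
    facts: frozenset  # F: the agent's beliefs (ground literals)
    rules: tuple      # all stage-tagged rules

    def S(self):  # strict rules: Rst^S ∪ Rac^S ∪ Rev^S ∪ Rde^S ∪ Rck^S
        return tuple(r for r in self.rules if r.strict)

    def D(self):  # defeasible rules: the union of the corresponding Rx^D sets
        return tuple(r for r in self.rules if not r.strict)

    def R(self, stage):  # Rx = Rx^S ∪ Rx^D for a single stage tag
        return tuple(r for r in self.rules if r.stage == stage)
        </preformat>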
        <p>From a theory, a BBGP-based agent can build arguments. There are two categories of arguments. The first, called epistemic arguments, justify or attack beliefs, while the other, called stage arguments, justify or attack the passage of a goal from one stage to another. There is a set of arguments for each stage of the BBGP model.</p>
        <p>Definition 3. (Arguments) Let T = ⟨F, S, D⟩ be a BBGP-based agent theory, and let T′ = ⟨F, Rst^S, Rst^D⟩ and T″ = ⟨F, S″, D″⟩ be two sub-theories of T, where S″ = S \ Rst^S and D″ = D \ Rst^D. An epistemic argument constructed from T′ is a pair A = ⟨T, φ⟩ such that:
(1) φ ∈ L
(2) T is a derivation schema for φ from T′
On the other hand, a stage argument constructed from T″ is a pair A = ⟨T, g⟩ such that:
(1) g ∈ G
(2) For the activation and evaluation stages, T is a derivation schema for g from T″. For the deliberation stage, T is a derivation schema for chosen(g) from T″. For the checking stage, T is a derivation schema for executive(g) from T″.</p>
        <p>For both kinds of arguments, it holds that SEQ(T) is consistent7 and that T is minimal8. Finally, ARGep, ARGac, ARGev, ARGde, and ARGck denote the sets of all epistemic, activation, evaluation, deliberation, and checking arguments, respectively. As for notation, CLAIM(A) = φ (or g) and SUPPORT(A) = T denote the conclusion and the support of an argument A, respectively.
7 A set L′ ⊆ L is consistent iff there are no φ, φ′ ∈ L′ such that φ = ¬φ′. It is inconsistent otherwise.
8 Minimal means that there is no T′ ⊂ T such that T′ is a derivation schema for φ (resp. g, chosen(g), or executive(g)).</p>
        <p>An argument may have a set of sub-arguments. Thus, an argument ⟨T′, φ′⟩ is a sub-argument of ⟨T, φ⟩ iff FACTS(T′) ⊆ FACTS(T), STRICT(T′) ⊆ STRICT(T), and DEFE(T′) ⊆ DEFE(T). SUB(A) denotes the set of all sub-arguments of A.</p>
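        <p>Continuing the sketch, and assuming a string encoding in which negation is marked by a leading "~", the consistency requirement on SEQ(T) and the sub-argument relation can be checked as follows:</p>
        <preformat>
def neg(lit):
    # Assumed string encoding of negation: "~p" is the negation of "p".
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(T):
    # SEQ(T) must not contain a literal together with its negation.
    s = seq(T)
    return all(neg(phi) not in s for phi in s)

def is_subargument(sub, arg):
    # sub and arg are (support, claim) pairs as in Definition 3.
    T_sub, _ = sub
    T_arg, _ = arg
    return (facts(T_sub).issubset(facts(T_arg))
            and strict(T_sub).issubset(strict(T_arg))
            and defe(T_sub).issubset(defe(T_arg)))
        </preformat>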
        <p>Stage arguments built from T constitute a cause for a goal to change its status. However, they are not a proof that the goal should adopt another status, because an argument can be attacked by other arguments. Two kinds of attacks are distinguished: (i) attacks between epistemic arguments, and (ii) mixed attacks, in which an epistemic argument attacks a stage argument. The former is defined over ARGep and is captured by the binary relation attep ⊆ ARGep × ARGep. The latter is defined over ARGep and ARGx (for x ∈ {ac, ev, de, ck}) and is captured by the binary relation attmx ⊆ ARGep × ARGx. For both kinds of attacks, (A, B) ∈ attmx (or (A, B) ∈ attep) denotes that there is an attack relation between arguments A and B. The next definition captures both kinds of attacks; thus, rebuttal may occur only between epistemic arguments, and undercut may occur in both kinds of attacks.</p>
        <p>Definition 4. (Attacks) Let ⟨T′, φ′⟩ and ⟨T, φ⟩ be two epistemic arguments, and ⟨T″, g⟩ a stage argument. ⟨T′, φ′⟩ rebuts ⟨T, φ⟩ if φ = ¬φ′. ⟨T′, φ′⟩ undercuts ⟨T, φ⟩ (or ⟨T″, g⟩) if ¬φ′ ∈ FACTS(T) (resp. ¬φ′ ∈ FACTS(T″)).</p>
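        <p>Under the same assumed encoding, Definition 4 amounts to two small checks over (support, claim) pairs:</p>
        <preformat>
def rebuts(a, b):
    # Epistemic argument a rebuts epistemic argument b iff their claims are
    # contradictory literals (φ = ¬φ′).
    _, claim_a = a
    _, claim_b = b
    return claim_b == neg(claim_a)

def undercuts(a, b):
    # a undercuts b (epistemic or stage argument) iff a's claim is the
    # negation of a fact used in b's support.
    _, claim_a = a
    support_b, _ = b
    return neg(claim_a) in facts(support_b)
        </preformat>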
        <p>From the epistemic and stage arguments and the attacks between them, a different Argumentation Framework (AF) is generated for each stage of the BBGP model.</p>
        <p>Definition 5. (Argumentation Framework) An argumentation framework AFx is a pair AFx = ⟨ARG, att⟩ (x ∈ {ac, ev, de, ck}) such that:
- ARG = ARGx ∪ ARG′ep ∪ SUBARGS, where ARGx is a set of stage arguments; ARG′ep = {A | A ∈ ARGep and (A, B) ∈ attmx or (A, C) ∈ attep}, with B ∈ ARGx and C ∈ ARG′ep; and SUBARGS = ⋃ SUB(A) over A ∈ ARGx ∪ ARG′ep.
- att = att′ep ∪ att′mx, where att′ep ⊆ ARG′ep × ARG′ep and att′mx ⊆ ARG′ep × ARGx.</p>
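        <p>The construction of Definition 5 can be read as a closure computation: start from the stage arguments of stage x and repeatedly add the epistemic arguments involved in attacks on what has already been collected. A sketch, assuming arguments are hashable values:</p>
        <preformat>
def build_af(stage_args, epistemic_args, attacks):
    # attacks: set of (attacker, attacked) pairs among candidate arguments.
    ARG = set(stage_args)
    changed = True
    while changed:  # close ARG under "attacks something already collected"
        changed = False
        for a in epistemic_args:
            if a not in ARG and any((a, b) in attacks for b in ARG):
                ARG.add(a)
                changed = True
    # Sub-arguments (SUBARGS) are omitted here for brevity.
    att = {(a, b) for (a, b) in attacks if a in ARG and b in ARG}
    return ARG, att
        </preformat>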
        <p>The next step is to evaluate the arguments that are part of the AF. This evaluation is important because it determines which goals pass (acceptable goals) from one stage to the next. The aim is to obtain a subset of ARG without conflicting arguments. To obtain it, we use an acceptability semantics, which returns one or more sets, called extensions, of acceptable arguments. The fact that a stage argument belongs to an extension determines the change of status of the goal in its claim. Next, the main semantics introduced by Dung [Dun95] are recalled9.
9 It is not within the scope of this article to study the most adequate semantics for goal processing or the way to select an extension when more than one is returned by a semantics.</p>
        <p>Definition 6. (Semantics) Let AFx = ⟨ARG, att⟩ be an AF (with x ∈ {ac, ev, de, ck}) and E ⊆ ARG:
- E is conflict-free iff there are no A, B ∈ E such that (A, B) ∈ att.
- E defends A iff ∀B ∈ ARG, if (B, A) ∈ att, then ∃C ∈ E s.t. (C, B) ∈ att.
- E is admissible iff it is conflict-free and defends all its elements.
- A conflict-free E is a complete extension iff E = {A | E defends A}.
- E is a preferred extension iff it is a maximal (w.r.t. set inclusion) complete extension.
- E is a grounded extension iff it is a minimal (w.r.t. set inclusion) complete extension.
- E is a stable extension iff it is conflict-free and ∀A ∈ ARG \ E, ∃B ∈ E such that (B, A) ∈ att.</p>
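        <p>As one concrete instance of these semantics, the grounded extension can be computed as the least fixpoint of the defence operator; this is the standard construction for Dung frameworks, not something specific to our model:</p>
        <preformat>
def grounded_extension(ARG, att):
    # attackers[a] = the set of arguments attacking a
    attackers = {a: {b for (b, c) in att if c == a} for a in ARG}
    E = set()
    while True:
        # keep every argument all of whose attackers are attacked by E;
        # unattacked arguments enter first, then everything they defend
        defended = {a for a in ARG
                    if all(any((c, b) in att for c in E)
                           for b in attackers[a])}
        if defended == E:
            return E
        E = defended
        </preformat>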
      </sec>
      <sec id="sec-2-2">
        <title>Partial and Complete Explanations</title>
        <p>In order to be able to generate partial and complete explanations, a BBGP-based agent needs to store information about the progress of its goals, that is, the changes of the statuses of such goals and the causes of these changes. The latter are stored in each AF in the form of accepted arguments; however, the former cannot be stored in an AF. Thus, we need a structure that saves both the status of each goal and the AF that supports this status. Considering that at each stage the agent generates arguments and attacks for more than one goal, which are stored in each AFx, and that we only need those arguments and attacks related to one goal, we have to extract such arguments and attacks from AFx. In other words, we need to obtain a sub-AF.</p>
        <p>Definition 7. (Sub-AF) Let AFx = ⟨ARG, att⟩ (with x ∈ {ac, ev, de, ck}) be an AF and g ∈ G a goal. An AF AF′x = ⟨ARG′, att′⟩ is a sub-AF of AFx (denoted AF′x ⊑ AFx) if ARG′ ⊆ ARG and att′ = att↓ARG′, such that:
- ARG′ = {A | A ∈ ARGx, CLAIM(A) = g} ∪ {B | (B, A) ∈ attmx or (B, C) ∈ att′ep, where B ∈ ARGep, C ∈ ARG′ep, att′ep ⊆ att′, and ARG′ep ⊆ ARG′}, and
- att↓ARG′ returns the subset of att that involves just the arguments in ARG′.</p>
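        <p>A sketch of this extraction, assuming a claim accessor for CLAIM: collect the stage arguments whose claim is g, transitively add their attackers, and restrict the attack relation to what is kept:</p>
        <preformat>
def sub_af(ARG, att, g, claim):
    # claim(a) returns CLAIM(a); start from the stage arguments for g.
    keep = {a for a in ARG if claim(a) == g}
    changed = True
    while changed:  # transitively add the attackers of what is kept
        changed = False
        for (a, b) in att:
            if b in keep and a not in keep:
                keep.add(a)
                changed = True
    # att restricted to the kept arguments (att↓ARG′)
    return keep, {(a, b) for (a, b) in att if a in keep and b in keep}
        </preformat>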
        <p>Next, we define a structure that stores the causes of the changes of status of a goal, and which must be updated after a new change occurs. We can see this structure as a table record, where each row saves the status of a goal along with the AF that supports such status.</p>
        <p>Definition 8. (Goal Memory) Let AFx = ⟨ARG, att⟩ be an AF (with x ∈ {ac, ev, de, ck}), AF′x ⊑ AFx a sub-AF, and g ∈ G a goal. The goal memory GMg for goal g is a set of ordered pairs (STA, REASON) such that:
- STA ∈ {ac, pu, ch, ex, not_ac, not_pu, not_ch, not_ex}, where {ac, pu, ch, ex} represent the statuses g attains due to the arguments in REASON, whereas {not_ac, not_pu, not_ch, not_ex} represent the statuses g cannot attain due to the arguments in REASON, and
- REASON = AF′x ⊑ AFx is a sub-AF whose selected extension supports the current status of g.</p>
        <p>Let GM+ be the set of all goal memories and NUM_REC : GM+ → ℕ a function that returns the number of records of a given GM. From the goal memory structure, the partial and complete explanations can be generated.</p>
        <p>Definition 9. (Partial and Complete Explanations) Let g ∈ G be a goal, GMg the memory of g, and AFac, AFev, AFde, and AFck the four argumentation frameworks involved in goal processing. Besides, let x ∈ {ac, pu, ch, ex} denote the current status of g:
- A complete explanation CEg for g ∈ Gx is obtained as follows: CEg = ⋃ REASONi for i = 1, ..., NUM_REC(GMg), where each REASONi is a sub-AF of AFac, AFev, AFde, or AFck.
- A partial explanation PEg is obtained as follows: PEg = ⋃ Ei for i = 1, ..., NUM_REC(GMg), where Ei is the selected extension obtained from REASONi.</p>
        <p>This means that complete explanations include the arguments that support and attack the passage of a goal to the next stage, whereas partial explanations only include the supporting arguments.</p>
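        <p>Definitions 8 and 9 suggest a simple record structure. In the sketch below, select_extension is an assumed hook standing in for the semantics choice deliberately left open in footnote 9:</p>
        <preformat>
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    status: str             # "ac", "pu", "ch", "ex" or a "not_..." variant
    reason_args: frozenset  # arguments of the stored sub-AF (REASON)
    reason_att: frozenset   # its attack relation

def complete_explanation(memory):
    # CEg: the union of all stored sub-AFs, attackers included.
    args = set().union(*(r.reason_args for r in memory))
    att = set().union(*(r.reason_att for r in memory))
    return args, att

def partial_explanation(memory, select_extension):
    # PEg: the union of the selected extensions only.
    return set().union(*(select_extension(r.reason_args, r.reason_att)
                         for r in memory))
        </preformat>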
        <p>Example 1. Considering the scenario presented in the Introduction, suppose that BOB is a rescue robot whose goal memory for the goal g2 = take_hospital(patient1) (that is, GMg2) is shown in the following table10.
10 Due to lack of space, we do not explain the entire example; the reader can find it in [MEPT19].</p>
        <p>The records (STA, REASON) of GMg2 are:
- ac: AF_ac^g2 = ⟨{A_ac^2, A_ac^3, A_ep^2, A_ep^3, A_ep^5, A_ep^7, A_ep^8, A_ep^9}, {(A_ep^8, A_ac^2), (A_ep^7, A_ep^8), (A_ep^8, A_ep^7), (A_ep^9, A_ep^8)}⟩
- pu: AF_ev^g2 = ⟨{A_ev^1, A_ep^6, A_ep^11}, {(A_ep^11, A_ev^1), (A_ep^11, A_ep^6), (A_ep^6, A_ep^11)}⟩
- ch: AF_de^g2 = ⟨{A_de^1, A_ep^12}, {}⟩
- ex: AF_ck^g2 = ⟨{A_ck^1, A_ep^11, A_ep^13}, {}⟩</p>
        <p>Now suppose that at the end of a rescue day, BOB is interrogated by a supervisor with the following question: WHY(g2, ac), which in natural language would be: "Why did you decide to activate the goal of taking patient1 to the hospital?". Considering GMg2, the complete explanation is:</p>
        <p>CEg2 = AF_ac^g2 = ⟨{A_ep^2, A_ep^3, A_ep^5, A_ep^7, A_ep^8, A_ep^9, A_ac^2, A_ac^3}, {(A_ep^8, A_ac^2), (A_ep^7, A_ep^8), (A_ep^8, A_ep^7), (A_ep^9, A_ep^8)}⟩
In natural language, this would be the answer: patient1 had a fractured bone (A_ep^2), the fracture was in his arm (A_ep^3), and it was an open fracture (A_ep^5). Given that he had a fractured bone, he might be considered severely injured (A_ep^7); however, since the fracture was in his arm, it might not be considered a severe injury (A_ep^8). Finally, I noted that it was an open fracture, which determines, without exception, that it was a severe injury (A_ep^9). For these reasons I decided to activate the goal of taking him to the hospital (A_ac^2, A_ac^3).</p>
        <p>On the other hand, for generating the partial explanation, BOB uses the arguments that are part of a preferred extension. Thus, he answers with: PEg2 = {A_ep^2, A_ep^3, A_ep^5, A_ep^7, A_ep^9, A_ac^2, A_ac^3}.</p>
        <p>In natural language he would give the following answer: patient1 had a fractured bone (A_ep^2), the fracture was in his arm (A_ep^3), and it was an open fracture (A_ep^5); therefore, he was severely injured (A_ep^7, A_ep^9). Since he was severely injured, I took him to the hospital (A_ac^2, A_ac^3).</p>
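        <p>Finally, a WHY(g, status) query such as the supervisor's can be answered directly from the goal memory. The sketch below (again our own illustration, reusing the hypothetical Record type and select_extension hook) returns the stored sub-AF for the complete explanation and its selected extension for the partial one:</p>
        <preformat>
def why(memory, status, select_extension):
    # memory: the ordered records of GMg; returns material for both answers.
    for record in memory:
        if record.status == status:
            extension = select_extension(record.reason_args, record.reason_att)
            return {"complete": (record.reason_args, record.reason_att),
                    "partial": extension}
    return None  # g never attained (nor was denied) that status
        </preformat>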
      </sec>
    </sec>
    <sec id="sec-3">
      <title>What still needs to be done</title>
      <sec id="sec-3-1">
        <title>Next, we present some future research directions of this work:</title>
      <p>Considering Figure 2, goals can move forward in the intention formation process and can also move back; they can even become cancelled. For example, a chosen goal can return to a previous status such as pursuable or active. We have briefly given an idea of the reasons for a goal to lose its current status; however, there is no formalization of this yet, nor a mechanism that controls the life-cycle depicted in Figure 2. We have considered an additional status (cancelled); however, it is important to study whether other statuses are needed (like suspended in [BPML04]). The inclusion of new statuses impacts the explainability skills of the agents; however, it also makes the model more complex. So, there is also a need for further study of this point and of the dynamics between the statuses.</p>
      <p>We have focused on achievement (or procedural) goals11; however, maintenance and declarative goals12 are also considered in the evaluation stage of the BBGP model. According to the BBGP model, maintenance goals are also evaluated to determine whether a goal becomes pursuable. For example, one goal of human beings is to be happy, and it is a goal that is permanently active. If a person is already happy, this goal does not become pursuable because nothing needs to be done to achieve it; on the other hand, if the person is not happy, this goal becomes pursuable because the person has to do something to achieve his/her happiness. BBGP-based agents also evaluate goals that depend on other agents, that is, goals whose achievement of a state of the world depends on the actions executed by other agents. Therefore, declarative goals are used to evaluate a certain state of the world.
11 A goal is called procedural when there is a set of plans for achieving it. This differs from declarative goals, which are a description of the state sought [WPHT02].
12 Unlike achievement goals, maintenance goals define states that must remain true, rather than a state that is to be achieved [DHT06].</p>
      <p>During the deliberation stage, BBGP-based agents have to identify possible incompatibilities between pursuable goals and resolve such incompatibilities in order to determine which goals become chosen. This problem was briefly tackled in [MEPPGT17] and extended in [MENP+19]; however, it has not been integrated into the BBGP-based agent architecture. Besides, based on [MENP+19], it is possible to improve and add new deliberation rules. This in turn may enrich the explanations generated by BBGP-based agents.</p>
      <p>In [MENPT19], we took uncertainty into account for identifying and dealing with incompatibilities among goals. It would be interesting to also consider uncertainty in the elements of activation, evaluation, and deliberation rules. How would this impact the results? How would this impact the explainability skills of agents?</p>
      <p>Regarding the generated explanations, there is still a lot of work to do. We have proposed two kinds of explanations; however, it is necessary to study how to deal with complex questions, which require more elaborate and adequate explanations. In this sense, a "good" explanation may include elements from different AFs. Which elements? How should they be organized? What else should be taken into account for generating an explanation?</p>
    </sec>
    <sec id="sec-4">
      <title>Related Work</title>
      <p>Since XAI is a recently emerged domain in Artificial Intelligence, there are few reviews of the works in this area. In [ANCF19], Anjomshoae et al. present a systematic literature review of goal-driven XAI, i.e., explainable agency for robots and agents. According to them, very few works tackle inter-agent explainability. One interesting research question was about the platforms and architectures that have been used to design explainable agency. Their results show that 9% of the surveyed works implemented their explanations in a BDI architecture. In [BHH+10] and [HvdBM10], the authors focus on generating explanations for humans about agents' goals and actions. They construct a tree with beliefs, goals, and actions, from which the explanations are constructed. Unlike our proposal, their explanations do not detail the progress of the goals and are not complete in the sense that they do not express why an agent did not commit to a given goal.</p>
      <p>Finally, Sassoon et al. [SSKP19] propose an approach to explainable argumentation based on argumentation schemes and argumentation-based dialogues. In this approach, an agent provides explanations to patients (human users) about their treatments. In this case, argumentation is applied in a different way than in our proposal and with another focus: they generate explanations for information seeking and persuasion.</p>
    </sec>
    <sec id="sec-5">
      <title>Final Remarks</title>
      <p>This article presented an approach for explainable agency based on argumentation theory. The chosen architecture was the BBGP model, which can be considered an extension of the BDI model. Our objective was for BBGP-based agents to be able to explain their decisions about the statuses of their goals, especially the goals they committed to. In order to achieve this objective, we equipped BBGP-based agents with a structure and a mechanism to generate both partial and complete explanations.</p>
      <p>Furthermore, we presented an agenda of future research directions. Currently, we are working on integrating the identification and resolution of incompatibilities into the whole BBGP-based agent architecture. We are also extending the set of deliberation rules in order to produce more detailed explanations. Finally, we are working on the implementation of the initial proposal.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <sec id="sec-6-1">
        <title>This work is fully founded by CAPES.</title>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[AB13] Leila Amgoud and Philippe Besnard. Logical limits of abstract argumentation frameworks. Journal of Applied Non-Classical Logics, 23(3), 2013.</p>
      <p>[ANCF19] Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Framling. Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 1078-1088, 2019.</p>
      <p>[BHH+10] Joost Broekens, Maaike Harbers, Koen Hindriks, Karel van den Bosch, Catholijn Jonker, and John-Jules Meyer. Do you get it? User-evaluated explainable BDI agents. In German Conference on Multiagent System Technologies, pages 28-39. Springer, 2010.</p>
      <p>[BPML04] Lars Braubach, Alexander Pokahr, Daniel Moldt, and Winfried Lamersdorf. Goal representation for BDI agent systems. In International Workshop on Programming Multi-Agent Systems, pages 44-65. Springer, 2004.</p>
      <p>[Bra87] Michael Bratman. Intention, Plans, and Practical Reason. Harvard University Press, 1987.</p>
      <p>[Cas08] Cristiano Castelfranchi. Reasons: Belief support and goal dynamics. Mathware &amp; Soft Computing, 3(1-2):233-247, 2008.</p>
      <p>[CP07] Cristiano Castelfranchi and Fabio Paglieri. The role of beliefs in goal dynamics: Prolegomena to a constructive theory of intentions. Synthese, 155(2):237-263, 2007.</p>
      <p>[DHT06] Simon Duff, James Harland, and John Thangarajah. On proactivity and maintenance goals. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1033-1040, 2006.</p>
      <p>[Dun95] Phan Minh Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321-357, 1995.</p>
      <p>[HvdBM10] Maaike Harbers, Karel van den Bosch, and John-Jules Meyer. Design and evaluation of explainable BDI agents. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, volume 2, pages 125-132. IEEE, 2010.</p>
      <p>[MENP+19] Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Possebom, Josep Puyol-Gruart, and Cesar Augusto Tacla. An argumentation-based approach for identifying and dealing with incompatibilities among procedural goals. International Journal of Approximate Reasoning, 105:1-26, 2019.</p>
      <p>[MENPT19] Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Possebom, and Cesar Augusto Tacla. Dealing with incompatibilities among procedural goals under uncertainty. Inteligencia Artificial. Ibero-American Journal of Artificial Intelligence, 22(64), 2019.</p>
      <p>[MEPPGT17] Mariela Morveli-Espinoza, Ayslan T. Possebom, Josep Puyol-Gruart, and Cesar A. Tacla. Dealing with incompatibilities among goals. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 1649-1651. IFAAMAS, 2017.</p>
      <p>[MEPPGT19] Mariela Morveli-Espinoza, Ayslan Trevizan Possebom, Josep Puyol-Gruart, and Cesar Augusto Tacla. Argumentation-based intention formation process. Revista Internacional Dyna, 86(208):82-91, 2019.</p>
      <p>[MEPT19] Mariela Morveli-Espinoza, Ayslan Possebom, and Cesar Augusto Tacla. Argumentation-based agents that explain their decisions. In Proceedings of the 8th Brazilian Conference on Intelligent Systems (BRACIS), pages 467-472. IEEE, 2019.</p>
      <p>[OMT10] Wassila Ouerdane, Nicolas Maudet, and Alexis Tsoukias. Argumentation theory and decision aiding. In Trends in Multiple Criteria Decision Analysis, pages 177-208. Springer, 2010.</p>
      <p>[RG95] Anand S. Rao and Michael P. Georgeff. BDI agents: From theory to practice. In ICMAS, volume 95, pages 312-319, 1995.</p>
      <p>[SSKP19] Isabel Sassoon, Elizabeth Sklar, Nadin Kokciyan, and Simon Parsons. Explainable argumentation for wellness consultation. In Proceedings of the 1st International Workshop on eXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS 2019), AAMAS, 2019.</p>
      <p>[WPHT02] Michael Winikoff, Lin Padgham, James Harland, and John Thangarajah. Declarative and procedural goals in intelligent agent systems. In Proceedings of the 8th International Conference on Principles of Knowledge Representation and Reasoning (KR), 2002.</p>
    </sec>
  </body>
  <back />
</article>