y. In this case, it is clear that it is important to endow the agents with the ability to explain their decisions, that is, to explain how and why a certain desire became (or did not become) an intention. In the case of BDI agents, the path of this explanation is made up of only one step, because in BDI agents there are only two stages in the intention formation process: desires and intentions. This means that a fine-grained analysis of this process is missing, an analysis that could improve and enrich the informational quality of the explanations.

An extended model for intention formation has been proposed by Castelfranchi and Paglieri [CP07], named the Belief-based Goal Processing model (let us denote it by BBGP). The BBGP model has four stages: (i) activation (denoted ac)1, (ii) evaluation (denoted ev), (iii) deliberation (denoted de), and (iv) checking (denoted ck). Consequently, four different statuses for a goal are defined: (i) active (= desire, denoted ac), (ii) pursuable (denoted pu), (iii) chosen (= future-directed intention, denoted ch), and (iv) executive (= present-directed intention, denoted ex). When a goal passes the activation (resp. evaluation, deliberation, checking) stage, it becomes active (resp. pursuable, chosen, executive). Figure 1 shows a general schema of the goal processing stages, the necessary beliefs, and the status of a goal after passing each stage. In this article, a status before the active one is also considered, called the sleeping status2.

[Figure 1: Schema of the goal processing stages and the status of a goal after passing each stage. Sleeping goals become active after the activation stage (motivating beliefs), active goals become pursuable after the evaluation stage (assessment beliefs), pursuable goals become chosen after the deliberation stage (cost/incompatibility/preference beliefs), and chosen goals become executive after the checking stage (pre-conditions/means-end beliefs).]

Against this background, the aim of this article is to formalize the BBGP model from the activation stage until the checking stage and to endow BBGP-based agents with explainability abilities. Thus, the research questions that are addressed in this article are: (i) how to formalize the BBGP model by integrating the activation, evaluation, deliberation, and checking stages? and (ii) how to endow BBGP-based agents with the ability of generating explanations?

In addressing the first question, we use argumentation, which is a process of constructing and comparing arguments considering the conflicts – called attacks – among them. The output of the argumentation process is a set (or sets) of arguments – called extensions – which are internally consistent [Dun95]. In the intention formation process, or goal processing, arguments can represent reasons for a goal to change (or not) its status. Thus, one can see the intention formation process as a decision-making process, where an agent has to decide which goals pass a given stage and which do not. Adopting an argumentation-based approach in a decision-making problem has some benefits for explainability. For example, a (human) user will obtain a “good” choice along with the reasons underlying this recommendation. Besides, argumentation-based decision-making is closer to the way humans deliberate and finally make a choice [OMT10]. Regarding the second question, we endow agents with a structure that saves the reasoning path, based on which explanations can be generated.
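To make this stage/status progression concrete, the following minimal Python sketch encodes the four stages and the status a goal acquires after passing each one. The names (Status, Stage, pass_stage) are ours and purely illustrative; they are not part of the BBGP formalization.

```python
from enum import Enum, auto

class Status(Enum):
    SLEEPING = auto()
    ACTIVE = auto()      # = desire
    PURSUABLE = auto()
    CHOSEN = auto()      # = future-directed intention
    EXECUTIVE = auto()   # = present-directed intention

class Stage(Enum):
    ACTIVATION = auto()
    EVALUATION = auto()
    DELIBERATION = auto()
    CHECKING = auto()

# Status a goal acquires after passing each stage (cf. Figure 1).
RESULT_OF = {
    Stage.ACTIVATION: Status.ACTIVE,
    Stage.EVALUATION: Status.PURSUABLE,
    Stage.DELIBERATION: Status.CHOSEN,
    Stage.CHECKING: Status.EXECUTIVE,
}

# Status a goal must hold in order to enter each stage.
ENTERS_FROM = {
    Stage.ACTIVATION: Status.SLEEPING,
    Stage.EVALUATION: Status.ACTIVE,
    Stage.DELIBERATION: Status.PURSUABLE,
    Stage.CHECKING: Status.CHOSEN,
}

def pass_stage(current: Status, stage: Stage) -> Status:
    """Return the new status of a goal that passes `stage`,
    assuming the stages are traversed strictly in order."""
    if current != ENTERS_FROM[stage]:
        raise ValueError(f"a {current.name} goal cannot enter the {stage.name} stage")
    return RESULT_OF[stage]
```

For instance, pass_stage(Status.PURSUABLE, Stage.DELIBERATION) yields Status.CHOSEN, mirroring the passage through the deliberation stage described above.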
Figure 2 shows an overview of our approach; concretely, it shows all the possible transitions of a goal from its sleeping status until it becomes executive, which depend on the arguments generated in each stage. We also consider the status cancelled, which happens under some circumstances that are briefly mentioned in the legend. The next section presents the work that has been done so far, including the formalization of argumentation-based agents and our proposal for generating partial and complete explanations. Section 3 enumerates the main future directions of our research. Section 4 presents the main related work. Finally, Section 5 is devoted to conclusions.

[Figure 2: Life-cycle of goals. Acceptable activation, evaluation, deliberation, and checking arguments move a goal from sleeping to active, from active to pursuable, from pursuable to chosen, and from chosen to executive, respectively. Backward and cancelling transitions are guarded or triggered by the following conditions: (1) the goal is deactivated, (2) the goal becomes impossible to achieve, (3) the maximum number of cycles is reached, (4) the goal becomes incompatible and not preferred.]

2 What has been done so far

In this section, we present the work that has been done so far. Thus, we introduce (i) the building blocks for both the argumentation-based formalization and the mechanism for generating explanations, (ii) the argumentation process for goal processing, and (iii) the mechanism for the generation of complete and partial explanations3.

1 Hereafter, these notations are used to differentiate the stages and the statuses of goals.
2 Sleeping is a status of a goal that is proposed in [Cas08] to refer to goals that have not been activated yet.
3 The first two points were presented in [MEPPGT19] and the last point was presented in [MEPT19].

2.1 Preliminaries

In this paper, BBGP-based agents use rule-based systems4 as their basic reasoning model. The underlying logical language – denoted by L – consists of a set of literals5 in a first-order logical language. We represent non-ground formulae with Greek letters (ϕ, ψ, ...), variables with Roman letters (x, y, ...), and we name rules with r1, r2, .... Strict rules are of the form r = ϕ1, ..., ϕn → ψ, and defeasible rules are of the form r = ϕ1, ..., ϕn ⇒ ψ. Thus, a theory is a triple T = ⟨F, S, D⟩, where F ⊆ L is a set of facts, S is a set of strict rules, and D is a set of defeasible rules. New information is produced from a given theory by applying the following concept, which was given in [AB13].

Definition 1. (Derivation schema) Let T = ⟨F, S, D⟩ be a theory and ψ ∈ L. A derivation schema for ψ from T is a finite sequence T = {(ϕ1, r1), ..., (ϕn, rn)} such that:
- ϕn = ψ
- for i = 1...n, ϕi ∈ F and ri = ∅, or ri ∈ S ∪ D

Based on a derivation schema T, the following sets can be defined: SEQ(T) = {ϕ1, ..., ϕn}, FACTS(T) = {ϕi | i ∈ {1, ..., n}, ri = ∅}, STRICT(T) = {ri | i ∈ {1, ..., n}, ri ∈ S}, and DEFE(T) = {ri | i ∈ {1, ..., n}, ri ∈ D}.

2.2 Building Blocks

From L, we distinguish the following finite sets:
- F, the set of facts of the agent, and
- G, the set of goals of the agent.

F and G are subsets of ground literals from the language L and are disjoint. Besides, G = G_ac ∪ G_pu ∪ G_ch ∪ G_ex, where G_ac (resp. G_pu, G_ch, G_ex) stands for the set of active (resp. pursuable, chosen, executive) goals. It holds that G_x ∩ G_y = ∅, for x, y ∈ {ac, pu, ch, ex} with x ≠ y. Other important structures are rules, which express the relation between beliefs and goals.
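Before turning to rules, the following Python sketch illustrates Definition 1 and the sets SEQ, FACTS, STRICT, and DEFE. The encoding (literals as strings, a Rule dataclass, the is_derivation checker) is our own assumption for illustration purposes, not the formalism itself; it also makes explicit the usual reading that a rule step may only be applied once its premises appear earlier in the sequence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    premises: tuple     # literals (strings) in the body
    conclusion: str     # literal in the head
    strict: bool        # True for '->' rules, False for '=>' rules

@dataclass
class Theory:
    facts: set          # F
    strict: set         # S
    defeasible: set     # D

def is_derivation(theory: Theory, schema: list, psi: str) -> bool:
    """Check that `schema`, a sequence of (literal, rule-or-None) pairs,
    is a derivation schema for `psi` from `theory` (Definition 1)."""
    derived = set()
    for literal, rule in schema:
        if rule is None:                                   # fact step
            if literal not in theory.facts:
                return False
        else:                                              # rule application step
            if rule not in theory.strict and rule not in theory.defeasible:
                return False
            if rule.conclusion != literal or not set(rule.premises) <= derived:
                return False
        derived.add(literal)
    return bool(schema) and schema[-1][0] == psi

# The sets associated with a derivation schema T.
def SEQ(schema):    return {lit for lit, _ in schema}
def FACTS(schema):  return {lit for lit, r in schema if r is None}
def STRICT(schema): return {r for _, r in schema if r is not None and r.strict}
def DEFE(schema):   return {r for _, r in schema if r is not None and not r.strict}
```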
Rules can be classified into standard and non-standard rules (activation, evaluation, deliberation, and checking rules). The former are made up of beliefs in both their premises and their conclusions, whereas the latter are made up of beliefs in their premises and goals (or beliefs about goals) in their conclusions. Both standard and non-standard rules can be strict or defeasible. Standard rules can be used in any of the stages of goal processing, whereas non-standard rules are distinct for each stage. Thus, we have:
- Standard rules (r_st): ϕ1 ∧ ... ∧ ϕn → φ (or ϕ1 ∧ ... ∧ ϕn ⇒ φ).
- Activation rules (r_ac): ϕ1 ∧ ... ∧ ϕn → ψ (or ϕ1 ∧ ... ∧ ϕn ⇒ ψ).
- Evaluation rules (r_ev): ϕ1 ∧ ... ∧ ϕn → ¬ψ (or ϕ1 ∧ ... ∧ ϕn ⇒ ¬ψ).
- Deliberation rules: r^1_de = ¬has_incompatibility(g) → chosen(g) and r^2_de = most_valuable(g) → chosen(g).
- Checking rule: r_ck = has_plans_for(g) ∧ satisfied_context_for(g) → executive(g).

where ϕi and ψ denote non-ground literals that represent beliefs and goals, respectively, and g denotes a ground literal that represents a goal6. Notice that standard, activation, and evaluation rules are designed and entered by the programmer of the agent, and their content is domain-dependent. In contrast, the rules of the deliberation and checking stages are pre-defined and domain-independent. Finally, let R_st, R_ac, R_ev, R_de, and R_ck denote the sets of standard, activation, evaluation, deliberation, and checking rules, respectively.

4 Rule-based systems distinguish between facts, strict rules, and defeasible rules. A strict rule encodes strict information that has no exception, whereas a defeasible rule expresses general information that may have exceptions.
5 Literals are defined as positive or negative atoms, where an atom is an n-ary predicate.

Next, we define the theory of a BBGP-based agent.

Definition 2. (BBGP-based Agent Theory) A theory is a triple T = ⟨F, S, D⟩ such that: (i) F is the set of beliefs of the agent, (ii) S = R^S_st ∪ R^S_ac ∪ R^S_ev ∪ R^S_de ∪ R^S_ck is the set of strict rules, and (iii) D = R^D_st ∪ R^D_ac ∪ R^D_ev ∪ R^D_de ∪ R^D_ck is the set of defeasible rules, where R_x = R^S_x ∪ R^D_x (for x ∈ {st, ac, ev, de, ck}). It holds that R^S_x ∩ R^D_x = ∅.

From a theory, a BBGP-based agent can build arguments. There are two categories of arguments. The first one – called epistemic arguments – justifies or attacks beliefs, while the other one – called stage arguments – justifies or attacks the passage of a goal from one stage to another. There is a set of arguments for each stage of the BBGP model.

Definition 3. (Arguments) Let T = ⟨F, S, D⟩ be a BBGP-based agent theory, and let T' = ⟨F, R^S_st, R^D_st⟩ and T'' = ⟨F, S'', D''⟩ be two sub-theories of T, where S'' = S \ R^S_st and D'' = D \ R^D_st. An epistemic argument constructed from T' is a pair A = ⟨T, ϕ⟩ such that: (1) ϕ ∈ L, and (2) T is a derivation schema for ϕ from T'. On the other hand, a stage argument constructed from T'' is a pair A = ⟨T, g⟩ such that: (1) g ∈ G, and (2) for the activation and evaluation stages, T is a derivation schema for g from T''; for the deliberation stage, T is a derivation schema for chosen(g) from T''; for the checking stage, T is a derivation schema for executive(g) from T''. For both kinds of arguments, it holds that SEQ(T) is consistent7 and T must be minimal8. Finally, ARG_ep, ARG_ac, ARG_ev, ARG_de, and ARG_ck denote the sets of all epistemic, activation, evaluation, deliberation, and checking arguments, respectively. As for notation, CLAIM(A) = ϕ (or g) and SUPPORT(A) = T denote the conclusion and the support of an argument A, respectively.
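To give a flavour of how such arguments can be computed, the following self-contained Python sketch builds arguments by naive forward chaining, recording for each claim a derivation schema as its support. The concrete activation rule and the helper build_arguments are hypothetical, and consistency/minimality checks (as well as multiple arguments per claim) are omitted, so this is only a sketch of the idea, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    premises: frozenset   # belief literals required by the rule
    conclusion: str       # a belief, a goal g, chosen(g), or executive(g)

def build_arguments(facts, rules):
    """Return pairs (SUPPORT, CLAIM), where SUPPORT is a tuple of
    (literal, rule_name) pairs, i.e. a derivation schema."""
    supports = {f: ((f, None),) for f in facts}   # every fact is trivially derivable
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.conclusion in supports or not r.premises <= set(supports):
                continue
            prefix = tuple(step for p in sorted(r.premises) for step in supports[p])
            supports[r.conclusion] = prefix + ((r.conclusion, r.name),)
            changed = True
    return [(supp, claim) for claim, supp in supports.items()]

# Hypothetical activation scenario: a motivating belief triggers an activation rule.
facts = {"severe_injury(patient1)"}
rules = [Rule("r_ac1", frozenset({"severe_injury(patient1)"}),
              "take_hospital(patient1)")]
for support, claim in build_arguments(facts, rules):
    print(claim, "<-", support)
```

Running this fragment yields one argument whose claim is the belief severe_injury(patient1) and one activation-style argument whose claim is the goal take_hospital(patient1), with its support recording the fact and the applied rule.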
An argument may have a set of sub-arguments. Thus, an argument ⟨T', ϕ'⟩ is a sub-argument of ⟨T, ϕ⟩ iff FACTS(T') ⊆ FACTS(T), STRICT(T') ⊆ STRICT(T), and DEFE(T') ⊆ DEFE(T). SUB(A) denotes the set of all sub-arguments of A.

Stage arguments built from T constitute a reason for a goal to change its status. However, they are not a proof that the goal should adopt another status, because an argument can be attacked by other arguments. Two kinds of attacks are distinguished: (i) attacks between epistemic arguments, and (ii) mixed attacks, in which an epistemic argument attacks a stage argument. The former is defined over ARG_ep and is captured by the binary relation att_ep ⊆ ARG_ep × ARG_ep. The latter is defined over ARG_ep and ARG_x (for x ∈ {ac, ev, de, ck}) and is captured by the binary relation att_mx ⊆ ARG_ep × ARG_x. For both kinds of attacks, (A, B) ∈ att_mx (or (A, B) ∈ att_ep) denotes that there is an attack relation between arguments A and B. The next definition captures both kinds of attacks; rebuttal may occur only between epistemic arguments, whereas undercut may occur in both kinds of attacks.

Definition 4. (Attacks) Let ⟨T, ϕ⟩ and ⟨T', ϕ'⟩ be two epistemic arguments, and ⟨T', g⟩ be a stage argument. ⟨T, ϕ⟩ rebuts ⟨T', ϕ'⟩ if ϕ = ¬ϕ'. ⟨T, ϕ⟩ undercuts ⟨T', ϕ'⟩ (or ⟨T', g⟩) if there exists ϕ'' ∈ FACTS(T') such that ϕ = ¬ϕ''.

From epistemic and stage arguments and the attacks between them, a different Argumentation Framework (AF) is generated for each stage of the BBGP model.

Definition 5. (Argumentation Framework) An argumentation framework AF_x is a pair AF_x = ⟨ARG, att⟩ (x ∈ {ac, ev, de, ck}) such that:
• ARG = ARG_x ∪ ARG'_ep ∪ SUBARGS, where ARG_x is a set of stage arguments, ARG'_ep = {A | A ∈ ARG_ep and (A, B) ∈ att_mx or (A, C) ∈ att_ep, where B ∈ ARG_x and C ∈ ARG'_ep}, and SUBARGS = ⋃_{A ∈ ARG_x ∪ ARG'_ep} SUB(A).
• att = att'_ep ∪ att'_mx, where att'_ep ⊆ ARG'_ep × ARG'_ep and att'_mx ⊆ ARG'_ep × ARG_x.

6 In any of the statuses of the goal processing, a goal is represented by a ground atom. However, before a goal becomes active, it has the form of a non-ground atom; in this case, we call it a sleeping goal. Thus, ψ is a sleeping goal and g a goal in some status.
7 A set L' ⊆ L is consistent iff there are no ϕ, ϕ' ∈ L' such that ϕ = ¬ϕ'. It is inconsistent otherwise.
8 Minimal means that there is no T' ⊂ T that is a derivation schema for ϕ (respectively g, chosen(g), or executive(g)).

The next step is to evaluate the arguments that are part of the AF. This evaluation is important because it determines which goals pass (acceptable goals) from one stage to the next. The aim is to obtain a subset of ARG without conflicting arguments. In order to obtain it, we use an acceptability semantics, which returns one or more sets – called extensions – of acceptable arguments. The fact that a stage argument belongs to an extension determines the change of status of the goal in its claim. Next, the main semantics introduced by Dung [Dun95] are recalled9.

Definition 6. (Semantics) Let AF_x = ⟨ARG, att⟩ be an AF (with x ∈ {ac, ev, de, ck}) and E ⊆ ARG:
- E is conflict-free iff there are no A, B ∈ E such that (A, B) ∈ att.
- E defends A iff ∀B ∈ ARG, if (B, A) ∈ att, then ∃C ∈ E s.t. (C, B) ∈ att.
- E is admissible iff it is conflict-free and defends all its elements.
- A conflict-free E is a complete extension iff E = {A | E defends A}.
- E is a preferred extension iff it is a maximal (w.r.t. set inclusion) complete extension.
- E is a grounded extension iff it is the minimal (w.r.t. set inclusion) complete extension.
- E is a stable extension iff E is conflict-free and ∀A ∈ ARG \ E, ∃B ∈ E such that (B, A) ∈ att.

9 It is not within the scope of this article to study the most adequate semantics for goal processing or the way to select an extension when more than one is returned by a semantics.

2.3 Partial and Complete Explanations

In order to be able to generate partial and complete explanations, a BBGP-based agent needs to store information about the progress of its goals, that is, the changes of the statuses of such goals and the causes of these changes. The latter are stored in each AF in the form of accepted arguments; however, the former cannot be stored in an AF. Thus, we need a structure that saves both the status of each goal and the AF that supports this status. Considering that at each stage the agent generates arguments and attacks for more than one goal – which are stored in each AF_x – and we only need the arguments and attacks related to one goal, we have to extract such arguments and attacks from AF_x. In other words, we need to obtain a sub-AF.

Definition 7. (Sub-AF) Let AF_x = ⟨ARG, att⟩ (with x ∈ {ac, ev, de, ck}) be an AF and g ∈ G a goal. An AF AF'_x = ⟨ARG', att'⟩ is a sub-AF of AF_x (denoted AF'_x ⊑ AF_x), if ARG' ⊆ ARG and att' = att ⊗ ARG', such that:
- ARG' = {A | A ∈ ARG_x, CLAIM(A) = g} ∪ {B | (B, A) ∈ att_mx or (B, C) ∈ att'_ep, where B ∈ ARG_ep, C ∈ ARG'_ep, att'_ep ⊂ att', and ARG'_ep ⊂ ARG'}, and
- att ⊗ ARG' returns the subset of att that involves just the arguments in ARG'.

Next, we define a structure that stores the causes of the changes of the status of a goal and that must be updated after a new change occurs. We can see this structure as a table record, where each row saves the status of a goal along with the AF that supports such status.

Definition 8. (Goal Memory) Let AF_x = ⟨ARG, att⟩ be an AF (with x ∈ {ac, ev, de, ck}), AF'_x ⊑ AF_x a sub-AF, and g ∈ G a goal. The goal memory GM_g for goal g is a set of ordered pairs (STA, REASON) such that:
• STA ∈ {ac, pu, ch, ex, not_ac, not_pu, not_ch, not_ex}, where {ac, pu, ch, ex} represent the statuses g attains due to the arguments in REASON, whereas {not_ac, not_pu, not_ch, not_ex} represent the statuses g cannot attain due to the arguments in REASON.
• REASON = AF'_x ⊑ AF_x is a sub-AF whose selected extension supports the current status of g.

Let GM+ be the set of all goal memories and NUM_REC : GM+ → ℕ a function that returns the number of records of a given GM. From the goal memory structure, partial and complete explanations can be generated.

Definition 9. (Partial and Complete Explanations) Let g ∈ G be a goal, GM_g the memory of g, and AF_ac, AF_ev, AF_de, and AF_ck the four argumentation frameworks involved in goal processing. Besides, let x ∈ {ac, pu, ch, ex} denote the current status of g:
- A complete explanation CE_g for g ∈ G_x is obtained as follows: CE_g = ⋃_{i=1}^{NUM_REC(GM_g)} REASON_i, where REASON_i ⊑ AF_ac, AF_ev, AF_de, or AF_ck.
- A partial explanation PE_g is obtained as follows: PE_g = ⋃_{i=1}^{NUM_REC(GM_g)} E_i, where E_i is the selected extension obtained from REASON_i.

This means that complete explanations include the arguments that support and attack the passage of a goal to the next stage, whereas partial explanations only include the supporting arguments.

Example 1. Considering the scenario presented in the Introduction section, suppose that BOB is a rescue robot whose goal memory for goal g2 = take_hospital(patient1) (that is, GM_g2) is shown in the following table10.
STA   REASON
ac    AF_ac = ⟨{A^2_ac, A^3_ac, A^7_ep, A^2_ep, A^3_ep, A^9_ep, A^5_ep, A^8_ep}, {(A^8_ep, A^2_ac), (A^7_ep, A^8_ep), (A^8_ep, A^7_ep), (A^9_ep, A^8_ep)}⟩
pu    AF_ev = ⟨{A^1_ev, A^11_ep, A^6_ep}, {(A^11_ep, A^1_ev), (A^11_ep, A^6_ep), (A^6_ep, A^11_ep)}⟩
ch    AF_de = ⟨{A^1_de, A^12_ep}, {}⟩
ex    AF_ck = ⟨{A^1_ck, A^13_ep, A^11_ep}, {}⟩

Now suppose that at the end of a rescue day, BOB is interrogated by a supervisor with the following question: WHY(g2, ac), which in natural language would be: Why did you decide to activate the goal “taking patient1 to the hospital”? Considering GM_g2, the complete explanation is:

CE_g2 = AF_ac = ⟨{A^2_ep, A^3_ep, A^5_ep, A^7_ep, A^8_ep, A^9_ep, A^2_ac, A^3_ac}, {(A^8_ep, A^2_ac), (A^7_ep, A^8_ep), (A^8_ep, A^7_ep), (A^9_ep, A^8_ep)}⟩

In natural language, this would be the answer: patient1 had a fractured bone (A^2_ep), the fractured bone was of his arm (A^3_ep), and it was an open fracture (A^5_ep). Given that he had a fractured bone, he might be considered severely injured (A^7_ep); however, since such fracture was of his arm, it might not be considered a severe injury (A^8_ep). Finally, I noted that it was an open fracture, which determines – without exception – that it was a severe injury (A^9_ep). For these reasons I decided to activate the goal of taking him to the hospital (A^2_ac, A^3_ac).

On the other hand, for generating the partial explanation BOB uses the arguments that are part of a preferred extension. Thus, he answers with: PE_g2 = {A^2_ep, A^3_ep, A^5_ep, A^7_ep, A^9_ep, A^2_ac, A^3_ac}. In natural language he would give the following answer: patient1 had a fractured bone (A^2_ep), the fractured bone was of his arm (A^3_ep), and it was an open fracture (A^5_ep); therefore, he was severely injured (A^7_ep, A^9_ep). Since he was severely injured, I took him to the hospital (A^2_ac, A^3_ac).

3 What still needs to be done

Next, we present some future research directions of this work:
• Considering Figure 2, goals can go forward in the intention formation process and can also go back; they can even become cancelled. For example, a chosen goal can return to a previous status such as pursuable or active. We have briefly given an idea of the reasons for a goal to lose its current status; however, this has not been formalized yet and there is no mechanism that controls the life-cycle depicted in Figure 2.
• We have considered an additional status (cancelled); however, it is important to study whether other statuses are needed (like the suspended status of [BPML04]). The inclusion of new statuses impacts the explainability skills of the agents; however, it also makes the model more complex. Thus, further study is needed about this trade-off and about how the dynamics between the statuses should be defined.
• We have focused on achievement (or procedural) goals11; however, maintenance and declarative goals12 are also considered in the evaluation stage of the BBGP model. According to the BBGP model, maintenance goals are also evaluated to determine whether a goal becomes pursuable or not. For example, one goal of human beings is to be happy, and it is a goal that is permanently active. If a person is already happy, this goal does not become pursuable because it is not necessary to do something to achieve it; on the other hand, if the person is not happy, this goal becomes pursuable because the person has to do something to achieve his/her happiness. BBGP-based agents also evaluate goals that depend on other agents, that is, goals for which the achievement of a state of the world depends on the execution of actions by other agents.
Therefore, declarative goals are used to evaluate a certain state of the world.

10 Due to the lack of space, we are not going to explain the entire example; however, the reader can find it in [MEPT19].
11 A goal is called procedural when there is a set of plans for achieving it. This differs from declarative ones, which are a description of the state sought [WPHT02].
12 Unlike achievement goals, maintenance goals define states that must remain true, rather than a state that is to be achieved [DHT06].

• During the deliberation stage, BBGP-based agents have to identify possible incompatibilities between pursuable goals and resolve such incompatibilities in order to determine which goals become chosen. This problem was briefly tackled in [MEPPGT17] and extended in [MENP+19]; however, it has not yet been integrated into the BBGP-based agent architecture. Besides, based on [MENP+19], it is possible to improve the deliberation rules and add new ones, which in turn may enrich the explanations generated by BBGP-based agents.
• In [MENPT19], we took uncertainty into account for identifying and dealing with incompatibilities among goals. It would be interesting to also consider uncertainty in the elements of activation, evaluation, and deliberation rules. How would this impact the results? How would it impact the explainability skills of the agents?
• Regarding the generated explanations, there is still a lot of work to do. We have proposed two kinds of explanations; however, it is necessary to study how to deal with complex questions, which require more elaborate and adequate explanations. In this sense, a “good” explanation may include elements from different AFs: which elements? How should they be organized? What else should be taken into account when generating an explanation?

4 Related Work

Since XAI is a recently emerged domain in Artificial Intelligence, there are few reviews of the work in this area. In [ANCF19], Anjomshoae et al. present a systematic literature review on goal-driven XAI, i.e., explainable agency for robots and agents. According to them, very few works tackle inter-agent explainability. One interesting research question concerned the platforms and architectures that have been used to design explainable agency. Their results show that 9% of the surveyed works implemented their explanations on a BDI architecture.

In [BHH+10] and [HvdBM10], the authors focus on agents that generate explanations for humans about their goals and actions. They construct a tree with beliefs, goals, and actions, from which the explanations are constructed. Unlike our proposal, their explanations do not detail the progress of the goals and are not complete in the sense that they do not express why an agent did not commit to a given goal. Finally, Sassoon et al. [SSKP19] propose an approach of explainable argumentation based on argumentation schemes and argumentation-based dialogues. In this approach, an agent provides explanations to patients (human users) about their treatments. In this case, argumentation is applied in a different way than in our proposal and with a different focus: explanations are generated for information seeking and persuasion.

5 Final Remarks

This article presented an approach for explainable agency based on argumentation theory. The chosen architecture was the BBGP model, which can be considered an extension of the BDI model. Our objective was that BBGP-based agents be able to explain their decisions about the statuses of their goals, especially those goals they committed to.
In order to achieve our objectives, we equipped BBGP-based agents with a structure and a mechanism to generate both partial and complete explanations. Furthermore, we presented an agenda of some future research directions. Currently, we are working on integrating the identification and resolution of incompatibilities into the whole BBGP-based agent architecture. We are also extending the set of deliberation rules in order to produce more detailed explanations. Finally, we are working on the implementation of the initial proposal.

Acknowledgements

This work is fully funded by CAPES.

References

[AB13] Leila Amgoud and Philippe Besnard. A formal characterization of the outcomes of rule-based argumentation systems. In International Conference on Scalable Uncertainty Management, pages 78–91. Springer, 2013.

[ANCF19] Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 1078–1088, 2019.

[BHH+10] Joost Broekens, Maaike Harbers, Koen Hindriks, Karel Van Den Bosch, Catholijn Jonker, and John-Jules Meyer. Do you get it? User-evaluated explainable BDI agents. In German Conference on Multiagent System Technologies, pages 28–39. Springer, 2010.

[BPML04] Lars Braubach, Alexander Pokahr, Daniel Moldt, and Winfried Lamersdorf. Goal representation for BDI agent systems. In International Workshop on Programming Multi-Agent Systems, pages 44–65. Springer, 2004.

[Bra87] Michael Bratman. Intention, Plans, and Practical Reason. 1987.

[Cas08] Cristiano Castelfranchi. Reasons: Belief support and goal dynamics. Mathware & Soft Computing, 3(1-2):233–247, 2008.

[CP07] Cristiano Castelfranchi and Fabio Paglieri. The role of beliefs in goal dynamics: Prolegomena to a constructive theory of intentions. Synthese, 155(2):237–263, 2007.

[DHT06] Simon Duff, James Harland, and John Thangarajah. On proactivity and maintenance goals. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1033–1040, 2006.

[Dun95] Phan Minh Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357, 1995.

[HvdBM10] Maaike Harbers, Karel van den Bosch, and John-Jules Meyer. Design and evaluation of explainable BDI agents. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, volume 2, pages 125–132. IEEE, 2010.

[MENP+19] Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Possebom, Josep Puyol-Gruart, and Cesar Augusto Tacla. An argumentation-based approach for identifying and dealing with incompatibilities among procedural goals. International Journal of Approximate Reasoning, 105:1–26, 2019.

[MENPT19] Mariela Morveli Espinoza, Juan Carlos Nieves, Ayslan Possebom, and Cesar Augusto Tacla. Dealing with incompatibilities among procedural goals under uncertainty. Inteligencia Artificial. Ibero-American Journal of Artificial Intelligence, 22(64), 2019.

[MEPPGT17] Mariela Morveli Espinoza, Ayslan T Possebom, Josep Puyol-Gruart, and Cesar A Tacla. Dealing with incompatibilities among goals. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 1649–1651. International Foundation for Autonomous Agents and Multiagent Systems, 2017.
[MEPPGT19] Mariela Morveli-Espinoza, Ayslan Trevizan Possebom, Josep Puyol-Gruart, and Cesar Augusto Tacla. Argumentation-based intention formation process. Revista Internacional Dyna, 86(208):82–91, 2019.

[MEPT19] Mariela Morveli-Espinoza, Ayslan Possebom, and Cesar Augusto Tacla. Argumentation-based agents that explain their decisions. In Proceedings of the 8th Brazilian Conference on Intelligent Systems (BRACIS), pages 467–472. IEEE, 2019.

[OMT10] Wassila Ouerdane, Nicolas Maudet, and Alexis Tsoukias. Argumentation theory and decision aiding. In Trends in Multiple Criteria Decision Analysis, pages 177–208. Springer, 2010.

[RG95] Anand S Rao and Michael P Georgeff. BDI agents: From theory to practice. In ICMAS, volume 95, pages 312–319, 1995.

[SSKP19] Isabel Sassoon, Elizabeth Sklar, Nadin Kokciyan, and Simon Parsons. Explainable argumentation for wellness consultation. In Proceedings of the 1st International Workshop on eXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS 2019), AAMAS, 2019.

[WPHT02] Michael Winikoff, Lin Padgham, James Harland, and John Thangarajah. Declarative and procedural goals in intelligent agent systems. In International Conference on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann, 2002.