Preference Management in Epistemic Logic L-DINF

Stefania Costantini 1, Andrea Formisano 2, and Valentina Pitoni 1 (corresponding author)
1 DISIM, Università di L'Aquila, Italy
2 DMIF, Università di Udine, Italy

Abstract
The Logic of "Inferable" L-DINF has been recently proposed as a declarative framework to formally model, via epistemic logic, the group dynamics of cooperative agents. In this paper, we extend the framework by introducing the possibility of having costs for the execution of physical actions. Such costs may require the consumption of multiple resources of various types, to be drawn from agents' budgets. We also emphasize that all aspects of Multi-Agent Systems specified in L-DINF can be formalized in a modular way. In particular, concerning the execution of physical actions, dedicated modules allow the specification of a notion of equivalence for actions and a notion of agents' preference, to be used to affect action execution.

Keywords: Multi-Agent Systems, Modal Logic, Epistemic Logic

CILC'23: 38th Italian Conference on Computational Logic, June 21–23, 2023, Udine, Italy. Research partially supported by Action COST CA17124 "DigForASP", by the Interdepartmental Project on AI (Strategic Plan UniUD 22–25), and by INdAM-GNCS projects CUP E55F22000270001 and CUP E53C22001930001. Contacts: stefania.costantini@univaq.it (S. Costantini), andrea.formisano@uniud.it (A. Formisano), valentina.pitoni@univaq.it (V. Pitoni). ORCID: 0000-0002-5686-6124 (S. Costantini), 0000-0002-6755-9314 (A. Formisano). Copyright for this paper by its authors; use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Nowadays, Multi-Agent Systems (MAS) have become a technology with many fields of application. In previous work [1, 2, 3] we introduced a logical framework based on an epistemic logic called L-DINF, which allows one to represent the knowledge and beliefs of agents, to reason about their mental and physical actions, and to take into account the costs of actions and the budgets available to afford such costs. In particular, we assume that agents perform mental actions to develop reasoning as a premise to the execution of some physical action. The execution of each step in these inferences has a cost, paid by agents by drawing on a budget dedicated to the execution of mental actions. Intuitively, in a practical scenario, we could consider this cost as an amount of energy that the agent draws from its internal battery to enable its reasoning functions. The logical framework L-DINF allows the modeling of group dynamics of cooperative agents. Consequently, one can model agents that form groups and support each other in performing collective mental actions [4, 5]. Moreover, agents can take into account preferences about performing one action instead of another [6]. The logical framework also encompasses the possibility for agents to have roles within their group. Roles determine which actions each agent is enabled by its group to perform [7]. A mental action is considered executable if at least one agent of the group can perform the action, with the approval of the group and on behalf of the group. An agent can join or leave a group whenever it wants (and, consequently, the role of an agent may change as it joins another group). The agents of a group can share their beliefs, so that any agent can access the beliefs of other agents.
This ability opens up the possibility of modeling aspects of "Theory of Mind" [8]. For instance, an agent can maintain a (possibly outdated) version of the mental states of other agents and perform inferences about such knowledge. Then, it can make predictions and formulate interpretations of other agents' behaviours. For simplicity, in this paper we do not deal with these aspects and refer the interested reader to [9].

In this context, there are applications where agents can profit from the ability to choose the preferred physical action among a set of actions deemed equivalent. Hence, whenever the inference activity of an agent indicates that a physical action φ_A has to be executed, the agent can choose to perform another action φ'_A equivalent to φ_A. To model this feature we enrich the logical framework so that agents can exploit an equivalence relation among physical actions, together with evaluations of the costs of actions, the available budget, and the agents' own preferences, in order to determine which is the best physical action to perform. Also, we assume that the execution of each physical action may require some amounts of resources to be used/available. Differently from what happens for mental actions (where, for simplicity, only one type of resource, i.e., "energy", is taken into account), we consider that each agent has different amounts of different resources available, and that each physical action may require multiple resources.

Various logics concerning implicit and explicit belief, as well as some aspects of awareness, have been proposed in the literature. We mention, among others, the seminal work [10] by Fagin & Halpern. Nevertheless, to the best of our knowledge, such proposals make no use of concepts such as 'reasoning' and 'inference'. Instead, the logical framework L-DINF accounts for the perceptive and inferential steps leading from an agent's knowledge and beliefs to new beliefs. In this sense, it provides a constructive theory of explicit beliefs. In L-DINF, aspects such as the "executability" of actions (both mental and physical) and the costs related to their execution can be represented and considered in the reasoning activity of the agents. Epistemic attitudes are modeled similarly to other approaches, among which we mention the dynamic theory of evidence-based beliefs [11] (which, similarly to our approach, exploits a neighborhood semantics for the notion of evidence), the sentential approach to explicit belief dynamics [12], the dynamic theory of beliefs described in [13], and the dynamic logic combining explicit beliefs and knowledge proposed in [14]. Concerning logics of inference, relevant proposals are the one in [15] and the logical system DES4_n described in [16]. In particular, we are indebted to [15] concerning the idea of modeling inference steps by means of dynamic operators in the style of dynamic epistemic logic (DEL). We, however, distinguish and emphasize the notions of explicit belief and background knowledge. Also, we consider issues related to executability and costs. In developing L-DINF we are also indebted to [16], concerning the point of view that agents reach certain belief states by performing inferences, and that making inferences takes time (we tackled the issue of time in previous work, discussed in [1, 17, 18]). Differently from that work, however, in L-DINF inferential actions are represented both at the syntactic level, via dynamic operators, and at the semantic level, as neighborhood updates.
Moreover, L-DINF enables an agent to reason on the executability of inferential actions. The notion of explicit beliefs constitutes a major difference, besides others, between L-DINF and active logics [19, 20]. While active logics provide models of reasoning based on long-term memory and short-term memory, as in our approach, they do not distinguish, as we do, between the notion of explicit belief and the notion of background knowledge, conceived in our case as a radically different kind of epistemic attitude. Moreover, L-DINF accounts for a collection of mental actions that have not been explored in the active-logic literature.

In this paper, we extend the framework described in [1, 2, 3] by introducing the possibility of having costs for the execution of physical actions. To meet these costs, the agents draw on resource budgets, and this may involve the consumption of multiple amounts of resources of various types. Compared to the formalization of L-DINF that appeared in previous papers, here we propose a re-engineering of the entire logical framework. This is done, both syntactically and semantically, by distinguishing between a core part of the framework (concerning the essential part of the logic, i.e., expressing properties of agents' knowledge and beliefs) and a collection of packages. Each package can be thought of as a modular extension of the core part, used to introduce a specific feature (such as preferences, costs, executability constraints, etc.) into the framework.

2. Logical Framework

L-DINF is a logic composed of a static component and a dynamic component. The first, called L-INF, is a logic of explicit beliefs and background knowledge. The second component extends the static one with dynamic operators that express the consequences of agents' mental actions.

2.1. Syntax

Let Atm = {p, q, . . .} be a countable set of atomic propositions. The set Atm_A is the set of the physical actions that agents can perform, including "active sensing" actions (e.g., "let's check whether it rains", "let's measure the temperature", etc.). Let Agt be a set of agents and Grp the set of groups of agents. Moreover, let Res = {r_1, . . . , r_ℓ} be the set of all (names of) resources. The language of L-DINF, denoted by ℒ_L-DINF, is defined by the following grammar:

φ, ψ ::= p | ¬φ | φ ∧ ψ | B_i φ | K_i φ | do^P_G(φ_A) | do_G(φ_A) | can_do_G(φ_A) | intend_G(φ_A) | exec_G(α) | pref_do_i(φ_A, d) | pref_do_G(i, φ_A) | [G : α] φ | Cl(φ_A, φ'_A) | fCl_i(φ_A)

α ::= +φ | ⊢(φ, ψ) | ∩(φ, ψ) | ↓(φ, ψ) | ⊣(φ, ψ)

where p ranges over Atm, φ_A, φ'_A ∈ Atm_A, i ∈ Agt, d ∈ ℕ, and G ∈ Grp. Other Boolean operators are defined from ¬ and ∧ in the standard manner. (For simplicity, whenever G = {i} we write i as subscript in place of {i}; so, for instance, we often write exec_i(φ_A) instead of exec_{i}(φ_A), and similarly for other constructs.) The language of mental actions of type α is denoted by ℒ_ACT. The static part L-INF of L-DINF includes only those formulas not having sub-formulas of the form [G : α] φ.

Before introducing the formal semantics, let us briefly describe the intended informal meaning of the basic formulas of L-INF. As mentioned, we are interested in modelling the reasoning of agents acting cooperatively. We consider the set of agents as partitioned into groups: each agent i ∈ Agt always belongs to a unique group in Grp. We assume that all agents initially belong to an initial group. Any agent i, at any time, can perform a (physical) action join_A(i, j), for j ∈ Agt, in order to change her group and join j's group.
The special case in which i = j denotes the action that allows agent i to leave her current group and form the new singleton group {i}.

The formula intend_i(φ_A) indicates the intention of agent i to perform the physical action φ_A, in the sense of the BDI agent model [21]. Formulas of this form can be part of an agent's knowledge base from the beginning, or they can be derived later. In this paper we do not cope with the formalization of BDI, for which the reader may refer, e.g., to [22]. Hence, we will deal with intentions rather informally, also assuming that intend_G(φ_A) holds whenever all agents of group G intend to perform φ_A.

The formula do_i(φ_A) indicates the actual execution of action φ_A by agent i, automatically recorded by the new belief do^P_i(φ_A) (the postfix "P" standing for "past" action). Note that we do not provide an axiomatization for do_i (and similarly for do_G, which indicates the actual execution of φ_A by the group of agents G). In fact, we assume that in any concrete implementation of the logical framework, do_i and do_G are realized by means of a semantic attachment [23], that is, a procedure which connects an agent with its external environment in a way that is unknown at the logical level. The axiomatization only concerns the relationship between doing and being enabled to do.

The expressions can_do_i(φ_A) and pref_do_i(φ_A, d) are closely related to do_i(φ_A). In particular, can_do_i(φ_A) must be seen as an enabling condition, indicating that agent i is enabled to perform the action φ_A, while pref_do_i(φ_A, d) indicates the level d of preference/willingness of agent i to perform φ_A. The formula pref_do_G(i, φ_A) indicates that agent i exhibits the maximum level of preference on performing action φ_A among all group members. Notice that, if the agents of a group intend to perform an action φ_A, then the entire group intends to do φ_A, and the action will be enabled for actual execution only if at least one agent i ∈ G can do it, i.e., can derive can_do_i(φ_A).

The formula Cl(φ_A, φ'_A) denotes the equivalence of the two physical actions φ_A and φ'_A. Intuitively, this means that in the specific practical context at hand, the two actions have "something in common", i.e., for instance, they use similar resources, perform in a similar way, can be used by an agent to obtain equivalent results, etc. Notice that the predicate Cl induces a partition of Atm_A into a collection of equivalence classes.

Agents modeled through L-DINF deal with two kinds of memory, namely, a working memory used to represent beliefs, i.e., facts and formulas acquired via perceptions during an agent's operation, and a long-term memory used to model the agent's background knowledge. Such knowledge is assumed to satisfy omniscience principles, such as: closure under conjunction and known implication, closure under logical consequence, and introspection. The background knowledge of an agent i is specified by means of the modal operator K_i, which is actually the usual S5 modal operator often used to model knowledge. The fact that background knowledge is closed under logical consequence is justified because we conceive it as a kind of stable and reliable knowledge base. The modal operator B_i, instead, is used to represent the beliefs of agent i kept in i's working memory. The content of the working memory is determined by the mental actions i has executed (cf. Section 2.2.2).
We assume the background knowledge to include: facts/formulas known by the agent from the beginning, and facts the agent subsequently decided to store in its long-term memory (via a decision-making mechanism not covered here) after processing them in its working memory. We therefore assume that background knowledge is irrevocable, in the sense of being stable over time.

Whenever an agent wants to perform a physical action φ'_A, it can exploit the equivalence described by the facts of the form Cl(φ_A, φ'_A) to execute the most convenient action φ_A (in terms of resources required, preferences, etc.) drawn from the equivalence class of φ'_A. The formula fCl_i(φ_A) indicates that φ_A is the most convenient action among those in the set {φ'_A | Cl(φ_A, φ'_A)}.

The formulas exec_G(α) express the executability of mental actions by a group G (which holds whenever at least one member of the group is able to perform the action). They have to be read as: "α is a mental action that an agent in G can perform". A formula of the form [G : α] φ, where α must be a mental action, states that "φ holds after action α has been performed by at least one of the agents in G, and all agents in G have common knowledge about this fact".

Let us now introduce the dynamic component of the framework. Borrowing from [6, 24], we distinguish five types of mental actions α that capture some of the dynamic properties of explicit beliefs and background knowledge: +φ, ↓(φ, ψ), ∩(φ, ψ), ⊣(φ, ψ), and ⊢(φ, ψ). These actions characterize the basic operations of belief formation through inference:

• +φ: learning a perceived belief, i.e., the mental operation that serves to form a new belief from a perception φ. A perception may become a belief whenever an agent becomes "aware" of the perception and takes it into explicit consideration.
• ↓(φ, ψ) is the mental action which consists in inferring ψ from φ, where ψ is an atom: an agent, believing that φ is true and having in its long-term memory that φ implies ψ, starts believing that ψ is true.
• ∩(φ, ψ) is the mental action which closes the beliefs φ and ψ under conjunction. Namely, ∩(φ, ψ) characterizes the mental action of deducing φ ∧ ψ from φ and ψ.
• ⊣(φ, ψ), where φ and ψ are atoms, is the mental action that performs a simple form of "belief revision", i.e., it removes ψ from the belief set, in case φ is believed and, according to the background knowledge, ¬ψ is a logical consequence of φ.
• ⊢(φ, ψ), where ψ is an atom: by means of this mental action, an agent believing that φ is true (i.e., it is in the working memory) and that φ implies ψ, starts believing that ψ is true. This last action operates exclusively on the working memory, without retrieving anything from the background knowledge.

We conclude this section by introducing some pieces of notation that will be useful in the following. Recall that Res = {r_1, . . . , r_ℓ} is the set of all (names of) resources. Each resource is available in a certain amount. For simplicity, let us assume that all amounts are natural numbers. We write r_j:n_j to denote an amount of n_j units of resource r_j. Hence, (r_1:n_1, . . . , r_ℓ:n_ℓ) describes the amounts of all existing resources. More generally, let Amounts = {(r_1:n_1, . . . , r_ℓ:n_ℓ) | n_1, . . . , n_ℓ ∈ ℕ} be the set of all possible descriptions of the amounts of all resources. Moreover, given r̄ = (r_1:n_1, . . . , r_ℓ:n_ℓ) and t̄ = (r_1:m_1, . . . , r_ℓ:m_ℓ) in Amounts, we write r̄ ≤ t̄ iff n_i ≤ m_i for all i, and denote by sum(r̄) the value n_1 + · · · + n_ℓ. Also, in case t̄ ≤ r̄, we denote by r̄ − t̄ the tuple (r_1:(n_1 − m_1), . . . , r_ℓ:(n_ℓ − m_ℓ)).
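To make this notation concrete, the following is a minimal Python sketch (not part of the formal framework) of elements of Amounts, of the comparison ≤, of sum, and of the subtraction r̄ − t̄ just defined. The dictionary-based encoding and the helper names are our own illustrative choices.

# A sketch of the Amounts notation: an amount maps each resource name
# in Res to a natural number; resources not listed are taken to be 0.

Res = ["r1", "r2", "r3"]                     # illustrative resource names

def leq(r, t):
    """r <= t  iff  n_i <= m_i for every resource."""
    return all(r.get(x, 0) <= t.get(x, 0) for x in Res)

def total(r):
    """sum(r) = n_1 + ... + n_l."""
    return sum(r.get(x, 0) for x in Res)

def sub(r, t):
    """r - t, defined only when t <= r."""
    assert leq(t, r), "subtraction requires t <= r"
    return {x: r.get(x, 0) - t.get(x, 0) for x in Res}

budget = {"r1": 6, "r2": 10}                 # (r1:6, r2:10, r3:0)
cost   = {"r1": 5, "r2": 10}                 # (r1:5, r2:10, r3:0)
if leq(cost, budget):                        # the cost fits in the budget
    budget = sub(budget, cost)               # remaining: (r1:1, r2:0, r3:0)

The same helpers are reused in the sketches given later in the paper.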
2.2. Semantics

Many relevant aspects of an agent's behaviour are specified in the definition of an L-INF model, including which mental and physical actions an agent can perform, what the cost of an action is and what budget the agent has at its disposal, what the degree of preference of the agent is for performing each action, and what the degree of preference of the agent is for using a particular resource. This choice has the advantage of keeping the complexity of the logic under control and of making these aspects modular. Definitions 2.1 and 2.2 introduce the notion of L-INF model, which is then used to introduce the semantics of the static fragment L-INF. A model M is composed of two parts: a core part 𝒞_M and a collection of packages 𝒫_M. More specifically:

Definition 2.1. The core part 𝒞_M of a model M is a tuple (W, N, ℛ, V, S), where
• W is a set of worlds (or situations);
• ℛ = {R_i}_{i∈Agt} is a collection of equivalence relations on W: R_i ⊆ W × W;
• N : Agt × W ⟶ 2^(2^W) is a neighborhood function such that, for each i ∈ Agt, each w, v ∈ W, and each X ⊆ W, these conditions hold:
  (C1) if X ∈ N(i, w) then X ⊆ {v ∈ W | w R_i v},
  (C2) if w R_i v then N(i, w) = N(i, v);
• V : W ⟶ 2^Atm is a valuation function;
• S : W ⟶ 2^{do_G(φ_A), do^P_i(φ_A) | φ_A ∈ Atm_A, i ∈ Agt, G ∈ Grp} is a valuation function for formulas of the forms do_G(φ_A) and do^P_i(φ_A).

To simplify the notation, let R_i(w) denote the set {v ∈ W | w R_i v}, for w ∈ W. The set R_i(w) identifies the situations that agent i considers possible at world w. It is the epistemic state of agent i at w. In cognitive terms, R_i(w) can be conceived as the set of all situations that agent i can retrieve from its long-term memory and reason about. While R_i(w) concerns background knowledge, N(i, w) is the set of all facts that agent i explicitly believes at world w, a fact being identified with a set of worlds. Hence, if X ∈ N(i, w), then agent i has the fact X under the focus of its attention and believes it. We say that N(i, w) is the explicit belief set of agent i at world w. Constraint (C1) imposes that agent i can have explicitly in its mind only facts which are compatible with its current epistemic state. Moreover, according to constraint (C2), if a world v is compatible with the epistemic state of agent i at world w, then agent i should have the same explicit beliefs at w and v. In other words, if two situations are equivalent as concerns background knowledge, then they cannot be distinguished through the explicit belief set. This aspect of the semantics can be extended in future work to allow agents to make plausible assumptions.

The packages of a model can be thought of as modular extensions of the core part. Each package is used to specify a specific feature, such as preferences, costs, executability, etc. Ideally, each package (may) correspond to some syntactic element of L-INF. The connection between the syntactic elements and the corresponding package will be established by a suitable component of the semantics (as will be seen). The following are some possible packages. Note that we are focusing on those of interest for the purposes of this paper. Plainly, the designer of a particular MAS may decide to include only part of the following packages, or even to add/model other features (also providing a suitable adaptation of the notion of truth).

Definition 2.2.
Given a core model 𝒞_M = (W, N, ℛ, V, S), the packages 𝒫_M are:

EXECUTABILITY FOR MENTAL ACTIONS
• E : Agt × W ⟶ 2^ℒ_ACT is an executability function for mental actions such that, for each i ∈ Agt and w, v ∈ W, it holds that:
  (D1) if w R_i v then E(i, w) = E(i, v);

BUDGET AND COSTS FOR MENTAL ACTIONS
• B_1 : Agt × W ⟶ ℕ is a budget function such that, for each i ∈ Agt and w, v ∈ W, the following holds:
  (E1) if w R_i v then B_1(i, w) = B_1(i, v);
• C_1 : Agt × ℒ_ACT × W ⟶ ℕ is a cost function such that, for each i ∈ Agt, α ∈ ℒ_ACT, and w, v ∈ W, it holds that:
  (F1) if w R_i v then C_1(i, α, w) = C_1(i, α, v);

EXECUTABILITY FOR PHYSICAL ACTIONS
• A : Agt × W ⟶ 2^Atm_A is an executability function for physical actions such that, for each i ∈ Agt and w, v ∈ W, it holds that:
  (G1) if w R_i v then A(i, w) = A(i, v);

BUDGET AND COSTS FOR PHYSICAL ACTIONS
• B_2 : Agt × W ⟶ Amounts is a budget function for physical actions such that, for each i ∈ Agt and w, v ∈ W, it holds that:
  (E2) if w R_i v then B_2(i, w) = B_2(i, v);
• C_2 : Agt × Atm_A × W ⟶ Amounts is a cost function for physical actions such that, for each i ∈ Agt, φ_A ∈ Atm_A, and w, v ∈ W, it holds that:
  (F2) if w R_i v then C_2(i, φ_A, w) = C_2(i, φ_A, v);

AGENTS' ROLES
• H : Agt × W ⟶ 2^Atm_A is an enabling function for physical actions such that, for each i ∈ Agt and w, v ∈ W, it holds that:
  (G2) if w R_i v then H(i, w) = H(i, v);

PREFERENCES ON PHYSICAL ACTIONS
• P : Agt × W × Atm_A ⟶ ℕ is a preference function for physical actions φ_A such that, for each i ∈ Agt and w, v ∈ W, it holds that:
  (H1) if w R_i v then P(i, w, φ_A) = P(i, v, φ_A).
  For each i and w, the function P induces a preference order ⪯_{i,w} on Atm_A, such that φ_A ⪯_{i,w} φ'_A iff P(i, w, φ_A) ≤ P(i, w, φ'_A);

EQUIVALENCE OF PHYSICAL ACTIONS
• Q : Atm_A × W ⟶ 2^Atm_A is a function describing a partition of Atm_A into equivalence classes (i.e., Q associates each physical action with its equivalence class), such that for each i ∈ Agt and w, v ∈ W, it holds that:
  (I1) if w R_i v then Q(φ_A, w) = Q(φ_A, v);
• F : Agt × W × Atm_A ⟶ Atm_A is a selector function for physical actions that, given i and w, selects one physical action F(i, w, φ_A) from the equivalence class of φ_A. Namely, it holds that F(i, w, φ_A) ∈ Q(φ_A, w) and φ'_A ⪯_{i,w} F(i, w, φ_A) for all φ'_A ∈ Q(φ_A, w). For each i ∈ Agt and w, v ∈ W, it holds that:
  (I2) if w R_i v then F(i, w, φ_A) = F(i, v, φ_A).

Let us briefly describe the intended features shaped by the packages introduced by Def. 2.2. Notice that, in a concrete implementation in a real MAS, the specification of some packages might depend on other packages (for example, in what follows we describe a possible implementation of F that relies on the function P).

For an agent i, E(i, w) is the set of mental actions that i can execute at world w. To execute a mental action, i has to pay the cost C_1(i, α, w). B_1(i, w) is the budget that i has (in w) to perform mental actions. As mentioned, concerning physical actions, we are interested in modeling situations where performing an action may require multiple resources. Hence, the cost C_2(i, φ_A, w) of an action φ_A (for agent i in world w) is a tuple in Amounts, while the available budget is described by B_2(i, w). For an agent i, the set of physical actions it can execute at w is A(i, w). Equivalence between physical actions is determined by the function Q. That is, Q(φ_A, w) is the set of physical actions that are equivalent to φ_A in w.
The roles of agents (which, as we will see, affect the capability of agents in a group to execute actions) are described through H. Namely, H(i, w) is the set of physical actions that agent i is enabled by its group to perform (recall that, at each time instant, an agent belongs to a single group). An agent's preference on the execution of physical actions is determined by the function P. For an agent i and a physical action φ_A, the value of P(i, w, φ_A) should be intended as a degree of willingness of agent i to execute φ_A at world w.

Analogously to property (C2) imposed in Def. 2.1, the constraint (D1) imposes that agent i always knows which mental actions it can perform and which it cannot, but if two situations/worlds are equivalent as concerns background knowledge, then they cannot be distinguished through the executability of actions. Similar "indistinguishability" requirements are imposed for each package by conditions (E1), (E2), (F1), (F2), (G1), (G2), (H1), (I1), and (I2).

Let us give some hints on how the functions P and F might be actually implemented in a concrete MAS. As concerns P, we assume defined (e.g., by the MAS designer) a preference relation among (equivalent) actions, for any agent i. In practice, this relation might be obtained by exploiting some specific reasoning module. Some possibilities in this sense are described in [25, 26]. Similarly, as for all packages, a specific module in the MAS implementation may be devoted to realizing the selector function F. Here we outline a simple option in defining F, relying on the availability of the functions B_2, C_2, and Q (a code sketch of this construction is given right after the truth conditions below). Given an agent i, a world w, and an action φ_A, let 𝒜 = {φ'_A | φ'_A ∈ Q(φ_A, w) ∧ C_2(i, φ'_A, w) ≤ B_2(i, w)} and let 𝒜' ⊆ 𝒜 be such that, for each φ'_A ∈ 𝒜', the value sum(C_2(i, φ'_A, w)) is minimal among the elements of 𝒜. Finally, select the preferred element as the ⪯_{i,w}-maximal element of 𝒜' (i.e., the action φ''_A with the largest value of P(i, w, φ''_A); in case of multiple options, any deterministic criterion can be applied).

2.2.1. Truth Conditions

Truth values of L-DINF formulas are inductively defined as follows. Given a model M (cf. Definitions 2.1 and 2.2), i ∈ Agt, G ∈ Grp, w ∈ W, and a formula φ ∈ ℒ_L-INF, we introduce this shorthand notation: ‖φ‖^M_{i,w} = {v ∈ W : w R_i v and M, v |= φ}, whenever M, v |= φ is well-defined (see below). Then, we set:

1. M, w |= p iff p ∈ V(w)
2. M, w |= Cl(φ_A, φ'_A) iff φ'_A ∈ Q(φ_A, w)
3. M, w |= exec_G(α) iff ∃i ∈ G with α ∈ E(i, w)
4. M, w |= pref_do_i(φ_A, d) iff φ_A ∈ A(i, w) and P(i, w, φ_A) = d
5. M, w |= pref_do_G(i, φ_A) iff M, w |= pref_do_i(φ_A, d) for d ∈ ℕ such that d = max{P(j, w, φ_A) | j ∈ G ∧ φ_A ∈ A(j, w) ∩ H(j, w)}
6. M, w |= can_do_G(φ_A) iff ∃i ∈ G with φ_A ∈ A(i, w) ∩ H(i, w) and φ_A = F(i, w, φ_A)
7. M, w |= fCl_i(φ_A) iff φ_A = F(i, w, φ_A)
8. M, w |= ¬φ iff M, w ̸|= φ
9. M, w |= φ ∧ ψ iff M, w |= φ and M, w |= ψ
10. M, w |= B_i φ iff ‖φ‖^M_{i,w} ∈ N(i, w)
11. M, w |= K_i φ iff M, v |= φ for all v ∈ R_i(w)
12. M, w |= φ, for φ of the forms do_G(φ_A) and do^P_i(φ_A), iff φ ∈ S(w)

As mentioned, a physical action can be performed by a group of agents if at least one agent of the group can do it. In this case, the level of preference for performing this action is set to the maximum among those of the agents enabled to execute the action. In addition, the agent selects (using F) among the enabled equivalent actions the one it prefers the most.
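The construction of F outlined in Section 2.2 (restrict to the affordable actions in the equivalence class, keep those of minimal overall cost, then take a preference-maximal one) can be sketched as follows. This is only one possible realization of the package: the functions Q, C_2, B_2 and P are assumed to be supplied by the corresponding modules, action names are assumed to be strings, and the helpers leq and total are those of the earlier Amounts sketch.

def select_F(i, w, phi_A, Q, C2, B2, P):
    """One possible implementation of the selector F(i, w, phi_A)."""
    # 1. equivalent actions whose cost fits within the agent's budget
    affordable = [a for a in Q(phi_A, w) if leq(C2(i, a, w), B2(i, w))]
    if not affordable:
        return None                  # no viable equivalent action exists
    # 2. keep the actions with minimal overall resource consumption
    min_cost = min(total(C2(i, a, w)) for a in affordable)
    cheapest = [a for a in affordable if total(C2(i, a, w)) == min_cost]
    # 3. among these, return a preference-maximal action; ties are broken
    #    deterministically (here, by lexicographic order of action names)
    return max(sorted(cheapest), key=lambda a: P(i, w, a))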
Notice that, in the semantics described above, a specific evaluation function S deals with formulas of the forms do_G(φ_A) and do^P_i(φ_A). These kinds of formulas are, nevertheless, not axiomatized. This is because do_G(φ_A) refers to the practical execution of an action by some kind of actuator, and in a robotic application this action can have physical effects. To account for such expressions, we resort to a concept that Weyhrauch called, in the seminal work [23], a semantic attachment: it is assumed that some device exists which connects an agent with its external environment in a way that is unknown at the logical level. The aim of [23] was exactly to explain how formal systems could be used in AI by being "mechanized" in a practical way, by providing ideas about a principled though potentially running implementation of these systems. In our setting, an action is meant to be executed by means of such a device and, whenever successfully completed, it will then be recorded by means of atoms of the form do^P_i(φ_A); such records can greatly aid the agent's subsequent reasoning process and support the ability to provide explanations. Hence, we assume that the function S reflects at the semantic level the presence of such a semantic attachment mechanism, so that the semantics is concerned only with the relationship between doing and being enabled to do.

A similar treatment is applied to join actions. Performing join_A(i, j) implies that agents i and j are now in the same group. We assume that the execution of join_A(i, j) affects the contents of the working memories of the agents i and j (and consequently of the other members of their groups).

As mentioned, formulas of the form intend_G(φ_A) express agents' intentions of performing physical actions. In this paper we do not cope with the formalization of BDI (for which the reader may refer, e.g., to [22]). So, we do not provide a specific semantics for intentions and treat them rather informally, assuming also that intend_G(φ_A) represents the fact that all agents in G intend to perform φ_A.

For any mental action α performed by any agent i, we set:

13. M, w |= [G : α]φ iff M^[G:α], w |= φ

where the model M^[G:α] is an updated version of the model M that takes into account the effect that the execution of the mental action α has on the belief sets of i and on the available budget. Hence, M^[G:α] is obtained from M by replacing the functions N and B_1 with the functions N^[G:α] and B_1^[G:α], respectively, defined as described below. The action α may add new beliefs by direct perception, by means of one inference step, or as a conjunction of previous beliefs. Hence, when introducing new beliefs (i.e., performing mental actions), the neighborhood must be extended accordingly. The following condition characterizes the circumstances in which a mental action may be performed, and by which agent(s):

enabled_w(G, α) : ∃j ∈ G (α ∈ E(j, w) ∧ C_1(j, α, w)/|G| ≤ min_{h∈G} B_1(h, w))

To handle the case of multiple enabled agents, we assume defined a predicate doer_w(i, G, α) to univocally select one among the enabled agents. Its definition might rely on any criteria, even involving background knowledge and belief sets. For simplicity, let us define such a predicate as:

doer_w(i, G, α) ↔ i = min{j | enabled_w(j, G, α)}

(here enabled_w(j, G, α) states that agent j witnesses the existential condition in enabled_w(G, α)). This condition expresses the fact that a mental action is enabled when: at least one agent can perform it; and the "payment" due by each agent, obtained by dividing the action's cost equally among all agents of the group, is within each agent's available budget (this choice is inherited from L-DINF). In case more than one agent in G can execute an action, we implicitly assume that the agent j performing the action is the one corresponding to the lowest possible cost, namely, j such that C_1(j, α, w) = min_{h∈G} C_1(h, α, w). Other choices might be viable, so variations of this logic can easily be defined simply by devising some other enabling condition and, possibly, introducing differences in the neighborhood update.
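A minimal sketch of the enabling condition and of the doer predicate just introduced, under the same assumptions as the previous sketches (E, C_1, B_1 are supplied by the corresponding packages; agents are identified by comparable indices, so that the minimum used by the doer predicate is well defined):

def witnesses(G, alpha, w, E, C1, B1):
    """Agents of G that can perform alpha and whose equal share of the
    cost fits within every group member's mental-action budget."""
    min_budget = min(B1(h, w) for h in G)
    return [j for j in G
            if alpha in E(j, w) and C1(j, alpha, w) / len(G) <= min_budget]

def enabled(G, alpha, w, E, C1, B1):
    """enabled_w(G, alpha): at least one witness exists."""
    return bool(witnesses(G, alpha, w, E, C1, B1))

def doer(G, alpha, w, E, C1, B1):
    """Returns the agent i for which doer_w(i, G, alpha) holds (here, the
    minimum index among the enabled agents), or None if none exists."""
    ws = witnesses(G, alpha, w, E, C1, B1)
    return min(ws) if ws else None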
2.2.2. Belief Update

Updating an agent's beliefs amounts to modifying the neighborhood of the present world. The updated neighborhood N^[G:α] resulting from the execution of a mental action α by a group G of agents is as follows.

• If α is +φ, then, for each i ∈ Agt and w ∈ W, N^[G:+φ](i, w) = N(i, w) ∪ {‖φ‖^M_{i,w}} if i ∈ G and doer_w(i, G, +φ) holds. Otherwise, the neighborhood function does not change (i.e., N^[G:+φ](i, w) = N(i, w)).
• If α is ↓(ψ, χ), then, for each i ∈ Agt and w ∈ W, N^[G:↓(ψ,χ)](i, w) = N(i, w) ∪ {‖χ‖^M_{i,w}} if i ∈ G and doer_w(i, G, ↓(ψ, χ)) and M, w |= B_i ψ ∧ K_i(ψ → χ) hold. Otherwise, the neighborhood function does not change (i.e., N^[G:↓(ψ,χ)](i, w) = N(i, w)).
• If α is ∩(ψ, χ), then, for each i ∈ Agt and w ∈ W, N^[G:∩(ψ,χ)](i, w) = N(i, w) ∪ {‖ψ ∧ χ‖^M_{i,w}} if i ∈ G and doer_w(i, G, ∩(ψ, χ)) and M, w |= B_i ψ ∧ B_i χ hold. Otherwise, the neighborhood function remains unchanged (i.e., N^[G:∩(ψ,χ)](i, w) = N(i, w)).
• If α is ⊢(ψ, χ), then, for each i ∈ Agt and w ∈ W, N^[G:⊢(ψ,χ)](i, w) = N(i, w) ∪ {‖χ‖^M_{i,w}} if i ∈ G and doer_w(i, G, ⊢(ψ, χ)) and M, w |= B_i ψ ∧ B_i(ψ → χ) hold. Otherwise, the neighborhood function remains unchanged: N^[G:⊢(ψ,χ)](i, w) = N(i, w).
• If α is ⊣(ψ, χ), then, for each i ∈ Agt and w ∈ W, N^[G:⊣(ψ,χ)](i, w) = N(i, w) ∖ {‖χ‖^M_{i,w}} if i ∈ G and doer_w(i, G, ⊣(ψ, χ)) and M, w |= B_i ψ ∧ K_i(ψ → ¬χ) hold. Otherwise, the neighborhood does not change (i.e., N^[G:⊣(ψ,χ)](i, w) = N(i, w)).

We write |=_L-DINF φ to denote that M, w |= φ holds for all worlds w of every model M.

2.2.3. Budget Update for Mental Actions

If a mental action α is executed by a group G, each agent in G has to contribute to covering the cost of execution by consuming part of its budget. Hence, for each i ∈ Agt and each w ∈ W, we set B_1^[G:α](i, w) = B_1(i, w) − C_1(i, α, w)/|G| if i ∈ G, doer_w(i, G, α) holds, and, depending on α, the same conditions described before to enable neighborhood updates are satisfied. Otherwise, the budget is preserved, i.e., B_1^[G:α](i, w) = B_1(i, w). Clearly, the budget is preserved for those agents that are not in G.

2.2.4. Budget Update for Physical Actions

The execution of physical actions also involves the consumption of some amounts of resources. Hence, the budget available for physical actions has to be updated accordingly. If an action φ_A is performed by an agent i at a world w, this involves a transition from w to another world w'. Moreover, if an action is executed, it means that enough resources are available and are consumed to complete the action. The budget function satisfies this condition:

B_2(i, w') = B_2(i, w) − C_2(i, φ_A, w)
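The two budget updates can be transcribed almost literally, again as a hedged sketch: B_1 is a numeric budget whose update follows the equal-split policy above, while B_2 is an element of Amounts encoded as in the earlier sketch (the helper sub comes from there); the boolean flag is_doer is our own shorthand for the conditions stated in Section 2.2.3.

def update_mental_budget(B1, C1, G, i, alpha, w, is_doer):
    """B1^[G:alpha](i, w): agent i is charged its share C1(i, alpha, w)/|G|
    when it belongs to G and the update conditions hold; otherwise the
    budget is preserved."""
    if i in G and is_doer:
        return B1(i, w) - C1(i, alpha, w) / len(G)
    return B1(i, w)

def update_physical_budget(B2, C2, i, phi_A, w):
    """B2(i, w') = B2(i, w) - C2(i, phi_A, w), assuming the action was
    executable, i.e., C2(i, phi_A, w) <= B2(i, w)."""
    return sub(B2(i, w), C2(i, phi_A, w))    # 'sub' from the Amounts sketch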
Remark 2.1. A comment is due concerning the action join_A(i, j). We assume that, whenever an agent i ∈ G joins the group of another agent j (by executing join_A(i, j)), the neighborhood function N(i, w) becomes equal to N(j, w), for each w ∈ W. In case i ∈ G executes join_A(i, i) (i.e., it leaves G and forms a new singleton group), then it maintains its current neighborhood function, but without any binding with the belief sets of the remaining agents in G.

Remark 2.2. In the actions ⊢(φ, ψ) and ↓(φ, ψ), the formula ψ which is inferred and asserted as a new belief can be of the forms can_do_G(φ_A) or do_i(φ_A). Concluding do_i(φ_A) (from can_do_i(φ_A) and possibly other conditions) implies that the physical action φ_A is actually performed by i. Actions are supposed to succeed by default; in case of failure, a corresponding failure event will be perceived by the agent (again, we rely on the semantic attachment). The do^P_i beliefs constitute a history of the agent's operation, so they might be useful for the agent to reason about its own past behavior and/or, importantly, they may be useful to provide explanations to human users.

2.3. Axiomatization

In previous works [1, 4, 3], a sound and complete axiomatic system has been proposed for L-INF and L-DINF. For simplicity, below we recall the axiomatization of the core part of the logic only (corresponding to the notion of core model introduced by Def. 2.1). The L-INF and L-DINF core axioms and inference rules are (together with the usual axioms of propositional logic):

1. (K_i φ ∧ K_i(φ → ψ)) → K_i ψ;
2. K_i φ → φ;
3. ¬K_i(φ ∧ ¬φ);
4. K_i φ → K_i K_i φ;
5. ¬K_i φ → K_i ¬K_i φ;
6. B_i φ ∧ K_i(φ ↔ ψ) → B_i ψ;
7. B_i φ → K_i B_i φ;
8. from φ infer K_i φ (necessitation rule);
9. [G:α]p ↔ p;
10. [G:α]¬φ ↔ ¬[G:α]φ;
11. [G:α](φ ∧ ψ) ↔ [G:α]φ ∧ [G:α]ψ;
12. [G:α]K_i φ ↔ K_i([G:α]φ);
13. [G:+φ]B_i ψ ↔ B_i([G:+φ]ψ) ∨ (doer_w(i, G, +φ) ∧ K_i([G:+φ]ψ ↔ φ));
14. [G:↓(φ, ψ)]B_i χ ↔ B_i([G:↓(φ, ψ)]χ) ∨ (doer_w(i, G, ↓(φ, ψ)) ∧ B_i φ ∧ K_i(φ → ψ) ∧ K_i([G:↓(φ, ψ)]χ ↔ ψ));
15. [G:∩(φ, ψ)]B_i χ ↔ B_i([G:∩(φ, ψ)]χ) ∨ (doer_w(i, G, ∩(φ, ψ)) ∧ B_i φ ∧ B_i ψ ∧ K_i([G:∩(φ, ψ)]χ ↔ (φ ∧ ψ)));
16. [G:⊢(φ, ψ)]B_i χ ↔ B_i([G:⊢(φ, ψ)]χ) ∨ (doer_w(i, G, ⊢(φ, ψ)) ∧ B_i φ ∧ B_i(φ → ψ) ∧ K_i([G:⊢(φ, ψ)]χ ↔ ψ));
17. [G:⊣(φ, ψ)]B_i χ ↔ B_i([G:⊣(φ, ψ)]χ) ∨ (doer_w(i, G, ⊣(φ, ψ)) ∧ B_i φ ∧ K_i(φ → ¬ψ) ∧ K_i([G:⊣(φ, ψ)]χ ↔ ¬ψ));
18. intend_G(φ_A) ↔ ∀i ∈ G intend_i(φ_A);
19. from ψ ↔ χ infer φ ↔ φ[ψ/χ] (replacement of equivalents).

We write L-DINF ⊢ φ to denote that φ is a theorem of L-DINF. The above axiomatization is sound for the class of L-INF models. Namely, all axioms are valid and the two inference rules (8) and (19) preserve validity. In particular, the soundness of axioms (13)–(17) follows from the semantics of [G : α]φ, for each mental action α, as previously defined. Recall that, as mentioned earlier in the paper, the axiomatization does not deal with formulas of the forms do_G(φ_A), as they are intended to be realized by a semantic attachment that connects an agent with its external environment. As concerns the completeness of the (core) axiomatization, a standard notion of canonical L-INF model can be introduced. Then, strong completeness of the axiomatization can be proved by applying a standard canonical-model argument (cf. [3, 6, 9]), and this leads to the following result:

Theorem 2.1. L-DINF is strongly complete for the class of L-INF models.

3. Problem Specification and Inference: An Example

In this section, we propose an example of problem specification and inference in L-DINF.
Consider a group of n agents, say four, who are colleagues and work together over the weekend to prepare banquets. They know that three of them are able to prepare dishes, while the other one is able to prepare the mise en place. Below we show how our logic is able to represent the situation and how the work proceeds. Each agent will initially have in its knowledge base the fact

K_i(intend_G(prepare-banquet))

The physical actions are the following:

prepare-mise-en-place, prepare-starters, prepare-main-dish, prepare-dessert.     (1)

Assume that the knowledge base of each agent i contains the following rule, which specifies how to reach the intended goal in terms of actions to perform:

K_i(intend_G(prepare-banquet) → intend_G(prepare-mise-en-place) ∧ intend_G(prepare-starters) ∧ intend_G(prepare-main-dish) ∧ intend_G(prepare-dessert)).

By axiom 18, every agent will also have the following:

K_i(intend_i(prepare-banquet) → intend_i(prepare-mise-en-place) ∧ intend_i(prepare-starters) ∧ intend_i(prepare-main-dish) ∧ intend_i(prepare-dessert)).

Therefore, the following is entailed for each of the agents (1 ≤ i ≤ 4):

K_i(intend_i(prepare-banquet) → intend_i(prepare-mise-en-place))
K_i(intend_i(prepare-banquet) → intend_i(prepare-starters))
K_i(intend_i(prepare-banquet) → intend_i(prepare-main-dish))
K_i(intend_i(prepare-banquet) → intend_i(prepare-dessert)).     (2)

Assume now that the knowledge base of each agent i also contains the following rule, for φ_A = prepare-mise-en-place, prepare-starters, prepare-main-dish, prepare-dessert:

K_i(intend_i(φ_A) ∧ can_do_i(φ_A) ∧ fCl_i(φ_A) → do_i(φ_A))

As previously stated, whenever an agent derives do_i(φ_A) for any physical action φ_A, the action is supposed to have been performed via some kind of semantic attachment which links the agent to the external environment. However, do_i(φ_A) will be derived by means of some mental action based upon the available rules. Such a mental action can have a cost, which can be paid either by the agent itself or by the group (according to the adopted policy of cost-sharing for this group). According to the above rules, an agent can execute an action φ_A if it is able to derive can_do_i(φ_A), and φ_A is the selected one among the viable equivalent actions (i.e., fCl_i(φ_A) has also been derived). Such a conclusion will be drawn on the basis of the assessment performed in external modules that concretely implement the model packages. These modules provide the decision according to some kind of reasoning process in some formalism, with respect to which the logic L-DINF is completely agnostic: the modules will add the corresponding facts to each agent's knowledge base.

To get the agents to do the actions listed in (1), four sequences of mental actions have to be executed, yielding, respectively, conclusions of the forms do_G(prepare-mise-en-place), do_G(prepare-starters), do_G(prepare-main-dish), and do_G(prepare-dessert), and causing their addition to the agents' working memory. Such reasoning would consist in mental actions of kind ∩, to form conjunctions from single facts, and mental actions of kind ↓, to apply knowledge rules, i.e., given their preconditions, draw the conclusions. In particular, given the initial general intention of the group, it will be possible to derive the practical goal, in terms of the conjunction of actions to be performed by the group. From its own specialized rules and the available facts about enabling and willingness, the execution of each action by some agent i will then be derived.
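Purely as an illustration of how the facts supplied by the external modules feed the knowledge rule above, the following hedged sketch checks whether an agent may conclude do_i(φ_A), given a hypothetical working memory; representing the predicates intend, can_do and fCl as plain tuples of strings is our own encoding, not part of the logic.

# Hypothetical working memory of agent "2" after the inference steps
# and after the external modules have added the enabling facts:
facts = {
    ("intend", "2", "prepare-dessert"),
    ("can_do", "2", "prepare-dessert"),
    ("fCl",    "2", "prepare-dessert"),
}

def may_conclude_do(i, action, facts):
    """Operational reading of K_i(intend_i ∧ can_do_i ∧ fCl_i -> do_i):
    all three premises must be present in the working memory."""
    return {("intend", i, action), ("can_do", i, action),
            ("fCl", i, action)} <= facts

if may_conclude_do("2", "prepare-dessert", facts):
    facts.add(("do", "2", "prepare-dessert"))   # handed over to the semantic attachment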
Note that there can be unlucky situations where no agent is enabled to perform some action, or the one allowed is not willing, or there is not enough budget. In such cases, the goal fails.

Let α_1–α_4 be the last mental actions performed at the end of the four sequences of mental inferences mentioned above (those leading to the derivation of do_i(φ_A), for φ_A among the actions in (1)), respectively. For how such mental actions are treated we can refer to [6]. Let us focus on the physical actions and their equivalence classes, which we assume to be specified by the function Q so that:

Q(prepare-mise-en-place, w_1) = {prepare-mise-en-place}
Q(prepare-starters, w_1) = {prepare-starters, prepare-main-dish, prepare-dessert}

Moreover, let the costs of the physical actions be (we do not list resources with null amounts):

C_2(i, prepare-mise-en-place, w_1) = (table:5, cutlery:10, table-center:5),
C_2(i, prepare-dessert, w_1) = (sugar:4, flour:5, egg:2),
C_2(i, prepare-starters, w_1) = (appetizers:5, drinks:5),
C_2(i, prepare-main-dish, w_1) = (sauce:2, pasta:10)

and the agents' budgets be:

B_2(1, w_1) = (table:6, cutlery:10, table-center:7),
B_2(2, w_1) = (sugar:5, flour:7, egg:4, appetizers:4, drinks:4, sauce:2, pasta:9),
B_2(3, w_1) = (sugar:4, flour:5, egg:1, appetizers:5, drinks:5),
B_2(4, w_1) = (sugar:3, flour:5, egg:2, appetizers:1, drinks:1, sauce:5, pasta:20).

Considering the available resources, only agent 1 can perform prepare-mise-en-place; for the other three actions we choose the right one using the function F. By assuming the definition of F described in Section 2.2, we have that agent 2 can perform prepare-dessert, agent 3 can perform prepare-starters, and agent 4 can perform prepare-main-dish. After the execution of these actions, the new budgets become (cf. Section 2.2.4):

B_2(1, w_2) = (table:1, table-center:2),
B_2(2, w_2) = (sugar:1, flour:2, egg:2, appetizers:4, drinks:4, sauce:2, pasta:9),
B_2(3, w_2) = (sugar:4, flour:5, egg:1),
B_2(4, w_2) = (sugar:3, flour:5, egg:2, appetizers:1, drinks:1, sauce:3, pasta:10).

It is worth commenting on the role of past events. If the set of past events, which is a part of an agent's short-term memory, is made available to the external modules defining action enabling and degree of willingness, such recordings might be used, for instance, to define constraints concerning action execution.

4. Conclusions

In this paper we extended an epistemic logical framework previously introduced in [4, 3] and originally designed to enable modeling and reasoning on group dynamics of cooperative agents. In such a framework, agents can perform actions and inferences on the basis of their knowledge and their beliefs. Agents can reason about mental and physical actions. The extension presented in this paper enriches such a framework with the possibility of modeling costs involving the consumption of multiple resources, and of specifying, through particular components of the concrete MAS (modules/packages), which physical actions to choose according to agents' preferences, actions' costs, and available resources.

References

[1] S. Costantini, A. Formisano, V. Pitoni, Timed memory in resource-bounded agents, in: C. Ghidini, B. Magnini, A. Passerini, P. Traverso (Eds.), AI*IA 2018 - Advances in Artificial Intelligence - XVIIth International Conference of the Italian Association for Artificial Intelligence, 2018, Proceedings, volume 11298 of LNCS, Springer, 2018, pp. 15–29.
[2] V.
Pitoni, Memory management with explicit time in resource-bounded agents, in: S. A. McIlraith, K. Q. Weinberger (Eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), AAAI Press, 2018, pp. 8133–8134. [3] S. Costantini, A. Formisano, V. Pitoni, An epistemic logic for multi-agent systems with budget and costs, in: W. Faber, G. Friedrich, M. Gebser, M. Morak (Eds.), Logics in Artificial Intelligence - 17th European Conference, JELIA 2021, Proceedings, volume 12678 of LNCS, Springer, 2021, pp. 101–115. [4] S. Costantini, V. Pitoni, Towards a logic of “inferable” for self-aware transparent logical agents, in: C. Musto, D. Magazzeni, S. Ruggieri, G. Semeraro (Eds.), Proc. of the Italian Workshop on Explainable AI co-located with 19th International Conference of AI*IA, 2020, volume 2742 of CEUR Workshop Proceedings, CEUR-WS.org, 2020, pp. 68–79. [5] S. Costantini, A. Formisano, V. Pitoni, Cooperation among groups of agents in the epistemic logic L-DINF, in: G. Governatori, A. Turhan (Eds.), Proc. of RuleML+RR’22, volume 13752 of LNCS, Springer, 2022, pp. 280–295. [6] S. Costantini, A. Formisano, V. Pitoni, An epistemic logic for modular development of multi-agent systems, in: N. Alechina, M. Baldoni, B. Logan (Eds.), Proc. of EMAS’21, Revised Selected papers, volume 13190 of LNCS, Springer, 2021, pp. 72–91. [7] S. Costantini, A. Formisano, V. Pitoni, Modelling agents roles in the epistemic logic L-DINF, in: O. Arieli, G. Casini, L. Giordano (Eds.), NMR’22, volume 3197 of CEUR Workshop Proceedings, CEUR-WS.org, 2022, pp. 70–79. [8] A. I. Goldman, Theory of mind, in: E. Margolis, R. Samuels, S. P. Stich (Eds.), The Oxford Handbook of Philosophy of Cognitive Science, volume 1, Oxford University Press, 2012, pp. 402–424. [9] S. Costantini, A. Formisano, V. Pitoni, An epistemic logic for formalizing group dynamics of agents, Interaction Studies 23 (2023) 391–426. [10] R. Fagin, J. Y. Halpern, Belief, awareness, and limited reasoning., Artif. Intell. 34 (1987) 39–76. [11] J. van Benthem, E. Pacuit, Dynamic logics of evidence-based beliefs, Studia Logica 99 (2011) 61–92. [12] M. Jago, Epistemic logic for rule-based agents, Journal of Logic, Language and Information 18 (2009) 131–158. [13] F. R. Velázquez-Quesada, Dynamic epistemic logic for implicit and explicit beliefs, Journal of Logic, Language and Information 23 (2014) 107–140. [14] P. Balbiani, D. Fernández-Duque, E. Lorini, The dynamics of epistemic attitudes in resource- bounded agents, Studia Logica 107 (2019) 457–488. [15] F. R. Velázquez-Quesada, Explicit and implicit knowledge in neighbourhood models, in: D. Grossi, O. Roy, H. Huang (Eds.), Logic, Rationality, and Interaction - 4th International Workshop, LORI 2013, volume 8196 of LNCS, Springer, 2013, pp. 239–252. [16] H. N. Duc, Reasoning about rational, but not logically omniscient, agents, J. Log. Comput. 7 (1997) 633–648. [17] V. Pitoni, S. Costantini, A temporal module for logical frameworks, in: B. Bogaerts, E. Erdem, P. Fodor, A. Formisano, G. Ianni, D. Inclezan, G. Vidal, A. Villanueva, M. De Vos, F. Yang (Eds.), Proc. of ICLP 2019 (TC), volume 306 of EPTCS, 2019, pp. 340–346. [18] S. Costantini, V. Pitoni, Memory management in resource-bounded agents, in: M. Al- viano, G. Greco, F. 
Scarcello (Eds.), AI*IA 2019 - Advances in Artificial Intelligence - XVIIIth International Conference of the Italian Association for Artificial Intelligence, 2019, Proceedings, volume 11946 of LNCS, Springer, 2019, pp. 46–58. [19] J. J. Elgot-Drapkin, S. Kraus, M. Miller, M. Nirkhe, D. Perlis, Active Logics: A Unified Formal Approach to Episodic Reasoning, Technical Report, UMIACS—University of Maryland, 1999. CS-TR-4072. [20] J. J. Elgot-Drapkin, M. I. Miller, D. Perlis, Life on a desert island: ongoing work on real-time reasoning, in: F. M. Brown (Ed.), The Frame Problem in Artificial Intelligence, Morgan Kaufmann, 1987, pp. 349–357. [21] A. S. Rao, M. Georgeff, Modeling rational agents within a BDI architecture, in: Proc. of the Second Int. Conf. on Principles of Knowledge Representation and Reasoning (KR’91), Morgan Kaufmann, 1991, pp. 473–484. [22] H. V. Ditmarsch, J. Y. Halpern, W. V. D. Hoek, B. Kooi, Handbook of Epistemic Logic, College Publications, 2015. Editors. [23] R. W. Weyhrauch, Prolegomena to a theory of mechanized formal reasoning, Artif. Intell. 13 (1980) 133–170. [24] P. Balbiani, D. F. Duque, E. Lorini, A logical theory of belief dynamics for resource-bounded agents, in: Proc. of AAMAS’16, ACM, 2016, pp. 644–652. [25] S. Costantini, A. Formisano, Modeling preferences and conditional preferences on resource consumption and production in ASP, J. Algorithms 64 (2009) 3–15. [26] S. Costantini, A. Formisano, Augmenting weight constraints with complex preferences, in: Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium, AAAI Press, USA, 2011.