<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Preference Management in Epistemic Logic L-DINF</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefania Costantini</string-name>
          <email>stefania.costantini@univaq.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Formisano</string-name>
          <email>andrea.formisano@uniud.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Pitoni</string-name>
          <email>valentina.pitoni@univaq.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DISIM, Università di L'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DMIF, Università di Udine</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The Logic of “Inferable” L-DINF has been recently proposed as a declarative framework to formally model, via epistemic logic, the group dynamics of cooperative agents. In this paper, we extend the framework by introducing the possibility to have costs for the execution of physical actions. Such costs may require the consumption of multiple resources of various types, to be drawn from agents' budgets. Also, we emphasize that all aspects of Multi-Agent Systems specified in L-DINF can be formalized in a modular way. In particular, concerning the execution of physical actions, dedicated modules allow the specification of a notion of equivalence for actions and a notion of agents' preference, to be used to affect action execution.</p>
      </abstract>
      <kwd-group>
<kwd>Multi-Agent Systems</kwd>
        <kwd>Modal Logic</kwd>
        <kwd>Epistemic Logic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In L-DINF, a physical action can be executed by a group whenever at
least one agent of the group can perform the action, with the approval of the group and on behalf
of the group. An agent can join or leave a group whenever it wants (and, consequently, the role of
an agent may change as it joins another group).</p>
      <p>
        The agents of a group can share their beliefs, so that any agent can access beliefs of other
agents. This ability opens up the possibility of modeling aspects of “Theory of Mind” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. For
instance, an agent can maintain a version (possibly outdated) of the mental state of other agents
and perform inferences about such knowledge. Then, it can make predictions and formulate
interpretations of other agents' behaviours. For simplicity, in this paper we do not deal with these
aspects and refer the interested reader to [9].
      </p>
      <p>In this context, there are applications where agents can profit from the ability to choose
the preferred physical action among a set of actions deemed equivalent. Hence, whenever the
inference activity of an agent indicates that a physical action φA has to be executed, the agent can
choose to perform another action φA′ equivalent to φA. To model this feature we enrich the logical
framework so that agents can exploit an equivalence relation among physical actions, together
with evaluations of costs of actions, available budget and agents' own preferences, in order to
determine which is the best physical action to be performed. Also, we assume that the execution
of each physical action may require some amounts of resources to be “used/available”. Differently
from what happens for mental actions (where, for simplicity, only one type of resource, i.e.,
“energy”, is taken into account), we consider that each agent has different amounts of different
resources available, and that each physical action may require multiple resources.</p>
      <p>Various logics concerning implicit and explicit belief, as well as some aspects of awareness,
have been proposed in the literature. We mention, among others, the seminal work [10] by
Fagin &amp; Halpern. Nevertheless, to the best of our knowledge, such proposals make no use of
concepts such as ‘reasoning’ and ‘inference’. Instead, the logical framework L-DINF accounts
for the perceptive and inferential steps leading from an agent's knowledge and beliefs to new beliefs.
In this sense, it provides a constructive theory of explicit beliefs. In L-DINF, aspects such as
the “executability” of actions (both mental and physical) and the costs related to their execution can be
represented and considered in the reasoning activity of the agents.</p>
      <p>Epistemic attitudes are modeled similarly to other approaches, among which we mention
the dynamic theory of evidence-based beliefs [11] —that exploits, similarly to our approach, a
neighborhood semantics for the notion of evidence—, the sentential approach to explicit beliefs
dynamics [12], the dynamic theory of beliefs described in [13], and the dynamic logic combining
explicit beliefs and knowledge proposed in [14].</p>
      <p>Concerning logics of inference, relevant proposals are the one in [15] and the logical system
DES4 described in [16]. In particular, we are indebted to [15] concerning the idea of modeling
inference steps by means of dynamic operators in the style of dynamic epistemic logic (DEL).
We, however, distinguish and emphasize the notions of explicit belief and background knowledge.
Also, we consider issues related to executability and costs.</p>
      <p>
        In developing L-DINF we are also indebted to [16], concerning the point of view that agents
reach certain belief states by performing inferences, and that making inferences takes time (we
tackled the issue of time in previous work, discussed in [
        <xref ref-type="bibr" rid="ref1">1, 17, 18</xref>
        ]). Differently from that work,
however, in L-DINF inferential actions are represented both at the syntactic level, via dynamic
operators, and at a semantic level as neighborhood-updates. Moreover, L-DINF enables an agent
to reason on executability of inferential actions.
      </p>
      <p>The notion of explicit beliefs constitutes a major difference, besides others, between L-DINF
and active logics [19, 20]. While active logics provide models of reasoning based on long-term
memory and short-term memory like in our approach, they do not distinguish –as we do– between
the notion of explicit belief and the notion of background knowledge, conceived in our case as
a radically different kind of epistemic attitude. Moreover, L-DINF accounts for a collection of
mental actions that have not been explored in the active logic literature.</p>
      <p>
        In this paper, we extend the framework described in [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ] by introducing the possibility to
have costs for the execution of physical actions. To meet these costs, the agents draw on resource
budgets, and this may involve the consumption of multiple amounts of resources of various
types. Compared to the formalization of L-DINF that appeared in previous papers, here we propose a
re-engineering of the entire logical framework. This is done, both syntactically and semantically,
by distinguishing between a core part of the framework (concerning the essential part of the
logic, i.e., expressing properties of agents' knowledge and beliefs) and a collection of packages.
Each package can be thought of as a modular extension of the core part, used to introduce a specific
feature (such as preferences, costs, executability constraints, etc.) in the framework.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Logical Framework</title>
      <p>L-DINF is a logic composed of a static component and a dynamic component. The first, called
L-INF, is a logic of explicit beliefs and background knowledge. The second component extends
the static one with dynamic operators that express the consequences of agents' mental actions.</p>
      <p>2.1. Syntax</p>
      <p>Let Atm = {p, q, . . .} be a countable set of atomic propositions. The set AtmA ⊆ Atm is the set of the
physical actions that agents can perform, including “active sensing” actions (e.g., “let's check
whether it rains”, “let's measure the temperature”, etc.). Let Agt be a set of agents and Grp the
set of groups of agents. Moreover, let Res = {r1, . . . , rℓ} be the set of all (names of) resources.</p>
      <p>The language of L-DINF, denoted by ℒL-DINF, is defined by the following grammar:
φ, ψ ::= p | ¬φ | φ ∧ ψ | Bi φ | Ki φ |
         doi(φA) | doG(φA) | can_doG(φA) |
         [G : α] φ | Cl(φA, φA′) | fCli(φA) |
         intendG(φA) | execG(α) | pref_doi(φA, d) | pref_doG(φA, d)
α ::= +φ | ⊢(φ, ψ) | ∩(φ, ψ) | ↓(φ, ψ) | ⊣(φ, ψ)
where p ranges over Atm, φA, φA′ ∈ AtmA, i ∈ Agt, d ∈ N, and G ∈ Grp. Other Boolean
operators are defined from ¬ and ∧ in the standard manner.¹ The language of mental actions of
type α is denoted by ℒACT. The static part L-INF of L-DINF includes only those formulas not
having sub-formulas of the form [G : α] φ.</p>
      <p>Before introducing the formal semantics, let us briefly describe the intended informal meaning
of basic formulas of L-INF. As mentioned, we are interested in modelling the reasoning of agents
acting cooperatively. We consider the set of agents as partitioned into groups: each agent i ∈ Agt
always belongs to a unique group in Grp. We assume that all agents initially belong to an initial
group. Any agent i, at any time, can perform a (physical) action joinA(i, j), for j ∈ Agt, in order
to change her group and join j's group. The special case in which j = i denotes the action that
allows agent i to leave her current group and form the new singleton group {i}.</p>
      <p>¹For simplicity, whenever G = {i} we will write i as subscript in place of {i}. So, for instance, we often write
execi(α) instead of exec{i}(α), and similarly for other constructs.</p>
      <p>The formula intendi(φA) indicates the intention of agent i to perform the physical action
φA, in the sense of the BDI agent model [21]. Formulas of this form can be part of an agent's
knowledge base from the beginning, or they can be derived later. In this paper we do not cope with
the formalization of BDI, for which the reader may refer, e.g., to [22]. Hence, we will deal with
intentions rather informally, also assuming that intendG(φA) holds whenever all agents of group
G intend to perform φA.</p>
      <p>The formula doi(φA) indicates the actual execution of action φA by agent i, automatically
recorded by the new belief doiP(φA) (postfix “P” standing for “past” action). Note that we do
not provide an axiomatization for doi (and similarly for doG, which indicates the actual execution
of φA by the group of agents G). In fact, we assume that in any concrete implementation of the
logical framework, doi and doG are realized by means of a semantic attachment [23], that is, a
procedure which connects an agent with its external environment in a way that is unknown at
the logical level. The axiomatization only concerns the relationship between doing and being
enabled to do.</p>
      <p>The expressions can_doi(φA) and pref_doi(φA, d) are closely related to doi(φA). In
particular, can_doi(φA) must be seen as an enabling condition, indicating that the agent i is enabled to
perform the action φA, while pref_doi(φA, d) indicates the level d of preference/willingness of
agent i to perform φA.</p>
      <p>The formula pref_doG(φA, d) indicates that some agent i exhibits the maximum level d of preference
on performing action φA among all group members. Notice that, if a group of agents intends to
perform an action φA, this will entail that the entire group intends to do φA, which will be enabled
to be actually executed only if at least one agent i ∈ G can do it, i.e., it can derive can_doi(φA).</p>
      <p>The formula Cl(φA, φA′) denotes the equivalence of the two physical actions φA and φA′.
Intuitively, this means that in the specific practical context at hand, the two actions have “something
in common”, i.e., for instance, they use similar resources, perform in a similar way, can be used
by an agent to obtain equivalent results, etc. Notice that the predicate Cl induces a partition of
AtmA into a collection of equivalence classes.</p>
      <p>Agents modeled through L-DINF deal with two kinds of memories, namely, a working memory
used to represent beliefs, i.e., facts and formulas acquired via perceptions during an agent's
operation, and a long-term memory used to model the agent's background knowledge. Such knowledge
is assumed to satisfy omniscience principles, such as: closure under conjunction and known
implication, closure under logical consequence, and introspection.</p>
      <p>The background knowledge of an agent i is specified by means of the modal operator Ki, which is
actually the usual S5 modal operator often used to model knowledge. The fact that background
knowledge is closed under logical consequence is justified because we conceive it as a kind
of stable and reliable knowledge base. The modal operator Bi, instead, is used to represent
the beliefs of agent i kept in i's working memory. The contents of the working memory are
determined by the mental actions i has executed (cf. Section 2.2.2). We assume the background
knowledge to include: facts/formulas known by the agent from the beginning, and facts the agent
subsequently decided to store in its long-term memory (via a decision-making mechanism not
covered here) after processing them in its working memory. We therefore assume that background
knowledge is irrevocable, in the sense of being stable over time.</p>
      <p>Whenever an agent wants to perform a physical action φA′, it can exploit the equivalence
described by the facts of the form Cl(φA, φA′) to execute a more convenient action φA (in terms of
required resources, preferences, etc.) drawn from the equivalence class of φA′. The formula fCli(φA)
indicates that φA is the most convenient action among those in the set {φA′ | Cl(φA, φA′)}.</p>
      <p>The formulas execG(α) express the executability of mental actions by a group G (which is a
consequence of the fact that some member of the group is able to perform the action). They have to
be read as: “α is a mental action that an agent in G can perform”.</p>
      <p>A formula of the form [G : α] φ, where α must be a mental action, states that “φ holds after
action α has been performed by at least one of the agents in G, and all agents in G have common
knowledge about this fact”.</p>
      <p>
        Let us now introduce the dynamic component of the framework. Borrowing from [
        <xref ref-type="bibr" rid="ref6">6, 24</xref>
        ], we
distinguish five types of mental actions α that capture some of the dynamic properties of explicit
beliefs and background knowledge: +φ, ↓(φ, ψ), ∩(φ, ψ), ⊣(φ, ψ), and ⊢(φ, ψ). These actions
characterize the basic operations of belief formation through inference:
• +φ: learning a perceived belief, i.e., the mental operation that serves to form a new belief from a
perception φ. A perception may become a belief whenever an agent becomes “aware” of
the perception and takes it into explicit consideration.
• ↓(φ, ψ) is the mental action which consists in inferring ψ from φ, where ψ is an atom: an
agent, believing that φ is true and having in its long-term memory that φ implies ψ, starts
believing that ψ is true.
• ∩(φ, ψ) is the mental action which closes the beliefs φ and ψ under conjunction.
      </p>
      <p>Namely, ∩(φ, ψ) characterizes the mental action of deducing φ ∧ ψ from φ and ψ.
• ⊣(φ, ψ), where φ and ψ are atoms, is the mental action that performs a simple form of
“belief revision”, i.e., it removes ψ from the belief set, in case φ is believed and, according
to the background knowledge, ¬ψ is a logical consequence of φ.
• ⊢(φ, ψ), where ψ is an atom: by means of this mental action, an agent believing that φ is
true (i.e., it is in the working memory) and that φ implies ψ, starts believing that ψ is true.
This last action operates exclusively on the working memory, without recovering anything
from the background knowledge.</p>
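      <p>To fix intuitions, the five kinds of mental actions can be pictured as operations on a working memory WM (the set of explicit beliefs) and a background knowledge base KB. The following minimal sketch (in Python; the tuple encoding of formulas is an illustrative assumption, and “logical consequence” is approximated by membership of a stored implication) mirrors the descriptions above:</p>
      <preformat>
def imp(p, q):                 # encoding of the formula "p implies q"
    return ("imp", p, q)

def learn(WM, p):              # +p: turn a perception p into an explicit belief
    return WM | {p}

def infer_bg(WM, KB, p, q):    # the action written ↓(p, q): uses background knowledge
    return WM | {q} if p in WM and imp(p, q) in KB else WM

def conj(WM, p, q):            # the action written ∩(p, q): closes p, q under conjunction
    return WM | {("and", p, q)} if p in WM and q in WM else WM

def infer_wm(WM, p, q):        # the action written ⊢(p, q): uses the working memory only
    return WM | {q} if p in WM and imp(p, q) in WM else WM

def revise(WM, KB, p, q):      # the action written ⊣(p, q): simple belief revision
    return WM - {q} if p in WM and imp(p, ("not", q)) in KB else WM
      </preformat>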
      <p>We conclude this section by introducing some pieces of notation that will be useful in the
following. Recall that Res = {r1, . . . , rℓ} is the set of all (names of) resources. Each resource
is available in a certain amount. For simplicity, let us assume that all amounts are natural
numbers. We will use the writing r:n to denote an amount of n units of resource r. Hence,
the writing (r1:n1, . . . , rℓ:nℓ) describes the amounts of all existing resources. More in general,
let Amounts = {(r1:n1, . . . , rℓ:nℓ) | n1, . . . , nℓ ∈ N} be the set of all possible descriptions of
amounts of all resources. Moreover, given n̄ = (r1:n1, . . . , rℓ:nℓ) and m̄ = (r1:m1, . . . , rℓ:mℓ)
in Amounts, we write n̄ ≤ m̄ iff ni ≤ mi for each i, and denote by sum(n̄) the value n1 + · · · + nℓ. Also, in
case n̄ ≤ m̄, we denote by m̄ − n̄ the tuple (r1:(m1 − n1), . . . , rℓ:(mℓ − nℓ)).</p>
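      <p>To make these operations concrete, here is a minimal sketch (in Python; the dictionary encoding and all names are illustrative assumptions, not part of the logic) of ≤, sum, and subtraction over Amounts:</p>
      <preformat>
# An element of Amounts is encoded as a map from resource names to natural
# numbers, e.g. {"r1": 5, "r2": 10}; unlisted resources have a null amount.

def leq(n, m):
    """n ≤ m iff, resource by resource, n never exceeds m."""
    return all(m.get(r, 0) >= a for r, a in n.items())

def amount_sum(n):
    """sum(n): the overall number of resource units in n."""
    return sum(n.values())

def subtract(m, n):
    """m − n, defined only when n ≤ m; null amounts are not listed."""
    assert leq(n, m)
    return {r: m[r] - n.get(r, 0) for r in m if m[r] - n.get(r, 0) > 0}

# Example: (r1:6, r2:10, r3:7) − (r1:5, r2:10, r3:5) = (r1:1, r3:2)
print(subtract({"r1": 6, "r2": 10, "r3": 7}, {"r1": 5, "r2": 10, "r3": 5}))
      </preformat>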
      <p>2.2. Semantics</p>
      <p>Many relevant aspects of an agent's behaviour are specified in the definition of an L-INF model,
including which mental and physical actions an agent can perform, what is the cost of an action,
what is the budget that the agent has at its disposal, what is the degree of preference of the
agent to perform each action, and what is the degree of preference of the agent to use a particular
resource. This choice has the advantage of keeping the complexity of the logic under control
and of making these aspects modular. Definitions 2.1 and 2.2 introduce the notion of L-INF model,
which is then used to introduce the semantics of the static fragment L-INF. A model M is composed
of two parts: a core part C and a collection of packages P. More specifically:</p>
      <p>Definition 2.1. The core part C of a model M is a tuple C = (W, N, ℛ, V, E), where:
• W is a set of worlds (or situations);
• ℛ = {ℛi}i∈Agt is a collection of equivalence relations on W: ℛi ⊆ W × W, for each i ∈ Agt;
• N : Agt × W → 2^(2^W) is a neighborhood function such that, for each i ∈ Agt, each
w, v ∈ W, and each X ⊆ W, these conditions hold:
(C1) if X ∈ N(i, w) then X ⊆ {v ∈ W | w ℛi v},
(C2) if w ℛi v then N(i, w) = N(i, v);
• V : W → 2^Atm is a valuation function;
• E : W → 2^D, with D = {doG(φA), doiP(φA) | φA ∈ AtmA, i ∈ Agt, G ∈ Grp}, is a valuation function for formulas
of the forms doG(φA) and doiP(φA).</p>
      <p>To simplify the notation, let ℛi(w) denote the set {v ∈ W | w ℛi v}, for w ∈ W. The set ℛi(w)
identifies the situations that agent i considers possible at world w. It is the epistemic state of
agent i at w. In cognitive terms, ℛi(w) can be conceived as the set of all situations that agent i
can retrieve from its long-term memory and reason about. While ℛi(w) concerns background
knowledge, N(i, w) is the set of all facts that agent i explicitly believes at world w, a fact being
identified with a set of worlds. Hence, if X ∈ N(i, w) then the agent i has the fact X under the
focus of its attention and believes it. We say that N(i, w) is the explicit belief set of agent i at
world w. Constraint (C1) imposes that agent i can have explicitly in its mind only facts which are
compatible with its current epistemic state. Moreover, according to constraint (C2), if a world v
is compatible with the epistemic state of agent i at world w, then agent i should have the same
explicit beliefs at w and v. In other words, if two situations are equivalent as concerns background
knowledge, then they cannot be distinguished through the explicit belief set. This aspect of the
semantics can be extended in future work to allow agents to make plausible assumptions.</p>
      <p>The packages of a model can be thought of as modular extensions of the core part. Each package
is used to specify a specific feature, such as preferences, costs, executability, etc. Ideally, each
package (may) correspond to some syntactic element of the syntax of L-INF. The connection
between the syntactic elements and the corresponding package will be established by a suitable
component of the semantics (as will be seen below). The following are some possible packages. Note that
we are focusing on those of interest for the purposes of this paper. Plainly, the designer of a
particular MAS may decide to include only part of the following packages, or even to add/model
other features (also providing a suitable adaptation of the notion of truth).</p>
      <p>Definition 2.2. Given a core model C = (W, N, ℛ, V, E), the packages P are:</p>
      <p>EXECUTABILITY FOR MENTAL ACTIONS
∙ A : Agt × W → 2^ℒACT is an executability function for mental actions such that, for each
i ∈ Agt and w, v ∈ W, it holds that:
(D1) if w ℛi v then A(i, w) = A(i, v);</p>
      <p>BUDGET AND COSTS FOR MENTAL ACTIONS
∙ B1 : Agt × W → N is a budget function such that, for each i ∈ Agt and w, v ∈ W, the
following holds:
(E1) if w ℛi v then B1(i, w) = B1(i, v);
∙ C1 : Agt × ℒACT × W → N is a cost function such that, for each i ∈ Agt, α ∈ ℒACT, and
w, v ∈ W, it holds that:
(F1) if w ℛi v then C1(i, α, w) = C1(i, α, v);</p>
      <p>EXECUTABILITY FOR PHYSICAL ACTIONS
∙ P : Agt × W → 2^AtmA is an executability function for physical actions such that, for each
i ∈ Agt and w, v ∈ W, it holds that:
(G1) if w ℛi v then P(i, w) = P(i, v);</p>
      <p>BUDGET AND COSTS FOR PHYSICAL ACTIONS
∙ B2 : Agt × W → Amounts is a budget function for physical actions, such that, for each
i ∈ Agt and w, v ∈ W, it holds that:
(E2) if w ℛi v then B2(i, w) = B2(i, v);
∙ C2 : Agt × AtmA × W → Amounts is a cost function for physical actions, such that, for
each i ∈ Agt, φA ∈ AtmA, and w, v ∈ W, it holds that:
(F2) if w ℛi v then C2(i, φA, w) = C2(i, φA, v);</p>
      <p>AGENTS' ROLES
∙ R : Agt × W → 2^AtmA is an enabling function for physical actions such that, for each
i ∈ Agt and w, v ∈ W, it holds that:
(G2) if w ℛi v then R(i, w) = R(i, v);</p>
      <p>PREFERENCES ON PHYSICAL ACTIONS
∙ F : Agt × W × AtmA → N is a preference function for physical actions such that, for
each i ∈ Agt, φA ∈ AtmA, and w, v ∈ W, it holds that:
(H1) if w ℛi v then F(i, w, φA) = F(i, v, φA);
For each i and w, the function F induces a preference order ⪯i,w on AtmA, such that
φA ⪯i,w φA′ iff F(i, w, φA) ≤ F(i, w, φA′).</p>
      <p>EQUIVALENCE OF PHYSICAL ACTIONS
∙ Q : AtmA × W → 2^AtmA is a function describing a partition of AtmA into equivalence
classes (i.e., Q associates each physical action with its equivalence class), such that for each
i ∈ Agt, φA ∈ AtmA, and w, v ∈ W, it holds that:
(I1) if w ℛi v then Q(φA, w) = Q(φA, v);
∙ S : Agt × W × AtmA → AtmA is a selector function for physical actions that, given i
and w, selects one physical action S(i, w, φA) from the equivalence class of φA. Namely, it
holds that S(i, w, φA) ∈ Q(φA, w) and φA′ ⪯i,w S(i, w, φA) for all φA′ ∈ Q(φA, w). For each i ∈ Agt and
w, v ∈ W, it holds that:
(I2) if w ℛi v then S(i, w, φA) = S(i, v, φA).</p>
      <p>Let us briefly describe the intended features shaped by the packages introduced in Def. 2.2.
Notice that, in a concrete implementation in a real MAS, the specification of some packages
might depend on other packages (for example, in what follows we will describe a possible
implementation of S that relies on the function F).</p>
      <p>For an agent i, A(i, w) is the set of mental actions that i can execute at world w. To execute
a mental action α, agent i has to pay the cost C1(i, α, w); B1(i, w) is the budget that i has (in w)
to perform mental actions. As mentioned, concerning physical actions, we are interested in
modeling situations where performing an action may require multiple resources. Hence, the
cost C2(i, φA, w) of an action φA (for agent i in world w) is a tuple in Amounts, while the
available budget is described by B2(i, w). For an agent i, the set of physical actions it can execute
at w is P(i, w). Equivalence between physical actions is determined by the function Q. That is,
Q(φA, w) is the set of physical actions that are equivalent to φA in w. The roles of agents (which, as
we will see, affect the capability of agents in a group to execute actions) are described through R.
Namely, R(i, w) is the set of physical actions that agent i is enabled by its group to perform
(recall that, at each time instant, an agent belongs to a single group). An agent's preference on the
execution of physical actions is determined by the function F. For an agent i and a physical
action φA, the value of F(i, w, φA) should be intended as the degree of willingness of agent i to
execute φA at world w. Analogously to property (C2) imposed in Def. 2.1, the constraint (D1)
imposes that agent i always knows which mental actions it can perform and which it cannot, but
if two situations/worlds are equivalent as concerns background knowledge, then they cannot be
distinguished through the executability of actions. Similar “indistinguishability” requirements are
imposed for each package by conditions (E1), (E2), (F1), (F2), (G1), (G2), (H1), (I1), and (I2).</p>
      <p>Let us give some hints on how the functions F and S might actually be implemented in a
concrete MAS. As concerns F, we assume defined (e.g., by the MAS designer) a preference
relation among (equivalent) actions, for any agent i. In practice, this relation might be obtained
by exploiting some specific reasoning module. Some possibilities in this sense are described
in [25, 26]. Similarly, as for all packages, a specific module in the MAS implementation may
be devoted to realizing the selector function S. Here we outline a simple option in defining S,
relying on the availability of the functions B2, C2, and Q. Given an agent i, a world w, and an action
φA, let D = {φA′ | φA′ ∈ Q(φA, w) ∧ C2(i, φA′, w) ≤ B2(i, w)} and let D′ ⊆ D be such that for each
φA′ ∈ D′ the value sum(C2(i, φA′, w)) is minimal among the elements of D. Finally, select the
preferred element to be the ⪯i,w-maximal element of D′ (i.e., the action φA′′ with the largest value
of F(i, w, φA′′); in case of multiple options, any deterministic criterion can be applied).</p>
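      <p>Under these assumptions, the outlined selector can be rendered directly as a small sketch (in Python; Q, C2, B2, F stand for implementations of the corresponding package functions, and amounts are encoded as dictionaries as in the earlier sketch):</p>
      <preformat>
def select(i, w, act, Q, C2, B2, F):
    """A possible selector S(i, w, act), following the outline above."""
    budget = B2(i, w)
    # D: actions equivalent to act whose cost is within the agent's budget.
    D = [a for a in Q(act, w)
         if all(budget.get(r, 0) >= n for r, n in C2(i, a, w).items())]
    if not D:
        return None  # no affordable equivalent action
    # D': the affordable actions of minimal overall resource consumption.
    cheapest = min(sum(C2(i, a, w).values()) for a in D)
    D1 = [a for a in D if sum(C2(i, a, w).values()) == cheapest]
    # Among those, return the most preferred one; ties are broken
    # deterministically by the (fixed) lexicographic order of action names.
    return max(sorted(D1), key=lambda a: F(i, w, a))
      </preformat>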
      <sec id="sec-2-1">
        <title>2.2.1. Truth Conditions</title>
        <p>Truth values of L-DINF formulas are inductively defined as follows. Given a model M (cf.
Definitions 2.1 and 2.2), i ∈ Agt, G ∈ Grp, w ∈ W, and a formula φ ∈ ℒL-INF, we introduce
this shorthand notation: ‖φ‖i,w = {v ∈ W : w ℛi v and M, v |= φ}, whenever M, v |= φ is
well-defined (see below). Then, we set:
1. M, w |= p iff p ∈ V(w)
2. M, w |= Cl(φA, φA′) iff φA′ ∈ Q(φA, w)
3. M, w |= execG(α) iff ∃i ∈ G with α ∈ A(i, w)
4. M, w |= pref_doi(φA, d) iff φA ∈ P(i, w) and F(i, w, φA) = d
5. M, w |= pref_doG(φA, d) iff M, w |= pref_doi(φA, d) for i ∈ G such that d =
max{F(j, w, φA) | j ∈ G ∧ φA ∈ P(j, w) ∩ R(j, w)}
6. M, w |= can_doG(φA) iff ∃i ∈ G with φA ∈ P(i, w) ∩ R(i, w) and φA = S(i, w, φA)
7. M, w |= fCli(φA) iff φA = S(i, w, φA)
8. M, w |= ¬φ iff M, w ̸|= φ
9. M, w |= φ ∧ ψ iff M, w |= φ and M, w |= ψ
10. M, w |= Bi φ iff ‖φ‖i,w ∈ N(i, w)
11. M, w |= Ki φ iff M, v |= φ for all v ∈ ℛi(w)
12. M, w |= φ, for φ of the forms doG(φA) and doiP(φA), iff φ ∈ E(w)</p>
        <p>As mentioned, a physical action can be performed by a group of agents if at least one agent of
the group can do it. In this case, the level of preference for performing this action is set to the
maximum among those of the agents enabled to execute the action. In addition, the agent selects
(using S) among the enabled equivalent actions the one it prefers the most.</p>
        <p>Notice that, in the above described semantics, a specific evaluation function E deals with
formulas of the forms doG(φA) and doiP(φA). These kinds of formulas are, nevertheless, left
unaxiomatized. This is because doG(φA) refers to the practical execution of an action by some
kind of actuator, where in a robotic application this action can have physical effects. To find a
way of accounting for such expressions, we choose to resort to a concept that has been called by
Weyhrauch in the seminal work [23] a semantic attachment: it is assumed that some device
exists which connects an agent with its external environment in a way that is unknown at the
logical level. The aim of [23] was exactly to explain how formal systems could be used in AI by
being “mechanized” in a practical way, by providing ideas about a principled though potentially
running implementation of these systems. In our setting, an action is meant to be executed by
means of such a device and, whenever successfully completed, it will then be recorded by means
of atoms of the form doiP(φA); such records can greatly aid the agent's subsequent reasoning
process and support the ability to provide explanations. Hence, we assume that the function E
reflects at the semantic level the presence of such a semantic attachment mechanism, so that the
semantics is concerned only with the relationship between doing and being enabled to do. A
similar treatment applies to join actions. Performing joinA(i, j) implies that agents i and j are now in
the same group. We assume that the execution of joinA(i, j) affects the contents of the working
memories of the agents i and j (and consequently of the other members of their groups).</p>
        <p>As mentioned, formulas of the form intendG(φA) express agents' intentions of performing
physical actions. In this paper we do not cope with the formalization of BDI (for which the reader
may refer, e.g., to [22]). So, we do not provide a specific semantics for intentions and treat them
rather informally, assuming also that intendG(φA) represents the fact that all agents in G intend
to perform φA.</p>
        <p>For any mental action α performed by any agent of a group G, we set:
13. M, w |= [G : α] φ iff M[G:α], w |= φ
where the model M[G:α] is an updated version of the model M that takes into account the effect
that the execution of the mental action α has on the sets of beliefs of G and on the available budget.
Hence, M[G:α] is obtained from M by replacing the functions N and B1 with the functions N[G:α]
and B1[G:α], resp., defined as described below.</p>
        <p>The action α may add new beliefs by direct perception, by means of one inference step, or as a
conjunction of previous beliefs. Hence, when introducing new beliefs (i.e., performing mental
actions), the neighborhood must be extended accordingly. The following condition characterizes
the circumstances in which a mental action may be performed, and by which agent(s):
enabledG(w, α) : ∃i ∈ G (α ∈ A(i, w) ∧ C1(i, α, w)/|G| ≤ minh∈G B1(h, w))
To handle the case of multiple enabled agents, we assume defined a predicate doeri(G, w, α) to
univocally select one among the enabled agents. Its definition might rely on any criteria, even
involving background knowledge and belief sets. For simplicity, let us define such a predicate as:
doeri(G, w, α) ↔ i = min{j ∈ G | α ∈ A(j, w) ∧ C1(j, α, w)/|G| ≤ minh∈G B1(h, w)}.</p>
        <p>This condition, as defined above, expresses the fact that a mental action is enabled when:
at least one agent can perform it; and the “payment” due by each agent, obtained by dividing
the action's cost equally among all agents of the group, is within each agent's available budget
(this choice is inherited from L-DINF). In case more than one agent in G can execute an action,
we implicitly assume that the agent i performing the action is the one corresponding to the lowest
possible cost, namely, i is such that C1(i, α, w) = minh∈G C1(h, α, w). Other choices might
be viable, so variations of this logic can easily be defined simply by devising some other enabling
condition and, possibly, introducing differences in the neighborhood update.</p>
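        <p>A possible reading of the enabling condition and of the doer selection as executable checks is the following sketch (in Python; A, C1, B1 stand for the package functions, and agents are assumed to be encoded as comparable values, e.g. integers):</p>
        <preformat>
def enabled_agents(G, w, alpha, A, C1, B1):
    """Members of G that can perform alpha and whose equally split cost
    fits within every member's budget."""
    return [i for i in G
            if alpha in A(i, w)
            and all(B1(h, w) >= C1(i, alpha, w) / len(G) for h in G)]

def doer(G, w, alpha, A, C1, B1):
    """One agent univocally selected among the enabled ones (here, the least one)."""
    candidates = enabled_agents(G, w, alpha, A, C1, B1)
    return min(candidates) if candidates else None
        </preformat>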
      </sec>
      <sec id="sec-2-2">
        <title>2.2.2. Belief Update</title>
        <p>Updating an agent's beliefs amounts to modifying the neighborhood of the present world. The
updated neighborhood N[G:α], resulting from the execution of a mental action α by a group G of
agents, is defined according to the kind of α, as detailed in [6].</p>
        <p>We write |=L-DINF φ to denote that M, w |= φ holds for all worlds w of every model M.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.2.3. Budget Update for Mental Actions</title>
        <p>If a mental action α is executed by a group G, each agent in G has to contribute to cover the cost of
execution by consuming part of its budget. Hence, for each i ∈ Agt and each w ∈ W, we set
B1[G:α](i, w) = B1(i, w) − C1(j, α, w)/|G|,
if i ∈ G, doerj(G, w, α) holds, and, depending on α, the same conditions described
before to enable neighborhood updates are satisfied. Otherwise, the budget is preserved, i.e.,
B1[G:α](i, w) = B1(i, w). Clearly, the budget is preserved for those agents that are not in G.</p>
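        <p>Accordingly, the budget update for mental actions can be sketched as follows (in Python; doer_fn is the selection function sketched above, and the division mirrors the equal cost-sharing policy; a concrete MAS would also fix a rounding policy for fractional shares):</p>
        <preformat>
def update_mental_budget(G, alpha, B1, C1, doer_fn):
    """Returns the updated budget function B1[G:alpha]."""
    def B1_new(i, w):
        j = doer_fn(G, w, alpha)   # the agent selected to perform alpha at w, if any
        if i in G and j is not None:
            # Each member of G pays an equal share of the doer's cost.
            return B1(i, w) - C1(j, alpha, w) / len(G)
        return B1(i, w)            # agents outside G keep their budget unchanged
    return B1_new
        </preformat>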
      </sec>
      <sec id="sec-2-4">
        <title>2.2.4. Budget Update for Physical Action</title>
        <p>Also the execution of physical actions involves the consumption of some amounts of resources. Hence,
the budget available for physical actions has to be updated accordingly. If an action φA is
performed by an agent i at a world w, this involves a transition from w to another world w′.
Moreover, if an action is executed, it means that enough resources are available and are consumed
to complete the action. The budget function satisfies this condition:</p>
        <p>B2(i, w′) = B2(i, w) − C2(i, φA, w)</p>
        <p>Remark 2.1. A comment is due concerning the action joinA(i, j). We assume that whenever
an agent i ∈ G joins the group of another agent j (by executing joinA(i, j)), the neighborhood
function N(i, w) becomes equal to N(j, w), for each w ∈ W. In case i ∈ G executes joinA(i, i)
(i.e., it leaves G and forms a new singleton group) then it maintains its current neighborhood
function, but without any binding with the belief sets of the remaining agents in G.</p>
        <p>Remark 2.2. In the actions ⊢(φ, ψ) and ↓(φ, ψ), the formula ψ which is inferred and asserted
as a new belief can be of the forms can_doi(φA) or doi(φA). The conclusion doi(φA) (from
can_doi(φA) and possibly other conditions) implies that the physical action φA is actually performed
by i. Actions are supposed to succeed by default; in case of failure, a corresponding failure event
will be perceived by the agent (again, we rely on semantic attachment). The doiP beliefs constitute
a history of the agent's operation, so they might be useful for the agent to reason about its own
past behavior and/or, importantly, they may be useful to provide explanations to human users.</p>
        <p>2.3. Axiomatization</p>
        <p>
          In previous works [
          <xref ref-type="bibr" rid="ref1 ref3 ref4">1, 4, 3</xref>
          ], a sound and complete axiomatic system has been proposed for L-INF
and L-DINF. For simplicity, below we recall the axiomatization of the core part of the logic only
(corresponding to the notion of core model introduced by Def. 2.1). The L-INF and L-DINF core
axioms and inference rules are (together with the usual axioms of propositional logic):</p>
        <p>1. (Ki φ ∧ Ki(φ → ψ)) → Ki ψ;
2. Ki φ → φ;
3. ¬Ki(φ ∧ ¬φ);
4. Ki φ → Ki Ki φ;
5. ¬Ki φ → Ki ¬Ki φ;
6. (Bi φ ∧ Ki(φ ↔ ψ)) → Bi ψ;
7. Bi φ → Ki Bi φ;
8. from φ infer Ki φ.</p>
        <p>Axioms (13)–(17) characterize, for each of the five mental actions, the explicit beliefs resulting
from the execution of [G : α], in terms of the predicate doer and of the preconditions of the action:
13. [G : +φ] Bi ψ ↔ (Bi([G : +φ] ψ) ∨ (doeri(G, +φ) ∧ Ki([G : +φ] ψ ↔ φ)));
14. [G : ↓(φ, ψ)] Bi χ ↔ (Bi([G : ↓(φ, ψ)] χ) ∨ (doeri(G, ↓(φ, ψ)) ∧ Bi φ ∧ Ki(φ → ψ) ∧
Ki([G : ↓(φ, ψ)] χ ↔ ψ)));
15. [G : ∩(φ, ψ)] Bi χ ↔ (Bi([G : ∩(φ, ψ)] χ) ∨ (doeri(G, ∩(φ, ψ)) ∧ Bi φ ∧ Bi ψ ∧
Ki([G : ∩(φ, ψ)] χ ↔ (φ ∧ ψ))));
16. [G : ⊢(φ, ψ)] Bi χ ↔ (Bi([G : ⊢(φ, ψ)] χ) ∨ (doeri(G, ⊢(φ, ψ)) ∧ Bi φ ∧ Bi(φ → ψ) ∧
Ki([G : ⊢(φ, ψ)] χ ↔ ψ)));
17. [G : ⊣(φ, ψ)] Bi χ ↔ (Bi([G : ⊣(φ, ψ)] χ) ∨ (doeri(G, ⊣(φ, ψ)) ∧ Bi φ ∧ Ki(φ → ¬ψ) ∧
Ki([G : ⊣(φ, ψ)] χ ↔ ¬ψ))).</p>
        <p>We write ⊢L-DINF φ to denote that φ is a theorem of L-DINF.</p>
        <p>The above axiomatization is sound for the class of L-INF models. Namely, all axioms are
valid and the two inference rules (8) and (19) preserve validity. In particular, the soundness of
axioms (13)–(17) follows from the semantics of [G : α] φ, for each mental action α, as previously
defined. Recall that, as mentioned earlier in the paper, the axiomatization does not deal with
formulas of the forms doG(φA), as they are intended to be realized by a semantic attachment
that connects an agent with its external environment.</p>
        <p>
          As concerns completeness of the (core) axiomatization, a standard notion of canonical L-INF
model can be introduced. Then, strong completeness of the axiomatization can be proved by
applying a standard canonical-model argument (cf., [
          <xref ref-type="bibr" rid="ref3 ref6">3, 6, 9</xref>
          ]) and this leads to the following
result:
Theorem 2.1. L-DINF is strongly complete for the class of L-INF models.</p>
        <p>3. Problem Specification and Inference: An Example</p>
        <p>In this section, we propose an example of problem specification and inference in L-DINF.
Consider a group of n agents, e.g., four, who are colleagues and work together over the weekend to
prepare banquets. They know that three of them are able to prepare dishes, while the other
one is able to prepare the mise en place. Below we show how our logic is able to represent
the situation and the unfolding of this work. Each agent i will initially have in its knowledge
base the fact Ki(intendG(prepare-banquet)). The physical actions are the following:
prepare-mise-en-place,
prepare-starters,
prepare-main-dish,
prepare-dessert.   (1)</p>
        <p>Assume that the knowledge base of each agent i contains the following rule, which specifies how
to reach the intended goal in terms of actions to perform:</p>
        <p>Ki(intendG(prepare-banquet) → intendG(prepare-mise-en-place) ∧ intendG(prepare-starters) ∧
intendG(prepare-main-dish) ∧ intendG(prepare-dessert)).</p>
        <p>By axiom 18, every agent will also have the following:</p>
        <p>Ki(intendi(prepare-banquet) → intendi(prepare-mise-en-place) ∧ intendi(prepare-starters) ∧
intendi(prepare-main-dish) ∧ intendi(prepare-dessert)).</p>
        <p>Therefore, the following is entailed for each of the agents (1 ≤ i ≤ 4):</p>
        <p>Ki(intendi(prepare-banquet) → intendi(prepare-mise-en-place))
Ki(intendi(prepare-banquet) → intendi(prepare-starters))
Ki(intendi(prepare-banquet) → intendi(prepare-main-dish))
Ki(intendi(prepare-banquet) → intendi(prepare-dessert)).   (2)</p>
        <p>Assume now that the knowledge base of each agent i also contains the following rule, for
A = prepare-mise-en-place, prepare-starters, prepare-main-dish, prepare-dessert:</p>
        <p>Ki(intendi(A) ∧ can_doi(A) ∧ fCli(A) → doi(A))</p>
        <p>As previously stated, whenever an agent derives doi(A) for any physical action A, the
action is supposed to have been performed via some kind of semantic attachment which links
the agent to the external environment. However, doi(A) will be derived by means of some
mental action based upon the available rules. Such a mental action can have a cost, which can be paid
either by the agent itself or by the group (according to the adopted policy of cost-sharing for this
group). According to the above rules, an agent i can execute an action A if it is able to derive
can_doi(A), and A is the selected one among the viable equivalent actions (i.e., fCli(A) has
also been derived). Such conclusion will be drawn on the basis of the assessment performed
in external modules that concretely implement the model packages. These modules provide
the decision according to some kind of reasoning process in some formalism, with respect to
which the logic L-DINF is completely agnostic: modules will add the corresponding facts to each
agent’s knowledge base.</p>
          <p>To get the agents to do the actions listed in (1), four sequences of mental actions have to
be executed, yielding, respectively, conclusions of the forms doG(prepare-mise-en-place),
doG(prepare-starters), doG(prepare-main-dish), and doG(prepare-dessert), and causing
their addition to the agents' working memory. Such reasoning consists in mental actions
of kind ∩, to form conjunctions from single facts, and mental actions of kind ↓, to apply knowledge
rules, i.e., given their preconditions, draw the conclusions. In particular, given the initial general
intention by the group, it will be possible to derive the practical goal, in terms of the conjunction
of actions to be performed by the group. From its own specialized rules and the available facts
about enabling and willingness, the execution of each action by some agent i will then be derived.
Note that there can be unlucky situations where no agent is enabled to perform some action, or
the one allowed is not willing, or there is not enough budget. In such cases, the goal fails.</p>
          <p>
            Let α1–α4 be the last mental actions performed at the end of the mentioned four sequences of
mental inferences (those leading to derive the doG(A), for A among the actions in (1)), respectively.
For how such mental actions are treated we can refer to [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. Let us focus on physical actions and
their equivalence classes, which we assume to be specified by the function Q so that:
Q(prepare-mise-en-place) = {prepare-mise-en-place}
Q(prepare-starters) = {prepare-starters, prepare-main-dish, prepare-dessert}
Moreover, let the costs of the physical actions be as follows (where we do not list resources with
null amounts, and names r1, r2, . . . stand for the resources in Res):
C2(i, prepare-mise-en-place, w1) = (r1:5, r2:10, r3:5),
C2(i, prepare-starters, w1) = (r4:4, r5:5, r6:2),
C2(i, prepare-dessert, w1) = (r7:5, r8:5),
C2(i, prepare-main-dish, w1) = (r9:2, r10:10)
and let the agents' budgets be:
B2(1, w1) = (r1:6, r2:10, r3:7),
B2(2, w1) = (r4:5, r5:7, r6:4, r11:4, r12:4, r13:2, r14:9)
B2(3, w1) = (r15:4, r16:5, r17:1, r7:5, r8:5)
B2(4, w1) = (r18:3, r19:5, r20:2, r21:1, r22:1, r9:5, r10:20)
          </p>
          <p>Considering the available resources, only agent 1 can perform prepare-mise-en-place; for
the other three actions we choose the right one using the function S. By assuming the definition
of S described in Section 2.2, we have that agent 2 can perform prepare-starters, agent 3 can
perform prepare-dessert, and agent 4 can perform prepare-main-dish. After the execution
of these actions the new budgets become (cf. Section 2.2.4):
B2(1, w2) = (r1:1, r3:2),
B2(2, w2) = (r4:1, r5:2, r6:2, r11:4, r12:4, r13:2, r14:9)
B2(3, w2) = (r15:4, r16:5, r17:1)
B2(4, w2) = (r18:3, r19:5, r20:2, r21:1, r22:1, r9:3, r10:10)</p>
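          <p>These figures can be checked mechanically against the budget-update condition of Section 2.2.4; a small script (in Python; a sketch, with amounts encoded as dictionaries as in the earlier sketches) is:</p>
          <preformat>
costs = {
    "prepare-mise-en-place": {"r1": 5, "r2": 10, "r3": 5},
    "prepare-starters":      {"r4": 4, "r5": 5, "r6": 2},
    "prepare-dessert":       {"r7": 5, "r8": 5},
    "prepare-main-dish":     {"r9": 2, "r10": 10},
}
budgets = {
    1: {"r1": 6, "r2": 10, "r3": 7},
    2: {"r4": 5, "r5": 7, "r6": 4, "r11": 4, "r12": 4, "r13": 2, "r14": 9},
    3: {"r15": 4, "r16": 5, "r17": 1, "r7": 5, "r8": 5},
    4: {"r18": 3, "r19": 5, "r20": 2, "r21": 1, "r22": 1, "r9": 5, "r10": 20},
}
assignment = {1: "prepare-mise-en-place", 2: "prepare-starters",
              3: "prepare-dessert", 4: "prepare-main-dish"}

for agent, act in assignment.items():
    b, c = budgets[agent], costs[act]
    assert all(b.get(r, 0) >= n for r, n in c.items())  # the action is affordable
    # New budget: componentwise subtraction, dropping null amounts.
    new_b = {r: b[r] - c.get(r, 0) for r in b if b[r] - c.get(r, 0) > 0}
    print(agent, act, new_b)
          </preformat>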
          <p>It is relevant to comment on the role of past events. If the set of past events, which is a part of
an agent's short-term memory, is made available to the external modules defining action enabling
and degree of willingness, such recordings might be used, for instance, to define constraints
concerning action execution.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusions</title>
      <p>
        In this paper we extended an epistemic logical framework previously introduced in [
        <xref ref-type="bibr" rid="ref3 ref4">4, 3</xref>
        ] and
originally designed to enable modeling and reasoning on group dynamics of cooperative agents.
In such a framework, agents can perform actions and inferences on the basis of their knowledge and
their beliefs. Agents can reason about mental and physical actions. The extension presented in
this paper enriches the framework with the possibility of modeling costs involving the consumption
of multiple resources, and of specifying, through particular components of the concrete MAS
(modules/packages), which physical actions to choose according to agents' preferences, actions'
costs, and available resources.</p>
      <p>[9] S. Costantini, A. Formisano, V. Pitoni, An epistemic logic for formalizing group dynamics
of agents, Interaction Studies 23 (2023) 391–426.
[10] R. Fagin, J. Y. Halpern, Belief, awareness, and limited reasoning, Artif. Intell. 34 (1987)
39–76.
[11] J. van Benthem, E. Pacuit, Dynamic logics of evidence-based beliefs, Studia Logica 99
(2011) 61–92.
[12] M. Jago, Epistemic logic for rule-based agents, Journal of Logic, Language and Information
18 (2009) 131–158.
[13] F. R. Velázquez-Quesada, Dynamic epistemic logic for implicit and explicit beliefs, Journal
of Logic, Language and Information 23 (2014) 107–140.
[14] P. Balbiani, D. Fernández-Duque, E. Lorini, The dynamics of epistemic attitudes in
resource-bounded agents, Studia Logica 107 (2019) 457–488.
[15] F. R. Velázquez-Quesada, Explicit and implicit knowledge in neighbourhood models, in:
D. Grossi, O. Roy, H. Huang (Eds.), Logic, Rationality, and Interaction - 4th International
Workshop, LORI 2013, volume 8196 of LNCS, Springer, 2013, pp. 239–252.
[16] H. N. Duc, Reasoning about rational, but not logically omniscient, agents, J. Log. Comput.
7 (1997) 633–648.</p>
      <p>[17] V. Pitoni, S. Costantini, A temporal module for logical frameworks, in: B. Bogaerts,
E. Erdem, P. Fodor, A. Formisano, G. Ianni, D. Inclezan, G. Vidal, A. Villanueva, M. De Vos,
F. Yang (Eds.), Proc. of ICLP 2019 (TC), volume 306 of EPTCS, 2019, pp. 340–346.
[18] S. Costantini, V. Pitoni, Memory management in resource-bounded agents, in: M. Alviano,
G. Greco, F. Scarcello (Eds.), AI*IA 2019 - Advances in Artificial Intelligence -
XVIIIth International Conference of the Italian Association for Artificial Intelligence, 2019,
Proceedings, volume 11946 of LNCS, Springer, 2019, pp. 46–58.
[19] J. J. Elgot-Drapkin, S. Kraus, M. Miller, M. Nirkhe, D. Perlis, Active Logics: A Unified
Formal Approach to Episodic Reasoning, Technical Report CS-TR-4072, UMIACS, University
of Maryland, 1999.
[20] J. J. Elgot-Drapkin, M. I. Miller, D. Perlis, Life on a desert island: ongoing work on
real-time reasoning, in: F. M. Brown (Ed.), The Frame Problem in Artificial Intelligence,
Morgan Kaufmann, 1987, pp. 349–357.
[21] A. S. Rao, M. Georgeff, Modeling rational agents within a BDI architecture, in: Proc. of
the Second Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'91),
Morgan Kaufmann, 1991, pp. 473–484.
[22] H. van Ditmarsch, J. Y. Halpern, W. van der Hoek, B. Kooi (Eds.), Handbook of Epistemic
Logic, College Publications, 2015.
[23] R. W. Weyhrauch, Prolegomena to a theory of mechanized formal reasoning, Artif. Intell.
13 (1980) 133–170.
[24] P. Balbiani, D. F. Duque, E. Lorini, A logical theory of belief dynamics for resource-bounded
agents, in: Proc. of AAMAS’16, ACM, 2016, pp. 644–652.
[25] S. Costantini, A. Formisano, Modeling preferences and conditional preferences on resource
consumption and production in ASP, J. Algorithms 64 (2009) 3–15.
[26] S. Costantini, A. Formisano, Augmenting weight constraints with complex preferences, in:
Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring
Symposium, AAAI Press, USA, 2011.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Timed memory in resource-bounded agents</article-title>
          , in: C.
          <string-name>
            <surname>Ghidini</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Magnini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Passerini</surname>
          </string-name>
          , P. Traverso (Eds.),
          <source>AI*IA 2018 - Advances in Artificial Intelligence - XVIIth International Conference of the Italian Association for Artificial Intelligence</source>
          ,
          <year>2018</year>
          , Proceedings, volume
          <volume>11298</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Memory management with explicit time in resource-bounded agents, in: S. A</article-title>
          .
          <string-name>
            <surname>McIlraith</surname>
            ,
            <given-names>K. Q.</given-names>
          </string-name>
          <string-name>
            <surname>Weinberger</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence</source>
          ,
          <source>(AAAI-18)</source>
          ,
          <source>the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)</source>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>8133</fpage>
          -
          <lpage>8134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for multi-agent systems with budget and costs</article-title>
          , in: W. Faber, G. Friedrich,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          , M. Morak (Eds.),
          <source>Logics in Artificial Intelligence - 17th European Conference, JELIA</source>
          <year>2021</year>
          ,
          <article-title>Proceedings</article-title>
          , volume
          <volume>12678</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Towards a logic of “inferable” for self-aware transparent logical agents</article-title>
          , in: C.
          <string-name>
            <surname>Musto</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Magazzeni</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ruggieri</surname>
          </string-name>
          , G. Semeraro (Eds.),
          <source>Proc. of the Italian Workshop on Explainable AI co-located with 19th International Conference of AI*IA</source>
          ,
          <year>2020</year>
          , volume
          <volume>2742</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Cooperation among groups of agents in the epistemic logic L-DINF</article-title>
          , in: G. Governatori,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Turhan (Eds.),
          <source>Proc. of RuleML+RR'22</source>
          , volume
          <volume>13752</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>280</fpage>
          -
          <lpage>295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for modular development of multi-agent systems</article-title>
          , in: N.
          <string-name>
            <surname>Alechina</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Baldoni</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          Logan (Eds.),
          <source>Proc. of EMAS'21</source>
          ,
          <string-name>
            <surname>Revised</surname>
            <given-names>Selected papers</given-names>
          </string-name>
          , volume
          <volume>13190</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Modelling agents roles in the epistemic logic L-DINF</article-title>
          , in: O.
          <string-name>
            <surname>Arieli</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Casini</surname>
          </string-name>
          , L. Giordano (Eds.),
          <source>NMR'22</source>
          , volume
          <volume>3197</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>A. I. Goldman</surname>
          </string-name>
          , Theory of mind, in: E. Margolis,
          <string-name>
            <given-names>R.</given-names>
            <surname>Samuels</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. P.</surname>
          </string-name>
          Stich (Eds.),
          <source>The Oxford Handbook of Philosophy of Cognitive Science</source>
          , volume
          <volume>1</volume>
          , Oxford University Press,
          <year>2012</year>
          , pp.
          <fpage>402</fpage>
          -
          <lpage>424</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>