<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>WOA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Timed Epistemic Logic for Formalizing Cooperation among Groups of Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefania Costantini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Formisano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Pitoni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DISIM, Università di L'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DMIF, Università di Udine</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>24</volume>
      <fpage>6</fpage>
      <lpage>8</lpage>
      <abstract>
<p>In the multi-agent setting, it is relevant to model the group dynamics of agents, and logic has proved to be an excellent tool for this purpose. In previous work we proposed an epistemic logic that allows one to formalize the new beliefs formed or removed by a group of agents, where several groups can co-exist and an agent can pass from one group to another. A novelty introduced in this paper is that an agent can be lent by a (willing) group to another one in case of need. Another distinguishing feature we introduce in this paper is time: temporal instants/intervals express the time periods in which agents' beliefs hold.</p>
      </abstract>
      <kwd-group>
<kwd>Multi-Agent Systems</kwd>
        <kwd>Modal Logic</kwd>
        <kwd>Epistemic Logic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the research we are reporting in this paper, we are interested in the fruitful application of
MultiAgent Systems (MAS) not only in purely technological applications but also in human/social
sciences. There are applications where problem-solving in MAS can profit from the ability to
represent group dynamics, understand the behavior of other agents, and perform reasoning
by impersonating another agent. To this aim an agent needs to be able to represent aspects
of “Theory of Mind” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This can be described as the set of social-cognitive skills involving
the ability to attribute mental states (such as desires, beliefs, knowledge) to oneself and to
other agents, and, moreover, the capability to reason about such mental states. Consequently,
agents with these abilities can make predictions and formulate interpretations of other agents’
behaviors. However, in real-world settings, agents are situated in time: their beliefs, desires,
and goals are timed, in the sense that they hold or make sense within certain time intervals. This is
an important feature, usually missing from other approaches because it is not easy to model and
axiomatize, and it is the one we introduce in this paper.
      </p>
      <p>
        We build on our previous work, where we introduced a logical framework based on an
epistemic logic called L-DINF, which allows the representation of knowledge and beliefs of
agents, in order to reason about the mental actions of agents, taking into account the costs
of actions and the budgets that are available to afford such costs. The logical framework L-DINF
allows the modeling of group dynamics of cooperative agents: one can model agents that can
form groups and agents of the same group can support each other in performing collective
actions. In addition, in L-DINF agents can take into consideration the preferences that each
agent may have for what concerns performing each action [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], preferences that the group
will take into account when devising and executing plans.
      </p>
      <p>The logical framework also encompasses the possibility for agents to have roles within their
group of agents. Roles determine which actions each agent is enabled by its group to perform.
The action is considered executable if at least one agent of the group can perform the action,
with the approval of the group and on behalf of the group. An agent can join or leave a group
whenever it wants, and, clearly, the role of an agent may change as it joins another group.</p>
      <p>
        The adequacy of the framework in modeling aspects of the Theory of Mind is discussed in
detail in [
        <xref ref-type="bibr" rid="ref4">4</xref>
], to which we also refer the interested reader for an extensive discussion of related
work (which we cannot include here due to space limitations).
      </p>
      <p>
        Initial attempts to enrich L-DINF by introducing a notion of time have been explored in [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
In this paper, we go further in this direction and propose an extension of L-DINF in which the
specification of each belief of an agent includes the interval of time in which the belief holds.
The temporalization of L-DINF is obtained by defining a “time module” suitable to add time in an
easy way into logic representations of agents. Thus, our proposal finds potential applicability
beyond our specific approach. In essence, this module is a particular kind of function (cf.
the function τ in Sect. 2.1) that assigns a “timing” to atoms and formulas.
      </p>
      <p>We also extend this timed framework so that any agent of a group can be authorized by its
group (“lent”, in a sense) to perform an action “as if it were a member of another group”. This
possibility of “lending an agent” to another group is particularly relevant to model situations
where in a group there is no agent that can perform a specific action (for various reasons, e.g.,
there is not enough budget to pay the cost of the action or the group members are not enabled
to perform it, etc.). In such cases, the group can be helped by an agent of another (willing)
group that performs the action. For the resulting extension of L-DINF we provide a fully defined
semantics and an axiomatization that turns out to be strongly complete.</p>
      <p>Notice that the contents of this paper do not correspond to a small addition to our existing
work. Rather, we are proposing an extensive reworking and improvement, where the new logic
features relevant new capabilities, never seen in related work.</p>
      <p>The paper is organized as follows. In Sect. 2 we introduce syntax and semantics of L-DINF. In
Sect. 3 we present the axiomatization of our logic. A completeness result for L-DINF exploiting
the notion of canonical model, defined in Sect. 4, can be found in Appendix A. In Sect. 5 we
discuss a significant example of the use of our logic, emphasizing the usefulness of the newly
introduced features. Finally, in Sect. 6 we conclude.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Logical Framework</title>
      <p>Agents (or, groups of agents) can perform two types of actions: mental actions and physical
actions. The latter kind has to be intended as the usual actions that an agent performs to interact
with the environment in which it operates. So-called mental actions are the elementary steps of
inference that an agent takes to develop its reasoning activity. The effect of mental actions is
to modify the beliefs of the agent. Moreover, it is always a sequence of mental inferences that
triggers the actual execution of a physical action. That is to say, the agent’s internal reasoning
makes it aware that an action can be performed.</p>
      <p>Let us start by introducing the syntax of the underlying logic L-DINF, which consists of a
static component and a dynamic component. The former, L-INF, is a logic of explicit beliefs and
background knowledge. The dynamic part extends the static one with operators capturing the
consequences of agents’ mental actions on their explicit beliefs.</p>
      <p>We are interested in modeling the knowledge and beliefs of agents that may hold in specific
time instants/intervals. Basic pieces of knowledge/belief will be represented by atomic
propositions. To model a notion of time we adopt a specific convention concerning the arguments
of such atomic propositions. More specifically, given a collection of predicate symbols
Π = {p, q, h, . . .}, we consider atomic propositions of the form p(t′, t″), where p ∈ Π and
t′, t″ ∈ N, with t′ ≤ t″, are two time instants. (By some abuse of notation, we also consider
the special cases in which t″ = ∞.) Then, let Atm = {p(t′, t″) | p ∈ Π and t′, t″ ∈ N} denote
the set of all atomic propositions. An atomic proposition of the form p(t1, t2) stands for
“p is true from the time instant t1 to t2”, with t1 ≤ t2; this is the temporal representation of the
external world. Atomic propositions of the form p(t, t) stand for “p is true at the time instant t”.</p>
      <p>Customarily, we may also admit predicate symbols of higher arity and hence atomic
propositions involving more than two arguments. In such case, we assume that the first two arguments
are those that identify a time interval (for example, the atomic proposition turn_on(1, 4, pc)
may be used to state that “the agent knows that the pc is turned on from time 1 to time 4” ).</p>
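<p>As an illustration (a minimal Python sketch with hypothetical names, not part of the formal framework), timed atomic propositions of this shape can be represented so that the first two arguments always encode the time interval, while any further arguments are ordinary terms:</p>

```python
from dataclasses import dataclass
from typing import Tuple

INF = float("inf")  # stands for the special upper bound t'' = infinity

@dataclass(frozen=True)
class TimedAtom:
    """Atomic proposition p(t1, t2, args...): p holds from instant t1 to t2."""
    pred: str
    t1: int
    t2: float            # a natural number, or INF
    args: Tuple = ()

    def __post_init__(self):
        # the first two arguments must form a legal interval t1 <= t2
        assert 0 <= self.t1 <= self.t2

# p(t, t): "p is true at the single time instant t"
oven = TimedAtom("oven_on", 3, 3)
# higher arity, as in the example above: turn_on(1, 4, pc)
pc = TimedAtom("turn_on", 1, 4, ("pc",))
```

<p>TimedAtom and the sample predicates are our own illustrative names; the paper's convention is only that the first two arguments identify the time interval.</p>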
<p>Let the set Atm_A denote the collection of the physical actions that an agent can perform,
including “active sensing” actions (such as, for instance, “let's check if the oven is on”, etc.).</p>
      <p>
In what follows, we will use the symbols I, J to denote MTL “time intervals” [
        <xref ref-type="bibr" rid="ref7">7</xref>
]. Each time
interval is either a closed finite interval [t, t′], for any pair of expressions/values t, t′ such that
0 ≤ t ≤ t′, or an infinite interval [t, ∞) (for some expression/value t).
      </p>
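<p>The two admissible interval shapes can be mimicked by a small helper (a Python sketch under our own naming; join computes the unique smallest interval including two given intervals, which is exactly the operation the time function of Sect. 2.1 needs for conjunctions):</p>

```python
INF = float("inf")  # upper bound of infinite intervals [t, INF)

def interval(lo, hi=INF):
    """A closed interval [lo, hi] with 0 <= lo <= hi, or an infinite [lo, INF)."""
    assert 0 <= lo <= hi
    return (lo, hi)

def join(i, j):
    """The unique smallest interval including both i and j."""
    return (min(i[0], j[0]), max(i[1], j[1]))

def contains(outer, inner):
    """Interval inclusion: inner is a sub-interval of outer."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]
```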
<p>Let Agt = {1, 2, . . . , n} be the set of the n agents (identified for simplicity by integer numbers).</p>
      <sec id="sec-2-1">
<title>Syntax of L-DINF</title>
<p>Let Grp be the set of groups of agents, i.e., each G ∈ Grp is a non-empty subset of Agt. The
language ℒ_L-DINF of L-DINF is defined as follows (where p(t1, t2) ranges over Atm, t ∈ N,
i, j ∈ Agt, G, G′ ∈ Grp, φ_A ∈ Atm_A, and d ∈ N):
φ ::= p(t1, t2) | ¬φ | φ ∧ ψ | B_i,j φ | K_i φ | □_I φ |
      do_i(φ_A, I) | can_do_i(φ_A, I) | intend_i(φ_A, I) | exec_i(α) |
      do_G(φ_A, I) | can_do_G(φ_A, I) | intend_G(φ_A, I) | exec_G(α) |
      [G: α] φ | pref_do_i(φ_A, I, d) | maxpref_do_G(φ_A, I, d) | lend(i, G, G′, φ_A, I)
α ::= +(φ) | ⊢(ψ, φ) | ∩(φ, ψ) | ↓(ψ, φ) | ⊣(φ, ψ)</p>
        <sec id="sec-2-1-1">
<title>Intended meaning of formulas</title>
<p>Other Boolean operators are defined from ¬ and ∧ as usual. The language of mental actions
of type α is denoted by ℒ_ACT. The static part L-INF includes only those formulas not having
sub-formulas of the form [G: α] φ.</p>
          <p>Before defining the formal semantics of L-DINF, let us informally describe the intended
meaning of the basic formulas. We are interested in modeling the reasoning of agents that form
coalitions/groups and act cooperatively. Agents may leave their group to join or form another
group. Hence, we consider the set Agt of all agents as partitioned into groups: each agent i ∈ Agt
always belongs to a unique group G ∈ Grp. For simplicity, we assume that all agents initially
belong to the same group. Any agent i, at any time t, can however perform a (physical) action
joinA(i, j, t), for j ∈ Agt, to change its group and join j's group. The case j = i denotes the
action that allows agent i to leave its current group and form a new singleton group {i}.</p>
<p>Formulas of the form K_i φ express background knowledge of agent i: K_i is nothing but the
well-known S5 modal operator. Background knowledge is considered to be a stable, irrevocable,
reliable, and non-contradictory knowledge base which includes facts known by the agent from
the beginning, and it is assumed to satisfy omniscience principles. Instead, the agent's working
memory stores its belief set. Each agent has its own belief set and also owns a version of the
belief sets of all the other agents. We impose that all agents in the same group G share all their
beliefs (also concerning the version of the belief sets of agents outside G) and that whenever an
agent i joins a group G, it implicitly replaces all its beliefs with those of the group.1 Beliefs
concerning what other agents believe are expressed through the modality B_i,j. The formula
B_i,j φ expresses the fact that the formula φ is in the version of the belief set of agent j owned
by agent i. (The case i = j corresponds to the current working memory of i.) As we will see, i
can update its belief set by performing mental actions.</p>
          <p>
The formula intend_i(φ_A, I) indicates the intention of agent i to perform the physical action
φ_A in the interval I, in the sense of the BDI agent model [
            <xref ref-type="bibr" rid="ref8">8</xref>
]. These formulas can be part of the
agent's knowledge base from the beginning or can be derived later. In this paper we do not
deal with the formalization of BDI, for which the reader may refer, e.g., to [
            <xref ref-type="bibr" rid="ref9">9</xref>
]. Hence, we will
treat intentions rather informally, also assuming that intend_G(φ_A, I) holds whenever all
agents of group G intend to perform φ_A at some point in the time interval I.
          </p>
          <p>
The formula do_G(φ_A, I) (and similarly do_i(φ_A, I)) indicates the actual execution of action
φ_A by G, i.e., by some agent i ∈ G, at a time point in I. Note that we do not provide an
axiomatization for do_G. In fact, we assume that in any concrete implementation of the logical
framework, do_G is realized by means of a semantic attachment [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ], that is, a procedure which
connects an agent with its external environment in a way that is unknown at the logical level.
The axiomatization only concerns the relationship between doing and being enabled to do.
          </p>
<p>The formula can_do_i(φ_A, I) is an enabling condition for i to perform the physical action
φ_A in the interval I, while pref_do_i(φ_A, I, d) indicates the level d of preference/willingness
of agent i to perform φ_A in I. maxpref_do_G(φ_A, I, d) indicates that d is the maximum
preference level for performing φ_A among all members of G (in the time interval I). The
formula lend(i, G, G′, φ_A, I), where G and G′ are two disjoint groups and i ∈ G, states that G
can lend i to G′: agent i can perform φ_A, and it is entitled and authorized by G to perform it.
Lending agents relates to formulas of the form can_do_G(φ_A, I), which mean that either an agent
in G can perform φ_A, or G can borrow an agent i from another group and i performs φ_A.</p>
<p>In the formula □_I φ the MTL “always” operator is applied to a formula, meaning that
φ is always true during the interval I. (For simplicity, we will sometimes write □_[0,∞) φ simply as □ φ.)
1 An alternative possibility would be to admit that the agent i joining G contributes to the shared contents of the
working memory of agents in G ∪ {i}, by adding (part of) its belief set, possibly activating some form of belief
fusion/revision. For the sake of simplicity, in this paper we will not deal with this possibility.</p>
<p>The formulas exec_i(α) and exec_G(α) express executability of mental actions either by an agent
i or by a group G (the latter meaning that α is executable by any member of G).</p>
<p>The formula [G: α] φ, where α is a mental action, states that “φ holds after action α has been
performed by an agent i ∈ G, and all agents in G have common knowledge about this fact”.
Namely, all agents in G have full visibility of this action, and it is as if they had performed the
action themselves. We distinguish five types of mental actions that allow us to capture the
dynamic properties of explicit beliefs. These actions, performed by an agent i, form explicit
beliefs via inference with respect to the version of j's belief set owned by i and update such
representation of j's working memory. (The case in which i = j corresponds to reasoning steps
performed by agent i on the basis of its own beliefs.)
+(φ): this mental action represents the action of learning a perceived belief. The formula φ is
assumed to be an atomic proposition p(t1, t2). This mental operation is used to form a
new belief from the perception φ. A perception may become a belief whenever an agent
becomes “aware” of the perception and takes it into explicit consideration. Hence, after
this action has been executed, agent j believes φ (as far as i knows);
↓(ψ, φ): this action infers φ from ψ, where φ is an atom, say p(t1, t2), if ψ is believed by j
(as far as i knows, and in some suitable time interval including [t1, t2]) and, according to
agent i's background knowledge, φ is a logical consequence of ψ; then j starts believing
that p(t1, t2) is true;
∩(φ, ψ): this action closes under conjunction the beliefs φ and ψ of i's version of j's belief set;
⊣(φ, ψ): this action is a simple form of belief revision, i.e., assuming φ and ψ are p(t1, t2) and
q(t3, t4), respectively: j believing p(t1, t2) and, according to i's background knowledge,
p(t1, t2) implying ¬q(t3, t4), the timed belief q(t3, t4) is removed if the intervals match.
Note that, should ψ be believed in a wider interval I such that [t1, t2] ⊆ I, the belief
q(., .) is removed concerning the intervals [t1, t2] and [t3, t4], but it is left for the remaining
sub-intervals; hence, it is “restructured”;
⊢(ψ, φ): this action infers φ from ψ, where φ is an atom, say p(t1, t2), if, as far as i knows
about j's beliefs, ψ is believed by j and, according to j's working memory, φ is a logical
consequence of ψ; then j starts believing that p(t1, t2) is true.</p>
<p>2.1. Semantics of L-INF</p>
<p>Definition 1, to be seen, introduces the notion of L-INF model, which is then used to define the
semantics of L-INF. The notion of L-INF model involves a “time function” τ that associates a
time interval with each formula. Such time function is defined as follows:
∙ τ(p(t1, t2)) = [t1, t2] and τ(¬φ) = τ(φ);
∙ τ(φ ∧ ψ) = I, where I is the unique smallest interval including τ(φ) and τ(ψ);
∙ τ(B_i,j φ) = τ(φ) and τ(K_i φ) = τ(φ);
∙ τ(□_I φ) = I;
∙ for formulas of the form τ([G: α] φ) we consider different cases depending on α:
1. τ([G: +(φ)] φ) = τ(φ);
2. τ([G: ↓(ψ, φ)] φ) = τ([G: ⊢(ψ, φ)] φ) = τ(φ);
3. τ([G: ∩(φ, ψ)] (φ ∧ ψ)) = τ(φ ∧ ψ);
4. τ([G: ⊣(φ, ψ)] ψ) = I, where I is the smallest interval including the “restructured”
set of time instants in which ψ is believed after the belief-update action;
∙ τ(do_i(φ_A, I)) = I and τ(do_G(φ_A, I)) = I;
∙ τ(can_do_i(φ_A, I)) = I and τ(can_do_G(φ_A, I)) = I;
∙ τ(intend_i(φ_A, I)) = I and τ(intend_G(φ_A, I)) = I;
∙ τ(pref_do_i(φ_A, I, d)) = I and τ(maxpref_do_G(φ_A, I, d)) = I;
∙ τ(lend(i, G, G′, φ_A, I)) = I;
∙ τ(exec_i(α)) = τ(exec_{i}(α)), while for τ(exec_G(α)), depending on α, we have:
1. τ(exec_G(+(φ))) = τ([G: +(φ)] φ);
2. τ(exec_G(↓(ψ, φ))) = τ([G: ↓(ψ, φ)] φ);
3. τ(exec_G(∩(φ, ψ))) = τ([G: ∩(φ, ψ)] (φ ∧ ψ));
4. τ(exec_G(⊣(φ, ψ))) = τ([G: ⊣(φ, ψ)] ψ);
5. τ(exec_G(⊢(ψ, φ))) = τ([G: ⊢(ψ, φ)] φ).</p>
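<p>The propositional and epistemic clauses of the time function can be prototyped directly (a Python sketch, with formulas encoded as nested tuples of our own devising; only a fragment of the language is covered):</p>

```python
INF = float("inf")

def smallest_cover(i, j):
    """Unique smallest interval including intervals i and j."""
    return (min(i[0], j[0]), max(i[1], j[1]))

def tau(f):
    """Time function over formulas encoded as tuples:
    ("atom", t1, t2) | ("not", g) | ("and", g, h) |
    ("B", i, j, g) | ("K", i, g) | ("box", I, g)."""
    kind = f[0]
    if kind == "atom":
        return (f[1], f[2])                      # tau(p(t1,t2)) = [t1,t2]
    if kind == "not":
        return tau(f[1])                         # tau(not phi) = tau(phi)
    if kind == "and":
        return smallest_cover(tau(f[1]), tau(f[2]))
    if kind == "B":
        return tau(f[3])                         # tau(B_{i,j} phi) = tau(phi)
    if kind == "K":
        return tau(f[2])                         # tau(K_i phi) = tau(phi)
    if kind == "box":
        return f[1]                              # tau(box_I phi) = I
    raise ValueError(kind)
```

<p>For instance, tau of the conjunction of p(1, 4) and q(6, 9) is the covering interval [1, 9], matching the conjunction clause above.</p>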
<p>Definition 1, below, also depends on a given set of worlds W and on a valuation function
V : W → 2^Atm. For each world w ∈ W, let t1 be the minimum time instant of τ(p(t1, t))
over the atoms p(t1, t) ∈ V(w), and let t2 be the supremum time instant (we can have t2 = ∞) w.r.t. the
atoms p(t, t2) in V(w). When useful, we will make these two time instants t1 and t2 explicit by
denoting the world w as w_I with I = [t1, t2].</p>
<p>Definition 1. An L-INF model M is composed of a set of worlds (or situations) W, a collection
ℛ = {R_i}_{i ∈ Agt} of equivalence relations on W (i.e., R_i ⊆ W × W for each i ∈ Agt), and a collection
of semantic functions as follows:
∙ A neighborhood function N : Agt × Agt × W → 2^{2^W} such that these conditions hold:
(C1) ∀ i, j ∈ Agt, ∀ w ∈ W, ∀ X ⊆ W (X ∈ N(i, j, w) → X ⊆ {v ∈ W | w R_i v}),
(C2) ∀ i, j ∈ Agt, ∀ w, v ∈ W (w R_i v → N(i, j, w) = N(i, j, v));
∙ An executability function for mental actions E : Agt × W → 2^{ℒ_ACT} such that
(C3) ∀ i ∈ Agt, ∀ w, v ∈ W (w R_i v → E(i, w) = E(i, v));
∙ A budget function B : Agt × W → N such that
(C4) ∀ i ∈ Agt, ∀ w, v ∈ W (w R_i v → B(i, w) = B(i, v));
∙ A cost function C : Agt × ℒ_ACT × W → N such that
(C5) ∀ i ∈ Agt, ∀ w, v ∈ W, ∀ α ∈ ℒ_ACT (w R_i v → C(i, α, w) = C(i, α, v));
∙ An executability function for physical actions P : Agt × W → 2^{Atm_A} such that
(C6) ∀ i ∈ Agt, ∀ w, v ∈ W (w R_i v → P(i, w) = P(i, v));
∙ A preference function for physical actions F : Agt × W × Atm_A → N such that
(C7) ∀ i ∈ Agt, ∀ w, v ∈ W, ∀ φ_A ∈ Atm_A (w R_i v → F(i, w, φ_A) = F(i, v, φ_A));
∙ An enabling function for physical actions A : Agt × W → 2^{Atm_A} such that
(C8) ∀ i ∈ Agt, ∀ w, v ∈ W (w R_i v → A(i, w) = A(i, v));
∙ A lending function L : Agt × Grp × Grp × Atm_A × W → {true, false} such that
(C9) ∀ G, G′ ∈ Grp, ∀ i ∈ G, ∀ w, v ∈ W, ∀ φ_A ∈ Atm_A
(w R_i v → L(i, G, G′, φ_A, w) = L(i, G, G′, φ_A, v));
∙ The time function τ defined before;
∙ A valuation function for atomic propositions V : W → 2^Atm;
∙ A valuation function D : W → 2^{ {do_G(φ_A, I) | φ_A ∈ Atm_A, G ∈ Grp, I ⊆ N} } for formulas of the form
do_G(φ_A, I).</p>
<p>The set R_i(w) = {v ∈ W | w R_i v} is the epistemic state of agent i at w. It identifies the
situations that i considers possible at world w. In cognitive terms, R_i(w) can be conceived as
the set of all situations that i can retrieve from its long-term memory and reason about.</p>
<p>While R_i(w) concerns background knowledge, N(i, j, w) is the set of all facts that agent
j explicitly believes at world w (as far as i knows), a fact being identified with a set of worlds. Hence, if X ∈
N(i, i, w), then agent i has the fact X under the focus of its attention and believes it.
While N(i, i, w) is the explicit belief set of agent i at world w, the set N(i, j, w), for j ≠ i,
represents the version known by i of the belief set of agent j.</p>
<p>E(i, w) is the set of mental actions that i can execute at w, and B(i, w) is the budget that i
has available to perform mental actions. Similarly, C(i, α, w) is the cost to be paid to execute α.</p>
<p>As concerns physical actions, P(i, w) is the set of actions that i can execute at w, while its
preference on the executability of physical actions is determined by the function F. The integer
value d = F(i, w, φ_A) is the degree of willingness of i to execute φ_A in world w. A(i, w),
instead, is the set of physical actions that i is enabled by its group to perform in w. Hence, the
function A defines the role of an agent in its group, via the actions that it is allowed to execute.</p>
<p>The expression L(i, G, G′, φ_A, w) states that agent i can be lent by its group G to another
group G′ to perform φ_A (e.g., when no agent in G′ can do it, cf. the semantics defined below).</p>
<p>Constraint (C1) imposes that agent i can have explicit in its mind only facts that are compatible
with its current epistemic state. According to (C2), if a world v is compatible with the epistemic
state of agent i at w, then i has the same explicit beliefs at v and w. In other
words, if two situations are equivalent as concerns background knowledge, then they cannot
be distinguished through the explicit belief set. This aspect of the semantics can be extended
in future work to allow agents to make plausible assumptions. Properties analogous to (C2)
are imposed by constraints (C3)–(C9). For instance, (C3) imposes that agent i always knows
which mental actions it can perform and which it cannot, and that the executability of mental
actions cannot distinguish between R_i-equivalent situations.</p>
<p>Remark 1. Notice that our way of defining a model is highly modular, aimed at facilitating
implementation and at allowing a designer to “tune” the practical behavior of a logical theory. In fact,
the executability of mental and physical actions, costs, budget, preferences, etc., are all specified
by means of dedicated functions. Changing the definition of such functions will change the
behavior of the agent that the underlying theory aims to define. Moreover, in implementation terms,
a designer has total freedom about how these functions are realized.</p>
<p>Definition 2. Given a model M as in Definition 1, the truth values of formulas are inductively
defined as follows, where i, j ∈ Agt, G ∈ Grp, w ∈ W, and where we put ||φ||_i,w = {v ∈
W : w R_i v and M, v |= φ}:
(i) M, w |= p(t1, t2) if ∀ t′1, t′2 such that t1 ≤ t′1 ≤ t′2 ≤ t2 it holds that p(t′1, t′2) ∈ V(w),
and τ(p(t1, t2)) ⊆ I_w
(ii) M, w |= ¬φ if M, w ̸|= φ
(iii) M, w |= φ ∧ ψ if M, w |= φ and M, w |= ψ
(iv) M, w |= B_i,j φ if ||φ||_i,w ∈ N(i, j, w) and τ(φ) ⊆ I_w
(v) M, w |= K_i φ if τ(φ) ⊆ I_w and ∀ v ∈ R_i(w) it holds that M, v |= φ
(vi) M, w |= □_I φ if τ(φ) ⊆ I ⊆ I_w and for all v ∈ R_i(w) it holds that M, v |= φ
(vii) M, w |= exec_i(α) if α ∈ E(i, w) and τ(exec_i(α)) ⊆ I_w
(viii) M, w |= exec_G(α) if ∃ i ∈ G such that α ∈ E(i, w) and τ(exec_G(α)) ⊆ I_w
(ix) M, w |= lend(i, G, G′, φ_A, I) if G, G′ ∈ Grp, i ∈ G, L(i, G, G′, φ_A, w) = true,
φ_A ∈ P(i, w) ∩ A(i, w), τ(lend(i, G, G′, φ_A, I)) ⊆ I, G ∩ G′ = ∅, and M, w |=
maxpref_do_G(φ_A, I, d) for some d
(x) M, w |= pref_do_i(φ_A, I, d) if F(i, w, φ_A) = d, φ_A ∈ P(i, w), φ_A ∈ A(i, w), and
τ(pref_do_i(φ_A, I, d)) ⊆ I
(xi) M, w |= maxpref_do_G(φ_A, I, d) if τ(maxpref_do_G(φ_A, I, d)) ⊆ I and ∃ i ∈ G such that
M, w |= pref_do_i(φ_A, I, d), where d = max{F(j, w, φ_A) | j ∈ G ∧ φ_A ∈ P(j, w) ∩ A(j, w)}
(xii) M, w |= do_G(φ_A, I) if do_G(φ_A, I) ∈ D(w) and τ(do_G(φ_A, I)) ⊆ I
(xiii) M, w |= do_i(φ_A, I) if M, w |= do_{{i}}(φ_A, I)
(xiv) M, w |= can_do_i(φ_A, I) if φ_A ∈ P(i, w) ∩ A(i, w) ∧ τ(can_do_i(φ_A, I)) ⊆ I
(xv) M, w |= can_do_G(φ_A, I) if (∃ i ∈ G such that φ_A ∈ P(i, w) ∩ A(i, w) ∧
τ(can_do_G(φ_A, I)) ⊆ I) ∨ (∃ G′ ∈ Grp, ∃ i ∈ G′ such that M, w |= lend(i, G′, G, φ_A, I)).</p>
<p>Point (i) establishes that if a predicate is true within an interval, then it is uniformly true in all
of its sub-intervals. Note that a physical action φ_A can be performed by a group of agents if at least
one agent of the group can do it; in this case, the agent which most prefers φ_A will execute the action.
The last item of the above list states that a group G can perform an action φ_A in a certain
situation (world w) and time interval I in case an agent of the group has the right competencies
(it is able, by the function P, and enabled, by the function A), or if there exists another group
G′ including an agent i with these competencies, and which is available to lend i to G (i.e.,
L(i, G′, G, φ_A, w) = true).</p>
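<p>Point (i) can be made concrete with a small check (a Python sketch with a set-based stand-in for the valuation V(w); all names are ours): a timed atom holds only if the valuation contains it on every sub-interval:</p>

```python
def holds_atom(valuation, pred, t1, t2):
    """Clause (i): p(t1, t2) holds at w iff p(u1, u2) is in V(w) for every
    sub-interval t1 <= u1 <= u2 <= t2 (uniform truth on sub-intervals)."""
    return all((pred, u1, u2) in valuation
               for u1 in range(t1, t2 + 1)
               for u2 in range(u1, t2 + 1))

# a valuation V(w) closed under the sub-intervals of [1, 2]
V = {("p", 1, 1), ("p", 1, 2), ("p", 2, 2)}
```

<p>Here p(1, 2) holds, while p(1, 3) does not, since the valuation misses the sub-intervals reaching instant 3.</p>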
<p>
A specific evaluation function D deals with formulas of the form do_G(φ_A, I). These kinds
of formulas are, nevertheless, left not axiomatized. This is because do_G(φ_A, I) refers to the
practical execution of an action by some kind of actuator, where in a robotic application this
action can have physical effects in the time interval I. To account for such expressions,
we resort to a concept that was called by Weyhrauch, in the seminal work [
<xref ref-type="bibr" rid="ref10">10</xref>
], a
semantic attachment: it is assumed that some device exists which connects an agent with
its external environment in a way that is unknown at the logical level. The aim of [
<xref ref-type="bibr" rid="ref10">10</xref>
] was
exactly to explain how formal systems could be used in AI by being “mechanized” in a practical
way; thus, that work aimed to provide ideas about a principled though potentially running
implementation of these systems. We assume that the function D reflects at the semantic level
the presence of such a semantic-attachment mechanism. Hence, the semantics is concerned only
with the relationship between doing and being enabled to do. A similar treatment is exploited
for join actions. Performing joinA(i, j, t) implies that agents i, j are, at time t, in the same group.
We assume that the execution of joinA(i, j, t) affects the contents of the working memories of
the agents i and j (and, consequently, of the other members of their groups).
</p>
<p>2.2. Semantics of mental actions</p>
<p>Given a model M as in Definition 1 and a world w, we set:
(xvi) M, w |= [G: α] φ if M[G: α], w |= φ
where M[G: α] is the model obtained from M by replacing the two functions N and B with
their updated versions N[G: α] and B[G: α], resp. (to be seen below). N[G: α] represents the fact
that the execution of α affects the sets of beliefs of agents in G (i.e., the neighborhood function
changes as mental actions are performed), and B[G: α] modifies the available budget.</p>
<p>We write |=_L-DINF φ to denote that M, w |= φ holds for all worlds w of every model M.</p>
          <p>A key aspect in the definition of the logic is the following, which states under which conditions,
and by which agents, an action may be performed:</p>
<p>enabled_G(i, w, α) ↔ (i ∈ G ∧ α ∈ E(i, w) ∧ C(i, α, w)/|G| ≤ min_{h ∈ G} B(h, w)).
To handle the case of multiple enabled agents, we assume a predicate doer_G(i, w, α) to be defined,
which univocally selects one among the enabled agents. Its definition might rely on any criteria, even
involving background knowledge and belief sets. For simplicity, let us define such predicate as:
doer_G(i, w, α) ↔ i = min{ j | enabled_G(j, w, α) }.</p>
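<p>The two predicates can be prototyped as follows (a runnable Python sketch where the functions E, B, C of Definition 1 are replaced by plain dictionaries; all names are illustrative):</p>

```python
def enabled(G, i, alpha, E, B, C, w):
    """enabled_G(i, w, alpha): i belongs to G, can execute alpha, and the
    equal share of the cost fits within every group member's budget."""
    if i not in G or alpha not in E[(i, w)]:
        return False
    share = C[(i, alpha, w)] / len(G)
    return all(share <= B[(h, w)] for h in G)

def doer(G, alpha, E, B, C, w):
    """doer_G(., w, alpha): univocally select the enabled agent of least index."""
    candidates = [i for i in sorted(G) if enabled(G, i, alpha, E, B, C, w)]
    return candidates[0] if candidates else None

# a tiny two-agent group: only agent 1 can execute the mental action "+phi"
w = "w0"
E = {(1, w): {"+phi"}, (2, w): set()}
B = {(1, w): 5, (2, w): 5}
C = {(1, "+phi", w): 6, (2, "+phi", w): 6}
chosen = doer({1, 2}, "+phi", E, B, C, w)
```

<p>With a cost of 6 split over the two members, each share of 3 fits within both budgets, so agent 1 is selected.</p>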
<p>This condition, as defined above, expresses the fact that a mental action is enabled when
at least one agent can perform it and the “payment” due by each agent, obtained by dividing
the action's cost equally among all agents of the group, is within each agent's available budget
(this choice is inherited from L-DINF). In case more than one agent in G can execute an action,
we implicitly assume that the agent i performing the action is the one corresponding to the lowest
possible cost, namely, i is such that C(i, α, w) = min_{h ∈ G} C(h, α, w). Other choices might
be viable, so variations of this logic can be easily defined by devising some other enabling
condition and, possibly, introducing differences in the neighborhood update.</p>
<p>Notice that the definition of the enabling function basically specifies the “role” that agents
take in contributing their own resources to the execution of actions. Also, if different
resources are specified, different corresponding enabling functions might be defined.</p>
          <p>Neighborhood updating. The updated neighborhood N[G:α] resulting from the execution of a
mental action α is specified as follows (where i is an agent not necessarily in G):
∙ If α is +_G(φ), then, for each i, h ∈ Agt and w ∈ W,
N[G:+_G(φ)](i, h, w) = N(i, h, w) ∪ {‖φ‖_{i,w}}
if i ∈ G and h = i and doer_G(i, w, +_G(φ)) holds and, moreover, τ([G:+_G(φ)]φ) ⊆ T.
Otherwise, the neighborhood function does not change (i.e., N[G:+_G(φ)](i, h, w) = N(i, h, w)).
∙ If α is ↓(φ, ψ), then, for each i, h ∈ Agt and w ∈ W,
N[G:↓(φ,ψ)](i, h, w) = N(i, h, w) ∪ {‖ψ‖_{i,w}}
if i ∈ G and h = i and doer_G(i, w, ↓(φ, ψ)) and M, w |= B_{i,i}φ ∧ K_i(φ → ψ) and
τ([G:↓(φ,ψ)]ψ) ⊆ T hold. Otherwise, the neighborhood function does not change (i.e.,
N[G:↓(φ,ψ)](i, h, w) = N(i, h, w)).
∙ If α is ∩(φ, ψ), then, for each i, h ∈ Agt and w ∈ W,
N[G:∩(φ,ψ)](i, h, w) = N(i, h, w) ∪ {‖φ ∧ ψ‖_{i,w}}
if i ∈ G and h = i and doer_G(i, w, ∩(φ, ψ)) and M, w |= B_{i,i}φ ∧ B_{i,i}ψ and
τ([G:∩(φ,ψ)](φ ∧ ψ)) ⊆ T hold. Otherwise, the neighborhood function remains unchanged
(i.e., N[G:∩(φ,ψ)](i, h, w) = N(i, h, w)).
∙ If α is ⊢(φ, ψ), then, for each i, h ∈ Agt and w ∈ W,
N[G:⊢(φ,ψ)](i, h, w) = N(i, h, w) ∪ {‖ψ‖_{i,w}}
if i ∈ G and h = i and doer_G(i, w, ⊢(φ, ψ)) and M, w |= B_{i,i}φ ∧ B_{i,i}(φ → ψ) and
τ([G:⊢(φ,ψ)]ψ) ⊆ T hold. Otherwise, the neighborhood function remains unchanged:
N[G:⊢(φ,ψ)](i, h, w) = N(i, h, w).
∙ If α is ⊣((t1, t2), (t3, t4)), then, for each i, h ∈ Agt and w ∈ W,
N[G:⊣((t1,t2),(t3,t4))](i, h, w) = ( ( N(i, h, w) ∖ {‖φ(t, t′)‖_{i,w}} ) ∪
{‖φ(t, t3−1)‖_{i,w}} ∪ {‖φ(t4+1, t′)‖_{i,w}} )
if i ∈ G, h = i, doer_G(i, w, ⊣((t1, t2), (t3, t4))), M, w |= (B_{i,i}(φ(t1, t2))
∧ K_i(φ(t1, t2) → ¬φ(t3, t4))) and τ(φ(t, t′)) ⊆ T hold, where [t, t′] is the smallest
interval including [t3, t4] and is such that M, w |= B_{i,i}(φ(t, t′)) holds. If these conditions do
not hold, the neighborhood function does not change (i.e., N[G:⊣((t1,t2),(t3,t4))](i, h, w)
= N(i, h, w)). Note that the possible update ‖φ(t, t3−1)‖_{i,w} (resp., ‖φ(t4+1, t′)‖_{i,w}) is
added to the neighborhood only if the interval [t, t3−1] (resp., [t4+1, t′]) is not empty.</p>
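          <p>To make the first update rule concrete, the +_G(φ) case can be prototyped as follows. This is a minimal illustrative sketch, not code from the paper: the dictionary-based neighborhood, the frozenset truth sets, and the boolean doer flag are assumptions.</p>
          <preformat>
```python
# Illustrative sketch of the neighborhood update for a mental action +_G(phi).
# Truth sets are frozensets of worlds; keying by (i, h, w) mirrors N(i, h, w).

def update_neighborhood(N, G, i, h, w, phi_worlds, doer):
    """Return the updated entry N[G:+G(phi)](i, h, w).

    N          : dict mapping (i, h, w) -> set of frozensets of worlds
    G          : set of agents performing the mental action
    phi_worlds : frozenset of worlds where phi holds, i.e. ||phi||_{i,w}
    doer       : True iff doer_G(i, w, +G(phi)) holds
    """
    current = set(N.get((i, h, w), set()))
    if i in G and h == i and doer:
        current.add(phi_worlds)   # agent i comes to believe phi
    return current                # otherwise the neighborhood is unchanged

# Toy run: agent 'a' in group {'a', 'b'} learns phi, agent 'c' does not.
N = {('a', 'a', 'w0'): set()}
updated = update_neighborhood(N, {'a', 'b'}, 'a', 'a', 'w0',
                              frozenset({'w0', 'w1'}), doer=True)
unchanged = update_neighborhood(N, {'a', 'b'}, 'c', 'c', 'w0',
                                frozenset({'w0', 'w1'}), doer=False)
```
</preformat>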
<p>Notice that, after an agent i ∈ G has performed α, all agents j ∈ G see the same update in the
neighborhood. Conversely, for any agent j ∉ G the neighborhood remains unchanged (i.e.,
N[G:α](j, h, w) = N(j, h, w), for each h). However, even for agents in G the neighborhood
function remains unchanged if the required preconditions on beliefs, knowledge, and budget
(see the definitions of doer_G and enabled_G) do not hold (and hence the action is not executed).</p>
<p>Budget updating. Since each agent in G has to contribute to cover the costs of execution by
consuming part of its available budget, an update of the budget function is needed. Hence, for
each i ∈ Agt and each w ∈ W, whenever a mental action α is performed (recall that mental
actions are performed by i w.r.t. i's own version of the working memory of G), we set
B[G:α](i, w) = B(i, w) − C(i, w, α)/|G|
if i ∈ G, doer_G(i, w, α) holds, and, depending on α, the same conditions described
before to enable neighborhood updates are satisfied. Otherwise, the budget is preserved, i.e.,
B[G:α](i, w) = B(i, w). Clearly, the budget is preserved for those agents that are not in G.</p>
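          <p>The budget update above, with its equal cost-sharing term C(i, w, α)/|G|, can be sketched in the same style (again an illustrative sketch; the table-based budget and cost functions are assumptions):</p>
          <preformat>
```python
# Sketch of the budget update: every member of group G pays an equal share
# C(i, w, alpha) / |G| of the action's cost; everyone else keeps their budget.

def update_budget(budget, cost, G, i, w, alpha, doer_and_enabled):
    """Return the updated budget B[G:alpha](i, w)."""
    if i in G and doer_and_enabled:
        return budget[(i, w)] - cost[(i, w, alpha)] / len(G)
    return budget[(i, w)]   # agents outside G are unaffected

budget = {('a', 'w0'): 10.0, ('c', 'w0'): 10.0}
cost = {('a', 'w0', 'infer'): 4.0}
# each of the two members of {'a', 'b'} pays 4.0 / 2 = 2.0
paid = update_budget(budget, cost, {'a', 'b'}, 'a', 'w0', 'infer', True)
```
</preformat>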
<p>Joining a group. A comment is due concerning the action joinA(i, G, H). As mentioned,
we assume that whenever an agent i ∈ G joins the group H of another agent j (by executing
joinA(i, G, H)), the neighborhood function N(i, h, w) becomes equal to N(j, h, w), for each
h ∈ Agt. In case i ∈ G executes joinA(i, G, {i}) (i.e., it leaves G and forms a new singleton group)
then it maintains its current neighborhood function, but without any binding with the belief
sets of the other agents in G.</p>
          <p>Remark 2. In the mental actions ⊢(φ, ψ) and ↓(φ, ψ), the new belief ψ which is inferred can
be of the forms can_do_i(A, I), can_do_G(A, I), do_i(A, I), or do_G(A, I), which denote the
actual execution (or the possibility of execution) of the physical action A, by i or by G. The
conclusion do_i(A, I) (derived, for instance, from can_do_i(A, I) and possibly other conditions)
implies that the physical action A is performed by i (and similarly for do_G(A, I)).</p>
          <p>We assume that actions succeed by default. In a concrete system, in case of failure, a corresponding
failure event would be perceived by the agent (again, we rely on semantic attachment).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Axiomatization</title>
<p>In this section, we describe an axiomatic system for L-DINF. This yields a derivation notion ⊢
for L-DINF. The axiomatization is sound for the class of L-INF models and strongly complete
(cf. Appendix A). The axiomatic system is composed of the following axioms and inference
rules (where G ∈ Grp and i, h, j ∈ Agt), together with the usual axioms of propositional logic:</p>
      <p>1. (K_i φ ∧ K_i(φ → ψ)) → K_i ψ
2. K_i φ → φ
3. ¬K_i(φ ∧ ¬φ)
4. K_i φ → K_i K_i φ
5. ¬K_i φ → K_i ¬K_i φ
6. B_{i,h} φ ∧ K_i(φ ↔ ψ) → B_{i,h} ψ
7. B_{i,h} φ → K_i B_{i,h} φ
8. from φ infer K_i φ (necessitation rule)
9. φ(t1, t2) → φ(t′1, t′2) with [t′1, t′2] ⊆ [t1, t2]
10. □_G φ ∧ □_G(φ → ψ) → □_G ψ
11. □_G φ → □_{G′} φ with G′ ⊆ G
12. [G:α]φ(t1, t2) ↔ φ(t1, t2)
13. [G:α]¬φ ↔ ¬[G:α]φ
14. [G:α](φ ∧ ψ) ↔ [G:α]φ ∧ [G:α]ψ
15. [G:α]K_i φ ↔ K_i([G:α]φ)
16. [G:+_G(φ)]B_{i,h} ψ ↔ ( B_{i,h}([G:+_G(φ)]ψ) ∨ ( doer_G(i, h, +_G(φ)) ∧ K_i([G:+_G(φ)]ψ ↔ φ) ) )
17. [G:↓(φ, ψ)]B_{i,h} χ ↔ ( B_{i,h}([G:↓(φ, ψ)]χ) ∨ ( doer_G(i, h, ↓(φ, ψ)) ∧ B_{i,h} φ ∧ K_i(φ → ψ) ∧ K_i([G:↓(φ, ψ)]χ ↔ ψ) ) )
18. [G:∩(φ, ψ)]B_{i,h} χ ↔ ( B_{i,h}([G:∩(φ, ψ)]χ) ∨ ( doer_G(i, h, ∩(φ, ψ)) ∧ B_{i,h} φ ∧ B_{i,h} ψ ∧ K_i([G:∩(φ, ψ)]χ ↔ (φ ∧ ψ)) ) )
19. [G:⊢(φ, ψ)]B_{i,h} χ ↔ ( B_{i,h}([G:⊢(φ, ψ)]χ) ∨ ( doer_G(i, h, ⊢(φ, ψ)) ∧ B_{i,h} φ ∧ B_{i,h}(φ → ψ) ∧ K_i([G:⊢(φ, ψ)]χ ↔ ψ) ) )
20. [G:⊣(φ, ψ)]B_{i,h} χ ↔ ( B_{i,h}([G:⊣(φ, ψ)]χ) ∨ ( doer_G(i, h, ⊣(φ, ψ)) ∧ B_{i,h} φ ∧ K_i(φ → ¬ψ) ∧ K_i([G:⊣(φ, ψ)]χ ↔ ¬ψ) ) )
21. exec_G(α) ↔ ⋁_{i∈G} exec_i(α)
22. can_do_G(A, I) ↔ ⋁_{i∈G} can_do_i(A, I) ∨ ⋁_{H∈Grp, j∈H} lend_G(j, H, A, I)
23. pref_do_G(i, A, d) → can_do_i(A, I)
24. maxpref_do_{G∪{i}}(i, A, I) ↔ pref_do_G(i, A, d) ∧ ⋀_{j∈G}(pref_do_G(j, A, d′) → d′ ≤ d)
25. lend_G(j, H, A, I) ↔ (G ∩ H = ∅) ∧ j ∈ H ∧ can_do_j(A, I) ∧ maxpref_do_H(j, A, I)
26. from χ ↔ χ′ infer ψ ↔ ψ[χ/χ′] (replacement of provable equivalents)</p>
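      <p>Axiom (22) states that a group can perform A either through one of its own members or through an agent lent by a disjoint group. A small sketch (the predicate tables below are illustrative assumptions, not part of the formal system) makes the disjunction explicit:</p>
      <preformat>
```python
# Sketch of axiom (22): can_do_G(A, I) holds iff some member of G can do A
# in interval I, or some agent of a group disjoint from G can be lent to G.

def can_do_group(G, A, I, can_do, lend):
    """can_do : dict (agent, action, interval) -> bool
       lend   : dict (agent, group_frozenset, action, interval) -> bool"""
    own = any(can_do.get((i, A, I), False) for i in G)
    borrowed = any(lent for (j, H, act, J), lent in lend.items()
                   if act == A and J == I and H.isdisjoint(G))
    return own or borrowed

can_do = {('bob', 'write_abstract', 'I1'): True}
lend = {('mary', frozenset({'mary', 'sue'}), 'check_english', 'I3'): True}
G = frozenset({'bob', 'alice'})
```
</preformat>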
      <sec id="sec-3-1">
<title>We write L-DINF ⊢ φ to denote that φ is a theorem of L-DINF.</title>
<p>The above axiomatization is sound for the class of L-INF models. All axioms are valid and
the inference rules (8) and (26) preserve validity. In particular, soundness of axioms (16)–(20)
follows from the semantics of [G:α]φ, as previously defined. As mentioned earlier in the paper,
the axiomatization does not deal with formulas of the form do_G(A, I), as they are intended to
be realized by a semantic attachment that connects an agent with its external environment.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Canonical Model</title>
<p>In this section, we introduce the notion of canonical model of our logic (compare Definitions 3
and 1). We have developed a proof of strong completeness w.r.t. the proposed class of models
by means of a standard canonical-model argument, which can be found in Appendix A.</p>
<p>As before, let Agt be a set of agents.</p>
      <p>Definition 3. The canonical L-INF model M_c is specified by choosing the set W of all maximal
consistent subsets of ℒ_L-INF as a collection of situations and by defining the following components:
∙ ℛ = {R_i}_{i∈Agt} is a collection of equivalence relations on W such that, for every i ∈ Agt
and w, v ∈ W, w R_i v if and only if (for all φ, K_i φ ∈ w implies φ ∈ v).
∙ For w ∈ W and φ ∈ ℒ_L-INF let A_φ(i, w) = {v ∈ R_i(w) | φ ∈ v}. Then, we put
N(i, h, w) = {A_φ(i, w) | B_{i,h} φ ∈ w}.
∙ E : Agt × W → 2^ℒACT is such that for each i ∈ Agt and w, v ∈ W, if w R_i v, then
E(i, w) = E(i, v).
∙ B : Agt × W → ℕ is such that for each i ∈ Agt and w, v ∈ W, if w R_i v, then
B(i, w) = B(i, v).
∙ C : Agt × ℒACT × W → ℕ is such that for each i ∈ Agt, α ∈ ℒACT, and w, v ∈ W, if
w R_i v, then C(i, α, w) = C(i, α, v).
∙ A : Agt × W → 2^Atm is such that for each i ∈ Agt and w, v ∈ W, if w R_i v, then
A(i, w) = A(i, v).
∙ P : Agt × W → 2^Atm is such that for each i ∈ Agt and w, v ∈ W, if w R_i v, then
P(i, w) = P(i, v).
∙ D : Agt × W × Atm → ℕ is such that for each i ∈ Agt, each physical action a ∈ Atm, and
w, v ∈ W, if w R_i v, then D(i, w, a) = D(i, v, a).
∙ τ is the time function, as defined in Sect. 2.1.
∙ L : Agt × Grp × Grp × AtmA × W → { true, false } is such that for each i ∈ Agt, G, H ∈ Grp,
A ∈ AtmA, and w, v ∈ W, if w R_i v, then L(i, G, H, A, w) = L(i, G, H, A, v).
∙ V : W → 2^Atm is such that V(w) = Atm ∩ w.
∙ F : W → 2^{do_i(A, I) | A ∈ Atm, i ∈ Agt, G ∈ Grp, I ⊆ ℕ} is such that F(w) ⊆ w.
Here, analogously to what we have done before, R_i(w) = {v ∈ W | w R_i v}.</p>
<p>It is easy to verify that M_c is an L-INF model as specified in Definition 1, since it satisfies
conditions (C1)–(C9). Hence, it models the axioms and the inference rules introduced before.
The following properties hold for each w ∈ W and φ ∈ ℒ_L-INF:
∙ K_i φ ∈ w if and only if, for all v ∈ W such that w R_i v, we have φ ∈ v;
∙ if B_{i,h} φ ∈ w and w R_i v, then B_{i,h} φ ∈ v.</p>
      <p>Thus, R_i-related worlds have the same knowledge and the same beliefs.</p>
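      <p>The canonical accessibility relation can be prototyped directly from its definition: w R_i v holds iff every formula known by i in w also belongs to v. Encoding worlds as Python sets of formulas, with K-formulas as ('K', i, phi) tuples, is an illustrative assumption:</p>
      <preformat>
```python
# Sketch of the canonical relation: w R_i v iff K_i phi in w implies phi in v.
# Worlds are toy sets mixing plain formulas (strings) and ('K', i, phi) tuples.

def related(w, v, i):
    """Return True iff every phi with ('K', i, phi) in w is also in v."""
    return all(f[2] in v for f in w
               if isinstance(f, tuple) and f[:2] == ('K', i))

w1 = {('K', 'i', 'p'), 'p', 'q'}   # agent i knows p here
w2 = {('K', 'i', 'p'), 'p'}        # p carried over: accessible from w1
w3 = {'q'}                         # p missing: not accessible from w1
```
</preformat>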
    </sec>
    <sec id="sec-5">
      <title>5. An example</title>
<p>In this section, we propose an example to illustrate the usefulness of this kind of logic and of the
new extensions. Consider a group GAI of three agents, who constitute a research group, say
in AI. They are the co-authors of a paper that has to be submitted to a conference: the first
author, bob, deals with the drafting of the introduction and finding the references, the second
one, alice, deals with the experiments, and the third one, patrick, deals with the formalization
part. The second one, alice, is the only one who can perform the experiments because she has
the required certifications; the other two agents are enabled to perform different tasks, such
as writing the abstract, searching references, checking the correctness of the formal part, and
so on. However, none of the agents can perform the checking of the English. Luckily, there
is another research group, say in Software Engineering, GSE, that is willing (as we assume
is dictated by the function L in Definition 1) to provide help, and has among its members an
agent, mary, who is able, is allowed, and is the most willing (of the group) to check the English.
Hence, according to point (ix) of Definition 2, this agent will be lent from GSE to GAI in the
required time interval.</p>
<p>The group GAI receives notification of a deadline for a paper, so they decide to
organize themselves for submitting it. The group will reason, and devise the intention/goal
K_i(□_GAI(submit_fullpaper(t0, t2))): the group intends to submit their paper within the
indicated time interval I. Here t0 is the time instant when the group begins to organize to write the
paper after the call was issued at t (t0 ≥ t), so I = [t, t1] where t1 is the deadline and
t2 is the time instant when they really submit the paper, with t2 ≤ t1.</p>
      <p>Among the physical actions that agents in the group can perform are for instance the
following: submit _abstract , do_experiment , write_introduction, write_formal _part and
write_experiment _results. They are instead not able to perform the action check _the_english.</p>
<p>The group will now be required to perform a planning activity. Assume that, as a result of the
planning phase, the knowledge base of each agent i contains the following rule, which specifies
how to reach the intended goal in terms of actions to perform and sub-goals to achieve (listed
after the “→”):</p>
<p>K_i( □_GAI(submit_fullpaper(t0, t2)) →
□_G1(submit_abstract(t0, t3)) ∧ □_G2(do_experiment(t0, t4)) ∧
□_GAI(write_formal_part(t0, t5)) ∧ □_G3(check_the_english(t5, t6)) )
where G1, G2, G3 ⊆ GAI, t3 is the time instant when the authors submit the abstract, with t3 ≤ t1,
t4 is the time instant when the agent alice has finished the experiments and has written the
results, with t4 ≤ t1; finally, t5 is the time instant when the other agents have finished writing the
formal part. At this point, someone can start checking the English before submitting. By t6 ≤ t2
the English check should have been completed, so that the paper can be submitted.</p>
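      <p>The ordering constraints on the time instants can be collected and checked mechanically; the concrete instants below are illustrative, not taken from the paper:</p>
      <preformat>
```python
# The example's deadline constraints (t0 ≥ t, t3 ≤ t1, t4 ≤ t1, t5 ≤ t6,
# t6 ≤ t2 ≤ t1) checked on one sample assignment of instants.

t, t0, t1, t2 = 0, 1, 10, 9   # call issued, work starts, deadline, submission
t3, t4, t5, t6 = 4, 6, 7, 8   # abstract, experiments, formal part, English

consistent = all([t0 >= t, t1 >= t3, t1 >= t4, t6 >= t5, t2 >= t6, t1 >= t2])
```
</preformat>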
<p>Assume now that the knowledge base of the group GAI, shared by every agent i ∈ GAI,
also contains facts stating the abilities, permissions, and preferences of the agents of the group,
which turn out to be able to perform each of the necessary actions, but one. Which agent will
in particular perform each action A? According to items (x), (xi), and (xiv) in Definition 2,
this agent will be chosen as the one that most prefers to perform this action, among those that
can do it. Formally, maxpref_do_G(i, A, I) identifies the agent i in the group with the highest
degree of preference for performing A, and can_do_G(A, I) is true if, in the time interval I,
there is some agent i in the group which is able and allowed to perform A.</p>
<p>Since no member of the group GAI is capable of checking the English, and supposing
that in the present situation, GSE is willing to help by lending an agent, i.e., that
L(mary, GSE, check_the_english, GAI, I3) = true, then by point (xiv) of Definition 2 the most
willing agent in GSE, namely agent mary, will be lent to GAI for the interval I3, and will thus
perform the check of the English. Thus we will have:
K_i( □_G1(submit_abstract(t0, t3)) ∧ can_do_GAI(submit_abstract(t0, t3), I1) ∧
maxpref_do_GAI(bob, submit_abstract(t0, t3), I1) → do_bob(submit_abstract(t0, t3), I1) )
K_i( □_G2(do_experiment(t0, t4)) ∧ can_do_GAI(do_experiment(t0, t4), I2) ∧
maxpref_do_GAI(alice, do_experiment(t0, t4), I2) → do_alice(do_experiment(t0, t4), I2) )
K_i( □_GAI(write_formal_part(t0, t5)) ∧ can_do_GAI(write_formal_part(t0, t5), I) ∧
maxpref_do_GAI(patrick, write_formal_part(t0, t5), I) → do_patrick(write_formal_part(t0, t5), I) )
K_i( □_G3(check_the_english(t5, t6)) ∧ can_do_GAI(check_the_english(t5, t6), I3) ∧
lend_GAI(mary, GSE, check_the_english, I3) → do_mary(check_the_english(t5, t6), I3) )
Hence, the GAI group will be able to complete the plan and achieve the goal.</p>
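      <p>The selection policy of the example (the most-willing able member performs each task, with lending from a helping group as fallback) can be sketched end to end. The agent names follow the example, but the ability and preference tables themselves are illustrative assumptions:</p>
      <preformat>
```python
# Sketch of task assignment: pick the able agent with the highest preference
# degree (maxpref_do); if nobody in the group is able, borrow a lent agent.

able = {('bob', 'submit_abstract'), ('alice', 'do_experiment'),
        ('patrick', 'write_formal_part'), ('mary', 'check_the_english')}
pref = {('bob', 'submit_abstract'): 3, ('alice', 'do_experiment'): 5,
        ('patrick', 'write_formal_part'): 4, ('mary', 'check_the_english'): 2}

def assign(task, group, lendable):
    """Return the most-willing able agent in `group`, else a lendable one."""
    candidates = [i for i in group if (i, task) in able]
    if not candidates:                       # nobody in the group can do it:
        candidates = [j for j in lendable if (j, task) in able]
    return max(candidates, key=lambda i: pref.get((i, task), 0))

G_AI, G_SE = {'bob', 'alice', 'patrick'}, {'mary'}
plan = {task: assign(task, G_AI, G_SE)
        for task in ('submit_abstract', 'do_experiment',
                     'write_formal_part', 'check_the_english')}
```
</preformat>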
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>
        In this paper we extended an epistemic logical framework previously introduced in [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
      </p>
<p>The extension enriches the framework with: (a) the possibility of specifying time intervals
to express the time periods in which agents’ beliefs hold; clearly, this affects the executability
of actions and the effects of the mental actions that agents use to modify their belief sets; (b) the
possibility of exchanging agents between groups, i.e., an agent can be lent by a group G to
another group H. This possibility is relevant to model situations where in H there is no agent
that can perform a specific action; it is conditioned on the willingness of group G to lend
(established by the outcome of the lending function L included in the definition of L-INF model),
and on the availability of an agent able and authorized to perform the required action. Then, a
group can obtain help from another group if the latter includes (at least) one agent that can
perform the required action. This greatly enhances a group’s capabilities to reach its goals.</p>
    </sec>
    <sec id="sec-7">
      <title>A. Proofs of Strong Completeness</title>
<p>By exploiting the notion of canonical model (Definition 3), we can develop a proof of strong
completeness for the axiomatic system defined in Sect. 3. Let us start with some preliminary
results.</p>
      <p>Lemma 1. For all w ∈ W and B_{i,h}φ, B_{i,h}ψ ∈ ℒ_L-INF, if B_{i,h}φ ∈ w but B_{i,h}ψ ∉ w, it
follows that there exists v ∈ R_i(w) such that φ ∈ v ↔ ψ ∉ v.</p>
      <p>Proof. Let w ∈ W with B_{i,h}φ ∈ w and B_{i,h}ψ ∉ w. Assume now that for every v ∈ R_i(w)
we have φ ∈ v ∧ ψ ∈ v or φ ∉ v ∧ ψ ∉ v; then, it follows that K_i(φ ↔ ψ) ∈ w, and, by
axiom (6), that B_{i,h}ψ ∈ w, which is a contradiction.</p>
      <p>Lemma 2. For all φ ∈ ℒ_L-INF and w ∈ W it holds that φ ∈ w if and only if M_c, w |= φ.</p>
      <p>Proof. The proof is by induction on the structure of formulas φ ∈ ℒ_L-INF. For example, if
φ = ψ(t1, t2), w ∈ W, and τ(ψ(t1, t2)) ⊆ T, then ψ(t1, t2) ∈ w if ψ(t′1, t′2) ∈ V(w) for all
t′1, t′2 such that t1 ≤ t′1 ≤ t′2 ≤ t2. This means that M_c, w |= ψ(t1, t2) by Definition 2. The other
cases follow similarly from the definition of V and the semantics of L-INF. For formulas of the
form B_{i,h}ψ we have to consider the neighborhood function N: assume B_{i,h}ψ ∈ w for w ∈ W
and τ(B_{i,h}ψ) ⊆ T. We have that A_ψ(i, w) = {v ∈ R_i(w) | ψ ∈ v}. By the definition
of ‖·‖_{i,w}, and by the inductive hypothesis, we have A_ψ(i, w) = ‖ψ‖_{i,w} ∩ R_i(w).
Hence, M_c, w |= B_{i,h}ψ. Suppose B_{i,h}ψ ∉ w, and then ¬B_{i,h}ψ ∈ w; we have to prove
‖ψ‖_{i,w} ∩ R_i(w) ∉ N(i, h, w). Choose X ∈ N(i, h, w). By definition, we know that
X = A_φ(i, w) for some φ, with B_{i,h}φ ∈ w. By Lemma 1 there is some v ∈ R_i(w) such
that φ ∈ v ↔ ψ ∉ v. By the induction hypothesis, we obtain that either
v ∈ (‖ψ‖_{i,w} ∩ R_i(w)) ∖ A_φ(i, w) or v ∈ A_φ(i, w) ∖ (‖ψ‖_{i,w} ∩ R_i(w)) holds.
Consequently, we conclude that ‖ψ‖_{i,w} ∩ R_i(w) ∉ N(i, h, w). Hence M_c, w ̸|= B_{i,h}ψ.</p>
<p>A crucial result states that for any L-DINF formula we can find an equivalent L-INF formula:
Lemma 3. For all φ ∈ ℒ_L-DINF there exists φ̃ ∈ ℒ_L-INF such that L-DINF ⊢ φ ↔ φ̃.
Proof. The proof is by induction on the structure of formulas; let us show the case of a formula ψ
containing [G:α]φ(t1, t2), the other cases being shown analogously. By axiom (12) we have
[G:α]φ(t1, t2) ↔ φ(t1, t2) with τ(φ(t1, t2)) ⊆ T, and by rule (26), applied to this equivalence,
we have ψ ↔ ψ[[G:α]φ(t1,t2)/φ(t1,t2)]; then, we can obtain ψ̃ by replacing [G:α]φ(t1, t2)
with φ(t1, t2) in ψ.</p>
      <sec id="sec-7-1">
        <title>The previous lemmas allow us to prove the following:</title>
        <p>Theorem 1. L-INF is strongly complete for the class of L-INF models.</p>
<p>Proof. Any consistent set Φ may be extended to a maximal consistent set w⋆ ∈ W, and
M_c, w⋆ |= Φ by Lemma 2. Then, L-INF is strongly complete for the class of L-INF models.</p>
        <p>Theorem 2. L-DINF is strongly complete for the class of L-INF models.</p>
<p>Proof. If Φ is a consistent set of ℒ_L-DINF formulas, then by applying Lemma 3 we can obtain the
set Φ̃ = {φ̃ | φ ∈ Φ}, which is a consistent set of ℒ_L-INF formulas. By Theorem 1, M_c, w |= Φ̃.
But since L-DINF is sound and, for each φ ∈ Φ, L-DINF ⊢ φ ↔ φ̃, it follows that M_c, w |= Φ.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>A. I. Goldman</surname>
          </string-name>
          , Theory of mind, in: E. Margolis,
          <string-name>
            <given-names>R.</given-names>
            <surname>Samuels</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. P.</surname>
          </string-name>
          Stich (Eds.),
          <source>The Oxford Handbook of Philosophy of Cognitive Science</source>
          , volume
          <volume>1</volume>
          , Oxford University Press,
          <year>2012</year>
          , pp.
          <fpage>402</fpage>
          -
          <lpage>424</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for modular development of multi-agent systems</article-title>
          , in: N.
          <string-name>
            <surname>Alechina</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Baldoni</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          Logan (Eds.),
          <source>EMAS</source>
          <year>2021</year>
          : 9th International Workshop on Engineering Multi-Agent
          <string-name>
            <surname>Systems</surname>
          </string-name>
          , Proceedings, volume
          <volume>13190</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Modelling agents roles in the epistemic logic L-DINF</article-title>
          , in: O.
          <string-name>
            <surname>Arieli</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Casini</surname>
          </string-name>
          , L. Giordano (Eds.),
          <source>Proc. of the 20th International Workshop on Non-Monotonic Reasoning, NMR</source>
          <year>2022</year>
          ,
          <article-title>Part of the Federated Logic Conference</article-title>
          (FLoC
          <year>2022</year>
          ), Haifa, Israel,
          <source>August 7-9</source>
          ,
          <year>2022</year>
          , volume
          <volume>3197</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>79</lpage>
. URL: https://ceur-ws.org/Vol-3197/paper7.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for formalizing group dynamics of agents</article-title>
          ,
          <source>Interaction Studies</source>
          <volume>23</volume>
          (
          <year>2022</year>
          )
          <fpage>391</fpage>
          -
          <lpage>426</lpage>
. doi:10.1075/is.22019.cos.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Timed memory in resource-bounded agents</article-title>
          , in: C.
          <string-name>
            <surname>Ghidini</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Magnini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Passerini</surname>
          </string-name>
          , P. Traverso (Eds.),
          <source>Proc. of AI*IA</source>
          <year>2018</year>
          , volume
          <volume>11298</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Temporalizing epistemic logic L-DINF</article-title>
          , in: R. Calegari,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ciatto</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Omicini (Eds.),
          <source>Proc. of the 37th Italian Conference on Computational Logic</source>
          , Bologna, Italy, June 29 - July 1,
          <year>2022</year>
          , volume
          <volume>3204</volume>
          <source>of CEUR Workshop Proc</source>
          .,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Koymans</surname>
          </string-name>
          ,
<article-title>Specifying real-time properties with metric temporal logic</article-title>
          ,
          <source>Real-Time Systems</source>
          <volume>2</volume>
          (
          <year>1990</year>
          )
          <fpage>255</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
<surname>Georgeff</surname>
          </string-name>
          ,
          <article-title>Modeling rational agents within a BDI architecture</article-title>
          ,
          <source>in: Proc. of the Second Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'91)</source>
          , Morgan Kaufmann,
          <year>1991</year>
          , pp.
          <fpage>473</fpage>
          -
          <lpage>484</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Ditmarsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. V. D.</given-names>
            <surname>Hoek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kooi</surname>
          </string-name>
          , Handbook of Epistemic Logic, College Publications,
          <year>2015</year>
          . Editors.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Weyhrauch</surname>
          </string-name>
          ,
          <article-title>Prolegomena to a theory of mechanized formal reasoning</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>13</volume>
          (
          <year>1980</year>
          )
          <fpage>133</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>Towards a logic of "inferable" for self-aware transparent logical agents</article-title>
          , in: C.
          <string-name>
            <surname>Musto</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Magazzeni</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ruggieri</surname>
          </string-name>
          , G. Semeraro (Eds.),
          <source>Proc. of the Italian Workshop on Explainable AI co-located with AI*IA</source>
          <year>2020</year>
          , volume
          <volume>2742</volume>
          <source>of CEUR Workshop Proc., CEUR-WS.org</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for multi-agent systems with budget and costs</article-title>
          , in: W. Faber, G. Friedrich,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          , M. Morak (Eds.),
          <source>Logics in Artificial Intelligence, Proc. of 17th European Conference, JELIA'21</source>
          , volume
          <volume>12678</volume>
          <source>of LNCS</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>