<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Logic of “Inferable” for Self-Aware Transparent Logical Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefania Costantini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Pitoni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DISIM, University of L'Aquila</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In Artificial Intelligence, multi-agent systems constitute an interesting approach to modeling societies, and have vast fields of application. Logic is often used to model such kinds of systems, as it provides explainability and verifiability. In this paper we address the cognitive aspects of autonomous systems, and we propose a particular logical framework, the Logic of “Inferable” (L-DINF), which introduces aspects of self-awareness, a property that is fundamental for reaching explainability. In fact, the proposed logic allows agents to reason about the actions that they are able to perform, about which logical inference chain allows each action to be performed, and in how many steps. We consider resource-bounded agents, which can execute an action only if they possess the necessary resources to do so.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Our approach is related to the dynamic logic of evidence-based beliefs of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which also uses a neighborhood semantics for the notion of evidence, to the sentential
approach to explicit beliefs and their dynamics of [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], to the dynamic theory of explicit
and implicit beliefs of [17], and to the dynamic logic of explicit beliefs and knowledge of
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        The logic of inference stems from Velázquez-Quesada [16] and from the logical system
DES4n of Duc [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which share a similar view with our logic. In particular,
Velázquez-Quesada shares with us the idea of modeling inference steps by means of dynamic
operators in the style of dynamic epistemic logic (DEL). However, he emphasizes
neither the concepts of explicit belief and background knowledge, nor issues related
to the executability of actions. The system discussed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] shares with our logic the
idea that an agent gets to know (or believe) something by performing inferences, and
that making inferences takes time. Nonetheless, while in our logic inferential operations
are represented both at the syntactic level, via dynamic operators in the DEL style,
and at the semantic level, as model-update operations, in Duc’s system and its formal
semantics they are not. In addition, we check whether an action can be performed
and how many steps are needed to perform it.
      </p>
      <p>
        Our constructive approach to explicit beliefs also distinguishes L-INF from active
logics [
        <xref ref-type="bibr" rid="ref7 ref8">7,8</xref>
        ], in which the basic semantics includes three components: (i) an agent’s
belief set, identifying all facts that the agent explicitly believes; (ii) an observation function,
identifying all facts that the agent observes at a given time point; and (iii) an inference
function, specifying what the agent should believe at the next time point on the basis of
the application of her inference rules to her belief set, given her actual observations.
There are important differences between active logics and our logic L-INF. Like active
logics, we provide models of reasoning based on the assumption that an agent has a
long-term memory and a short-term memory (or working memory). However, active
logics do not distinguish, as we do, between the notion of explicit belief and the
notion of background knowledge, conceived as different kinds of epistemic attitudes. In
our approach, the long-term memory is ‘stable’, in the sense that it keeps the agent’s
background knowledge, which is assumed not to change during the agent’s operation. The
short-term memory instead contains beliefs (facts and rules) that record either the
exogenous events that the agent has perceived or the new knowledge that it has learned through
its interactions with the external environment; it will also contain new beliefs that can
be inferred, either locally or on the basis of rules retrieved from the long-term memory.
Moreover, our logic accounts for a variety of inferential actions (or ‘mental operations’)
that have not been explored in the active-logic literature and are very important to
infer new beliefs. These actions correspond to basic operations of ‘mind-reading’ in the
sense of Theory of Mind (ToM) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], since they are mental and not physical ones, and
require an agent to be aware of its own actions’ executability conditions, which depend on the
available resources.
      </p>
      <p>
        For our logic we drew inspiration from the approach of self-aware
computing. Quoting [15]: “Self-aware and self-expressive computing describes an emerging
paradigm for systems and applications that proactively gather information; maintain
knowledge about their own internal states and environments; and then use this
knowledge to reason about behaviors, revise self-imposed goals, and self-adapt. … Systems
that gather unpredictable input data while responding and self-adapting in uncertain
environments are transforming our relationship with and use of computers.” Reporting
from [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]: “From an autonomous agent view, a self-aware system must have sensors,
effectors, memory (including representation of state), conflict detection and handling,
reasoning, learning, goal setting, and an explicit awareness of any assumptions. The
system should be reactive, deliberative, and reflective.”
      </p>
      <p>The introspective abilities of our agents, as described in the proposed logic, are
limited: an agent is aware of the actions it can execute, and of how many steps are needed
to reach an objective. Nonetheless, our logic constitutes a first step towards self-aware
agents.</p>
      <p>The paper is organized as follows. In Section 2 we introduce the syntax and the
semantics of our logic. In Section 3 we show the axiomatization and the proof of
soundness, while in Section 4 we show the proof of strong completeness. In Section 5 we propose
a brief discussion of future work and conclude.</p>
    </sec>
    <sec id="sec-2">
      <title>Logical framework</title>
      <p>L-DINF is a logic which consists of a static component and a dynamic one. The static
component, called L-INF, is a logic of explicit beliefs and background knowledge. The
dynamic component, called L-DINF, extends the static one with dynamic operators
capturing the consequences of the agents’ inferential operations on their explicit beliefs,
as well as a dynamic operator capturing what an agent can conclude by performing
some inferential operation in her repertoire.</p>
      <sec id="sec-2-1">
        <title>Syntax</title>
        <p>Assume a countable set of atomic propositions Atm = {p, q, …}. By Prop we denote
the set of all propositional formulas, i.e., the set of all Boolean formulas built out of the
set of atomic propositions Atm. A subset AtmA of the atomic propositions represents
the actions that an agent can perform.</p>
        <p>The language of L-DINF, denoted by L_L-DINF, is defined by the following
grammar in Backus-Naur form:</p>
        <p>α ::= ⊢(φ, ψ) | ∩(φ, ψ) | ↓(φ, ψ)</p>
        <p>φ, ψ ::= p | exec(α) | ¬φ | φ ∧ ψ | B φ | K φ | [α]φ | ◇φ | do(φ_A) | can_do(φ_A)</p>
        <p>where the language of inferential actions of type α is denoted by L_ACT (and a finite
set of inferential actions X stems from L_ACT), and p ranges over Atm. Notice the
expression do(φ_A), where it is required that φ_A ∈ AtmA. This expression indicates
actual execution of action φ_A. In fact, do is not axiomatized, as it is what has been
called in [18] a semantic attachment, i.e., a procedure which connects an agent with its
external environment in a way that is unknown at the logical level. can_do(φ_A), where
again φ_A ∈ AtmA, is a reserved syntax that, as seen later, will allow an agent to reason
about its own capabilities.</p>
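        <p>The grammar above maps naturally onto an abstract syntax tree. The following minimal Python sketch is our own illustrative encoding (class names such as Bel, Know, After and Can are not part of the logic) mirroring the two BNF productions:</p>
        <p>
```python
from dataclasses import dataclass
from typing import Union

# Formula constructors, one per production: Atom, Exec, Not, And, Bel,
# Know, After, Can, Do, CanDo stand for p, exec(α), ¬φ, φ∧ψ, Bφ, Kφ,
# [α]φ, ◇φ, do(φ_A), can_do(φ_A) respectively.

@dataclass(frozen=True)
class Atom:                 # p ∈ Atm (names in AtmA are action atoms)
    name: str

@dataclass(frozen=True)
class Exec:                 # exec(α)
    action: "Action"

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Bel:                  # B φ
    sub: "Formula"

@dataclass(frozen=True)
class Know:                 # K φ
    sub: "Formula"

@dataclass(frozen=True)
class After:                # [α]φ
    action: "Action"
    sub: "Formula"

@dataclass(frozen=True)
class Can:                  # ◇φ
    sub: "Formula"

@dataclass(frozen=True)
class Do:                   # do(φ_A)
    action_atom: str

@dataclass(frozen=True)
class CanDo:                # can_do(φ_A)
    action_atom: str

@dataclass(frozen=True)
class Action:               # inferential actions ⊢(φ,ψ), ∩(φ,ψ), ↓(φ,ψ)
    kind: str               # "wm_infer", "conj", or "ltm_retrieve"
    phi: "Formula"
    psi: "Formula"

Formula = Union[Atom, Exec, Not, And, Bel, Know, After, Can, Do, CanDo]

# Example formula: B p ∧ K(p → q), with p → q encoded as ¬(p ∧ ¬q).
p, q = Atom("p"), Atom("q")
impl = Not(And(p, Not(q)))
example = And(Bel(p), Know(impl))
```
        </p>
        <p>With frozen dataclasses, structurally equal formulas compare equal, which is convenient when collecting sets of believed formulas.</p>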
        <p>
          Obviously the static part, L-INF, has the same definition save for removing the
inferential actions. The other Boolean constructions are defined from p, ¬ and ∧ in the
standard way. The formula B φ is read “the agent explicitly believes that φ is true” or,
more shortly, “the agent believes that φ is true”. Explicit beliefs are accessible in the
working memory and are the basic elements of the agents’ reasoning process,
according to the logic of local reasoning by Fagin &amp; Halpern [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. An effect of this approach is
that agents cannot distinguish between logically equivalent formulas, i.e., if two facts
φ and ψ are logically equivalent and an agent explicitly believes that φ is true, then she
believes that ψ is true as well. There are other approaches, such as justification logics
[14], that do not have this feature.
        </p>
        <p>Unlike explicit beliefs, background knowledge is assumed to satisfy ‘omniscience’
principles like closure under conjunction and known implication, closure under logical
consequence and introspection. Specifically, K is nothing but the well-known S5
operator for knowledge widely used in computer science. The fact that background
knowledge is closed under logical consequence is justified by the fact that we conceive it as a
kind of deductively closed belief base. Specifically, we assume background knowledge
to include all facts that the agent has stored in her long-term memory (LTM), after
having processed them in her working memory (WM), as well as all logical consequences
of these facts.</p>
        <p>
          The formula [α]φ should be read “φ holds after the inferential operation (or
inferential action) α is performed by the agent”. Borrowing from and extending [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], we
include in the language of inferential actions L_ACT three types of inferential operations
α, which allow us to capture some of the dynamic properties of explicit beliefs and
background knowledge. The operations that we consider are in particular the
following: ⊢(φ, ψ), ∩(φ, ψ) and ↓(φ, ψ). These operations characterize the basic ways of
forming explicit beliefs via inference:
– ⊢(φ, ψ) is the inferential operation which consists in inferring ψ from φ in case φ is
believed and, according to the agent’s working memory, ψ is a logical consequence
of φ. With this operation we operate directly on the working memory, without
retrieving anything from the background knowledge.
– ↓(φ, ψ) is the inferential operation which consists in inferring ψ from φ in case φ
is believed and, according to the agent’s background knowledge, ψ is a logical
consequence of φ. In other words, by performing this inferential operation, an agent
tries to retrieve from her background knowledge in long-term memory the
information that φ implies ψ and, if she succeeds, she starts to believe ψ; we assume that,
differently from explicit beliefs, background knowledge is irrevocable in the sense
of being stable over time.
– ∩(φ, ψ) is the inferential operation which consists in closing the explicit belief that
φ and the explicit belief that ψ under conjunction. In other words, ∩(φ, ψ)
characterizes the inferential operation of deducing φ ∧ ψ from the explicit belief that φ
and the explicit belief that ψ.
        </p>
        <p>Remark 1. In the mental actions ⊢(φ, ψ) and ↓(φ, ψ), the formula ψ which is inferred
and asserted as a new belief can be do(φ_A), which denotes the actual execution of action
φ_A. In fact, we assume that when inferring do(φ_A) the action is actually executed, and
the corresponding belief asserted, possibly augmented with a time-stamp. Actions are
supposed to succeed by default; in case of failure, a corresponding failure event will be
perceived by the agent. Therefore, the occurrence of physical actions (performed via
the procedure do) is recorded through the formation of new beliefs.</p>
        <p>The atomic formulas exec(α) have to be read “α is an inferential action that the
agent can perform”, where α is one of the above operations.</p>
        <p>Remark 2. Specifying the executability of actions pertains to the fact that we intend to
consider agents with limited resources. Specific instances of an operation φ OPi ψ, where one of the
two formulas is an action, may or may not be executable depending upon the agent
having the necessary resources for performing that action. Executability is defined for
inferential actions because such actions, if executable, may enact ‘physical’ actions.
Remark 3. Explainability in our approach can be directly obtained from proofs. If, for
instance, the user asks for an explanation of why the action φ_A has been performed, the
system can respond by exhibiting the proof that has led to φ_A, of the form
(exec(φ1 OP1 ψ1) ∧ (φ1 OP1 ψ1)) ∧ … ∧ (exec(φn OPn φ_A) ∧ (φn OPn do(φ_A)))
where each OPi is one of the (mental) actions discussed above. The proof can possibly
be translated into natural language, and declined either top-down or bottom-up.</p>
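        <p>The natural-language rendering mentioned in Remark 3 can be sketched very simply. The following Python fragment is our own illustration (the tuple format for proof steps and the wording are assumptions, not part of the paper); it turns a chain of executed mental operations into a top-down textual explanation:</p>
        <p>
```python
# Render the explanation chain of Remark 3 in natural language: each
# step is a tuple (op_name, phi, psi), assumed to have been executable.

def explain(chain, action_atom):
    """Return a top-down textual explanation ending with do(action_atom)."""
    lines = []
    for i, (op, phi, psi) in enumerate(chain, start=1):
        lines.append(f"step {i}: {op}({phi},{psi}) was executable "
                     f"and inferred '{psi}' from '{phi}'")
    lines.append(f"hence do({action_atom}) was inferred and {action_atom} "
                 f"was performed")
    return "\n".join(lines)

# Hypothetical one-step chain: a rule retrieval licensing an action.
report = explain([("retrieve", "at_door", "can_do(open_door)")], "open_door")
```
        </p>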
        <p>Finally, the formula ◇φ has to be read “the agent can ensure φ by executing some
inferential action in her repertoire”. The interesting aspect of our language is that, at
least in perspective (as the formalization is not complete yet), it allows us to express the
concept of “being able to infer φ in k inferential steps”. Specifically, let us inductively
define: ◇⁰φ = φ, ◇ᵏ⁺¹φ = ◇◇ᵏφ. The formula ◇ᵏB φ represents the fact that the agent
is capable of inferring φ in k steps.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Semantics</title>
        <p>The main semantic notions are outlined in the following definition of L-INF model,
which provides the basic components for the interpretation of the static version of the
logic:
Definition 1. We define a model to be a tuple M = (W, N, R, E, V) where:
– W is a set of worlds or situations;
– R ⊆ W × W is an equivalence relation on W;
– N : W → 2^(2^W) is a neighborhood function such that, for all w, v ∈ W
and X ⊆ W:
(C1) if X ∈ N(w) then X ⊆ R(w),
(C2) if wRv then N(w) = N(v);
– E : W → 2^L_ACT is an executability function such that, for all w, v ∈ W:
(D1) if wRv then E(w) = E(v);
– V : W → 2^Atm is a valuation function.</p>
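        <p>Definition 1 can be checked mechanically on finite models. The following Python sketch is our own illustrative encoding (worlds as strings, facts as frozensets of worlds) of an L-INF model together with explicit checks of constraints (C1), (C2) and (D1):</p>
        <p>
```python
# A toy L-INF model per Definition 1, with explicit constraint checks.

class LINFModel:
    def __init__(self, W, R, N, E, V):
        self.W, self.R, self.N, self.E, self.V = W, R, N, E, V

    def Rset(self, w):
        # R(w) = {v in W | wRv}
        return {v for v in self.W if (w, v) in self.R}

    def check_constraints(self):
        for w in self.W:
            for X in self.N[w]:
                if not set(X) <= self.Rset(w):      # (C1): X ⊆ R(w)
                    return False
            for v in self.Rset(w):
                if self.N[w] != self.N[v]:          # (C2): same beliefs
                    return False
                if self.E[w] != self.E[v]:          # (D1): same executability
                    return False
        return True

W = {"w1", "w2"}
R = {(a, b) for a in W for b in W}                  # an equivalence relation
fact = frozenset({"w1", "w2"})                      # one believed fact
N = {"w1": {fact}, "w2": {fact}}
E = {"w1": {"conj"}, "w2": {"conj"}}
V = {"w1": {"p"}, "w2": set()}

ok = LINFModel(W, R, N, E, V).check_constraints()
# Dropping the fact at w2 breaks (C2):
bad = LINFModel(W, R, {"w1": {fact}, "w2": set()}, E, V).check_constraints()
```
        </p>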
        <p>For every w ∈ W, R(w) = {v ∈ W | wRv} identifies the set of situations that the
agent considers possible at world w. In cognitive terms, R(w) can be conceived as the
set of all situations that the agent can retrieve from her long-term memory and reason
about. More generally, R(w) is called the agent’s epistemic state at w.</p>
        <p>For every w ∈ W, N(w) defines the set of all facts that the agent explicitly believes
at world w, a fact being identified with a set of worlds. More precisely, if A ∈ N(w)
then, at world w, the agent has the fact A under the focus of her attention and believes
it. N(w) is called the agent’s explicit belief set at world w.</p>
        <p>E(w) is the set of mental operations that the agent can execute at w, as it has the
resources to do so.</p>
        <p>Constraint (C1) just means that an agent can have explicitly in her mind only facts
which are compatible with her current epistemic state. According to constraint (C2), if
world v is compatible with the agent’s epistemic state at world w, then the agent should
have the same explicit beliefs at w and v. Constraint (D1) means that an agent always
knows which actions she can perform and which she cannot.</p>
        <p>Truth conditions of L-DINF formulas are inductively defined as follows: for a model
M = (W, N, R, E, V), a world w ∈ W, a formula φ ∈ L_L-INF, and an action α, we
define the truth relation M, w ⊨ φ and a new model M^α by simultaneous recursion on
α and φ. Below, we write</p>
        <p>||φ||_w^M = {v ∈ W : wRv and M, v ⊨ φ}</p>
        <p>whenever M, v ⊨ φ is well-defined. Then, we set:
– M, w ⊨ p iff p ∈ V(w)
– M, w ⊨ exec(α) iff α ∈ E(w)
– M, w ⊨ ¬φ iff M, w ⊭ φ
– M, w ⊨ φ ∧ ψ iff M, w ⊨ φ and M, w ⊨ ψ
– M, w ⊨ B φ iff ||φ||_w^M ∈ N(w)
– M, w ⊨ K φ iff M, v ⊨ φ for all v ∈ R(w)
– M, w ⊨ [α]φ iff M^α, w ⊨ φ
– M, w ⊨ ◇φ iff there exists α ∈ E(w) such that M^α, w ⊨ φ
where M^α = (W, N^α, R, E, V) and N^α is defined as follows.</p>
        <p>We write ⊨_L-DINF φ to denote that φ is true in all worlds w of every L-DINF
model M.</p>
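        <p>The static truth conditions above (atoms, exec, negation, conjunction, B and K) can be implemented directly as a recursive model checker over finite models. The following Python sketch uses our own dict-based encoding (formulas as nested tuples), which is an illustrative assumption rather than the paper's notation:</p>
        <p>
```python
# A model checker for the static truth conditions of L-INF.
# Formulas: ("atom", p), ("exec", a), ("not", f), ("and", f, g),
# ("B", f), ("K", f).

def truth_set(M, w, phi):
    """||phi||_w = {v : wRv and M,v |= phi}"""
    return frozenset(v for v in M["W"]
                     if (w, v) in M["R"] and holds(M, v, phi))

def holds(M, w, phi):
    op = phi[0]
    if op == "atom":
        return phi[1] in M["V"][w]
    if op == "exec":
        return phi[1] in M["E"][w]
    if op == "not":
        return not holds(M, w, phi[1])
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "B":                      # ||phi||_w in N(w)
        return truth_set(M, w, phi[1]) in M["N"][w]
    if op == "K":                      # phi at every R-successor
        return all(holds(M, v, phi[1])
                   for v in M["W"] if (w, v) in M["R"])
    raise ValueError(op)

W = {"w1", "w2"}
M = {"W": W,
     "R": {(a, b) for a in W for b in W},
     "N": {w: {frozenset({"w1"})} for w in W},   # the fact {w1} is believed
     "E": {w: set() for w in W},
     "V": {"w1": {"p"}, "w2": set()}}

# p holds only at w1, so ||p||_w1 = {w1} ∈ N(w1): B p holds at w1,
# while K p fails because p is false at w2.
believes_p = holds(M, "w1", ("B", ("atom", "p")))
knows_p = holds(M, "w1", ("K", ("atom", "p")))
```
        </p>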
        <p>For w ∈ W, set:
N^↓(φ,ψ)(w) = N(w) ∪ {||ψ||_w^M} if M, w ⊨ B φ ∧ K(φ → ψ), and N^↓(φ,ψ)(w) = N(w) otherwise;
N^∩(φ,ψ)(w) = N(w) ∪ {||φ ∧ ψ||_w^M} if M, w ⊨ B φ ∧ B ψ, and N^∩(φ,ψ)(w) = N(w) otherwise;
N^⊢(φ,ψ)(w) = N(w) ∪ {||ψ||_w^M} if M, w ⊨ B φ ∧ B(φ → ψ), and N^⊢(φ,ψ)(w) = N(w) otherwise.</p>
        <p>Property 1. As a consequence of the previous definitions we have the following:
– ⊨_L-INF (K(φ → ψ) ∧ B φ) → [↓(φ,ψ)] B ψ. Namely, if an agent has φ as one of its beliefs and has K(φ → ψ) in its background
knowledge, then as a consequence of the action ↓(φ,ψ) the agent starts believing ψ;
– ⊨_L-INF (B φ ∧ B ψ) → [∩(φ,ψ)] B(φ ∧ ψ). Namely, if an agent has φ and ψ as beliefs, then as a consequence of the action
∩(φ,ψ) the agent starts believing φ ∧ ψ;
– ⊨_L-INF (B(φ → ψ) ∧ B φ) → [⊢(φ,ψ)] B ψ. Namely, if an agent has φ as one
of its beliefs and has B(φ → ψ) in its working memory, then as a consequence of
the action ⊢(φ,ψ) the agent starts believing ψ.</p>
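        <p>The belief updates N^α and the first item of Property 1 can be exercised on a small finite model. The following Python sketch is our own illustrative encoding (formulas as nested tuples; operation names ltm_retrieve, conj, wm_infer stand for ↓, ∩, ⊢) and is a toy check, not part of the paper's formal development:</p>
        <p>
```python
# N^alpha updates for the three mental operations, on top of a tiny
# evaluator for atoms, and, ->, B and K; then a check of the first
# item of Property 1: (K(p -> q) ∧ B p) → [↓(p,q)] B q.

def sat(M, w, phi):
    op = phi[0]
    if op == "atom": return phi[1] in M["V"][w]
    if op == "and":  return sat(M, w, phi[1]) and sat(M, w, phi[2])
    if op == "impl": return (not sat(M, w, phi[1])) or sat(M, w, phi[2])
    if op == "B":
        ts = frozenset(v for v in M["W"]
                       if (w, v) in M["R"] and sat(M, v, phi[1]))
        return ts in M["N"][w]
    if op == "K":
        return all(sat(M, v, phi[1]) for v in M["W"] if (w, v) in M["R"])
    raise ValueError(op)

def update(M, w, kind, phi, psi):
    """Return N^alpha(w) for alpha in {ltm_retrieve, conj, wm_infer}."""
    def ts(f):
        return frozenset(v for v in M["W"]
                         if (w, v) in M["R"] and sat(M, v, f))
    cond = {"ltm_retrieve": ("and", ("B", phi), ("K", ("impl", phi, psi))),
            "wm_infer":     ("and", ("B", phi), ("B", ("impl", phi, psi))),
            "conj":         ("and", ("B", phi), ("B", psi))}[kind]
    new = psi if kind != "conj" else ("and", phi, psi)
    return M["N"][w] | {ts(new)} if sat(M, w, cond) else M["N"][w]

W = {"u", "v", "z"}
p, q = ("atom", "p"), ("atom", "q")
M = {"W": W, "R": {(a, b) for a in W for b in W},
     "N": {w: {frozenset({"u"})} for w in W},    # ||p|| = {u} is believed
     "V": {"u": {"p", "q"}, "v": {"q"}, "z": set()}}

# Precondition holds at u: B p, and K(p -> q) (q holds wherever p does).
pre = sat(M, "u", ("and", ("B", p), ("K", ("impl", p, q))))
n2 = update(M, "u", "ltm_retrieve", p, q)        # adds ||q|| = {u, v}
M2 = {**M, "N": {w: n2 for w in W}}              # keep (C2): same N everywhere
post = sat(M2, "u", ("B", q))                    # the agent now believes q
```
        </p>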
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Axiomatization</title>
      <p>The L-INF and L-DINF axioms are:
1. (K φ ∧ K(φ → ψ)) → K ψ;
2. K φ → φ;
3. ¬K(φ ∧ ¬φ);
4. K φ → K K φ;
5. ¬K φ → K ¬K φ;
6. (B φ ∧ K(φ ↔ ψ)) → B ψ;
7. B φ → K B φ;
8. K φ → B φ;
9. [α]p ↔ p;
10. [α]¬φ ↔ ¬[α]φ;
11. (exec(α) ∧ [α]φ) → ◇φ;
12. exec(α) → K(exec(α));
13. [α](φ ∧ ψ) ↔ ([α]φ ∧ [α]ψ);
14. [α]K φ ↔ K([α]φ);
15. [↓(φ,ψ)]B χ ↔ B([↓(φ,ψ)]χ) ∨ ((B φ ∧ K(φ → ψ)) ∧ K([↓(φ,ψ)]χ ↔ ψ));
16. [∩(φ,ψ)]B χ ↔ B([∩(φ,ψ)]χ) ∨ ((B φ ∧ B ψ) ∧ K([∩(φ,ψ)]χ ↔ (φ ∧ ψ)));
17. [⊢(φ,ψ)]B χ ↔ B([⊢(φ,ψ)]χ) ∨ ((B φ ∧ B(φ → ψ)) ∧ B([⊢(φ,ψ)]χ ↔ ψ));
18. φ ↔ φ[ψ/χ], provided that ψ ↔ χ is a theorem;</p>
      <p>
19. ◇p → p;
20. ◇(φ ∧ ψ) → (◇φ ∧ ◇ψ);
21. φ → ◇φ;
22. ◇B φ → B ◇φ;
23. ◇K φ → K ◇φ;
24. [α]φ → ◇¹φ;
25. [α₁]([α₂]φ) → ◇²φ;
26. [α₁]([α₂]([α₃]φ)) → ◇³φ.
We write ⊢_L-DINF φ to signify that φ is a theorem of L-DINF. Thanks to the
previous axioms, L-INF and L-DINF are sound for the class of L-INF models.
      </p>
      <p>
        This holds with the exception of the last axioms concerning ◇ᵏ which, in this formal setting,
cannot be generalized, as we do not have the equivalent of a “transitive closure”.
Therefore, in the present version one has to establish the maximum number of execution steps
to be considered (in the above formulation, just three steps) and write the
corresponding axioms. These axioms are however of great practical utility for a self-aware agent:
if the agent is able to prove ◇ᵏφ, where φ = can_do(φ_A), then it becomes aware of
being able to perform a certain action in k steps. As seen in the axioms, ◇ᵏ can_do(φ_A)
is proved by finding a sequence of mental actions [α₁], …, [αₖ] which leads to the
result. Therefore, the agent can possibly enact the analogous sequence (which, notice, is
the equivalent of a plan), where can_do(φ_A) is substituted by do(φ_A), so that action
φ_A is eventually performed.
      </p>
      <p>This holds on condition that all the [αᵢ] are executable in the
agent’s present state.</p>
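      <p>The plan-extraction idea above, proving ◇ᵏ can_do(φ_A) by finding a sequence of k mental actions and then enacting it, can be illustrated as a bounded breadth-first search. The following Python sketch is our own illustration under simplifying assumptions (beliefs are plain strings, every listed action is executable, and each action adds exactly one belief):</p>
      <p>
```python
from collections import deque

# Bounded breadth-first search for a sequence of at most k mental
# actions whose execution makes the goal belief (e.g. can_do(A)) held,
# mirroring the idea that a proof of ◇^k can_do(A) yields a plan.
# Each action is a tuple (name, preconditions, added_belief).

def plan(actions, beliefs, goal, k):
    """Return a list of action names reaching `goal` in <= k steps, or None."""
    queue = deque([(frozenset(beliefs), [])])
    seen = {frozenset(beliefs)}
    while queue:
        bs, path = queue.popleft()
        if goal in bs:
            return path
        if len(path) == k:          # depth bound: at most k steps
            continue
        for name, pre, add in actions:
            if set(pre) <= bs:      # action applicable in this state
                nb = bs | {add}
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, path + [name]))
    return None

# Hypothetical two-step scenario: two rule retrievals license open_door.
actions = [("retrieve_rule", {"at_door"}, "key_needed"),
           ("retrieve_rule2", {"key_needed"}, "can_do(open_door)")]
steps = plan(actions, {"at_door"}, "can_do(open_door)", k=3)
# steps == ["retrieve_rule", "retrieve_rule2"]
```
      </p>
      <p>Substituting can_do(open_door) by do(open_door) in the found sequence corresponds to enacting the plan, as discussed above.</p>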
    </sec>
    <sec id="sec-4">
      <title>Strong Completeness</title>
      <p>For the proof that L-INF is strongly complete we limit ourselves to the ◇-free fragment,
as the ◇ operator is still an experimental feature that we aim to better formalize in
future work. We can achieve the proof by means of a standard canonical-model argument.
Definition 2. The canonical L-INF model is a tuple Mc = (Wc, Nc, Rc, Vc, Ec) where:
– Wc is the set of all maximal consistent subsets of L_L-INF;
– for every w, v ∈ Wc, wRcv if and only if, for every φ, K φ ∈ w iff K φ ∈ v; i.e., Rc is an
equivalence relation induced by knowledge; as before, we define Rc(w) = {v ∈ Wc |
wRcv};
– for w ∈ Wc and φ ∈ L_L-INF we define Aφ(w) = {v ∈ Rc(w) | φ ∈ v}; then we
put Nc(w) = {Aφ(w) | B φ ∈ w};
– Vc is a valuation function defined as before;
– Ec is the executability function defined as before.</p>
      <p>There are the following immediate consequences of the above definition: if w ∈ Wc
then
– given φ ∈ L_L-INF, it holds that K φ ∈ w if and only if, for all v ∈ Wc such that wRcv,
we have φ ∈ v;
– for φ ∈ L_L-INF, if B φ ∈ w and wRcv then B φ ∈ v;
– for α ∈ Ec(w), if wRcv then α ∈ Ec(v).</p>
      <p>Thus, Rc-related worlds have the same knowledge and, by the above consequences, also the
same beliefs. The following results hold:
Lemma 1. For all w ∈ Wc and B φ, B ψ ∈ L_L-INF, if B φ ∈ w but B ψ ∉ w, then
there exists v ∈ Rc(w) such that exactly one of φ and ψ belongs to v.
Proof. Let w ∈ Wc and φ, ψ be such that B φ ∈ w and B ψ ∉ w. Assume now that
for every v ∈ Rc(w) we have either φ ∈ v and ψ ∈ v, or φ ∉ v and ψ ∉ v; then, from the previous
statements it follows that K(φ ↔ ψ) ∈ w, so that by axiom 6 in Section 3 we get B ψ ∈ w, which is
a contradiction.</p>
      <p>Lemma 2. For all φ ∈ L_L-INF and w ∈ Wc, it holds that φ ∈ w if and only if
Mc, w ⊨ φ.</p>
      <p>Proof. We have to prove the statement for all φ ∈ L_L-INF.</p>
      <p>– φ = p: for w ∈ Wc, if p ∈ w then p ∈ Vc(w), so by the truth conditions in Definition
1 we have Mc, w ⊨ p; for the opposite implication we proceed by
the same reasoning.
– φ = exec(α): for w ∈ Wc, if exec(α) ∈ w then α ∈ Ec(w), so by the truth conditions
in Definition 1 we have Mc, w ⊨ exec(α).
– All the other cases have similar proofs except φ = B ψ. Assume B ψ ∈ w and
w ∈ Wc. By the induction hypothesis,
Aψ(w) = {v ∈ Rc(w) | ψ ∈ v} = ||ψ||_w^Mc ∩ Rc(w).
Since Nc(w) = {Aχ(w) | B χ ∈ w}, from B ψ ∈ w it follows that
||ψ||_w^Mc ∩ Rc(w) ∈ Nc(w), and by Definition 1 we get Mc, w ⊨ B ψ.
Suppose instead B ψ ∉ w, i.e., ¬B ψ ∈ w; we have to prove ||ψ||_w^Mc ∩ Rc(w) ∉ Nc(w).
Choose A ∈ Nc(w); by definition we know A = Aχ(w) for some χ with B χ ∈ w.
By Lemma 1 there is some v ∈ Rc(w) such that exactly one of ψ and χ belongs to v,
so, by the induction hypothesis, either v ∈ (||ψ||_w^Mc ∩ Rc(w)) \ Aχ(w)
or v ∈ Aχ(w) \ (||ψ||_w^Mc ∩ Rc(w)).
In both cases Aχ(w) ≠ ||ψ||_w^Mc ∩ Rc(w) and, since A = Aχ(w) was
arbitrary in Nc(w), we conclude that ||ψ||_w^Mc ∩ Rc(w) ∉ Nc(w), and so
Mc, w ⊭ B ψ.</p>
      <p>Lemma 3. For all φ ∈ L_L-DINF there exists φ̃ ∈ L_L-INF such that ⊢_L-DINF φ ↔ φ̃
(for any L-DINF formula we can find an equivalent L-INF formula).</p>
      <p>Proof. We show the proof only for subformulas of the form [α]p, with p occurring in φ,
because the other cases are proved analogously. By the axioms in Section 3 we have
[α]p ↔ p, and by axiom 18 we can replace: ⊢_L-DINF φ ↔ φ[[α]p / p].
The previous lemmas allow us to prove the following theorems.</p>
      <p>Theorem 1. L-INF is strongly complete for the class of L-INF models.
Proof. Any consistent set of formulas Γ may be extended to a maximal consistent set
w* ∈ Wc, and Mc, w* ⊨ Γ by Lemma 2. Then, L-INF is strongly complete for the
class of L-INF models.</p>
      <p>Theorem 2. L-DINF (without ◇) is strongly complete for the class of L-INF models.
Proof. If K is a consistent set of L_L-DINF formulas then K̃ = {φ̃ | φ ∈ K} is a
consistent set of L_L-INF formulas by Lemma 3. By Theorem 1 there is a model Mc
with a world w such that Mc, w ⊨ K̃. But since L-DINF is sound and, for each φ ∈ K,
⊢_L-DINF φ ↔ φ̃, it follows that Mc, w ⊨ K; then L-DINF is strongly complete for the
class of L-INF models.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>In this paper we discussed some cognitive aspects of autonomous systems, concerning
execution steps and executability. To model these aspects we have proposed the modal
logic L-INF, and we have proved some useful properties, among which strong
completeness, though under significant restrictions.</p>
      <p>
        In future work we mean to extend our logic to the multi-agent case and to prove,
without restrictions, that L-INF is strongly complete. We also intend to introduce the concept of budget, as a value
that represents how much an agent can spend of its own resources in a world w. This
is important to better represent the fact that the agent is resource-bounded. Moreover,
we have to assess the complexity and extend the set of inferential actions that we
consider. Also, the temporal aspects of an agent’s operation have not been considered
here, but we tackled these aspects in [
        <xref ref-type="bibr" rid="ref5">5,13</xref>
        ]. We intend to merge the two approaches, so
as to obtain a comprehensive framework.
      </p>
      <p>13. Pitoni, V.: Memory Management in Resource-Bounded Agents. In: Proceedings 35th
International Conference on Logic Programming (Technical Communications), ICLP 2019.
EPTCS, Volume 306 (2019) 452–460
14. Savic, N., Studer, T.: Relevant justification logic. FLAP 6(2), 397–412 (2019)
15. Tørresen, J., Plessl, C., Yao, X.: Self-aware and self-expressive systems. IEEE Computer
48(7) (2015) 18–20
16. Velázquez-Quesada, F.R.: Explicit and implicit knowledge in neighbourhood models. In:
Grossi, D., Roy, O., Huang, H. (eds.) Logic, Rationality, and Interaction - 4th International
Workshop, LORI 2013, Hangzhou, China, October 9-12, 2013, Proceedings. pp. 239–252.
Springer (2013). https://doi.org/10.1007/978-3-642-40948-6-19
17. Velázquez-Quesada, F.R.: Dynamic epistemic logic for implicit and explicit beliefs. Journal
of Logic, Language and Information 23(2), 107–140 (2014).
https://doi.org/10.1007/s10849-014-9193-0
18. Weyhrauch, R.W.: Prolegomena to a theory of mechanized formal reasoning. Artificial
Intelligence 13(1-2) (1980) 133–170</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Amir</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chaudhri</surname>
            ,
            <given-names>V.K.</given-names>
          </string-name>
          :
          <source>Report on DARPA workshop on self aware computer systems</source>
          .
          <source>Technical Report</source>
          , SRI International Menlo Park United States (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Balbiani</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duque</surname>
            ,
            <given-names>D.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorini</surname>
          </string-name>
          , E.:
          <article-title>A logical theory of belief dynamics for resourcebounded agents</article-title>
          .
          <source>In: Proceedings of the 2016 International Conference on Autonomous Agents &amp; Multiagent Systems</source>
          ,
          <string-name>
            <surname>AAMAS</surname>
          </string-name>
          <year>2016</year>
          . pp.
          <fpage>644</fpage>
          -
          <lpage>652</lpage>
          . ACM (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Balbiani</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fernández-Duque</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorini</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>The dynamics of epistemic attitudes in resource-bounded agents</article-title>
          .
          <source>Studia Logica</source>
          <volume>107</volume>
          (
          <issue>3</issue>
          ),
          <fpage>457</fpage>
          -
          <lpage>488</lpage>
          (
          <year>2019</year>
          ). https://doi.org/10.1007/s11225-018-9798-4
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. van Benthem,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Pacuit</surname>
          </string-name>
          , E.:
          <article-title>Dynamic logics of evidence-based beliefs</article-title>
          .
          <source>Studia Logica</source>
          <volume>99</volume>
          (
          <issue>1- 3</issue>
          ),
          <fpage>61</fpage>
          -
          <lpage>92</lpage>
          (
          <year>2011</year>
          ). https://doi.org/10.1007/s11225-011-9347-x, https://doi.org/10. 1007/s11225-011-9347-x
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Pitoni</surname>
          </string-name>
          , V.:
          <article-title>A Temporal Module for Logical Frameworks</article-title>
          .
          <source>In: Proceedings 35th International Conference on Logic Programming (Technical Communications)</source>
          ,
          <source>ICLP 2019. EPTCS</source>
          , Volume
          <volume>306</volume>
          (
          <year>2019</year>
          )
          <fpage>240</fpage>
          -
          <lpage>346</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Duc</surname>
            ,
            <given-names>H.N.</given-names>
          </string-name>
          :
          <article-title>Reasoning about rational, but not logically omniscient, agents</article-title>
          .
          <source>J. Log. Comput</source>
          .
          <volume>7</volume>
          (
          <issue>5</issue>
          ),
          <fpage>633</fpage>
          -
          <lpage>648</lpage>
          (
          <year>1997</year>
          ). https://doi.org/10.1093/logcom/7.5.633
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Elgot-Drapkin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nirkhe</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Active logics: A unified formal approach to episodic reasoning</article-title>
          (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Elgot-Drapkin</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Life on a desert island: Ongoing work on realtime reasoning</article-title>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Margolis</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Samuels</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stich</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
          (eds.):
          <source>The Oxford Handbook of Philosophy of Cognitive Science</source>
          . Oxford University Press (
          <year>2012</year>
          ). https://doi.org/10.1093/oxfordhb/9780195309799.001.0001
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halpern</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          :
          <article-title>Belief, awareness, and limited reasoning</article-title>
          .
          <source>Artif. Intell</source>
          .
          <volume>34</volume>
          (
          <issue>1</issue>
          ),
          <fpage>39</fpage>
          -
          <lpage>76</lpage>
          (
          <year>1987</year>
          ). https://doi.org/10.1016/0004-3702(87)90003-8
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>A.I.</given-names>
          </string-name>
          , et al.:
          <article-title>Theory of mind</article-title>
          .
          <source>The Oxford handbook of philosophy of cognitive science 1</source>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Jago</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Epistemic logic for rule-based agents</article-title>
          .
          <source>Journal of Logic, Language and Information</source>
          <volume>18</volume>
          (
          <issue>1</issue>
          ),
          <fpage>131</fpage>
          -
          <lpage>158</lpage>
          (
          <year>2009</year>
          ). https://doi.org/10.1007/s10849-008-9071-8
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>