Towards a Logic of “Inferable” for Self-Aware Transparent Logical Agents

Stefania Costantini and Valentina Pitoni
DISIM, University of L’Aquila

Abstract. In Artificial Intelligence, multi-agent systems constitute an interesting typology of society modeling and have vast fields of application. Logic is often used to model such kinds of systems, as it provides explainability and verifiability. In this paper¹ we discuss the cognitive aspects of autonomous systems, and we propose a particular logical framework, the Logic of “Inferable” (L-DINF), which introduces aspects of self-awareness, a capability that is fundamental for reaching explainability. In fact, the proposed logic allows agents to reason about which actions they are able to perform, which logical inference chain allows each action to be performed, and in how many steps. We consider resource-bounded agents, which can execute an action only if they possess the necessary resources to do so.

¹ Copyright © 2020 for this paper by Stefania Costantini and Valentina Pitoni. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 Introduction

According to the Oxford Handbook of Philosophy of Cognitive Science [9], Chapter by Alvin I. Goldman, “‘Theory of Mind’ refers to the cognitive capacity to attribute mental states to self and others. Other names for the same capacity include ‘commonsense psychology’, ‘naïve psychology’, ‘folk psychology’, ‘mindreading’ and ‘mentalizing’. . . . In cognitive science the core question in this terrain is: How do people execute this cognitive capacity? How do they, or their cognitive systems, go about the task of forming beliefs or judgments about others’ mental states, states that aren’t directly observable? Less frequently discussed in psychology is the question of how people self-ascribe mental states.”

In the literature we can find different kinds of logical frameworks which try to emulate cognitive aspects of human beings. In this paper we propose a particular logical framework, the Logic of “Inferable” (L-DINF), in which we consider the concepts of step and of executability of an agent’s actions.

Logics of ‘awareness’ have been studied in recent years starting from the seminal work of Fagin & Halpern [10]. These logics distinguish between awareness, implicit belief and explicit belief. The crucial difference between our logic and existing logics of awareness is that the latter make no use of concepts such as ‘reasoning’ or ‘inference’. Instead, L-DINF provides a constructive theory of explicit beliefs, as it accounts for the perceptive and inferential steps leading from an agent’s knowledge and beliefs to new beliefs; moreover, we consider two other important aspects: “steps” and “executability”.

The aspects related to epistemic attitudes are something our theory shares with other approaches in the literature, including the dynamic theory of evidence-based beliefs by van Benthem and Pacuit [4], which also uses a neighborhood semantics for the notion of evidence, the sentential approach to explicit beliefs and their dynamics by [12], the dynamic theory of explicit and implicit beliefs by [17], and the dynamic logic of explicit beliefs and knowledge by [3]. The logic of inference stems from Velázquez-Quesada [16] and the logical system DES4n by Duc [6], which share a similar view with our logic. In particular, Velázquez-Quesada shares with us the idea of modeling inference steps by means of dynamic operators in the style of dynamic epistemic logic (DEL).
However, he emphasizes neither the concepts of explicit belief and background knowledge, nor issues related to the executability of actions. The system discussed in [6] shares with our logic the idea that an agent gets to know (or believe) something by performing inferences, and that making inferences takes time. Nonetheless, while in our logic inferential operations are represented both at the syntactic level, via dynamic operators in the DEL style, and at the semantic level, as model-update operations, in Duc’s system and its formal semantics they are not. In addition, we check whether an action can be performed or not, and how many steps are needed to perform it.

Our constructive approach to explicit beliefs also distinguishes L-INF from Active logics [7,8], in which the basic semantics includes three components: (i) an agent’s belief set, identifying all facts that the agent explicitly believes; (ii) an observation function, identifying all facts that the agent observes at a given time point; and (iii) an inference function, specifying what the agent should believe at the next time point on the basis of the application of her inference rules to her belief set, given her actual observations. There are important differences between active logics and our logic L-INF. Like active logics, we provide models of reasoning based on the assumption that an agent has a long-term memory and a short-term memory (or working memory). However, active logics do not distinguish, as we do, between the notion of explicit belief and the notion of background knowledge, conceived as different kinds of epistemic attitudes. In our approach, the long-term memory is ‘stable’, in the sense that it keeps the agent’s background knowledge, which is assumed not to change during the agent’s operation. The short-term memory instead contains beliefs (facts and rules) that either record the exogenous events that the agent has perceived or the new knowledge that it has learned through its interactions with the external environment; it also contains the new beliefs that can be inferred, either locally or on the basis of rules retrieved from the long-term memory. In fact, our logic accounts for a variety of inferential actions (or ‘mental operations’) that have not been explored in the active logic literature and are very important for inferring new beliefs. These actions correspond to basic operations of ‘mind-reading’ in the sense of Theory of Mind (ToM) [11], since they are mental rather than physical actions, and they require an agent to be aware of its own actions’ executability conditions, related to the available resources.

For our logic we drew inspiration from the approach of self-aware computing. Quoting [15]: “Self-aware and self-expressive computing describes an emerging paradigm for systems and applications that proactively gather information; maintain knowledge about their own internal states and environments; and then use this knowledge to reason about behaviors, revise self-imposed goals, and self-adapt. . . . Systems that gather unpredictable input data while responding and self-adapting in uncertain environments are transforming our relationship with and use of computers.” Reporting from [1]: “From an autonomous agent view, a self-aware system must have sensors, effectors, memory (including representation of state), conflict detection and handling, reasoning, learning, goal setting, and an explicit awareness of any assumptions. The system should be reactive, deliberative, and reflective.”
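Returning to the two-store memory organization described above (a stable long-term store for background knowledge, and a working memory that collects perceived events, learned items and locally inferred beliefs), the following is a minimal sketch of ours, purely illustrative and not part of the logic or of any cited system; all class and method names are assumptions of this sketch.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LongTermMemory:
    """Background knowledge: stable, not revised during operation."""
    facts: frozenset
    rules: frozenset  # pairs (premise, conclusion), read as premise -> conclusion

@dataclass
class WorkingMemory:
    """Explicit beliefs: perceived events, learned items, inferred facts."""
    beliefs: set = field(default_factory=set)

    def perceive(self, event):
        """Record an exogenous event as a new explicit belief."""
        self.beliefs.add(event)

    def infer_from_ltm(self, ltm):
        """One pass of 'retrieve a rule from LTM, then conclude' steps."""
        for premise, conclusion in ltm.rules:
            if premise in self.beliefs:
                self.beliefs.add(conclusion)

# Example: the LTM rule (alarm -> danger) fires once 'alarm' is perceived.
ltm = LongTermMemory(frozenset({"day"}), frozenset({("alarm", "danger")}))
wm = WorkingMemory()
wm.perceive("alarm")
wm.infer_from_ltm(ltm)
assert "danger" in wm.beliefs
```

Note how the long-term store is immutable while the working memory grows monotonically under perception and inference, mirroring the stability assumption made above.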
The introspective abilities of our agents as described in the proposed logic are limited: an agent is aware of the actions it can execute, and of how many steps are needed to reach an objective. Nonetheless, our logic constitutes a first step towards self-aware agents.

The paper is organized as follows. In Section 2 we introduce the syntax and the semantics of our logic. In Section 3 we show the axiomatization and the proof of soundness; in Section 4 we show the proof of strong completeness. In Section 5 we propose a brief discussion of future work and conclude.

2 Logical framework

L-DINF is a logic which consists of a static component and a dynamic one. The static component, called L-INF, is a logic of explicit beliefs and background knowledge. The dynamic component, called L-DINF, extends the static one with dynamic operators capturing the consequences of the agents’ inferential operations on their explicit beliefs, as well as a dynamic operator capturing what an agent can conclude by performing some inferential operation in her repertoire.

2.1 Syntax

Assume a countable set of atomic propositions Atm = {p, q, . . .}. By Prop we denote the set of all propositional formulas, i.e. the set of all Boolean formulas built out of the set of atomic propositions Atm. A subset AtmA of the atomic propositions represents the actions that an agent can perform.

The language of L-DINF, denoted by L_L-DINF, is defined by the following grammar in Backus-Naur form:

α ::= ⊢(ϕ, ψ) | ∩(ϕ, ψ) | ↓(ϕ, ψ)
ϕ, ψ ::= p | exec(α) | ¬ϕ | ϕ ∧ ψ | B ϕ | K ϕ | [α]ϕ | ♦ϕ | do(φA) | can_do(φA)

where the language of inferential actions of type α is denoted by L_ACT (and a finite set of inferential actions X stems from L_ACT), and p ranges over Atm. Notice the expression do(φA), where it is required that φA ∈ AtmA. This expression indicates actual execution of action φA. In fact, do is not axiomatized, as it is what has been called in [18] a semantic attachment, i.e., a procedure which connects an agent with its external environment in a way that is unknown at the logical level. can_do(φA), where again φA ∈ AtmA, is a reserved syntax that, as seen later, allows an agent to reason about its own capabilities.

Obviously the static part, L-INF, has the same definition save for removing the inferential actions. The other Boolean constructions are defined from p, ¬ and ∧ in the standard way.

The formula B ϕ is read “the agent explicitly believes that ϕ is true” or, more shortly, “the agent believes that ϕ is true”. Explicit beliefs are accessible in the working memory and are the basic elements of the agent’s reasoning process, according to the logic of local reasoning by Fagin & Halpern [10]. An effect of this approach is that agents cannot distinguish between logically equivalent formulas: if two facts ϕ and ψ are logically equivalent and an agent explicitly believes that ϕ is true, then she believes that ψ is true as well. There are other approaches, such as justification logics [14], that do not have this feature.

Unlike explicit beliefs, background knowledge is assumed to satisfy ‘omniscience’ principles such as closure under conjunction and known implication, closure under logical consequence, and introspection. Specifically, K is nothing but the well-known S5 operator for knowledge widely used in computer science. The fact that background knowledge is closed under logical consequence is justified by the fact that we conceive it as a kind of deductively closed belief base. Specifically, we assume background knowledge to include all facts that the agent has stored in her long-term memory (LTM), after having processed them in her working memory (WM), as well as all logical consequences of these facts.
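As an aside, the grammar above admits a direct tree representation. The following is a minimal sketch of ours (the paper prescribes no implementation) of the abstract syntax in Python 3.10+; the constructors mirror the BNF, and all names are illustrative.

```python
from __future__ import annotations
from dataclasses import dataclass

# Inferential actions: alpha ::= |-(phi,psi) | cap(phi,psi) | down(phi,psi)
@dataclass(frozen=True)
class Act:
    kind: str      # 'wm_mp' for |-, 'conj' for cap, 'ltm_mp' for down-arrow
    phi: Formula
    psi: Formula

# Formulas: p | exec(alpha) | not phi | phi and psi | B phi | K phi
#           | [alpha]phi | <>phi | do(a) | can_do(a), with a in AtmA
@dataclass(frozen=True)
class Atom: name: str
@dataclass(frozen=True)
class Exec: act: Act
@dataclass(frozen=True)
class Not: sub: Formula
@dataclass(frozen=True)
class And:
    left: Formula
    right: Formula
@dataclass(frozen=True)
class B: sub: Formula
@dataclass(frozen=True)
class K: sub: Formula
@dataclass(frozen=True)
class After:       # [alpha]phi
    act: Act
    sub: Formula
@dataclass(frozen=True)
class Diamond: sub: Formula
@dataclass(frozen=True)
class Do: action: str       # action must range over AtmA
@dataclass(frozen=True)
class CanDo: action: str    # action must range over AtmA

Formula = Atom | Exec | Not | And | B | K | After | Diamond | Do | CanDo

# Example: [down(alarm, do(run))] B do(run)
f = After(Act("ltm_mp", Atom("alarm"), Do("run")), B(Do("run")))
```

The side condition that do and can_do take arguments from AtmA is recorded here only as a comment; a parser would enforce it.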
The formula [α]ϕ should be read “ϕ holds after the inferential operation (or inferential action) α is performed by the agent”. Borrowing from and extending [2], we include in the language of inferential actions L_ACT three types of inferential operations α, which allow us to capture some of the dynamic properties of explicit beliefs and background knowledge. The operations that we consider are the following: ⊢(ϕ, ψ), ∩(ϕ, ψ) and ↓(ϕ, ψ). These operations characterize the basic ways of forming explicit beliefs via inference:

– ⊢(ϕ, ψ) is the inferential operation which consists in inferring ψ from ϕ in case ϕ is believed and, according to the agent’s working memory, ψ is a logical consequence of ϕ. With this operation we operate directly on the working memory, without retrieving anything from the background knowledge.
– ↓(ϕ, ψ) is the inferential operation which consists in inferring ψ from ϕ in case ϕ is believed and, according to the agent’s background knowledge, ψ is a logical consequence of ϕ. In other words, by performing this inferential operation, an agent tries to retrieve from her background knowledge in long-term memory the information that ϕ implies ψ and, if she succeeds, she starts to believe ψ. We assume that, differently from explicit beliefs, background knowledge is irrevocable, in the sense of being stable over time.
– ∩(ϕ, ψ) is the inferential operation which consists in closing the explicit belief that ϕ and the explicit belief that ψ under conjunction. In other words, ∩(ϕ, ψ) characterizes the inferential operation of deducing ϕ ∧ ψ from the explicit belief that ϕ and the explicit belief that ψ.

Remark 1. In the mental actions ⊢(ϕ, ψ) and ↓(ϕ, ψ), the formula ψ which is inferred and asserted as a new belief can be do(φA), which denotes the actual execution of action φA. In fact, we assume that when inferring do(φA) the action is actually executed, and the corresponding belief is asserted, possibly augmented with a time-stamp. Actions are supposed to succeed by default; in case of failure, a corresponding failure event will be perceived by the agent. Therefore, the occurrence of physical actions (performed via the procedure do) is recorded through the formation of new beliefs.

The atomic formulas exec(α) have to be read “α is an inferential action that the agent can perform”, where α is one of the operations above.

Remark 2. Specifying the executability of actions pertains to the fact that we intend to consider agents with limited resources. Specific instances ϕ OP ψ, where one of the two formulas is an action, may or may not be executable depending upon the agent having the necessary resources for performing that action. Executability is defined for inferential actions because such actions, if executable, may enact ‘physical’ actions.

Remark 3. Explainability in our approach can be directly obtained from proofs. If, for instance, the user asks for an explanation of why the action φA has been performed, the system can respond by exhibiting the proof that has led to φA, of the form

(exec(ϕ1 OP1 ψ1) ∧ (ϕ1 OP1 ψ1)) ∧ . . . ∧ (exec(ϕn OPn do(φA)) ∧ (ϕn OPn do(φA)))

where each OPi is one of the (mental) actions discussed above. The proof can possibly be translated into natural language, and presented either top-down or bottom-up.
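To illustrate Remark 3, the sketch below (ours; the triple encoding of proof steps, the names and the phrasing are assumptions of this sketch, not part of the logic) renders a proof chain of the above form as a top-down natural-language explanation.

```python
OP_TEXT = {
    "wm_mp":  "applied a rule held in working memory",               # the |- operation
    "ltm_mp": "retrieved an implication from background knowledge",  # the down-arrow operation
    "conj":   "joined two beliefs by conjunction",                   # the cap operation
}

def explain(steps):
    """steps: list of (op, premise, conclusion) triples, one per proof step;
    each step is assumed to have passed its exec(.) executability check."""
    lines = []
    for i, (op, premise, conclusion) in enumerate(steps, start=1):
        lines.append(f"Step {i}: the action was executable; the agent "
                     f"{OP_TEXT[op]} and inferred '{conclusion}' from '{premise}'.")
    lines.append(f"Hence the physical action '{steps[-1][2]}' was performed.")
    return "\n".join(lines)

print(explain([("ltm_mp", "alarm", "danger"),
               ("wm_mp", "danger", "do(run)")]))
```

The same chain can be walked in reverse order to obtain a bottom-up explanation.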
Finally, the formula ♦ϕ has to be read “the agent can ensure ϕ by executing some inferential action in her repertoire”. An interesting aspect of our language is that, at least in perspective (as the formalization is not complete yet), it allows us to express the concept of “being able to infer ϕ in k inferential steps”. Specifically, let us inductively define: ♦^0 ϕ = ϕ and ♦^{k+1} ϕ = ♦♦^k ϕ. The formula ♦^k B ϕ represents the fact that the agent is capable of inferring ϕ in k steps.

2.2 Semantics

The main semantic notions are outlined in the following definition of L-INF model, which provides the basic components for the interpretation of the static version of the logic.

Definition 1. We define a model to be a tuple M = (W, N, R, E, V) where:
– W is a set of worlds or situations;
– R ⊆ W × W is an equivalence relation on W;
– N : W −→ 2^(2^W) is a neighborhood function such that for all w, v ∈ W and X ⊆ W: (C1) if X ∈ N(w) then X ⊆ R(w); (C2) if wRv then N(w) = N(v);
– E : W −→ 2^(L_ACT) is an executability function such that for all w, v ∈ W: (D1) if wRv then E(w) = E(v);
– V : W −→ 2^Atm is a valuation function.

For every w ∈ W, R(w) = {v ∈ W | wRv} identifies the set of situations that the agent considers possible at world w. In cognitive terms, R(w) can be conceived as the set of all situations that the agent can retrieve from her long-term memory and reason about. More generally, R(w) is called the agent’s epistemic state at w. For every w ∈ W, N(w) defines the set of all facts that the agent explicitly believes at world w, a fact being identified with a set of worlds. More precisely, if A ∈ N(w) then, at world w, the agent has the fact A under the focus of her attention and believes it. N(w) is called the agent’s explicit belief set at world w. E(w) is the set of mental operations that the agent can execute at w, as she has the resources to do so.

Constraint (C1) just means that an agent can have explicitly in her mind only facts which are compatible with her current epistemic state. According to constraint (C2), if world v is compatible with the agent’s epistemic state at world w, then the agent should have the same explicit beliefs at w and v. Constraint (D1) means that the agent always knows which actions she can perform and which she cannot.

Truth conditions of L-DINF formulas are inductively defined as follows. For a model M = (W, N, R, E, V), a world w ∈ W, a formula ϕ ∈ L_L-INF, and an action α, we define the truth relation M, w |= ϕ and a new model M^α by simultaneous recursion on α and ϕ. Below, we write ||ϕ||^M_w = {v ∈ W : wRv and M, v |= ϕ} whenever M, v |= ϕ is well-defined. Then, we set:

– M, w |= p iff p ∈ V(w)
– M, w |= exec(α) iff α ∈ E(w)
– M, w |= ¬ϕ iff M, w ⊭ ϕ
– M, w |= ϕ ∧ ψ iff M, w |= ϕ and M, w |= ψ
– M, w |= B ϕ iff ||ϕ||^M_w ∈ N(w)
– M, w |= K ϕ iff M, v |= ϕ for all v ∈ R(w)
– M, w |= [α]ϕ iff M^α, w |= ϕ
– M, w |= ♦ϕ iff there exists α ∈ E(w) such that M^α, w |= ϕ

where M^α = (W, N^α, R, E, V) and N^α is defined as follows. For w ∈ W, set:

– N^{↓(ψ,χ)}(w) = N(w) ∪ {||χ||^M_w} if M, w |= B ψ ∧ K(ψ → χ), and N^{↓(ψ,χ)}(w) = N(w) otherwise;
– N^{∩(ψ,χ)}(w) = N(w) ∪ {||ψ ∧ χ||^M_w} if M, w |= B ψ ∧ B χ, and N^{∩(ψ,χ)}(w) = N(w) otherwise;
– N^{⊢(ψ,χ)}(w) = N(w) ∪ {||χ||^M_w} if M, w |= B ψ ∧ B(ψ → χ), and N^{⊢(ψ,χ)}(w) = N(w) otherwise.

We write |=L-DINF ϕ to denote that ϕ is true in all worlds w of every L-DINF model M.

Property 1. As a consequence of the previous definitions we have the following:
– |=L-INF (K(ϕ → ψ) ∧ B ϕ) → [↓(ϕ, ψ)] B ψ. Namely, if an agent has ϕ as one of its beliefs and has K(ϕ → ψ) in its background knowledge, then as a consequence of the action ↓(ϕ, ψ) the agent starts believing ψ.
– |=L-INF (B ϕ ∧ B ψ) → [∩(ϕ, ψ)] B(ϕ ∧ ψ). Namely, if an agent has ϕ and ψ as beliefs, then as a consequence of the action ∩(ϕ, ψ) the agent starts believing ϕ ∧ ψ.
– |=L-INF (B(ϕ → ψ) ∧ B ϕ) → [⊢(ϕ, ψ)] B ψ. Namely, if an agent has ϕ as one of its beliefs and has B(ϕ → ψ) in its working memory, then as a consequence of the action ⊢(ϕ, ψ) the agent starts believing ψ.
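The truth conditions and the update M ↦ M^α can be executed directly on finite models. The following is a minimal model-checking sketch of ours (not an implementation accompanying the paper), assuming worlds encoded as integers, formulas as nested tuples, and implication treated as a derived connective; all names are illustrative.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Model:
    worlds: frozenset   # W
    R: frozenset        # equivalence relation, as a set of pairs (u, v)
    N: dict             # w -> set of frozensets of worlds (neighborhoods)
    V: dict             # w -> set of true atoms

def related(M, w):
    """R(w): the agent's epistemic state at w."""
    return {v for (u, v) in M.R if u == w}

def holds(M, w, phi):
    """Truth for the static fragment; phi is a nested tuple."""
    op = phi[0]
    if op == "atom": return phi[1] in M.V[w]
    if op == "not":  return not holds(M, w, phi[1])
    if op == "and":  return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "imp":  return (not holds(M, w, phi[1])) or holds(M, w, phi[2])
    if op == "B":    return extension(M, w, phi[1]) in M.N[w]
    if op == "K":    return all(holds(M, v, phi[1]) for v in related(M, w))
    raise ValueError(op)

def extension(M, w, phi):
    """||phi||_w: the R-reachable worlds satisfying phi."""
    return frozenset(v for v in related(M, w) if holds(M, v, phi))

def update(M, alpha):
    """M^alpha: enlarge N(w) wherever the action's precondition holds."""
    kind, psi, chi = alpha
    precondition = {
        "ltm_mp": ("and", ("B", psi), ("K", ("imp", psi, chi))),  # down-arrow
        "wm_mp":  ("and", ("B", psi), ("B", ("imp", psi, chi))),  # |-
        "conj":   ("and", ("B", psi), ("B", chi)),                # cap
    }[kind]
    learned = ("and", psi, chi) if kind == "conj" else chi
    new_N = {w: set(S) for w, S in M.N.items()}
    for w in M.worlds:
        if holds(M, w, precondition):
            new_N[w].add(extension(M, w, learned))
    return replace(M, N=new_N)
```

On such a model, Property 1 can be checked directly: wherever `holds(M, w, precondition)` is true, `holds(update(M, alpha), w, ("B", learned))` is true as well.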
3 Axiomatization

The L-INF and L-DINF axioms and inference rules are:

1. (K ϕ ∧ K(ϕ → ψ)) → K ψ
2. K ϕ → ϕ
3. ¬K(ϕ ∧ ¬ϕ)
4. K ϕ → K K ϕ
5. ¬K ϕ → K ¬K ϕ
6. (B ϕ ∧ K(ϕ ↔ ψ)) → B ψ
7. B ϕ → K B ϕ
8. ϕ / K ϕ (necessitation rule for K)
9. [α]p ↔ p
10. [α]¬ϕ ↔ ¬[α]ϕ
11. (exec(α) ∧ [α]ϕ) → ♦ϕ
12. exec(α) → K(exec(α))
13. [α](ϕ ∧ ψ) ↔ ([α]ϕ ∧ [α]ψ)
14. [α]K ϕ ↔ K([α]ϕ)
15. [↓(ϕ, ψ)]B χ ↔ B([↓(ϕ, ψ)]χ) ∨ ((B ϕ ∧ K(ϕ → ψ)) ∧ K([↓(ϕ, ψ)]χ ↔ ψ))
16. [∩(ϕ, ψ)]B χ ↔ B([∩(ϕ, ψ)]χ) ∨ ((B ϕ ∧ B ψ) ∧ K([∩(ϕ, ψ)]χ ↔ (ϕ ∧ ψ)))
17. [⊢(ϕ, ψ)]B χ ↔ B([⊢(ϕ, ψ)]χ) ∨ ((B ϕ ∧ B(ϕ → ψ)) ∧ B([⊢(ϕ, ψ)]χ ↔ ψ))
18. (ψ ↔ χ) / (ϕ ↔ ϕ[ψ/χ]) (rule of replacement of equivalents)
19. p → ♦p
20. ♦(ϕ ∧ ψ) → (♦ϕ ∧ ♦ψ)
21. ♦ϕ → ♦♦ϕ
22. ♦B ϕ → B ♦ϕ
23. ♦K ϕ → K ♦ϕ
24. ([α]ϕ) → ♦^1 ϕ
25. ([α1]([α2]ϕ)) → ♦^2 ϕ
26. ([α1]([α2]([α3]ϕ))) → ♦^3 ϕ

We write L-DINF ⊢ ϕ to signify that ϕ is a theorem of L-DINF. Thanks to the previous axioms, L-INF and L-DINF are sound for the class of L-INF models, with the exception of the last axioms concerning ♦^k which, in this formal setting, cannot be generalized, as we do not have the equivalent of “transitive closure”. Therefore, in the present version one has to establish the maximum number of execution steps to be considered (in the above formulation, just three steps) and write the corresponding axioms. These axioms are however of great practical utility for a self-aware agent: if the agent is able to prove ♦^k ϕ, where ϕ = can_do(φA), then it becomes aware of being able to perform a certain action in k steps. As seen in the axioms, ♦^k can_do(φA) is proved by finding a sequence of mental actions [α1], . . . , [αk] which leads to the result. Therefore, the agent can possibly enact the analogous sequence (which, notice, is the equivalent of a plan), where can_do(φA) is substituted by do(φA), so that action φA is eventually performed, on the condition that all the [αi]’s are executable in the agent’s present state.
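This last remark suggests a simple procedure that an implemented agent might use to witness ♦^k can_do(φA): a bounded breadth-first search over sequences of executable mental actions. The sketch below is ours, illustrative of that reading and not part of the axiomatization; the belief-set encoding and all names are assumptions.

```python
from collections import deque

def find_plan(goal, beliefs, actions, max_k=3):
    """Search for a sequence [alpha_1, ..., alpha_k] with k <= max_k
    (mirroring the bound of axioms 24-26) whose execution makes `goal`
    believed. `actions` maps each executable mental action's name to a
    function from a belief set to the updated belief set."""
    start = frozenset(beliefs)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal in state:
            return plan          # <>^k can_do holds with k = len(plan)
        if len(plan) == max_k:
            continue
        for name, step in actions.items():
            nxt = frozenset(step(state))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

# Example: two retrievals from background knowledge reach can_do(run).
acts = {
    "down(alarm,danger)":       lambda s: s | ({"danger"} if "alarm" in s else set()),
    "down(danger,can_do(run))": lambda s: s | ({"can_do(run)"} if "danger" in s else set()),
}
print(find_plan("can_do(run)", {"alarm"}, acts))
# -> ['down(alarm,danger)', 'down(danger,can_do(run))']
```

The returned sequence is exactly the plan mentioned above: replacing can_do(φA) by do(φA) in its last step enacts the physical action.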
4 Strong Completeness

For the proof that L-INF is strongly complete we restrict ourselves to the language without the ♦ operator, which is still an experimental feature that we aim to formalize better in future work. We can achieve the proof by means of a standard canonical-model argument.

Definition 2. The canonical L-INF model is a tuple Mc = (Wc, Nc, Rc, Ec, Vc) where:
– Wc is the set of all maximal consistent subsets of L_L-INF;
– for every w, v ∈ Wc, wRcv if and only if, for all ϕ, K ϕ ∈ w iff K ϕ ∈ v; thus Rc is an equivalence relation (agreement on knowledge). As before, we define Rc(w) = {v ∈ Wc | wRcv};
– for w ∈ Wc and ϕ ∈ L_L-INF we define Aϕ(w) = {v ∈ Rc(w) | ϕ ∈ v}; then we put Nc(w) = {Aϕ(w) | B ϕ ∈ w};
– Vc is a valuation function defined as before;
– Ec is the executability function defined as before.

The following are immediate consequences of the above definition. If w ∈ Wc then:
– given ϕ ∈ L_L-INF, it holds that K ϕ ∈ w if and only if ϕ ∈ v for all v ∈ Wc such that wRcv;
– for ϕ ∈ L_L-INF, if B ϕ ∈ w and wRcv then B ϕ ∈ v;
– for α ∈ Ec(w), if wRcv then α ∈ Ec(v).

Thus, Rc-related worlds have the same knowledge and, by the last two items, also the same explicit beliefs and the same executable actions. The following results hold.

Lemma 1. For all w ∈ Wc and B ϕ, B ψ ∈ L_L-INF, if B ϕ ∈ w but B ψ ∉ w, then there exists v ∈ Rc(w) containing exactly one of ϕ and ψ.

Proof. Let w ∈ Wc and ϕ, ψ be such that B ϕ ∈ w and B ψ ∉ w. Assume that for every v ∈ Rc(w) we have either ϕ ∈ v and ψ ∈ v, or ϕ ∉ v and ψ ∉ v. From the previous statements it follows that K(ϕ ↔ ψ) ∈ w, so that by axiom 6 of Section 3 we get B ψ ∈ w, which is a contradiction.

Lemma 2. For all ϕ ∈ L_L-INF and w ∈ Wc it holds that ϕ ∈ w if and only if Mc, w |= ϕ.

Proof. We proceed by induction on the structure of ϕ.
– ϕ = p: for w ∈ Wc, if p ∈ w then p ∈ Vc(w), so by the truth conditions of Definition 1 we have Mc, w |= p; the opposite implication is proved by the same reasoning.
– ϕ = exec(α): for w ∈ Wc, if exec(α) ∈ w then α ∈ Ec(w), so by the truth conditions of Definition 1 we have Mc, w |= exec(α); similarly for the converse.
– All the other cases are proved analogously, except ϕ = B ψ. Assume B ψ ∈ w with w ∈ Wc. By the induction hypothesis, Aψ(w) = {v ∈ Rc(w) | ψ ∈ v} = ||ψ||^{Mc}_w. Since, by the definition of the canonical model, Nc(w) = {Aψ(w) | B ψ ∈ w}, we get ||ψ||^{Mc}_w ∈ Nc(w), and hence, by Definition 1, Mc, w |= B ψ.
Suppose instead B ψ ∉ w, i.e., ¬B ψ ∈ w; we have to prove ||ψ||^{Mc}_w ∉ Nc(w). Choose A ∈ Nc(w); by definition, A = Aχ(w) for some χ with B χ ∈ w. By Lemma 1 there is some v ∈ Rc(w) containing exactly one of ψ and χ, so that, by the induction hypothesis, either v ∈ ||ψ||^{Mc}_w \ Aχ(w) or v ∈ Aχ(w) \ ||ψ||^{Mc}_w. In both cases Aχ(w) ≠ ||ψ||^{Mc}_w, and since A = Aχ(w) was arbitrary in Nc(w), we conclude that ||ψ||^{Mc}_w ∉ Nc(w), and therefore Mc, w ⊭ B ψ.

Lemma 3. For all ϕ ∈ L_L-DINF there exists ϕ̃ ∈ L_L-INF such that L-DINF ⊢ ϕ ↔ ϕ̃; that is, for any L-DINF formula we can find an equivalent L-INF formula.

Proof. The dynamic operators are eliminated via the reduction axioms of Section 3; we show only the base case ϕ = p, the others being proved analogously. By axiom 9 we have [α]p ↔ p, and by rule 18 we obtain ϕ ↔ ϕ[[α]p / p], which means that we can replace [α]p with p in ϕ.
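As a worked instance of this reduction (ours, for illustration), consider the B-free formula [α](p ∧ K q); axioms 9, 13 and 14, together with rule 18, eliminate the dynamic operator step by step:

```latex
\begin{align*}
[\alpha](p \land K q)
  &\leftrightarrow [\alpha]p \land [\alpha]K q && \text{axiom 13}\\
  &\leftrightarrow p \land K[\alpha]q          && \text{axioms 9 and 14}\\
  &\leftrightarrow p \land K q                 && \text{axiom 9 and rule 18}
\end{align*}
```

Here the resulting L-INF formula is p ∧ K q; for formulas containing B under a dynamic operator, axioms 15–17 perform the corresponding elimination step.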
The previous lemmas allow us to prove the following theorems.

Theorem 1. L-INF is strongly complete for the class of L-INF models.

Proof. Any consistent set of formulas may be extended to a maximal consistent set w* ∈ Wc, and Mc, w* satisfies it by Lemma 2. Then L-INF is strongly complete for the class of L-INF models.

Theorem 2. L-DINF (without ♦) is strongly complete for the class of L-INF models.

Proof. If K is a consistent set of L_L-DINF formulas, then K̃ = {ϕ̃ | ϕ ∈ K} is a consistent set of L_L-INF formulas by Lemma 3. By Theorem 1 there is a model Mc with a world w such that Mc, w |= K̃. Since L-DINF is sound and, for each ϕ ∈ K, L-DINF ⊢ ϕ ↔ ϕ̃, it follows that Mc, w |= K. Hence L-DINF is strongly complete for the class of L-INF models.

5 Conclusions

In this paper we discussed some cognitive aspects of autonomous systems, concerning execution steps and executability. To model these aspects we have proposed the modal logic L-DINF, and we have proved some useful properties, among which strong completeness, though under significant restrictions. In future work we intend to prove, without restrictions, that L-DINF is strongly complete, and to extend the logic to the multi-agent case. We also intend to introduce the concept of budget, as a value representing how much an agent can spend of its own resources in a world w: this is important to better represent the fact that the agent is resource-bounded. Moreover, we plan to assess the complexity of the logic and to extend the set of inferential actions that we consider. Finally, the temporal aspects of an agent’s operation have not been considered here, but we tackled these aspects in [5,13]; we intend to merge the two approaches, so as to obtain a comprehensive framework.

References

1. Amir, E., Anderson, M.L., Chaudhri, V.K.: Report on DARPA workshop on self-aware computer systems. Technical Report, SRI International, Menlo Park, United States (2007)
2. Balbiani, P., Fernández-Duque, D., Lorini, E.: A logical theory of belief dynamics for resource-bounded agents. In: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, AAMAS 2016, pp. 644–652. ACM (2016)
3. Balbiani, P., Fernández-Duque, D., Lorini, E.: The dynamics of epistemic attitudes in resource-bounded agents. Studia Logica 107(3), 457–488 (2019). https://doi.org/10.1007/s11225-018-9798-4
4. van Benthem, J., Pacuit, E.: Dynamic logics of evidence-based beliefs. Studia Logica 99(1-3), 61–92 (2011). https://doi.org/10.1007/s11225-011-9347-x
5. Costantini, S., Pitoni, V.: A temporal module for logical frameworks. In: Proceedings of the 35th International Conference on Logic Programming (Technical Communications), ICLP 2019. EPTCS 306, 240–346 (2019)
6. Duc, H.N.: Reasoning about rational, but not logically omniscient, agents. Journal of Logic and Computation 7(5), 633–648 (1997). https://doi.org/10.1093/logcom/7.5.633
7. Elgot-Drapkin, J., Kraus, S., Miller, M., Nirkhe, M., Perlis, D.: Active logics: A unified formal approach to episodic reasoning (1996)
8. Elgot-Drapkin, J.J., Miller, M.I., Perlis, D.: Life on a desert island: Ongoing work on real-time reasoning (1987)
9. Margolis, E., Samuels, R., Stich, S.P. (eds.): The Oxford Handbook of Philosophy of Cognitive Science. Oxford University Press (2012). https://doi.org/10.1093/oxfordhb/9780195309799.001.0001
10. Fagin, R., Halpern, J.Y.: Belief, awareness, and limited reasoning. Artificial Intelligence 34(1), 39–76 (1987). https://doi.org/10.1016/0004-3702(87)90003-8
11. Goldman, A.I., et al.: Theory of mind. In: The Oxford Handbook of Philosophy of Cognitive Science (2012)
12. Jago, M.: Epistemic logic for rule-based agents. Journal of Logic, Language and Information 18(1), 131–158 (2009). https://doi.org/10.1007/s10849-008-9071-8
13. Pitoni, V.: Memory management in resource-bounded agents. In: Proceedings of the 35th International Conference on Logic Programming (Technical Communications), ICLP 2019. EPTCS 306, 452–460 (2019)
14. Savic, N., Studer, T.: Relevant justification logic. FLAP 6(2), 397–412 (2019)
15. Tørresen, J., Plessl, C., Yao, X.: Self-aware and self-expressive systems. IEEE Computer 48(7), 18–20 (2015)
16. Velázquez-Quesada, F.R.: Explicit and implicit knowledge in neighbourhood models. In: Grossi, D., Roy, O., Huang, H. (eds.) Logic, Rationality, and Interaction - 4th International Workshop, LORI 2013, Hangzhou, China, October 9-12, 2013, Proceedings, pp. 239–252. Springer (2013). https://doi.org/10.1007/978-3-642-40948-6_19
17. Velázquez-Quesada, F.R.: Dynamic epistemic logic for implicit and explicit beliefs. Journal of Logic, Language and Information 23(2), 107–140 (2014). https://doi.org/10.1007/s10849-014-9193-0
18. Weyhrauch, R.W.: Prolegomena to a theory of mechanized formal reasoning. Artificial Intelligence 13(1-2), 133–170 (1980)