Proceedings of the 6th Workshop on Formal and Cognitive Reasoning

Reasoning with Artificial Mental States: An Algebraic Approach

Nourhan Ehab1 and Haythem O. Ismail2,1
1 German University in Cairo, Egypt, Department of Computer Science and Engineering
2 Cairo University, Egypt, Department of Engineering Mathematics
{nourhan.ehab,haythem.ismail}@guc.edu.eg

Abstract. Modelling the human mind, with its astounding complexity, has always been a long-sought goal of AI research. One of the most successful approaches to attaining this goal is to ascribe human-like mental states to artificial agents. A mental state is based on a set of mental attitudes such as beliefs, desires, intentions, promises, obligations, etc. While there are several accounts in the literature for endowing artificial agents with mental attitudes, such approaches predominantly focus on investigating each attitude separately or on studying the interaction of a handful of particular attitudes, notably beliefs, desires, and intentions. Since human epistemic and practical reasoning processes are typically more complex, involving a myriad of attitudes, accounting for the interaction among generic mental attitudes is called for. To this end, we present an algebraic framework for modelling the interaction among generic mental attitudes. The framework is used to provide formal semantics for a logical language which may be used by a logic-based agent to reason with arbitrary mental attitudes.

Keywords: Agent Architectures · Mental States · Algebraic Semantics

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 Introduction

A hallmark of human intelligence is the ability to reason with a wide diversity of mental attitudes including beliefs, intentions, desires, promises, obligations, etc., which constitute our collective mental state. We are confronted every day with situations that require us to deliberate given our current mental state, and we usually do so with ease. To demonstrate the variety of mental attitudes we deal with, even in the simplest of situations, consider the following example.

Example 1. Ted promised his best friend Marshall to go on a hunting trip with him during the weekend. Since Ted is a man of his word, he feels obliged to intend his promises and indeed intends what he is obliged to intend. If Ted intends to go on the trip, he must rent a car. At the same time, Ted has been procrastinating working on a long overdue report for weeks. He believes that if he does not work on the report this weekend, his boss will be really mad and will fire him. Ted fears being fired as he really likes his job. He regrets that he did not work on the report the previous weeks, which makes him feel obliged to work on the report during this weekend. Much to Ted's relief, he believes that he can go on the trip and dedicate some time to work on the report there if the trip location has internet connectivity. However, Ted doubts that there is internet connectivity at the trip location, which makes him fear that he will not be able to work on the report after all. Since Ted is paranoid, he believes what he fears.

In this example, Ted is reasoning with a mental state comprised of his promises, beliefs, obligations, intentions, fears, regrets, and doubts. In the modern world, we often talk about machines as if they exhibit human-like mental attitudes such as those just mentioned.
Our daily lives typically involve numerous references to machines knowing, believing, desiring, intending, liking or disliking, understanding, owing, having duties and rights, or deserving rewards and punishments [16]. For this reason, a commonly investigated approach to achieving general AI is to ascribe mental attitudes to artificial agents, as first suggested by McCarthy [15] and Newell [19]. Supporters of this line of research argue that mental-level modelling of artificial agents offers several advantages on both the theoretical and practical levels [27].

From a theoretical perspective, the abstract nature of mental models has proved very useful in analysing and comparing different agent architectures. An example of this is Levesque's Computers as Believers paradigm, which offered a uniform basis for analysing general knowledge representation schemes [14]. On the other hand, from a practical perspective, mental models offer a convenient abstraction based on well-understood attitudes while hiding low-level, implementation-specific details [3]. This is a very useful feature in developing cooperative multiagent systems, as abstract explicit representations of each of the agents' mental attitudes enable more coherent interactions between them [20]. Moreover, endowing artificial agents with mental attitudes can facilitate the design of autonomous planning agents. For such agents, explicit representations of beliefs, desires, and obligations, for example, can drive the agents to take actions compatible with their beliefs to achieve their desires while respecting their obligations. Another practical realization of ascribing mental attitudes to artificial agents in a computational setting is the Agent-Oriented Programming (AOP) paradigm. In AOP, the different modules of a program are viewed as agents possessing mental attitudes such as beliefs, decisions, capabilities, and obligations [26].

Perhaps the most renowned approach to designing agents with mental attitudes is the BDI agent architecture [23] and its extensions to include obligations [4]. These approaches exclusively focus on investigating the interactions between beliefs, desires, obligations, and intentions. For this reason, the BDI architecture fails to provide a good mental model of human epistemic and practical reasoning processes, as they typically involve a myriad of other mental attitudes such as plans, goals, and fears (to name just a few). This makes the BDI architecture unsuitable for modelling human-centered trustworthy agents, which have recently attracted a lot of research interest [9]. Even if we restrict ourselves to modelling rational agents, the BDI architecture still, in our opinion, falls short. Archetypal rational behaviour, for instance, is to form intentions to avoid one's fears or to mitigate one's doubts, which cannot be represented (in a straightforward way) within the BDI/BOID frameworks.

To address this shortcoming, we propose in this paper a general algebraic framework capable of representing a first-person perspective of artificial agents possessing any set of mental attitudes while capturing their interactions. We follow [27] and define a mental state as a set of mental attitudes. To the best of our knowledge, there does not exist in the literature a framework that, like ours, allows for reasoning with an arbitrary set of attitudes, offering a more refined mental model for a human-centered logical agent.
To this end, we present a logic we refer to as LogA M ("Log" stands for logic, "A" for algebraic, and "M" for mental states) with precise semantics in which mental states can be represented and reasoning with the different mental attitudes of the agent can be captured. In defining the semantics of LogA M, we depart from the mainstream modal approaches to representing mental attitudes and take the algebraic approach instead. As a starting step, we assume in LogA M that the mental state is comprised of binary mental attitudes. That is, the mental state can include information about the agent's beliefs (for example) but will not include information about the degrees of such beliefs. Further, we define a monotonic consequence relation for each mental attitude in the mental state based on purely algebraic notions. We have already developed an extension of LogA M to accommodate non-monotonic reasoning with graded mental states. We will informally outline this extension in Section 5, but we reserve the formal presentation for a longer version of the paper. An interesting special case of this graded extension, for practical reasoning with graded beliefs and motivations, is presented in [6].

The rest of the paper is structured as follows. We start by motivating our choice to pursue the algebraic route in defining the semantics of LogA M in Section 2. We review in Section 3 foundational concepts of Boolean algebra on which LogA M is based. We also generalize the classical notion of filters in Boolean algebra into what we will refer to as multifilters, providing a generalized algebraic treatment of reasoning with mental states. Next, in Section 4, we present the syntax and semantics of LogA M. In Section 5, we informally describe the graded non-monotonic extension of LogA M. Finally, in Section 6, we present some concluding remarks.

2 Why the Algebraic Approach?

Before we delve into the technical details of LogA M, it is perhaps apt to ponder the merits of choosing to pursue the algebraic route. LogA M is the most recent addition to a growing family of algebraic logics [11,12,13,7]. As such, it is essential for a treatment of reasoning with mental states within the algebraic framework. Hence, independent motivations for the algebraic approach are also motivations for LogA M. We briefly present these motivations in this section.

The algebraic approach is based on an ontological commitment to propositions as first-class individuals in the universe of discourse; this leads to a language with no sentences, but with some of the terms taken to denote propositions. What does this buy us? Take LogA B [11] for example. As an algebraic language for reasoning about belief, it strikes a middle ground between two major approaches to doxastic logic: the dominant, modal approach [28, for example] and the (now relatively out of fashion) first-order syntactical approach [22, for instance]. This allows LogA B, on one hand, to avoid problems of logical omniscience, which mar the classical modal approach, while, on the other hand, staying immune to the paradoxes of self-reference plaguing the syntactical approach. LogA G [7] is an algebraic logic for non-monotonic reasoning about graded beliefs. It is demonstrably useful for modelling resource-bounded reasoning; simulating inconclusive reasoning with circular, liar-like sentences; and reasoning about information arriving over a chain of sources each with a different degree of trust.
As proven in [7], LogA G can capture a wide array of non-monotonic reasoning formalisms such as possibilistic logic, circumscription, default logic, autoepistemic logic, and the principle of negation as failure. As such, LogA G can be considered an algebraic unifying framework for non-monotonicity. LogA Cn [13], which is an algebraic logic for reasoning about preference, desire, and obligation, avoids the so-called paradoxes of deontic logic [18] by, again, abandoning classical possible-worlds semantics. In [12], the algebraic approach is adopted for the representation of temporal phenomena using the language LogA S. In classical first-order approaches to temporal logic [17,1, for example], tersely axiomatizing temporal properties often calls for the introduction of reified fluents into the ontology. In these approaches, reference to composite fluents (conjunctions thereof, for example) either is forbidden (as, for example, in the situation calculus [17]) or results in duplicating the logical connectives for statements and fluent-denoting terms (as, for example, in [1]). In LogA S, reference to composite fluents is straightforward, with a single set of proposition-based logical connectives. These different motivations for the algebraic approach suggest that it is only natural to consider a language like LogA M if one is to model reasoning with mental states the algebraic way and gain its several indispensable advantages.

3 Boolean Algebras and Multifilters

In this section, we start by reviewing the algebraic concepts of Boolean algebras and filters underlying classical logic; we then extend the notion of filters to accommodate a logic of mental states where a mental state is a set of mental attitudes.

Definition 1. A Boolean algebra is a sextuple A = ⟨P, +, ·, −, ⊥, ⊤⟩ where P is a non-empty set with {⊥, ⊤} ⊆ P. A is closed under the two binary operators + and · and the unary operator −, observing the following properties [24].

B1.1: a + b = b + a (Commutativity)
B1.2: a · b = b · a
B2.1: a + (b + c) = (a + b) + c (Associativity)
B2.2: a · (b · c) = (a · b) · c
B3.1: a + (a · b) = a (Absorption)
B3.2: a · (a + b) = a
B4.1: a · (b + c) = (a · b) + (a · c) (Distribution)
B4.2: a + (b · c) = (a + b) · (a + c)
B5.1: a + −a = ⊤ (Complements)
B5.2: a · −a = ⊥

For the purposes of this paper, we take the elements of P to be propositions and the operators +, ·, and − to be disjunction, conjunction, and negation, respectively.

The following definition of filters is an essential notion of Boolean algebras, providing an algebraic counterpart to logical consequence [24]. Filters are defined in purely algebraic terms, without alluding to the notion of truth, by utilizing the natural lattice order ≤ on the algebra: for p1, p2 ∈ P, p1 ≤ p2 =def p1 · p2 = p1. Henceforth, A is a Boolean algebra ⟨P, +, ·, −, ⊥, ⊤⟩.

Definition 2. A filter of A is a subset F of P where
1. ⊤ ∈ F;
2. if a, b ∈ F, then a · b ∈ F; and
3. if a ∈ F and a ≤ b, then b ∈ F.
The filter generated by Q ⊆ P is the smallest filter F(Q) of which Q is a subset.

The just-presented definition of a filter is only suitable for modelling reasoning with a single set of propositions Q. With the purpose of modelling reasoning with mental states, defined as a tuple of sets of propositions where each set represents a separate mental attitude of the agent, we extend the notion of filters to what we refer to as multifilters.
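To make Definitions 1 and 2 concrete before moving on to multifilters, the following minimal Python sketch (our own illustration, not part of LogA M) instantiates the Boolean algebra as the powerset of a small set of worlds, with · as intersection, + as union, ⊤ as the full set, and ≤ as the subset order, and computes the filter F(Q) generated by a set of propositions Q by closing under meet and upward closure.

```python
from itertools import combinations

def powerset(worlds):
    """All propositions of the algebra 2^worlds, represented as frozensets."""
    ws = list(worlds)
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

def generated_filter(Q, worlds):
    """F(Q): the smallest set containing top (= worlds) and Q, closed under
    meet (intersection) and upward closure along <= (the subset order)."""
    P = powerset(worlds)
    F = {frozenset(worlds)} | {frozenset(q) for q in Q}
    changed = True
    while changed:
        changed = False
        # condition 2: close under meet
        for a in list(F):
            for b in list(F):
                if a & b not in F:
                    F.add(a & b)
                    changed = True
        # condition 3: if a is in F and a <= b, then b is in F
        for a in list(F):
            for b in P:
                if a <= b and b not in F:
                    F.add(b)
                    changed = True
    return F

if __name__ == "__main__":
    # Q contains the propositions {1, 2} and {2, 3}; their meet is {2}, so the
    # generated filter is exactly the set of supersets of {2}.
    F = generated_filter([{1, 2}, {2, 3}], {1, 2, 3})
    for p in sorted(F, key=lambda s: (len(s), sorted(s))):
        print(sorted(p))   # [2], [1, 2], [2, 3], [1, 2, 3]
```

On a finite powerset algebra this brute-force closure suffices; it is meant only to make the three filter conditions tangible, not as an efficient or general implementation.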
For this reason, we extend classical filters, which rely on the natural order ≤ on the Boolean algebra, to what we will refer to as multifilters, which rely on an order on tuples of propositions where each proposition belongs to a mental attitude. (Recall that ≤ is the classical lattice order.)

Definition 3. Let k be a positive integer. A k partial-order on A is a partial order ⪯k on P^k such that (a1, ..., ak) ⪯k (b1, ..., bk) and bi = ⊥, for some 1 ≤ i ≤ k, only if aj = ⊥, for some 1 ≤ j ≤ k. Further, we say that ⪯k is classical in i just in case (i) if (a1, ..., ak) ⪯k (b1, ..., bk) then ai ≤ bi, and (ii) if a ≤ b then ({⊤}^(i−1) × {a} × {⊤}^(k−i)) × (P^(i−1) × {b} × P^(k−i)) ⊆ ⪯k.

In the sequel, we will drop the subscript k in ⪯k whenever no ambiguity results. We now define multifilters based on a k partial-order ⪯.

Definition 4. Let ⪯ be a k partial-order on A and S ⊆ {1, ..., k}. A ⪯-multifilter of A with respect to S is a tuple F⪯(S) = ⟨F1, F2, ..., Fk⟩ of subsets of P such that
1. ⊤ ∈ Fi, for all i such that 1 ≤ i ≤ k;
2. for all i, if i ∈ S, a ∈ Fi, and b ∈ Fi, then a · b ∈ Fi; and
3. if (a1, ..., ak) ⪯ (b1, ..., bk) and (a1, ..., ak) ∈ F1 × ... × Fk, then (b1, ..., bk) ∈ F1 × ... × Fk.

We can observe at this point that the three conditions on multifilters are just generalizations of the three conditions on filters. The second condition, though, need not apply to all the sets F1, ..., Fk. The index set S specifies the sets which are closed under the meet operation "·" and hence observe the second condition. This is necessary as some mental attitudes need not be closed under "·". We next define how multifilters can be generated by a tuple of sets of propositions. The intuition is that each set of propositions represents a mental attitude and the tuple of sets represents the collective mental state. In this way, multifilters generalize filters to accommodate reasoning with multiple mental attitudes where some attitudes need not be closed under "·".

Definition 5. Let Q1, ..., Qk ⊆ P, ⪯ be a k partial-order on A, and S ⊆ {1, ..., k}. The ⪯-multifilter generated by ⟨Q1, ..., Qk⟩ with respect to S, denoted F⪯(⟨Q1, ..., Qk⟩, S), is a ⪯-multifilter ⟨Q′1, ..., Q′k⟩ with respect to S where Q′i is the smallest set containing Qi, for all 1 ≤ i ≤ k.

Having defined multifilters, we are now ready to present an important result. The following theorem states that, under certain conditions, multifilters can be reduced to classical filters applied to the different sets of propositions representing the mental state.

Theorem 1. Let Q1, ..., Qk ⊆ P, S ⊆ {1, ..., k}, and ⪯ be a k partial-order on A which is classical in i for some i ∈ S. If F⪯(⟨Q1, ..., Qk⟩, S) = ⟨Q′1, ..., Q′k⟩, then Q′i = F(Qi).

4 LogA M Languages

In this section, we present the syntax and semantics of LogA M, in addition to defining a logical consequence relation for each mental attitude in the mental state. Utilizing the multifilters presented in Section 3, we show that our logical consequence relations have the distinctive properties of classical Tarskian logical consequence. The proofs of the theorems presented in this section are omitted for space limitations, but can be found in [8].

4.1 LogA M Syntax

LogA M consists of terms constructed algebraically from function symbols. There are no sentences; instead, we use terms of a distinguished syntactic type to denote propositions.
Propositions are included as first-class individuals in the LogA M ontology and are structured in a Boolean algebra. Though non-standard, the inclusion of propositions in the ontology has been suggested by several authors [5,2,21,25].

A LogA M language is a many-sorted language composed of a set of terms partitioned into two base sorts: σP is a set of terms denoting propositions and σI is a set of terms denoting anything else. A LogA M alphabet Ω includes a non-empty, countable set of constant and function symbols, each having a syntactic sort from the set σ = {σP, σI} ∪ {τ1 −→ τ2 | τ1 ∈ {σP, σI} and τ2 ∈ σ} of syntactic sorts. Intuitively, τ1 −→ τ2 is the syntactic sort of function symbols that take a single argument of sort σP or σI and produce a functional term of sort τ2. Given the restriction of the first argument of function symbols to base sorts, LogA M is, in a sense, a first-order language. An alphabet Ω includes a countably infinite set of variables of the two base sorts; a set of syncategorematic symbols including the comma, various matching pairs of brackets and parentheses, and the symbol ∀; and a set of logical symbols defined as the union of the following sets:

– {¬} ⊆ σP −→ σP
– {∧, ∨} ⊆ σP −→ σP −→ σP
– {Ai | 1 ≤ i ≤ k} ⊆ σP −→ σP

The symbols ¬, ∧, and ∨ denote negation, conjunction, and disjunction, respectively. Ai(t) denotes that the agent has the attitude Ai towards the propositional term t. Terms involving ⇒ (material implication), ⇔ (logical equivalence), and ∃ are abbreviations defined in the standard way. In the following, we define LogA M languages.

Definition 6. A LogA M language L is the smallest set of terms formed according to the following rules, where t and ti (i ∈ N) are terms in L.
– All variables and constants in the alphabet Ω are in L.
– f(t1, ..., tm) ∈ L, where f ∈ Ω is of type τ1 −→ ... −→ τm −→ τ (m > 0) and ti is of type τi.
– ¬t ∈ L, where t ∈ σP.
– (t1 ⊗ t2) ∈ L, where ⊗ ∈ {∧, ∨} and t1, t2 ∈ σP.
– ∀x(t) ∈ L, where x is a variable in Ω and t ∈ σP.
– Ai(t) ∈ L, where t ∈ σP.

We are now ready to define LogA M theories based on the previously defined LogA M languages.

Definition 7. A LogA M theory T is a triple ⟨A, R, S⟩ where:
– A = (A1, ..., Ak) is a k-tuple where A1, ..., Ak ⊆ σP;
– R is a set of bridge rules, each of the form A1, ..., Ak ⟼ A′1, ..., A′k where A1, ..., Ak, A′1, ..., A′k ⊆ σP; and
– S ⊆ {1, ..., k}.

The tuple A represents the mental state of the agent. Each set A1, ..., Ak represents a separate mental attitude of the agent. If a propositional term t ∈ Ai, then the agent has the attitude Ai towards t. It is worth pointing out here the utility of the terms of the form Ai(t). The membership of Ai(t) in Aj, for 1 ≤ i, j ≤ k, means that the agent has the attitude Aj towards Ai(t). For example, if A3(φ) ∈ A2, A3(φ) represents that the agent intends φ, and A2 represents the agent's beliefs, then this means that the agent believes that it intends φ. This is very useful in representing higher-order motivations, first suggested in [10]. The bridge rules serve to "bridge" propositions across the different mental attitudes. A bridge rule A1, ..., Ak ⟼ A′1, ..., A′k means that if each Ai is a subset of the current i-th mental attitude, then each A′i should be added to the current i-th mental attitude. The bridge rules facilitate the representation of the interaction between the mental attitudes.
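To make the bridge-rule mechanism just described concrete, here is a small Python sketch of applying bridge rules to a tuple of attitude sets until a fixed point is reached. The encoding (propositions as strings, a rule as a pair of antecedent and consequent tuples) is purely illustrative and omits the closure of S-indexed attitudes under conjunction and under ≤ that the full multifilter semantics provides.

```python
from typing import FrozenSet, List, Tuple

Attitudes = Tuple[FrozenSet[str], ...]      # one set of propositions per attitude
BridgeRule = Tuple[Attitudes, Attitudes]    # (antecedent tuple, consequent tuple)

def apply_bridge_rules(state: Attitudes, rules: List[BridgeRule]) -> Attitudes:
    """Repeatedly fire every applicable bridge rule until nothing new is added.
    A rule fires when each antecedent set is contained in the corresponding
    current attitude; its consequent sets are then unioned into the attitudes."""
    current = list(state)
    changed = True
    while changed:
        changed = False
        for antecedents, consequents in rules:
            if all(a <= current[i] for i, a in enumerate(antecedents)):
                for i, c in enumerate(consequents):
                    if not c <= current[i]:
                        current[i] |= c
                        changed = True
    return tuple(current)

# Toy mental state with two attitudes: promises (index 0) and obligations (index 1).
empty = frozenset()
promises = frozenset({"go_on_trip"})
# One rule: a promise to go on the trip creates an obligation to intend it
# (mirroring the shape of rule r1 in the example below).
rule = ((frozenset({"go_on_trip"}), empty),           # antecedents
        (empty, frozenset({"intend(go_on_trip)"})))   # consequents

print(apply_bridge_rules((promises, frozenset()), [rule]))
# (frozenset({'go_on_trip'}), frozenset({'intend(go_on_trip)'}))
```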
The set S specifies the sets in A whose denotations are closed under the meet operation and, hence, observe the second condition in the definition of multifilters. This facilitates having some attitudes in the mental state that are not closed under meet/conjunction. An example of such attitudes is desires: if one desires to go to the beach and desires to work on the report, one might not desire to go to the beach and work on the report. We now go back to Example 1, showing a corresponding encoding of it as a LogA M theory.

Example 2. Let "r" denote working on the report, "t" denote going on the trip, "m" denote the boss's getting mad, "f" denote Ted getting fired, "c" denote Ted renting a car, "l" denote Ted liking his job, and "i" denote internet connectivity at the trip location. A possible LogA M theory representing Example 1 is T = ⟨(A1, A2, A3, A4, A5, A6, A7), R, S⟩ where:
– A1 represents Ted's promises (P). A1 = {t}.
– A2 represents Ted's beliefs (B). A2 = {¬r ⇒ m, m ⇒ f, l, i ⇒ r ∧ t}.
– A3 represents Ted's intentions (I). A3 = {}.
– A4 represents Ted's fears (F). A4 = {}.
– A5 represents Ted's regrets (R). A5 = {¬r}.
– A6 represents Ted's doubts (D). A6 = {¬i}.
– A7 represents Ted's obligations (O). A7 = {}.
– R is the set of instances of the following rule schemata, where φ is a variable. In what follows, we eliminate the empty sets in the bridge rules and prefix each set with the first letter of the attitude it represents. For example, the rule {φ}, {}, {}, {}, {}, {}, {} ⟼ {}, {}, {}, {}, {}, {}, {φ} will be written as P = {φ} ⟼ O = {φ}.
r1. P = {φ} ⟼ O = {I(φ)}
r2. O = {I(φ)} ⟼ I = {φ}
r3. I = {t} ⟼ I = {c}
r4. B = {l} ⟼ F = {f}
r5. R = {¬r} ⟼ O = {r}
r6. D = {¬i} ⟼ F = {¬r}
r7. F = {φ} ⟼ B = {φ}
In the above rules, we use I(φ) as a mnemonic equivalent to A3(φ) to denote that Ted intends φ. To illustrate how the bridge rules can be read, r1 represents that if Ted promised to φ, then he is obliged to intend φ, and r2 represents that if Ted is obliged to intend any φ, then he intends φ. The rest of the rules can be read in a similar way.
– S = {2}. This means that only the set of beliefs is closed under the meet/conjunction operation.

The representation of a first-person variant of a BDI agent as a LogA M theory should now be straightforward. The corresponding LogA M theory will contain a mental state A composed of three sets of attitudes representing the agent's beliefs, desires, and intentions. The bridge rules can be used to represent the axioms of BDI logics governing the interactions between the three mental attitudes.

4.2 From Syntax to Semantics

In this section, we present semantics for the syntax of LogA M in addition to defining an interpretation function. We start by presenting a key element in the semantics of LogA M, which is the notion of a LogA M structure.

Definition 8. A LogA M structure is a triple Sk = ⟨D, A, Ak⟩, where
– D is the domain of discourse containing a distinguished non-empty countable set of propositions P.
– A = ⟨P, +, ·, −, ⊥, ⊤⟩ is a complete, non-degenerate Boolean algebra.
– Ak = {ai | 1 ≤ i ≤ k} where ai : P −→ P, 1 ≤ i ≤ k, is a function modelling a mental attitude.
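As a toy illustration of Definition 8 (again our own, in Python), one can take P to be the powerset of a small set of worlds, with the Boolean operations given set-theoretically, and supply an attitude function ai : P −→ P. The particular ai below, a box-like operator induced by a hypothetical "alternatives" relation, is just one convenient way to manufacture a proposition-to-proposition function; LogA M does not prescribe a possible-worlds reading of attitudes, and indeed deliberately avoids committing to one.

```python
from itertools import combinations

worlds = {"w1", "w2", "w3"}

def propositions():
    """P: every subset of worlds is a proposition; top = worlds, bottom = the
    empty set, + = union, . = intersection, - = complement w.r.t. worlds."""
    ws = sorted(worlds)
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

# A hypothetical relation used only to manufacture a P -> P function;
# any map from propositions to propositions would satisfy Definition 8.
alternatives = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}

def a_belief(p: frozenset) -> frozenset:
    """a_i(p): the proposition that the agent bears attitude i towards p, here
    taken to hold at a world exactly when all of its alternatives are in p."""
    return frozenset(w for w in worlds if alternatives[w] <= p)

if __name__ == "__main__":
    print(len(propositions()))                 # 8 propositions over 3 worlds
    p = frozenset({"w1", "w2"})
    print(sorted(a_belief(p)))                 # ['w1', 'w2']
```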
A valuation V of a LogA M language is a triple ⟨Sk, Vf, Vx⟩, where Sk is a LogA M structure, Vf is a function that assigns to each function symbol an appropriate function on D, and Vx is a function mapping each variable to a corresponding element of the appropriate block of D. An interpretation of LogA M terms is given by a function [[·]]V.

Definition 9. Let L be a LogA M language and let V be a valuation of L. An interpretation of the terms of L is given by a function [[·]]V:
– [[x]]V = Vx(x), for a variable x
– [[c]]V = Vf(c), for a constant c
– [[f(t1, ..., tm)]]V = Vf(f)([[t1]]V, ..., [[tm]]V), for an m-adic (m ≥ 1) function symbol f
– [[(t1 ∧ t2)]]V = [[t1]]V · [[t2]]V
– [[(t1 ∨ t2)]]V = [[t1]]V + [[t2]]V
– [[¬t]]V = −[[t]]V
– [[∀x(t)]]V = ∏a∈D [[t]]V[a/x]
– [[Ai(t1)]]V = ai([[t1]]V)

In the rest of the paper, for any Γ ⊆ σP, we will use [[Γ]]V to denote ∏p∈Γ [[p]]V for notational convenience.

4.3 Logical Consequence

Having defined the syntax and semantics of LogA M, what remains is to define logical consequence. Since we are taking the algebraic route, we employ our notion of multifilters from Section 3 to define a consequence relation for each mental attitude in a LogA M theory. In Section 3, we defined multifilters based on an arbitrary partial order ⪯. We start by defining how to construct such an order for the tuples of propositions in P. The intuition is that the order is induced by the bridge rules in a LogA M theory, in addition to the natural order ≤ among the attitudes that observe the second condition of the definition of multifilters (closure under the meet/conjunction operation).

Definition 10. Let T = ⟨A, R, S⟩ be a LogA M theory and V a valuation. A TV-induced order, denoted ⪯TV, is a partial order over P^k with the following properties.
1. If i ∈ S and a ≤ b, then ({⊤}^(i−1) × {a} × {⊤}^(k−i)) ⪯TV ({⊤}^(i−1) × {b} × {⊤}^(k−i)).
2. If (A1, ..., Ak ⟼ A′1, ..., A′k) ∈ R, then ([[A1]]V, ..., [[Ak]]V) ⪯TV ([[A′1]]V, ..., [[A′k]]V).

At this point we observe that, if the bridge rules in a LogA M theory T observe some restrictions, then the order induced by the theory is classical (recall what it means for an order to be classical according to Definition 3).

Observation 1. Let T = ⟨(A1, ..., Ak), R, S⟩ be a LogA M theory, V a valuation, and ⪯TV a k partial-order on A. For every (A1, ..., Ak ⟼ A′1, ..., A′k) ∈ R, and for every i, j such that 1 ≤ i, j ≤ k and j ≠ i, ⪯TV is classical in i if and only if A′i ≠ {} just in case Aj = A′j = {} and [[Ai]]V ≤ [[A′i]]V.

We next utilise a multifilter based on a TV-induced order to define an extended logical consequence relation for each mental attitude.

Definition 11. Let T = ⟨(A1, ..., Ak), R, S⟩ be a LogA M theory and ⪯TV a TV-induced order. For every φ ∈ σP, φ is an Ai consequence of T, for 1 ≤ i ≤ k, denoted T |=Ai φ, if, for every valuation V, [[φ]]V ∈ Fi where ⟨F1, ..., Fk⟩ = F⪯TV(⟨[[A1]]V, ..., [[Ak]]V⟩, S).

We now inspect the properties of our extended consequence relations. The following theorem states that each |=Ai is monotonic and has the distinctive properties of classical Tarskian logical consequence. Further, if i ∈ S, then |=Ai observes a variant of the deduction theorem.

Theorem 2. Let T = ⟨(A1, ..., Ak), R, S⟩ and T′ = ⟨(A′1, ..., A′k), R′, S′⟩ be LogA M theories with S = S′.
1. If φ ∈ Ai for some Ai ∈ A, then T |=Ai φ.
2. If T |=Ai φ, Aj ⊆ A′j for all 1 ≤ j ≤ k, and R′ ⊆ R, then T′ |=Ai φ.
3. Let A′i = Ai ∪ {ψ} for some i such that 1 ≤ i ≤ k, A′j = Aj for all j ≠ i, and R′ = R. If T |=Ai ψ and T′ |=Ai φ, then T |=Ai φ.
4. Let A′i = Ai ∪ {φ} for some i such that 1 ≤ i ≤ k. If i ∈ S, R′ = R, and T′ |=Ai ψ, then T |=Ai φ ⇒ ψ.

In the remainder of this section, we go back to our running example, showing the consequences of the LogA M theory of Example 2. In what follows, let A′i = {φ | T |=Ai φ} for 1 ≤ i ≤ k.

Example 3. Recall the LogA M theory T in Example 2. In the following, we demonstrate the effect of the application of the bridge rules.
1. Initially, the applicable bridge rules are r1, r4, r5, and r6. This causes I(t) to be an obligation consequence of T, f a fear consequence of T, r an obligation consequence of T, and ¬r a fear consequence of T.
2. Once I(t) becomes an obligation consequence and f and ¬r become fear consequences, r2 and r7 become applicable. This causes t to be an intention consequence of T, and ¬r and f to be belief consequences of T. Adding ¬r to the belief consequences adds m to the belief consequences as well, due to the belief ¬r ⇒ m.
3. Finally, r3 becomes applicable. This causes c to become an intention consequence of T.

The following are the final consequences of T.
– Promises: A′1 = {t}.
– Beliefs: A′2 = {¬r ⇒ m, m ⇒ f, l, i ⇒ r ∧ t, ¬r, f, m}.
– Intentions: A′3 = {t, c}.
– Fears: A′4 = {f, ¬r}.
– Regrets: A′5 = {¬r}.
– Doubts: A′6 = {¬i}.
– Obligations: A′7 = {I(t), r}.

5 Incorporating Non-Monotonicity in LogA M

According to Theorem 2, the consequence relations for the different mental attitudes of LogA M have a monotonic nature. This means that newly acquired propositions in the different mental attitudes can never invalidate previous propositions. Moreover, the consistency of the mental attitudes of the agent is not guaranteed. These are inconvenient assumptions for some mental attitudes. For example, a natural consequence of typical incomplete knowledge about the world is that newly acquired beliefs can invalidate previous beliefs. Consequently, some intentions might be dropped as their supporting beliefs are no longer believed. Furthermore, it would also make sense that the beliefs and intentions (for instance) of the agent are always collectively consistent if we are to model a rational agent. For these reasons, in this section we informally describe a non-monotonic extension of LogA M where the consistency of selected mental attitudes is preserved. The extension we are proposing is a generalization of a framework we developed in [6] for non-monotonic practical reasoning with beliefs and motivations.

Towards incorporating non-monotonicity in LogA M, the first thing we do is associate grades with the different mental attitudes. The grades are reified objects with some total order on them and are taken to represent measures of trust or preference. Moreover, we define the agent's character as a total order over the mental attitudes whose collective consistency we wish to maintain. Whenever inconsistencies arise, the agent character and the grades of the contradictory propositions are utilised to resolve them. The agent character orders the attitudes from the least preferred to the most preferred. The agent always prefers to give up propositions from the least preferred attitude.
Similarly, the least preferred proposition is the proposition with the least grade. We also enforce that the consequents of the bridge rules are only graded propositions, to make sure that any newly added proposition has a grade that can be inspected if this newly added proposition causes a contradiction. To illustrate this, we go back to the LogA M theory in Example 2. We first show the modified theory after we add grades to the different attitudes and modify the consequents of the bridge rules.

Example 4. In what follows, we use P for A1, B for A2, I for A3, F for A4, R for A5, D for A6, and O for A7 for readability. A possible graded LogA M theory representing Example 1 is T = ⟨(A1, A2, A3, A4, A5, A6, A7), R, S⟩ where:
– A1 represents Ted's promises. A1 = {P(t, 3)}.
– A2 represents Ted's beliefs. A2 = {B(¬r ⇒ m, 1), B(m ⇒ f, 5), B(l, 10), B(i ⇒ r ∧ t, 4)}.
– A3 represents Ted's intentions. A3 = {}.
– A4 represents Ted's fears. A4 = {}.
– A5 represents Ted's regrets. A5 = {R(¬r, 6)}.
– A6 represents Ted's doubts. A6 = {D(¬i, 3)}.
– A7 represents Ted's obligations. A7 = {}.
– R is the set of instances of the following rule schemata, where φ and g are variables. These are a modified version of the rules in Example 2 that adds grades to the consequents of the bridge rules.
r1. P = {P(φ, g)} ⟼ O = {I(φ, g)}
r2. O = {I(φ, g)} ⟼ I = {I(φ, g)}
r3. I = {I(t, g)} ⟼ I = {I(c, g)}
r4. B = {B(l, g)} ⟼ F = {F(f, g)}
r5. R = {R(¬r, g)} ⟼ O = {O(r, g)}
r6. D = {D(¬i, g)} ⟼ F = {F(¬r, g)}
r7. F = {F(φ, g)} ⟼ B = {B(φ, g)}
– S = {2}.

Now consider the above graded LogA M theory and suppose that we only care that Ted's collective beliefs, promises, obligations, and intentions are consistent. Given that, for instance, Ted believes B(l, 10) and does not believe ¬l, it would make sense for him to accept l despite his uncertainty about it. Similarly, it would make sense for Ted to add t to his promises if it does not conflict with other beliefs, promises, obligations, or intentions. However, if we only use multifilters, we will never be able to reason with those nested graded attitudes, as they are not themselves in the agent's theory but are only graded by propositions thereof. For this reason, we extend our notion of multifilters into a more liberal notion of graded multifilters, to enable the agent to conclude, in addition to the consequences of the initial theory, attitudes graded by the initial attitudes (like l and t). Should this lead to contradictions, the agent's character and the grades of the contradictory propositions are used to resolve them. In what follows, we show how graded multifilters are used to get the set of consequences for each mental attitude.

Example 5. We first apply the bridge rules to T just as we did in Example 3. We get the following updated mental state.
– Promises: A′1 = A1.
– Beliefs: A′2 = A2 ∪ {B(f, 10), B(¬r, 3)}.
– Intentions: A′3 = {I(t, 3), I(c, 3)}.
– Fears: A′4 = {F(f, 10), F(¬r, 3)}.
– Regrets: A′5 = A5.
– Doubts: A′6 = A6.
– Obligations: A′7 = {I(t, 3), O(r, 6)}.

Next, we admit the graded attitudes in the initial theory. The following becomes the updated mental state of the agent.
– Promises: A″1 = A′1 ∪ {t}.
– Beliefs: A″2 = A′2 ∪ {¬r ⇒ m, m ⇒ f, l, i ⇒ r ∧ t, f, ¬r, m}.
– Intentions: A″3 = {t, c}.
– Fears: A″4 = {f, ¬r}.
– Regrets: A″5 = A′5 ∪ {¬r}.
– Doubts: A″6 = A′6 ∪ {¬i}.
– Obligations: A″7 = {r}.
Note that we add m to the agent's beliefs not because it was extracted out of a graded belief, but because it follows from ¬r ⇒ m and ¬r, which was just extracted out of B(¬r, 3). At this point, Ted's beliefs and obligations are contradictory, as his beliefs include ¬r and his obligations include r. This is where the agent's character comes into play. If Ted's character prefers to give up his beliefs, then ¬r will be retracted from his beliefs. Otherwise, r will be given up as an obligation. Now suppose that Ted acquires the new belief that he will not be fired, B(¬f, 15). We extract ¬f and add it to Ted's beliefs. Once we do this, Ted's set of beliefs becomes contradictory, as it contains ¬f and f. Since the contradiction is now within the same attitude, we appeal to the grades of the contradictory propositions. Since ¬f has the grade 15 and f has the grade 10, f will be kicked out of Ted's beliefs, resolving the contradiction.

In general, to resolve inconsistencies among the attitudes we select to be consistent, we always remove the propositions with the lowest grade in the least preferred attitude according to the agent character. Next, any propositions supported only by the removed propositions are removed as well. If two contradictory propositions have the same grade, they both go away.

6 Conclusion

In this paper, we presented a general algebraic framework for reasoning with mental states. We also provided semantics for an algebraic logic, LogA M, in which any set of mental attitudes can be represented. We defined a monotonic consequence relation for each mental attitude and showed that the consequence relations observe the distinctive properties of Tarskian logical consequence. Moreover, we informally described how LogA M can be extended to handle non-monotonic reasoning with graded mental attitudes. We are currently working on a proof theory for the non-monotonic version of LogA M. Reasons for the different mental attitudes are to be computed in the same way reason-maintenance systems compute supports for beliefs. Hence, the end result will be a versatile framework for reasoning with graded mental attitudes, giving rise to an explainable AI system.

References

1. James Allen. Towards a general theory of action and time. Artificial Intelligence, 23:123–154, 1984.
2. George Bealer. Theories of properties, relations, and propositions. The Journal of Philosophy, 76(11):634–648, 1979.
3. Ronen I. Brafman and Moshe Tennenholtz. Modeling agents as qualitative decision makers. Artificial Intelligence, 94(1-2):217–268, 1997.
4. Jan Broersen, Mehdi Dastani, Joris Hulstijn, Zisheng Huang, and Leendert van der Torre. The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In Proceedings of the Fifth International Conference on Autonomous Agents, pages 9–16, 2001.
5. Alonzo Church. On Carnap's analysis of statements of assertion and belief. Analysis, 10(5):97–99, 1950.
6. Nourhan Ehab and Haythem O. Ismail. Algebraic foundations for non-monotonic practical reasoning. In Ivan José Varzinczak and María Vanina Martínez, editors, Proceedings of the 18th International Workshop on Non-Monotonic Reasoning (NMR 2020), 2020. To appear.
7. Nourhan Ehab and Haythem O. Ismail. LogA G: An algebraic non-monotonic logic for reasoning with graded propositions. Annals of Mathematics and Artificial Intelligence, 2020.
8. Nourhan Ehab and Haythem O. Ismail. Reasoning with artificial mental states: An algebraic approach. Technical report, German University in Cairo, 2020. https://met.guc.edu.eg/Repository/Faculty/Publications/950/FCR2020-Appendix.pdf.
9. Kenneth M. Ford, Patrick J. Hayes, Clark Glymour, and James Allen. Cognitive orthoses: toward human-centered AI. AI Magazine, 36(4):5–8, 2015.
10. Harry G. Frankfurt. Freedom of the will and the concept of a person. In What is a Person?, pages 127–144. Springer, 1988.
11. Haythem O. Ismail. LogA B: A first-order, non-paradoxical, algebraic logic of belief. Logic Journal of the IGPL, 20(5):774–795, 2012.
12. Haythem O. Ismail. Stability in a commonsense ontology of states. In Proceedings of the Eleventh International Symposium on Logical Formalization of Commonsense Reasoning (COMMONSENSE 2013), 2013.
13. Haythem O. Ismail. The good, the bad, and the rational: Aspects of character in logical agents. In Alia ElBolock, Yomna Abdelrahman, and Slim Abdennadher, editors, Character Computing. Springer, 2020.
14. Hector J. Levesque. Making believers out of computers. Artificial Intelligence, 30(1):81–108, 1986.
15. John McCarthy. Ascribing mental qualities to machines. In Philosophical Perspectives in Artificial Intelligence, pages 161–195. Humanities Press, 1979.
16. John McCarthy. The little thoughts of thinking machines. Psychology Today, 17(12):46–49, 1983.
17. John McCarthy and Patrick Hayes. Some philosophical problems from the standpoint of artificial intelligence. In D. Meltzer and D. Michie, editors, Machine Intelligence, volume 4, pages 463–502. Edinburgh University Press, Edinburgh, Scotland, 1969.
18. Paul McNamara. Deontic logic. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, fall 2018 edition, 2018.
19. Allen Newell. The knowledge level. Artificial Intelligence, 18(1):87–127, 1982.
20. Pietro Panzarasa, Nicholas R. Jennings, and Timothy J. Norman. Formalizing collaborative decision-making and practical reasoning in multi-agent systems. Journal of Logic and Computation, 12(1):55–117, 2002.
21. Terence Parsons. On denoting propositions and facts. Philosophical Perspectives, 7:441–460, 1993.
22. Donald Perlis. Languages with self-reference II: Knowledge, belief, and modality. Artificial Intelligence, 34(2):179–212, 1988.
23. Anand S. Rao and Michael P. Georgeff. BDI agents: From theory to practice. In ICMAS, volume 95, pages 312–319, 1995.
24. H. P. Sankappanavar and Stanley Burris. A Course in Universal Algebra. Graduate Texts in Mathematics, 78, 1981.
25. Stuart C. Shapiro. Belief spaces as sets of propositions. Journal of Experimental & Theoretical Artificial Intelligence, 5(2-3):225–235, 1993.
26. Yoav Shoham. An overview of agent-oriented programming. Software Agents, 4:271–290, 1997.
27. Yoav Shoham and Steve B. Cousins. Logics of mental attitudes in AI. In Foundations of Knowledge Representation and Reasoning, pages 296–309. Springer, 1994.
28. Hans van Ditmarsch, Joseph Halpern, and Barteld Kooi. An introduction to logics of knowledge and belief. In Hans van Ditmarsch, Joseph Halpern, Wiebe van der Hoek, and Barteld Kooi, editors, Handbook of Epistemic Logic, pages 1–51. College Publications, 2015.