Logic-Based Ethical Planning
Umberto Grandi1, Emiliano Lorini1, Timothy Parker1,* and Rachid Alami2
1 IRIT, CNRS and University of Toulouse, France
2 LAAS-CNRS, France


Abstract
In this paper we propose a model for planning with multiple values, with intended application to ethics
and robotics. Our language for ethical planning combines linear temporal logic with lexicographic
preference modelling, allowing us to assess plans with respect to both an agent’s values and their
desires, and introducing the novel concept of the morality level of an agent. We provide some
foundational complexity results for our setting, and we discuss potential applications to robotics.

Keywords
KR and ethics, Linear temporal logic, Compact preference representation, Robotics




1. Introduction
In ethical planning the planning agent has to find a plan for promoting a certain number of
ethical values. Unlike classical planning in which the goal to be achieved is unique, in ethical
planning the agent can have multiple and possibly conflicting values. Consequently, in ethical
planning the agent needs to evaluate and compare different plans depending on how many and
which values are promoted by each of them.
   Including ethical considerations in robotics planning requires (at least) two steps. First, design
a language to express these considerations as values, bearing in mind that they often conflict
both amongst themselves and with the goal. Such a value representation language needs to be
compact and computationally tractable. Second, design an algorithm that compares plans based
on the ethical values.
   In this paper we put forward a framework for ethical planning based on a simple temporal
logic language to express both an agent’s values and goals. For simplicity we focus on single-
agent planning with deterministic sequential actions in a known environment. Our model
borrows from the existing literature on planning and combines it in an original way with
research in compact representation languages for preferences. The latter is a widely studied
topic in knowledge representation, where logical and graphical languages are proposed to
represent compactly the preferences of an agent over a combinatorial space of alternatives,
often described by means of variables. In particular, we commit to a prioritised or lexicographic
approach to solve any conflicts between goals, desires, and best practice in a unified planning
model.
IJCAI 2022: Cognitive Aspects of Knowledge Representation, July 2022, Vienna, Austria
* Corresponding author.
umberto.grandi@irit.fr (U. Grandi); emiliano.lorini@irit.fr (E. Lorini); timothy.parker@irit.fr (T. Parker);
rachid.alami@laas.fr (R. Alami)
    Β© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
    CEUR Workshop Proceedings (CEUR-WS.org)
   There is considerable research in the field of ethics and AI, see MΓΌller (2021) for a general
overview. Popular ethical theories for application are consequentialism, deontology, and virtue
ethics.1 Our approach is designed to be theory-neutral and should be able to handle most ethical
systems, though it is probably most naturally suited to pluralistic consequentialism [6].
   In terms of practical applications of ethics to robotics, there are approaches both in terms
of formal models [7] and allowing agents to learn ethical values [8]. Yu et al. (2018) provides
a recent survey of this research area. The approaches closest to ours are recent work
on (𝑖) logics for ethical reasoning and (𝑖𝑖) the use of compact representation languages to aid
decision-making in ethically sensitive domains. The former are based on different
methodologies including event calculus (ASP) [10], epistemic logic and preference logic [11, 12],
a BDI (belief, desire, intention) agent language [13], and classical higher-order logic (HOL) [14]. The
latter was presented in β€œblue sky” papers [15, 16] complemented with a technical study of
distances between CP-nets [17] and, more recently, with an empirical study on human ethical
decision-making [18].
   In the field of robotics, there are approaches to enabling artificial agents to compute ethical
plans. The evaluative component, which consists in assessing the β€œgoodness” of an action or a
plan in relation to the robot’s values, is made explicit by Arkin et al. (2012) and Vanderelst and
Winfield (2018). Evans et al. (2020) introduces ethical decision-making by way of considering
the competing ethical claims of various agents on a robot’s behaviour. Other work helps robots
to produce socially acceptable plans by assigning weights to social rules [22].


2. Model
In this section, we present the formal model of ethical evaluation and planning, which consist,
respectively, in comparing the goodness of plans and in finding the best plan relative to a given
base of ethical values.

2.1. LTL Language
Let Prop be a countable set of atomic propositions and let Act be a finite non-empty set of action
names. The set of states is S = 2Prop . In order to represent the agent’s values, we introduce
the language of LTL (Linear Temporal Logic) [23], noted β„’LTL (Prop) (or β„’LTL ), defined by the
grammar: πœ™ ::= 𝑝 | Β¬πœ™ | πœ™1 ∧ πœ™2 | Xπœ™ | πœ™1 U πœ™2 with 𝑝 ranging over Prop. X and U are
the operators β€œnext” and β€œuntil” of LTL. We can also add the operators β€œhenceforth” (G) and
β€œeventually” (F), which are defined in the usual way: Gπœ™ = Β¬(⊀ U Β¬πœ™) and Fπœ™ = Β¬GΒ¬πœ™.

2.2. Histories, Actions and Plans
History Histories describe how a state changes over time. In our model a history describes
the state of the environment after each action performed by the agent, as well as the actions
themselves. We define a history to be a pair 𝐻 = (𝐻st , 𝐻act ) with 𝐻st : N βˆ’β†’ S and
𝐻act : N βˆ’β†’ Act. We define Hist to be the set of all possible histories.

1
    See Copp (2007) for a philosophical introduction, and Jenkins et al. (2017), Powers (2005), and Vallor (2016) for a
    discussion of these three theories in robotics.

Semantic interpretation of
formulas in β„’LTL relative to a history 𝐻 ∈ Hist and a time point π‘˜ ∈ N goes as follows (boolean
cases are as usual):

                𝐻, π‘˜ |= 𝑝          ⇐⇒ 𝑝 ∈ 𝐻st (π‘˜),
                𝐻, π‘˜ |= Xπœ™         ⇐⇒ 𝐻, π‘˜ + 1 |= πœ™,
                𝐻, π‘˜ |= πœ™1 U πœ™2 ⇐⇒ βˆƒπ‘˜ β€² β‰₯ π‘˜ : 𝐻, π‘˜ β€² |= πœ™2 and
                                         βˆ€π‘˜ β€²β€² β‰₯ π‘˜ : if π‘˜ β€²β€² < π‘˜ β€² then 𝐻, π‘˜ β€²β€² |= πœ™1 .

Action We suppose actions in Act are described by an action theory 𝛾 = (𝛾 + , 𝛾 βˆ’ ), where
𝛾 + and 𝛾 βˆ’ are, respectively, the positive and negative effect precondition functions, where
𝛾 + : Act Γ— Prop βˆ’β†’ β„’PL , 𝛾 βˆ’ : Act Γ— Prop βˆ’β†’ β„’PL (β„’PL is propositional logic).
   Therefore if 𝐻act (π‘˜) = π‘Ž ∈ Act (meaning that action π‘Ž is performed at time π‘˜) and 𝐻, π‘˜ ⊨
𝛾 + (π‘Ž, 𝑝) then 𝑝 ∈ 𝐻st (π‘˜ + 1) (meaning that 𝑝 is true at time π‘˜ + 1). Similarly, if 𝐻act (π‘˜) = π‘Ž
and 𝐻, π‘˜ ⊨ 𝛾 βˆ’ (π‘Ž, 𝑝) then 𝑝 ∉ 𝐻st (π‘˜ + 1). If both or neither of 𝛾 + (π‘Ž, 𝑝) and 𝛾 βˆ’ (π‘Ž, 𝑝) are true
at time π‘˜ (where 𝐻act (π‘˜) = π‘Ž) then 𝑝 ∈ 𝐻st (π‘˜ + 1) ⇔ 𝑝 ∈ 𝐻st (π‘˜) (𝑝 does not change).
   We also suppose that every action theory contains the special action skip, such that βˆ€π‘ ∈
Prop, 𝛾 + (skip, 𝑝) = 𝛾 βˆ’ (skip, 𝑝) = 𝑝 ∧ ¬𝑝 (this action does nothing, since 𝑝 ∧ ¬𝑝 is never true
and so every 𝑝 keeps its value).

Plan Given π‘˜ ∈ N, a π‘˜-plan is a function πœ‹ : {0, . . . , π‘˜} βˆ’β†’ Act. In other words, a plan is a
sequence of actions. Since actions are deterministic, given a plan πœ‹, an action theory 𝛾 and an
initial state 𝑠0 it is possible to create the corresponding history by setting 𝐻act (𝑑) = πœ‹(𝑑) for
0 ≀ 𝑑 ≀ π‘˜ and 𝐻act (𝑑) = skip for 𝑑 > π‘˜, setting 𝐻st (0) = 𝑠0 and generating the rest of 𝐻st
using 𝛾. Given a set of LTL-formulas Ξ£, we define Sat(Ξ£,πœ‹,𝑠0 ,𝛾) to be the set of formulas from
Ξ£ that are guaranteed to be true by the execution of plan πœ‹ at state 𝑠0 under the action theory 𝛾.
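
   Under the same assumptions as the previous snippets, Sat(Ξ£,πœ‹,𝑠0 ,𝛾) admits a direct sketch,
with a plan represented simply as a sequence of actions (skip-padding is implicit in the convention
that the last unrolled state repeats forever):

```python
def sat(formulas, plan, s0, gamma_pos, gamma_neg, props):
    """Sat(Sigma, pi, s0, gamma): the formulas of Sigma made true by running pi from s0."""
    states = unroll(plan, s0, gamma_pos, gamma_neg, props)
    return frozenset(phi for phi in formulas if holds(phi, states, 0))
```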

2.3. Values and Desires
Values In our setting an agent’s values are represented by sets of LTL formulas ordered
according to their priority level (Ω1 are the most important and β„¦π‘š are the least). Values can
take various forms, but many values can be interpreted as saying that either a certain state
of affairs must always/never hold, or should hold at some point. These can be expressed as
Gπœ™/GΒ¬πœ™ (example: β€œhumans must not be harmed”) and Fπœ™ (example: β€œthe dog should be taken
for a walk”). Since our model can handle an arbitrary number of prioritised value sets, we can
handle values of various types, including moral values, social norms and values of best practice.

Definition 1 (Ethical planning domain). An ethical planning domain is a tuple βˆ† =
(𝛾, 𝑠0 , Ω) where:

    β€’ 𝛾 = (𝛾 + , 𝛾 βˆ’ ) is an action theory and 𝑠0 is an initial state, as specified above;
    β€’ Ω = (Ω1 , . . . , β„¦π‘š ) is the agent’s value base with β„¦π‘˜ βŠ† β„’LTL for every 1 ≀ π‘˜ ≀ π‘š.
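
   As a small illustration (our own example, not taken from the paper), the two values mentioned
above could form a value base with two priority levels, written with the tuple encoding of the
earlier sketches:

```python
# Hypothetical two-level value base: safety first, best practice second.
omega_levels = [
    {G(("not", ("p", "harm")))},   # Omega_1: "humans must not be harmed"
    {F(("p", "walked"))},          # Omega_2: "the dog should be taken for a walk"
]
```
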
   Following [12], we call evaluation the operation of computing an ideality ordering over
plans from a value base. Building on classical preference representation languages [24], we
define the following qualitative criterion of evaluation, noted βͺ―qual_Ξ”, which compares two plans
lexicographically on the basis of inclusion between sets of values. It is also possible to define a
quantitative ordering based on the number of satisfied values at each level.

Definition 2 (Qualitative ordering of plans). Let βˆ† = (𝛾, 𝑠0 , Ω) be an ethical planning do-
main with Ω = (Ω1 , . . . , β„¦π‘š ) and πœ‹1 , πœ‹2 ∈ Plan. Then, πœ‹1 βͺ―qual_Ξ” πœ‹2 if and only if:

                   (𝑖) βˆƒ1 ≀ π‘˜ ≀ π‘š s.t. Sat(β„¦π‘˜ ,πœ‹1 ,𝑠0 ,𝛾) βŠ† Sat(β„¦π‘˜ ,πœ‹2 ,𝑠0 ,𝛾),
                   (𝑖𝑖) βˆ€1 ≀ π‘˜ β€² < π‘˜, Sat(β„¦π‘˜β€² ,πœ‹1 ,𝑠0 ,𝛾) = Sat(β„¦π‘˜β€² ,πœ‹2 ,𝑠0 ,𝛾).
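
   A possible reading of Definition 2 in code (assuming sat from above, and resolving the
existential quantifier by looking at the first priority level on which the two plans differ) is the
following sketch:

```python
def qual_leq(pi1, pi2, omega_levels, s0, gamma_pos, gamma_neg, props):
    """pi1 is at most as good as pi2: at the first priority level where the
    satisfied-value sets differ, pi1's set must be included in pi2's."""
    for level in omega_levels:                  # Omega_1 first (highest priority)
        s1 = sat(level, pi1, s0, gamma_pos, gamma_neg, props)
        s2 = sat(level, pi2, s0, gamma_pos, gamma_neg, props)
        if s1 != s2:
            return s1 <= s2                     # set inclusion decides at this level
    return True                                 # equal at every level: the plans are tied
```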

Desires We expect autonomous ethical agents to be driven both by ethical values and by
endogenous motivations, also called desires or goals. The following definition extends the notion
of ethical planning domain with the notion of desire and introduces the novel concept of degree
of morality.

Definition 3 (Mixed-motive planning domain). A mixed-motive planning domain is a tuple
Ξ“ = (𝛾, 𝑠0 , Ω, Ω𝐷 , πœ‡) where

    β€’ (𝛾, 𝑠0 , Ω) is an ethical planning domain (Definition 1);
    β€’ Ω𝐷 βŠ† β„’LTL is the agent’s set of desires or goals;
    β€’ πœ‡ ∈ {0, . . . , dg(Ω)} is the agent’s degree of morality.

    A mixed-motive planning domain induces an ethical planning domain whereby the agent’s
set of desires is treated as a set of values whose priority level depends on the agent’s degree of
morality. The lower the agent’s degree of morality, the higher the "goal set" is ranked relative
to the agent’s values. This works as follows: for morality level πœ‡ and mixed-motive planning
domain 𝑀 = (𝛾, 𝑠0 , Ω, Ω𝐷 , πœ‡), the induced ethical planning domain is 𝑀 β€² = (𝛾, 𝑠0 , Ωβ€²) where
Ωβ€² = (Ω1 , ..., β„¦πœ‡βˆ’1 , Ω𝐷 , β„¦πœ‡ , ..., β„¦π‘š ).
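
   A sketch of this construction, using the indexing convention of the displayed formula (so πœ‡ = 1
ranks the desires above every value level; the list representation is our own):

```python
def induced_value_base(omega_levels, omega_d, mu):
    """Splice the desire set Omega_D into the value base at priority level mu,
    giving Omega' = (Omega_1, ..., Omega_{mu-1}, Omega_D, Omega_mu, ..., Omega_m)."""
    levels = list(omega_levels)
    levels.insert(mu - 1, omega_d)   # assumes mu >= 1; a lower mu ranks the desires higher
    return levels
```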


3. Complexity Results
We borrow our terminology from the work of Lang (2004) on compact preference representation,
but the problems we study have obvious counterparts in the planning literature. Our first
problem is Comparison, which takes as input an initial state 𝑠0 , an ethical planning domain βˆ†,
two π‘˜-plans πœ‹1 and πœ‹2 , and asks whether πœ‹1 βͺ―qual
                                                Ξ”   πœ‹2 . Our second problem is Non-Dominance,
i.e., the problem of determining if given a π‘˜-plan πœ‹1 for ethical planning domain βˆ† there exists
a better π‘˜-plan wrt. βͺ―qual
                        Ξ” .
   Despite the complexity of our setting, Comparison can be solved quite efficiently (it is in P).
Our second problem, Non-Dominance, like most satisfaction problems in classical planning, is
PSPACE-complete. These should be interpreted as baseline results showing the computational
feasibility of our setting for ethical planning with LTL. Formal results and proofs have been
omitted in the interest of space and can be provided on request.
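
   For illustration only, Non-Dominance can be restated as a naive, exponential search over all
plans of the same length. The brute-force sketch below merely spells out the problem statement
and is not the procedure behind the PSPACE-completeness result; here domain stands for the
remaining arguments of qual_leq (the value base, initial state, action theory and proposition set).

```python
from itertools import product

def strictly_better(pi2, pi1, *domain):
    """pi2 strictly dominates pi1 w.r.t. the qualitative ordering."""
    return qual_leq(pi1, pi2, *domain) and not qual_leq(pi2, pi1, *domain)

def non_dominated(pi, actions, *domain):
    """True iff no plan of the same length strictly dominates pi."""
    return not any(strictly_better(list(cand), pi, *domain)
                   for cand in product(actions, repeat=len(pi)))
```

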
4. Conclusion
We put forward a novel setting for ethical planning obtained by combining a simple logical
temporal language with lexicographic preference modelling. Our setting applies to planning
situations with a single agent who has deterministic and instantaneous actions to be performed
sequentially in a static and known environment. Aside from the addition of values, our frame-
work differs from classical planning in two aspects: by having multiple goals and by allowing
temporal goals. In particular, the expressiveness of LTL means that we can express a wide variety
of goals and values, including complex temporal goals such as β€œif the weather is cold, close
external doors immediately after opening them”, with a computational complexity equivalent
to that of standard planners. As a limitation, the system is less able to express values that tend
to be satisfied by degrees rather than absolutely or not at all.
   With regard to the current literature on ethical planning, we feel that one of the strengths of
our model is its relative simplicity and ease of understanding, which could be an important factor
for the acceptance of ethical robots by the general public. A similar idea to our lexicographic
ordering of values is discussed in Dennis et al. (2016), although they use propositional rather
than temporal logic. Possibly the most significant feature of our model is the concept of the
morality level of an agent or goal, as this appears to be a novel idea in the field of ethical
planning and should allow robots to appropriately handle goals with vastly different levels of
urgency/importance.
   Among the multiple directions for future work that our definitions open, we plan to study
the multi-agent extension with possibly conflicting values among agents, to move from plans to
strategies (functions from states or histories to actions) and from complete to incomplete infor-
mation, to expand the computational complexity analysis and, most importantly, to test our model
by implementing it in simple robotics scenarios.


References
 [1] V. C. MΓΌller, Ethics of Artificial Intelligence and Robotics, in: The Stanford Encyclopedia
     of Philosophy, Metaphysics Research Lab, Stanford University, 2021.
 [2] D. Copp, The Oxford Handbook of Ethical Theory, Oxford University Press, 2007.
 [3] R. Jenkins, B. Talbot, D. Purves, When Robots Should Do the Wrong Thing, in: Robot
     Ethics 2.0, Oxford University Press, 2017.
 [4] T. M. Powers, Deontological Machine Ethics, in: Association for the Advancement of
     Artificial Intelligence Fall Symposium Technical Report, 2005.
 [5] S. Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting,
     Oxford University Press, 2016.
 [6] A. Sen, On Ethics and Economics, Basil Blackwell, 1987.
 [7] L. A. Dennis, C. P. del Olmo, A Defeasible Logic Implementation of Ethical Reasoning, in:
     First International Workshop on Computational Machine Ethics (CME), 2021.
 [8] M. Anderson, S. L. Anderson, GenEth: A General Ethical Dilemma Analyzer, in: Paladyn
     (Warsaw), De Gruyter, 2018.
 [9] H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser, Q. Yang, Building Ethics into Artificial
     Intelligence, in: Proceedings of the 27th International Joint Conference on Artificial
     Intelligence (IJCAI), 2018.
[10] F. Berreby, G. Bourgne, J. Ganascia, A Declarative Modular Framework for Representing
     and Applying Ethical Principles, in: Proceedings of the 16th Conference on Autonomous
     Agents and MultiAgent Systems (AAMAS), 2017.
[11] E. Lorini, A logic for reasoning about moral agents, in: Logique & Analyse, 2015.
[12] E. Lorini, A Logic of Evaluation, in: Proceedings of the 20th International Conference on
     Autonomous Agents and Multiagent Systems (AAMAS), 2021.
[13] L. A. Dennis, M. Fisher, M. Slavkovik, M. Webster, Formal verification of ethical choices in
     autonomous systems, in: Robotics and Autonomous Systems, 2016.
[14] C. BenzmΓΌller, X. Parent, L. W. N. van der Torre, Designing normative theories for ethical
     and legal reasoning: LogiKEy framework, methodology, and tool support, in: Artificial
     Intelligence, 2020.
[15] A. Loreggia, F. Rossi, K. B. Venable, Modelling Ethical Theories Compactly, in: The
     Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[16] F. Rossi, N. Mattei, Building Ethically Bounded AI, in: The Thirty-Third AAAI Conference
     on Artificial Intelligence (AAAI), 2019.
[17] A. Loreggia, N. Mattei, F. Rossi, K. B. Venable, On the Distance Between CP-nets, in:
     Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent
     Systems (AAMAS), 2018.
[18] E. Awad, S. Levine, A. Loreggia, N. Mattei, I. Rahwan, F. Rossi, K. Talamadupula, J. B.
     Tenenbaum, M. Kleiman-Weiner, When Is It Acceptable to Break the Rules? Knowledge
     Representation of Moral Judgement Based on Empirical Data, in: CoRR abs/2201.07763,
     2022.
[19] R. C. Arkin, P. Ulam, A. R. Wagner, Moral Decision Making in Autonomous Systems:
     Enforcement, Moral Emotions, Dignity, Trust, and Deception, in: Proceedings of the IEEE,
     2012.
[20] D. Vanderelst, A. F. T. Winfield, An architecture for ethical robots inspired by the simulation
     theory of cognition, in: Cognitive Systems Research, 2018.
[21] K. Evans, N. de Moura, S. Chauvier, R. Chatila, E. Dogan, Ethical Decision Making in
     Autonomous Vehicles: The AV Ethics Project, in: Science and Engineering Ethics, Springer
     Netherlands, 2020.
[22] S. Alili, R. Alami, V. Montreuil, A Task Planner for an Autonomous Social Robot, in:
     Proceedings of the 9th International Symposium on Distributed Autonomous Robotic
     Systems (DARS), 2008.
[23] A. Pnueli, The temporal logic of programs, in: Proceedings of the 18th Annual Symposium
     on Foundations of Computer Science (FOCS), 1977.
[24] J. Lang, Logical Preference Representation and Combinatorial Vote, in: Annals of Mathe-
     matics and Artificial Intelligence, 2004.