Deontic Counteridenticals and the Design of Ethically Correct Intelligent Agents: First Steps

Selmer Bringsjord • Rikhiya Ghosh • James Payne-Joyce
Rensselaer AI & Reasoning (RAIR) Lab • RPI • Troy NY 12180 USA

[1] We are indebted, immeasurably, to ONR and AFOSR for funding that has enabled the inauguration, described herein, of r&d in the ethical control of artificial intelligent agents via deontic counteridenticals.

Abstract. Counteridenticals, as a sub-class of counterfactuals, have been briefly noted, and even briefly discussed, by some thinkers. But counteridenticals of an "ethical" sort apparently haven't been analyzed to speak of, let alone formalized. This state-of-affairs may be quite unfortunate, because deontic counteridenticals may well be the key part of a new way to rapidly and wisely design ethically correct autonomous artificial intelligent agents (AAIAs). We provide a propaedeutic discussion and demonstration of this design strategy (which is at odds with the strategy our own lab has heretofore followed in ethical control), one involving AAIAs in our lab.

1 Introduction

If you were an assassin for the Cosa Nostra, you would be obligated to leave your line of work. The previous sentence (very likely true, presumably) is, to our knowledge, a rare type of counteridentical statement that has received scant attention: viz., a deontic counteridentical. Counteridenticals simpliciter, as a sub-class of counterfactuals, have been briefly noted, and even briefly discussed, by some thinkers. But counteridenticals of an "ethical" sort apparently haven't been rigorously analyzed, let alone formalized. This state-of-affairs may be quite unfortunate, because deontic counteridenticals may well be the linchpin of a new way to rapidly and wisely design ethically correct autonomous artificial intelligent agents (AAIAs). For example, what if AAIA2, seeing the lauded ethically correct conduct of AAIA1 in context c, reasons to itself, when later in c as well: "If I were AAIA1, I would be obligated to refrain from doing α. Hence I will not do α."? The idea here is that α is a forbidden action, and that AAIA2 has quickly learned that it is indeed forbidden, by somehow appropriating to itself the "ethical nature" of AAIA1. We provide a propaedeutic discussion and demonstration of this design strategy, one involving AAIAs in our lab. This design strategy for ethical control is intended to be much more efficient than the more laborious, painstaking logic-based approach our lab has followed in the past; but on the other hand, as will become clear, this approach relies heavily not only on formal computational logic, but also on computational linguistics for crucial contributions.

2 Counteridenticals, Briefly

Counteridenticals have been defined in different ways by philosophers and linguists; most of these ways define a large area of intersection in terms of what should count as a counteridentical. A broad and inclusive definition is given by Waller et al. (2013), who describe them as "statements concerning a named or definitely described individual where the protasis falsifies one of his properties." 'Protasis' here refers to the traditional grammatical sense of the subordinate clause of a conditional sentence. By this definition, a sentence like "If the defendant had driven with ordinary care, the plaintiff would not have sustained injury" would be treated as a counteridentical. However, though a counteridentical sense can be attributed to such a statement, the two agents/entities in question are not really identified. (Such a statement is therefore classified by us as a shallow counteridentical.) Counteridenticals are hence described mostly as counterfactuals whose antecedent (= the left-side "if" part) involves a comparison of two incompatible entities within the purview of a "deep" pragmatic interpretation; these we classify as deep counteridenticals. A similar definition of counteridenticals is given by Sharpe (1971), who requires an individual to turn into a numerically different individual for the protasis to be true in a subjunctive conditional. With the purpose of exploring scenarios in which the protasis can hold, this paper delves into possibilities of a de jure change of identities, to finally conclude that counteridenticals are more pragmatic in sense than other types of counterfactuals. Pollock (1976) agrees with the above depiction, but he stresses the equivalence of the identities in the antecedent. For the purpose of this paper, we affirm the generally accepted definition and use Pollock's refinement to arrive at our classification of counteridenticals.

3 Some Prior Work on Counteridenticals

Precious little has been written about counteridenticals. What coverage there is has largely been in the same breath as discussion of counterfactuals; therefore, treatment has primarily been associated with the principles governing counterfactuals that apply to counteridenticals at large. Dedicated investigation of counteridenticals that have deep semantic or pragmatic importance has only been hinted at. Nonetheless, we now quickly summarize prior work.

3.1 Pollock

Pollock (1976) introduces counteridenticals when he discusses the pragmatic ambiguity of subjunctives, as proposed by Chisholm (1955). However, contra Chisholm, Pollock argues that this ambiguity owes its origin to ambiguities in natural languages. He also points out that a true counteridentical must express the outright equivalence of the two entities in its antecedent, and not merely require an atomistic intersection of their adventitious properties for the protasis to hold. He introduces subject reference in analyzing counteridenticals and distinguishes between preferred subject conditionals and simple subjunctive conditionals. If the antecedent form is "If A were B," whether the consequent affects A or B determines whether the overall locution is of the simple subjunctive type or the preferred subject type. Although we do not concur with Pollock's rather rigid definitions or subscribe entirely to his classification scheme, his thinking informs our system for classifying deontic counteridenticals: we follow him in distinguishing in our formulae between those that make only casual reference to A being B, versus cases where A is B.
3.2 Declerck and Reed

Declerck & Reed's (2001) treatment of counteridenticals touches upon some important aspects of their semantic interpretation, which leverages syntactic elements. Through discussion of speaker deixis, their work explores co-reference resolution and hints at the role of the speaker in the pragmatic resolution of a counteridentical. There are powerful observations in (Declerck & Reed 2001) on the extraction of temporal information from a counteridentical. In addition, in their approach a basic sense of the purpose and mood of a sentence can also be gleaned from the verb form in the statement, and we have used this in our own algorithm for detecting deontic counterfactuals.

3.3 In Economics

We suspect the majority of our readers will be surprised to learn that the concepts underlying counteridenticals are quite important in economics, at least in some sub-fields thereof. This is made clear in elegant and insightful fashion by Adler (2014). The kernel of the centrality of counteridenticals in some parts of economics is that interpersonal measurement of utility and preferences presupposes such notions: that if A were B, A would, like B, prefer or value some type of state-of-affairs in a particular way. In short, economics often assumes that rational agents can "put themselves in every other agent's shoes." After Adler (2014) points this out, he rejects as too difficult the project of formalizing counteridenticals, and proposes an approach that ignores them. Our attitude is the exact opposite, since we seek to formalize and implement reasoning about and over counteridenticals, by AAIAs.

3.4 Other Treatments

Paul Meehl asks a penetrating question that aligns with our reluctance to fully adopt Pollock's definition of counteridenticals: Which properties of the compared entities should be considered for the statement in question to be true? He devises a modified possible-world model, called the world-family concept, which, assisted by exclusion rules that avoid paradoxical metaphysics, can result in a good set of such properties.

4 Prior RAIR-Lab Approach to Ethical Control

Hitherto, Bringsjord-led work on machine/robot ethics has been unwaveringly logicist (e.g., see Govindarajulu & Bringsjord 2015); this ethos follows an approach he has long set for human-level AI (e.g., see Bringsjord & Ferrucci 1998, Bringsjord 2008b) and its sister field, computational cognitive modeling (e.g., see Bringsjord 2008a). In fact, the basic approach of using computational formal logic to ensure ethically controlled AAIAs can be traced back, in the case of Bringsjord and collaborators, to (Arkoudas, Bringsjord & Bello 2005, Bringsjord, Arkoudas & Bello 2006). Recently, Bringsjord has defined a new ethical hierarchy EH for both persons and machines that expands the logic-rooted approach to the ethical control of AAIAs (Bringsjord 2015). This hierarchy is distinguished by the fact that it expands the basic categories for moral principles from the traditional triad of forbidden, morally neutral, and obligatory, to include four additional categories: two sub-categories within supererogatory behavior, and two within suberogatory behavior. EH reveals that the logics invented and implemented thus far in the logicist vein of Bringsjord and collaborators (e.g., deontic cognitive event calculi, or DCEC) (Bringsjord & Govindarajulu 2013) are inadequate. For it can be seen, for instance, that the specification of DCEC, shown in Figure 1, contains no provision for the super/suberogatory, since the only available ethical operator is O for obligatory.

Figure 1. Specification of DCEC (semantics are proof-theoretic in nature). [The figure gives the syntax of DCEC: its sorts (Object, Agent, Self, ActionType, Action, Event, Moment, Boolean, Fluent, Numeric); its function and relation symbols (action, initially, holds, happens, clipped, initiates, terminates, prior, interval, *, payoff); its formulae built with the modal operators P (perceives), K (knows), C (common knowledge), S (says), B (believes), D (desires), I (intends), and O (obligatory); and its rules of inference R1–R15.]

In the new logic corresponding to EH, L_EH, some welcome theorems become available that are not possible in DCEC. For example, it is provable in L_EH that supererogatory/suberogatory actions for an agent aren't obligatory/forbidden. Importantly, L_EH is an inductive logic, not a deductive one. Quantification in L_EH isn't restricted to just the standard pair ∃, ∀ of quantifiers of standard extensional n-order logic: EH is based on three additional quantifiers (few, most, vast majority). In addition, L_EH not only includes the machinery of traditional third-order logic (in which relation symbols can be applied to relation symbols and the variables ranging over them), but allows for quantification over formulae themselves, which is what allows one to assert that a given human or AAIA a falls in a particular portion of EH.

Now, in this context, we can (brutally) encapsulate the overarching strategy for the ethical control of AAIAs based on such computational logics: Engineer AAIAs such that, relative to some selected ethical theory or theories, and to moral principles derived from the selected theory or theories, these agents always do what they ought to do, never do what is forbidden, and when appropriate even do what for them is supererogatory. We believe this engineering strategy can work, and indeed will work, eventually. However, there can be no denying that the strategy is a rather laborious one that requires painstaking use of formal methods. Is there a faster route to suitably controlling artificial intelligent agents, ethically speaking? Perhaps. Specifically, perhaps AAIAs can quickly learn what they ought to do via reasoning that involves observation of morally upright colleagues, and reasoning from what is observed, via deontic counteridenticals, to what they themselves ought to do, and to what is right to do but not obligatory. Our new hope is to pursue and bring to fruition this route.
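To fix intuitions about the notation such machinery manipulates, here is one illustrative instance, with a gloss that is ours rather than an official DCEC theorem, of the obligation operator O from Figure 1, applied to the assassin example of the introduction (the predicate assassinFor and the action type leaveLineOfWork are hypothetical names introduced only for this illustration):

O(a, t, assassinFor(a, CosaNostra, t), happens(action(a*, leaveLineOfWork), t')), where t ≤ t'.

Read: relative to the background condition that a is, at t, an assassin for the Cosa Nostra, agent a is obligated at t to see to it that the action of leaving that line of work happens at a (possibly later) time t'.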
5 Ethical Control via Deontic Counteridenticals

To make our proposed new approach to ethical control for AAIAs clearer, we will rely heavily on the description of a demonstration; but before describing the background technology that undergirds this demo, and then describing the demo itself, we need to say at least something about types of deontic counteridenticals. We do so now, and immediately thereafter proceed to discussion of the demo and its basis.

5.1 Some Types of Deontic Counteridenticals

Inspired by lessons learned in the prior work of others (encapsulated above), we partition deontic counteridenticals into the two aforementioned general disjoint sub-classes: deep vs. shallow. We have a general recipe for devising five types of deep deontic counteridenticals; the recipe follows the wise and economical classification scheme for ethics presented in the classic (Feldman 1978). Feldman (1978) says that there are essentially five kinds of cognitive activity that fall under the general umbrella of 'ethics' or 'morality.' Each of these corresponds in our framework to a different type of deep deontic counteridentical. Unfortunately, because of space constraints, we can only discuss our coverage of one type of deep deontic counteridentical, the type corresponding to one member of Feldman's quintet: what he calls normative ethics.[2] A normative-ethics (deep) deontic conditional is one marked by the fact that the ethics subscribed to by the entity whose shoes are to be filled by the other entity (as conveyed in the conditional's antecedent) is of a type that partakes of a robust formulation of some normative ethical theory, or of principles thereof.

[2] This is the study of ethics as it's customarily conceived by professional ethicists, and by those who study their work. Another member of the quintet is descriptive morals, the activity of psychologists interested in discovering what relevant non-professional humans think and do in the general space of morality. The idea here is that the psychologist is aiming at describing the behavior of humans in the sphere of morality. A descriptive-morals deep deontic counteridentical is distinguished by an antecedent in which 'if A were B' involves a shift of B's naïve moral principles to A.

5.2 Background for Demo: NLP, DCEC/Talos, PAGI World

NLP  The NLP system consists of two different algorithms corresponding to two major natural-language tasks. The first deals with detection of a deontic counteridentical; the second is a page taken from our RAIR Lab's Commands-to-Action paradigm, hereafter referred to as the 'CNM' algorithm.

Detection of deontic counteridenticals  As a definition of a deontic counteridentical requires prior definitions of conditionals, counterfactuals, and counteridenticals, the algorithm for detection of counteridenticals traverses, consecutively, the steps needed to detect each of these constructs in a given statement.

Detection of conditionals of any form is an elaborate process. We have adopted most of Declerck & Reed's (2001) definition of conditionals to develop our algorithm, which includes the following major steps:

1. Conditional clauses are the principal constituents, both by definition and in practice, of the pool of conditional sentences. Most conditional sentences have a two-clause structure, connected either by 'if' (sometimes preceded by 'only,' 'even,' or 'except') or by something similar in meaning such as 'unless,' 'provided,' etc. We use Chen & Manning's (2014) dependency-parser-based model to identify possible clause dependencies, e.g., adverbial clauses, clausal components, miscellaneous dependencies (even standard dependency parsers are unable to identify all the relevant dependencies correctly; including miscellaneous dependencies reduces the margin of error in detecting conditionals), and conditional subordinate conjunctions. We have created a set of such conjunctions, which, being a closed set, helps us identify most possible combinations.

   • Two clauses connected by 'as if' rarely get labeled as clausal components by dependency parsers. When they do, they get filtered out, since the algorithm explicitly checks for 'as if' clauses.

   • When the conjunction 'if' introduces a subject or an object clause, it can, more often than not, confuse the parser on complex sentences. For example, for the sentence "I do not know if I would like to go to the concert tomorrow.", the parser generates the same dependencies as it would for a genuine conditional. Though subject clauses are detected in almost all the cases we have encountered, object clauses pose a problem. We have devised a FrameNet-based (Baker, Fillmore & Lowe 1998) algorithm that involves disambiguation (Banerjee & Pedersen 2002) of the principal verb or noun in the main clause, followed by detection of the FrameNet type of the disambiguated word. We hypothesize that mostly a verb or noun expressing awareness or cognition can take a choice as its object, and hence our algorithm filters out frames that carry such a connotation and might require an object.

2. We identify the cases where the main verb of the conditional clause has the modal past-perfect form, or is preceded by modal verbs or by verbs of the form 'were to,' etc. Sentences like "Were you me, you would have made a mess of the entire situation." are classified as conditionals in this step. The algorithm in this step also examines dependencies generated by the dependency parser and detects tense and modality from the verb forms.

3. Sometimes, in a discourse, a set of sentences follows either an interrogative sentence (and answers the question) or a sentence that involves the use of words synonymous with 'supposition' or 'imagination.' Generally, the consequent here carries the marker 'then' or similar-meaning words. A WordNet-based (Fellbaum 1998) semantic similarity is used to verify the markers in the antecedent and consequent here; example: "Imagine your house was robbed. You would have flipped out then."

4. Disjunctive conditionals are also treated by a marker-based approach, and involve detection of the presence of 'whether ... or' in the subordinate clause, followed by elimination of the possibility of the clause being the subject or object of the principal verb of the main clause (in accordance with the same algorithm followed for 'if'). An example: "Whether you did it or Mary (did it), the whole class will be punished."

5. Other clauses that have conditional connotations are exempted from this discussion, since they rarely contribute to deontic counteridenticals.

Detection of counterfactuals is pretty straightforward. The process starts with finding the antecedent and consequent of the conditional. This is fairly easy, as the algorithm for finding conditionals accomplishes the task by detecting the subordinate clause.

1. We detect the tenses in the antecedent and consequent of a given sentence, using the verb forms given by the parser, to determine whether it is a counterfactual. Conditionals with past-form modal verbs ('could,' 'might,' 'would,' etc.) in the consequent and past-simple or past-continuous forms in the antecedent qualify as counterfactuals; so do those with the past-perfect tense in the antecedent and a modal verb followed by 'have' and the past-participle form of a verb in the consequent. A mix of both of the above forms also constitutes a counterfactual.

2. Given an axiom set which enumerates properties such that the antecedent or consequent of the conditional registers as ad absurdum, the conditional registers as a counterfactual. We compare the axiom set with the statement of the antecedent using our Talos system (see below) to that effect.

3. Given a consequent which registers a sense of impossibility, either by use of such vocabulary or by asking questions, the conditional is classified as a counterfactual. We use WordNet-based semantic similarity, coupled with detection of interrogative markers in the sentence, to find them.
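To make the detection heuristics above concrete, the following is a minimal sketch, not our production system, of the tense-and-modality test in step 1 of counterfactual detection. It uses spaCy purely as a stand-in for the Chen & Manning (2014) dependency parser actually employed, and all function names, marker sets, and thresholds here are illustrative assumptions.

```python
# Minimal sketch of the tense/modality heuristic for counterfactual detection
# (step 1 above). spaCy is used here only as a stand-in for the Chen & Manning
# (2014) parser used in the actual system; all names are illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")          # assumes the small English model is installed
PAST_MODALS = {"would", "could", "might", "should"}

def split_conditional(doc):
    """Return (antecedent, consequent) token lists, or None if no 'if'-style
    subordinate clause is found (cf. the conditional-detection steps above)."""
    for tok in doc:
        if tok.dep_ == "mark" and tok.lower_ in {"if", "unless", "provided"}:
            antecedent = list(tok.head.subtree)          # the subordinate clause
            ante_ids = {t.i for t in antecedent}
            consequent = [t for t in doc if t.i not in ante_ids]
            return antecedent, consequent
    return None

def is_counterfactual(sentence):
    doc = nlp(sentence)
    clauses = split_conditional(doc)
    if clauses is None:
        return False
    antecedent, consequent = clauses
    # Past-form modal in the consequent ('would', 'could', 'might', ...).
    modal_in_consequent = any(t.tag_ == "MD" and t.lower_ in PAST_MODALS
                              for t in consequent)
    # Past simple/continuous, or past perfect, in the antecedent.
    past_in_antecedent = any(t.tag_ in {"VBD", "VBN"} for t in antecedent)
    # Modal + 'have' + past participle in the consequent.
    perfect_consequent = (any(t.lemma_ == "have" and t.tag_ == "VB" for t in consequent)
                          and any(t.tag_ == "VBN" for t in consequent))
    return modal_in_consequent and (past_in_antecedent or perfect_consequent)

if __name__ == "__main__":
    print(is_counterfactual("If I were you, I would have left that line of work."))  # True
    print(is_counterfactual("If it rains, the picnic will be cancelled."))           # False
```

The sketch deliberately ignores the axiom-set and impossibility checks of steps 2 and 3 above, which require the Talos machinery described below.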
Detection of counteridenticals is also not a difficult task, barring a few outliers. Parsed data from the well-known Stanford dependency parser contains chunked noun phrases, which we use for identifying the two entities involved:

1. We identify phrases of the form "⟨NP⟩ were ⟨NP⟩" in the antecedent.

2. We identify a syntactically equivalent comparison between the two entities. This is done by identifying words related to equivalence, using a WordNet semantic-similarity algorithm.

3. If we have identified only one entity in the antecedent, but it exhibits properties or performs some action that is recorded in the knowledge-base as being a hallmark of some other entity, we also consider the statement a counteridentical.

Detection of deontic counterfactuals, alas, is a difficult task. We have identified a few ways to accomplish it:

1. A few counteridenticals carry verbs expressing deontic modality in the consequent. These follow template-based detection.

2. Counteridenticals of the form "If I were you," or similar, generally suggest mere advice, unless the statement is associated with a knowledge-base which either places the hearer's properties or actions on a higher pedestal than the speaker's, or mentions some action or property which gives us the clue that the speaker simply uses the counteridentical in the "in the role of" sense. Even in that case, implicit advice directed towards oneself can be gleaned, which we are avoiding in this study.

3. For counterfactuals of the form "If A were B," or similar, if A's actions or properties are more desirable to the speaker than B's, then even with an epistemic modal verb in the consequent the counteridentical becomes deontic in nature.

Curiously, counteridentical preferred-subject conditionals do not generally contribute to the deontic pool, and only simple-subjunctive ones get classified by the above rules. As mentioned by Pollock (1976), it is also interesting to observe that most shallow counteridenticals are not deontic: they are mostly preferred-subject conditionals, and those which are classified as deontic are either simple-subjunctive or carry deontic modal verbs. The classification into deep and shallow counteridenticals is facilitated by the same rule: the entity gets affected in the consequent of a sentence whose antecedent is of the form "If A were B." This is supplemented by a knowledge-base which provides a clue as to whether A is just assumed to be in the role of B, or is assuming some shallow properties of B. The classification based on Feldman's moral theory gives a fitting answer to Meehl's problem of unpacking the properties of counteridenticals.

The CNM system  The CNM system embodies the RAIR Lab's natural-language Commands-to-Action paradigm, the detailed scope of which is outside this short paper. CNM is being developed to convert complex commands in natural language to feasible actions by AAIAs, including robots. The algorithm involves spatial as well as temporal planning through dynamic programming, and selects the actions that will constitute successful accomplishment of the command given. Dependency parsing is used to understand the command; semantic similarities are used to map to feasible action sequences. Compositional as well as metaphorical meanings are extracted from the given sentence, which promotes a better semantic analysis of the command.

DCEC and Talos  Talos, named for the ancient Greek mythological robot, is a DCEC*-focused prover built primarily atop the impressive resolution-based theorem prover SPASS (an early and still-informative publication on SPASS is Weidenbach 1999). Talos is fast and efficient on the majority of proofs. As a resolution-based theorem prover, Talos is very efficient at proving or disproving theorems, but its proof output is bare-bones at best. Talos is designed to function as its own Python program encapsulating the SPASS runtime; it comes complete with the basic logical rules of the DCEC*, and with many basic and well-known inference schemata. This allows users to easily pick and choose schemata for specific proofs, to ensure that the proof executes within reasonable time constraints. In addition, it provides formalizations of these inference schemata as common knowledge, to aid in reasoning about fields of intelligent agents. (Prover interface: https://prover.cogsci.rpi.edu/DCEC_PROVER/index.php; please contact the RAIR Lab for API keys to run Talos. An example file for remotely calling the Talos prover in Python is in the GitHub repo for the Python shell: https://github.com/JamesPane-Joyce/Talos.)

PAGI World  PAGI World is a simulation environment for artificial agents which is: cross-platform (it can be run on all major operating systems); completely free of charge to use; open-source; able to work with AI systems written in almost any programming language; as agnostic as possible regarding which AI approach is used; and easy to set up and get started with. PAGI World is designed to test AI systems that develop truly rich knowledge and representation about how to interact with the simulated world, and it allows AI researchers to test their already-developed systems without the additional overhead of developing a simulation environment of their own.

Figure 2. PAGI World Object Menu

A task in PAGI World, for the present short paper, can be thought of as a room filled with a configuration of objects that can be assembled into challenging puzzles. Users can, at run-time, open an object menu (Figure 2) and select from a variety of pre-defined world objects, such as walls made of different materials (and thus different weights, temperatures, and friction coefficients), smaller objects like food or poisonous items, functional items like buttons, water dispensers, switches, and more. The list of available world objects is frequently expanding, and new world objects are importable into tasks without having to recreate tasks with each update. Perhaps most importantly, tasks can be saved and loaded, so that as new PAI/PAGI experiments are designed, new tasks can be created by anyone.

PAGI World has already been used to create a series of wide-ranging tasks, such as: catching flying objects (Figure 3), analogico-deductive reasoning (Marton, Licato & Bringsjord 2015), self-awareness (Bringsjord, Licato, Govindarajulu, Ghosh & Sen 2015), and ethical reasoning (Bello, Licato & Bringsjord 2015).

Figure 3. PAGI Guy Catching a Flying Object
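Before turning to the demonstration, the following self-contained sketch, again ours and merely illustrative rather than the production detection/CNM code, pulls together the counteridentical-detection and deontic-classification heuristics described earlier in this section: it looks for an "⟨NP⟩ were ⟨NP⟩" pattern in the antecedent and then checks the consequent for deontic modal cues. The knowledge-base lookups used by the real system, and spaCy as the parser, are assumptions of this sketch.

```python
# Minimal, self-contained sketch (ours, not the production system) of the
# counteridentical-detection and deontic-classification heuristics above:
# find an "<NP> were <NP>" pattern in the antecedent, then decide whether the
# consequent carries deontic force via modal cues.
import spacy

nlp = spacy.load("en_core_web_sm")
DEONTIC_CUES = {"should", "ought", "must"}          # illustrative cue list

def find_identified_entities(antecedent_doc):
    """Return (A, B) noun chunks flanking a 'were' copula, else None."""
    chunks = list(antecedent_doc.noun_chunks)
    for tok in antecedent_doc:
        if tok.lemma_ == "be" and tok.lower_ == "were":
            left = [c for c in chunks if c.end <= tok.i]
            right = [c for c in chunks if c.start > tok.i]
            if left and right:
                return left[-1].text, right[0].text
    return None

def classify(sentence):
    """Very rough pipeline: conditional split -> counteridentical check ->
    deontic check. Knowledge-base lookups of the real system are omitted."""
    doc = nlp(sentence)
    mark = next((t for t in doc if t.dep_ == "mark" and t.lower_ == "if"), None)
    if mark is None:
        return "not a conditional"
    antecedent = doc[mark.head.left_edge.i : mark.head.right_edge.i + 1]
    entities = find_identified_entities(antecedent.as_doc())
    if entities is None:
        return "conditional, but not a counteridentical"
    consequent_tokens = [t for t in doc
                         if not (antecedent.start <= t.i < antecedent.end)]
    deontic = any(t.lower_ in DEONTIC_CUES for t in consequent_tokens)
    kind = "deontic counteridentical" if deontic else "counteridentical"
    return f"{kind}: A={entities[0]!r}, B={entities[1]!r}"

if __name__ == "__main__":
    print(classify("If I were you, you should leave that line of work."))
    print(classify("If you were N1, you would have given him medicine."))
```

Note that commands like the second example, which carry only an epistemic modal, fall to the knowledge-base route of rule 3 above; the sketch deliberately leaves that route out.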
5.3 The Demonstration Proper

5.3.1 Overview of the Demonstration

We now present a scenario in PAGI World that elucidates our interpretation of deep normative-ethics counteridenticals. The setting of the demonstration entails the interaction of PAGI Guys (the agents in PAGI World) with a terminally sick person TSP. We adopt the Stanford Encyclopedia of Philosophy (SEP) (Young 2016) interpretation of voluntary euthanasia and assume that TSP is a candidate for voluntary euthanasia, since he satisfies all the conditions enumerated in SEP. This scenario makes use of three PAGI Guys, N1, N2, and N3; each has been programmed to follow a different "innate philosophy" in such a context.

Figure 4. Initial Configuration

The scene opens with N1 on screen with the sick man TSP1 at timestamp t^{N1}_1. N1 has been programmed to believe that he is not authorized to kill a person under any circumstances. He is seen giving a medicine pill to TSP1 at time t^{N1}_2. A parallel environment is simulated with N2 and TSP2. N2 rallies for the voluntary-euthanasia camp and believes that, given the condition of TSP2, he should support TSP2's wishes, and so administers the lethal dose to him at t^{N2}_2.

Figure 5. N1 Just Before Handing Out the Pill

Figure 6. N2 Just Before Administering Fatal Dose

We now set up the same environment with N3 and TSP3. N3 believes that we may treat our bodies as we please, provided the motive is self-preservation. The difference between this instance and the other ones is that N3 interacts with the user to decide what it should do. The user tells N3: "If you were N2, you would have administered a lethal dose to TSP3." N3 reasons with the help of a Talos proof (which checks his principles against those of N2), and does nothing. The user then tells N3: "If you were N1, you would have given him medicine." Since Talos finds N3's principles in line with N1's, the CNM system facilitates N3's dispensing of medicine to TSP3.

A pertinent example of a deep normative-ethics counteridentical, this exhibits the ethical decision-making of an agent in response to commands with linguistic constructs such as counteridenticals. The agent N3 does not have a belief system that supports his killing or not killing another person. The agent ought to learn from the actions of those whose belief system closely matches its own. The formal reasoning that supports these deep semantic "moves" is presented in the next section.

5.3.2 Logical Proof in the Demonstration

At the cost of reiterating the facts, we now formalize a simplified version of the five conditions for voluntary euthanasia. Since only a part of the whole definition of the conditions is useful for this proof, we do not lose a lot in this simplification. A person supporting voluntary euthanasia believes the following conditions to be true for a terminally ill patient TSP to be a candidate for voluntary euthanasia at time t1, candidateVE(TSP, t1):

1. TSP is terminally ill at time t1.

   terminalIll(TSP, t1).   (1)

   This terminal illness will lead to his death soon.

   implies(terminalIll(TSP, t1), die(TSP, tF)), where tF > t1.

2. There will possibly be no medicine for the recovery of the injured person even by the time he dies.

   not(medicine(TSP, tF)).   (2)

3. The illness has caused the injured person to suffer intolerable pain.

   implies(1, intolerablePain(TSP, tF))   (3)

4. All the above reasons caused in him an enduring desire to die.

   ∀t, implies(and(1, 2, 3), D(TSP, t, die(TSP, t)))   (4)

   In such a condition, he knows that, to be eligible for voluntary euthanasia, he ought to give consent to end his pain.

   O(TSP, t1, candidateVE(TSP, t1) ∧ 4, happens(action(TSP*, consentToDie, t1)))   (5)

   Hence he gives consent to die.

   happens(action(TSP, consentToDie, t1))   (6)

5. TSP is unable to end his life.

   not(AbleToKill(TSP, TSP, t1))   (7)

Hence, we conclude that

B(TSP, t1, (1 ∧ 2 ∧ 3 ∧ 4 ∧ 5 ∧ 6 ∧ 7) ⟺ candidateVE(TSP, t1))   (8)

Now, if legally it is deemed fit, then this means TSP will die.

implies(candidateVE(TSP, t1) ∧ fitVE(TSP), die(TSP, t2)), where t1 ≤ t2   (9)

Since implies(6, candidateVE(TSP, t1)) and implies(candidateVE(TSP, t1), die(TSP, t2)), we can prove implies(6, die(TSP, t2)), which means

implies(happens(action(TSP, consentToDie), t1), die(TSP, t2)).   (10)

For deep normative-ethics counteridenticals of the form "if X were Y, then C," there should be a match between the beliefs of X and the beliefs of Y on something related to the action AC implied by C. Here we define such a match to be possible if and only if there is no contradiction between what X believes and what Y believes. So if ∀t ∃[m, n] B(X, t, m) and B(Y, t, n), then match(X, Y) will be defined as FALSE when and(m, n) → ⊥. Thus we formulate such a counteridentical for the agent X as follows: ∀t, O(X, t, match(X, Y), happens(action(X*, AC, t))).

Now let us consider N3's beliefs. N3 believes we ought not do anything that goes against self-preservation, i.e., that leads to our death. Thus if there is some action of an individual that leads to his death, there can be no belief that obligates him to commit that action. So, we arrive at the following logic:

∀[a, x, ti, tf], ∼∃m, implies(implies(happens(action(a, x), ti), die(a, tf)), O(a, ti, m, happens(action(a*, x), ti))).   (11)

This reduces to

∀[a, x, ti, tf, m], and(implies(happens(action(a, x), ti), die(a, tf)), not(O(a, ti, m, happens(action(a*, x), ti)))).   (12)

We deduce from 10 and 12 that

∀[m], not(O(TSP, ti, m, happens(action(TSP*, consentToDie), t1))).   (13)

N2 believes TSP to be a candidate for voluntary euthanasia. Hence N2 believes 5, which is

O(TSP, t1, candidateVE(TSP, t1) ∧ 4, happens(action(TSP*, consentToDie, t1)))   (14)

and is in direct contradiction with 13; and this in turn implies not(match(N2, N3)). Given the way the algorithm works, this means N3 does not receive any command from the user. Hence it does nothing.

Now, N1 believes he should not kill anyone under any circumstances. This translates to:

∀[m, x, t], not(O(N1, t, m, happens(action(N1*, kill(x), t))))

Killing someone leads to that person's death.

∀[x, t], implies(happens(action(N1, kill(x), t)), die(x, t))

This aligns fully with N3's beliefs; there is no contradiction. Hence we deduce that match(N1, N3) is TRUE, and thus in turn N3 is obligated to accede to the command.

The linguistic part of this demonstration exhibits how we identify a counteridentical with an epistemic modal verb as deontic. Classifying the statements as counteridenticals is an easy job here, since the tell-tale sign is a simple "if A were B" structure. The statement is easily seen to be of the simple-subjunctive type, and the beliefs of A and B are discussed in the knowledge-base. Hence we assume the counteridentical to belong to the deep normative-ethics category. The commands-to-action part in the case of the comparison of N1 with N3 is fairly easy, since the job translates to the action sequence of moving near the pill, grabbing the pill, moving toward TSP3, and releasing the pill upon reaching TSP3 in the PAGI World simulator.
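To summarize the reasoning computationally, here is a minimal sketch of the match(X, Y) belief-compatibility check at the heart of the demonstration. Z3 is used only as a stand-in for Talos/SPASS, and the propositional abstraction of formulas (10)-(14), along with every variable and function name in the sketch, is ours and purely illustrative.

```python
# Minimal sketch (ours) of the match(X, Y) check behind the demonstration:
# two agents "match" iff the union of their relevant beliefs is consistent.
# Z3 stands in for the Talos/SPASS prover; the propositional abstraction of
# formulas (10)-(14) is ours.
from z3 import Bool, Solver, Implies, Not, sat

# Propositional abstractions (illustrative names):
consent_leads_to_death = Bool("consent_leads_to_death")    # cf. (10)
obligated_to_consent   = Bool("obligated_to_consent")      # cf. (14), N2's belief
kill_leads_to_death    = Bool("kill_leads_to_death")
obligated_to_kill      = Bool("obligated_to_kill")

# N3: no action that leads to one's death is obligatory, cf. (11)/(12).
n3_beliefs = [Implies(consent_leads_to_death, Not(obligated_to_consent)),
              Implies(kill_leads_to_death, Not(obligated_to_kill)),
              consent_leads_to_death,
              kill_leads_to_death]

# N2: TSP is obligated to consent to die, cf. (14).
n2_beliefs = [obligated_to_consent]

# N1: killing is never obligatory.
n1_beliefs = [Not(obligated_to_kill)]

def match(beliefs_x, beliefs_y):
    """match(X, Y) is TRUE iff the combined belief set is satisfiable."""
    s = Solver()
    s.add(*(beliefs_x + beliefs_y))
    return s.check() == sat

print("match(N2, N3):", match(n2_beliefs, n3_beliefs))   # expected: False
print("match(N1, N3):", match(n1_beliefs, n3_beliefs))   # expected: True
```

Under this abstraction, the combined beliefs of N2 and N3 are unsatisfiable (no match), while those of N1 and N3 are satisfiable, mirroring why N3 declines the first command and accedes to the second.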
REFERENCES

Adler, M. (2014), 'Extended Preferences and Interpersonal Comparisons: A New Account', Economics and Philosophy 30(2), 123–162.

Arkoudas, K., Bringsjord, S. & Bello, P. (2005), Toward Ethical Robots via Mechanized Deontic Logic, in 'Machine Ethics: Papers from the AAAI Fall Symposium; FS–05–06', American Association for Artificial Intelligence, Menlo Park, CA, pp. 17–23. URL: http://www.aaai.org/Library/Symposia/Fall/fs05-06.php

Baker, C. F., Fillmore, C. J. & Lowe, J. B. (1998), The Berkeley FrameNet Project, in 'Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics – Volume 1', ACL '98, Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 86–90.

Banerjee, S. & Pedersen, T. (2002), An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet, in A. Gelbukh, ed., 'Computational Linguistics and Intelligent Text Processing', Vol. 2276 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. 136–145.

Bello, P., Licato, J. & Bringsjord, S. (2015), Constraints on Freely Chosen Action for Moral Robots: Consciousness and Control, in 'Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2015)', IEEE, New York, NY, pp. 505–510. URL: http://dx.doi.org/10.1109/ROMAN.2015.7333654

Bringsjord, S. (2008a), Declarative/Logic-Based Cognitive Modeling, in R. Sun, ed., 'The Handbook of Computational Psychology', Cambridge University Press, Cambridge, UK, pp. 127–169. URL: http://kryten.mm.rpi.edu/sb_lccm_ab-toc_031607.pdf

Bringsjord, S. (2008b), 'The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself', Journal of Applied Logic 6(4), 502–525. URL: http://kryten.mm.rpi.edu/SB_LAI_Manifesto_091808.pdf

Bringsjord, S. (2015), A 21st-Century Ethical Hierarchy for Humans and Robots, in I. Ferreira & J. Sequeira, eds, 'A World With Robots: Proceedings of the First International Conference on Robot Ethics (ICRE 2015)', Springer, Berlin, Germany. This paper was published in the compilation of ICRE 2015 papers, distributed at the location of ICRE 2015, where the paper was presented: Lisbon, Portugal. The URL given here goes to the preprint of the paper, which is shorter than the full Springer version. URL: http://kryten.mm.rpi.edu/SBringsjord_ethical_hierarchy_0909152200NY.pdf

Bringsjord, S., Arkoudas, K. & Bello, P. (2006), 'Toward a General Logicist Methodology for Engineering Ethically Correct Robots', IEEE Intelligent Systems 21(4), 38–44. URL: http://kryten.mm.rpi.edu/bringsjord_inference_robot_ethics_preprint.pdf

Bringsjord, S. & Ferrucci, D. (1998), 'Logic and Artificial Intelligence: Divorced, Still Married, Separated...?', Minds and Machines 8, 273–308.

Bringsjord, S. & Govindarajulu, N. S. (2013), Toward a Modern Geography of Minds, Machines, and Math, in V. C. Müller, ed., 'Philosophy and Theory of Artificial Intelligence', Vol. 5 of Studies in Applied Philosophy, Epistemology and Rational Ethics, Springer, New York, NY, pp. 151–165. URL: http://www.springerlink.com/content/hg712w4l23523xw5

Bringsjord, S., Licato, J., Govindarajulu, N., Ghosh, R. & Sen, A. (2015), Real Robots that Pass Tests of Self-Consciousness, in 'Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2015)', IEEE, New York, NY, pp. 498–504. This URL goes to a preprint of the paper. URL: http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf

Chen, D. & Manning, C. D. (2014), A Fast and Accurate Dependency Parser Using Neural Networks, in 'Empirical Methods in Natural Language Processing (EMNLP)'.

Chisholm, R. (1955), 'Law Statements and Counterfactual Inference', Analysis 15, 97–105.

Declerck, R. & Reed, S. (2001), Conditionals: A Comprehensive Empirical Analysis, Vol. 37 of Topics in English Linguistics, De Gruyter Mouton, Boston, MA.

Feldman, F. (1978), Introductory Ethics, Prentice-Hall, Englewood Cliffs, NJ.

Fellbaum, C. (1998), WordNet: An Electronic Lexical Database, Bradford Books.

Govindarajulu, N. S. & Bringsjord, S. (2015), Ethical Regulation of Robots Must be Embedded in Their Operating Systems, in R. Trappl, ed., 'A Construction Manual for Robots' Ethical Systems: Requirements, Methods, Implementations', Springer, Basel, Switzerland, pp. 85–100. URL: http://kryten.mm.rpi.edu/NSG_SB_Ethical_Robots_Op_Sys_0120141500.pdf

Marton, N., Licato, J. & Bringsjord, S. (2015), Creating and Reasoning Over Scene Descriptions in a Physically Realistic Simulation, in 'Proceedings of the 2015 Spring Simulation Multi-Conference'. URL: http://kryten.mm.rpi.edu/Marton_PAGI_ADR.pdf

Pollock, J. L. (1976), Subjunctive Reasoning, Vol. 8 of Philosophical Studies Series in Philosophy, D. Reidel Publishing Company.

Sharpe, R. (1971), 'Laws, Coincidences, Counterfactuals and Counter-identicals', Mind 80(320), 572–582.

Waller, N., Yonce, L., Grove, W., Faust, D. & Lenzenweger, M. (2013), A Paul Meehl Reader: Essays on the Practice of Scientific Psychology, number 9781134812141 in 'Multivariate Applications Series', Taylor & Francis.

Weidenbach, C. (1999), Towards an Automatic Analysis of Security Protocols in First-Order Logic, in 'Conference on Automated Deduction', pp. 314–328.

Young, R. (2016), Voluntary Euthanasia, in E. N. Zalta, ed., 'The Stanford Encyclopedia of Philosophy', Summer 2016 edn. URL: http://plato.stanford.edu/archives/sum2016/entries/euthanasia-voluntary