Robotic Misdirection, For Good Causes
Strategically Deceptive Reasoning in Artificial Generally Intelligent Agents

Max Fowler (fowlml01@students.ipfw.edu), Aaron Thieme (thieac01@students.ipfw.edu), John Licato (jlicato@ipfw.edu)

Analogical Constructivism and Reasoning Lab
Department of Computer Science
Indiana University-Purdue University Fort Wayne

ABSTRACT

Deception is a core component of human interaction and reasoning, and despite its negative connotation, it can be used in positive ways. We present our formalization behind strategic deception, one such potentially positive form of deception. We use the Cognitive Event Calculus (CEC) to model strategic deception, building on prior formalizations. First, we provide a brief overview of deception's definitions within the existing literature. Following this discussion, CEC is described and we present CEC-style inference rules for strategic deception. These rules and a positive motivating deception example are used to show how we can solve the problem of strategic deception. This proof is demonstrated both through application of our rules and by adapting our rules for MATR (Machina Arachne Tree-based Reasoner) to show how proving can be performed by automatic reasoners. Finally, we discuss what future steps can be taken with strategic deception.

Keywords

Artificial General Intelligence; AI; Deception; Automatic Prover; Cognitive Event Calculus

1. INTRODUCTION

One ultimate goal of Artificial General Intelligence (AGI) is to finally bridge the gap between man and machine and create systems capable of human-level thought and reasoning. Waser's work aiming to clarify AGI as a field postures that the positive goal of AGI is that human-style reasoning systems will be universal problem solvers for the world [23]. Some approaches to AGI take a formalized mathematical basis, such as Hutter's AIXI agent used by Martin et al. to model artificial death [14]. Others take the approach that we should develop computational logics which provide reasoning strong enough to model human-level reasoning and, as Bringsjord argued, hopefully not see us all be killed [3]. This paper takes the latter approach, offering a formalization of a wonderfully human action: strategic deception.

Lying and deceiving are quintessential elements of human reasoning and interaction. Von Hippel and Trivers consider deception, and specifically the co-evolution between deceivers and those who are deceived, to be a major contributing factor to the evolution of human intelligence [22]. This makes having a formalization for deception ideal, such that we may better understand our own cognitive systems. Further, an understanding of deception opens up the kinds of interactions we can model for the field of artificial general intelligence. A greater wealth of interactions will hopefully allow for more advances in the field.

Deception is often considered negative (e.g., lying to one's wife about a mistress, deceiving one's boss about work accomplished, tax evasion), yet it does have positive benefits. Many of these benefits exist in the field of creating artificially intelligent systems to assist humans. Sakama describes a medical assistant that may not always tell patients the truth, much like doctors must sometimes practice deception in their bedside manner to keep patients calm [18]. Another medical example is a diagnosis robot. Assume there is a minuscule chance of a patient having lupus, and that treating lupus will kill that particular patient if the patient does not in fact have lupus. It would be ideal, then, for a medical diagnosis robot not to inform the doctor about the small chance of the disease being lupus until other options are exhausted.

It is further reasonable to think of cases where a deceptive artificial agent can provide more security than other agents, to the benefit of humans. Consider the case of an artificial generally intelligent robot guarding a school's research lab. The robot has a key to access the lab, knows all the members of the lab, and is instructed to avoid conflict when dealing with potential intrusions into the lab. A student of ill morals approaches the robot, intent on gaining entry to the lab by lying about being a lab member's friend. Logically, it would be well within the robot's rights to tell the individual to leave. However, this goes against the directive to avoid conflict, as a rude response could result in the would-be thief becoming desperate, violent, or more scheming in response. We wish to give our robot agent the ability to deceive the thief into believing the robot is unable to help them directly, by lying that it does not have the lab key anymore. This provides a safer, more diplomatic defusing of the situation. In what manner, then, can we teach our agents how to deceive, like a human could, to avoid this conflict?






Deception is well agreed upon as requiring success in order to be called such [12]. Lying is generally accepted as requiring the statement of a belief that is false to the speaker [13]. These agreements serve as a cornerstone for the formalization of deception but are unsatisfying in their abstractness. Other researchers have attempted to define specific requirements for deception and lies to function. Forbus argues that deception necessarily assaults agents' predictive abilities and argues for an analogical reasoning approach towards understanding the mechanism of deception [8]. Stokke argues for the assertion model of lying and claims that assertions should be used to create common ground. This common ground provides a shared set of beliefs between agents that is pivotal for lying to proceed [20]. Chisholm and Feehan add that lies necessitate that the liar wish for their lie to be believed by another [7].

Multiple formalizations exist for various forms of deception and deceptive situations. Sakama creates a general formalization of deception, based on van Ditmarsch's formalization of lying; Sakama calls this the agent announcement framework [18, 21]. The work provides a solid backbone for formalizing general deception but can be notationally unintuitive. Licato's work showed how the modal-logic-based cognitive event calculus (CEC) can be used to elegantly model the nested layers of beliefs required to perform the deception shown in the show Breaking Bad, in a fashion that lends itself well to automatic reasoners [9]. There exists room to marry the efficiency in modeling provided by CEC with rules designed specifically to formalize deception, similar to the work of Sakama and van Ditmarsch.

We present a CEC formalization for deception while defining strategic deception. First, we present the definitions of deception and strategic deception we will use in this paper. Then, we define the problem of strategic deception: what is necessary for strategic deception, why it is useful, and what the success and failure conditions are. Following this, we formalize our reasoning approach by expanding upon CEC with new inference rules. As an aside, we develop forms of Sakama's deception rules, translating from the agent announcement framework to CEC. Finally, we show how, by using CEC and MATR (Machina Arachne Tree-based Reasoner), an automatic reasoning system, an artificial generally intelligent agent can reason over the lab-guarding situation and successfully defuse the issue.

2. DEFINING DECEPTION AND STRATEGIC DECEPTION

Before defining strategic deception, we must present the definition of general deception this paper uses. The OED defines deception as "to cause to believe what is false" [1]. Mahon's work rejects this as too simple, as it allows for mistaken deception and inadvertent deception [13]. Mistaken deception concerns cases where an agent leads another to believe a false formula that the agent itself believes. A recent example of inadvertent deception can be found in the striped dress which led the internet to debate, "Is this dress white and gold or black and blue?" [17]. Mahon presents a traditional definition of deception, D1, that requires deception to be an intentional act: "To deceive ==df to intentionally cause to have a false belief that is known or believed to be false" [13]. We prefer to align ourselves with Mahon's D2, though, as it restricts deception to only those cases where the deceiver causes the deception, rather than a third party or outside force: "A person x deceives another person y if and only if x causes y to believe p, where p is false and x does not believe that p is true." This definition is most in agreement with Sakama's definition of deception, which is part of what we will use to define strategic deception [18]. By default, this definition does not require a lack of truthfulness, which means one can deceive by telling the truth. This definition also has no requirement for making statements, which means non-verbal communication and even non-communication, such as placing a briefcase in a room, can be used to deceive.

We define strategic deception as a specialized form of Chisholm and Feehan's positive deception simpliciter, the form of deception in which one agent contributes to another acquiring a belief [7]. In strategic deception, the deceiving agent must want something of another agent: generally, to act upon or in line with the deceiver's goal. This goal can be in a negative form (e.g., I do not want this agent to eat my sandwich). We define a strategically deceptive agent as follows:

    (SD) An agent a is strategically deceptive to another agent b IFF agent a causes b to believe φ, where φ is false and a believes that φ is false, by causing b to believe some false statement ψ, selected such that believing ψ requires b to develop belief in φ, using some strategy to accomplish an overall goal γ.

In order for an agent to be deceptive in a general sense, there are a number of conditions that must be met. Sakama agrees with the common contention that deception, by definition, requires success [18]. We include this in our definition of strategic deception. Castelfranchi's earlier work on deception requires that the addressee believes the speaker is attempting to benefit or assist them, and thus be trustworthy, and believes that the agent is not ignorant [6]. Further, McLeod's summarized definition of trustworthiness requires vulnerability on the part of the addressee, requires some assumed competence on the part of the speaker, and requires that the addressee think well of the speaker within some context [16]. For this paper, we assume trust is given unless a deception is caught, as the establishment of trust is not within our scope.

Deception functions differently in regards to different kinds of agents. Sakama's formalization of deception primarily focuses on credulous agents, which are defined in the agent announcement framework as agents who believe the speaker is sincere [18, 21]. We consider it unlikely that deceiving credulous agents is worth investigating, as such agents are bound by their nature to adopt any belief directed at them. For our purposes, we are more concerned with the agent announcement framework's skeptical agent: in brief, skeptical agents are belief consistent agents, only adding beliefs to their belief set that are consistent with it. We refer to Sakama's skeptics as maximally belief consistent agents, to avoid confusing them with other definitions of skepticism.






Strategic deception requires that our agent lie. That is, agent a must believe some statement ¬φ, yet act as if it believes φ. The agent announcement framework's setup for a lie-based deception requires that the listener come to believe a false statement φ, based on the idea of believing the speaker is truthful. That is, φ justifies belief in ψ for agent b. We adopt the directionality that ψ justifies φ. This more naturally opens up the ways our agents can lie. For example, while it is possible that a can literally say that ψ implies φ, lies by omission are desirable. Consider the case of eating a coworker's sandwich and being accused after the act. Saying, "The fact that I am a vegetarian means Bob, and not I, must have eaten your ham sandwich," may convince the accuser. However, just saying "I'm a vegetarian" implies that one could not have eaten a ham sandwich. Further, saying, "I'm a vegetarian, but I saw Bob near the fridge earlier," accomplishes the same thing as the first sentence without directly lying. If a lie is by omission, it is not involved in the dialogue and may be harder to pick up on. This makes our overall deception hard for agent b to check, which is one of the conditions put forward for the successful selection of lies by Forbus [8]. In the sandwich example, if we never mention the possibility of eating the sandwich at all, agent b may simply not think about that possibility and blame Bob instead. This certainly holds true for maximally belief consistent agents, in the event that Bob eating the ham sandwich is a reasonable explanation: we remove the alternative that we ate the sandwich completely.

In Section 3, we discuss how we know strategic deception has succeeded and how strategies for deception are designed. In Section 4 and onward, we discuss our formalization and how we use rules to prove deception.

3. HOW WE KNOW WE HAVE STRATEGICALLY DECEIVED

Strategic deception requires the creation of a strategy. This strategy is made up of the statements agent a can make in order to deceive agent b. In order to form a strategy, we must know the domain of our situation. More specifically, a must know the domain they are using to deceive b. The domain includes a, b, and any other entities who may be related to this particular school lab or the lab's parent department. It further includes beliefs a has about these traits and beliefs a believes b has. For our original example, some domain beliefs are believing the department has a secretary, believing that the secretary helps students, and believing that r helps students and secretaries. From the domain, then, we create a strategy consisting of our goal γ, a generated ψ to justify our lie, and any supporting μ statements we wish to use.

Strategic deception necessitates the generation of a false statement ψ by the speaker. The selection of an appropriate ψ is a difficult quandary. We do not make an effort to rate specific ψ against each other within the same domain directly. Instead, we concern ourselves only with ensuring a ψ is an appropriate choice. To determine if a false statement ψ is appropriate for a given deception, we consider the set P = {p1, ..., pi} of all beliefs related to the situation that agent b holds. A simple heuristic, then, allows us to rapidly rule out candidate ψs.

    ψ removal heuristic 1: if P ∪ {ψ} ⊢ q for an arbitrary q, ψ is unfit to choose as the false statement justifying our deceptive agent's lie, due to being contradictory to b's beliefs.

Finalizing the selection of which ψ an agent decides to say is more difficult than ruling out bad ψ. A good ψ must help advance the deceiving agent's goal. That is, belief in the φ that ψ justifies should lead to the deceived agent acting upon the deceiver's goal γ. This leads to a second heuristic for ψ selection.

    ψ removal heuristic 2: if the chosen ψ does not lead to the deceived agent acting upon γ, then the ψ is unfit to choose for justification, as its selection does not lead to success.

As a final consideration for ψ generation, we need to consider how a ψ is actually formed. Without bounding what information an agent can use to generate a ψ, we risk allowing an agent too much information that may not be relevant to the problem at hand, which may bog the decision-making process down significantly. Therefore, we require that a given ψ be chosen only if it is within the domain of the strategic deception being carried out. This domain includes traits about the situation, such as the location and agents involved, as well as the speaker's beliefs and beliefs about the addressee's beliefs. An example of a domain is defined along with our proof further on.

This same domain is useful for the generation or recruitment of supporting μ1, ..., μn. All supporting statements must lend credence to ψ and must belong to the same domain as ψ. Officially, this means that any given μ is selected in order to make ψ believable to an addressee, and thus allow the deception to proceed. For the belief consistent agents we use, this is sufficiently handled by requiring that any chosen μ make ψ belief consistent with the addressee's belief set. Recruitment of μs can be carried out by a adopting beliefs it thinks b has. Generation, meanwhile, should merge facts and traits from the domain with either beliefs a has or a believes b has, or with lies or bluffs that are consistent with b's beliefs. To provide a set heuristic for ruling out μ options, we use:

    μ removal heuristic: if the chosen μ does not help lead to the deceived agent believing ψ, then the μ is unfit to choose as support, as it does not lead to success.

One way to consider μ in a general sense is to consider μ's relevance. In that respect, the above heuristic can be summed up as the relevance of the belief in μ in regards to the belief in ψ.
Strategic deception also requires an established mechanism for asserting beliefs and establishing common ground. Stokke contends that lying requires some assertion from speaker to addressee [20]. We address this in our inference rules later on using CEC's S operator. We wish to point out that here we operate with S in the linguistic sense of stating sentences. It is sufficient for words to be used in the communication, but they can be spoken or written. Non-verbal addressing is acceptable for general deception and assertion, as supported by Chisholm's formative work [7]. Stokke further mandates that common ground between speaker and addressee is required for deception to succeed [20]. We agree, as this is consistent with Sakama's belief consistent agents. This is further consistent with requiring that belief consistent agents be made to believe the speaker believes what they are asserting for deception to succeed [18]. This is why, later on, we require agent a not only to make agent b believe the lies, but also to make agent b believe that agent a believes the lies as well.






a’s goal must be met. Strategic deception fails, then, in the                 • Underlying inferences use constantly refined inference
following situations:                                                           rules. This is used instead of cognitively implausible
                                                                                strategies, despite the latter having some potential use.
   1. A given ¬ or ! ¬ fails to be consistent with b’s
      beliefs and b rejects a’s trustworthiness as a result,                  The CEC formulae tend to include an agent, a time, and a
      believing they are being lied to                                     nested formula. When agent a believes at time t, we write
                                                                           B(a, t, ). Similar syntax is used to say an agent perceives,
   2. A failure of the deception due to irrationality on b’s               (P),knows (K), an agent says something (S). There are some
      behalf                                                               special operators that do not follow this trend. C is used
                                                                           to establish a common belief, while S has a directed syntax
   3. A failure of the strategy used if b is successfully de-              for agent a to declare a formula to agent b. Intention is
      ceived yet does not act in the way a intends                         handled as an intent to perform an action. While an agent
                                                                                                                                          0
                                                                           can intend to act at time t, the intention identifies a time t
   Case (1) is a clear cut failure of deception. Case (2) is               when that intention will be acted on. CEC uses happens as
trickier. We define irrationality on agent b’s behalf as agent             an operator to launch an action [5].
b rejecting, rather than adopting, a belief consistent belief                 CEC also addresses the idea of agents being able to per-
they are exposed to. This still means the deception fails,                 form actions, using a↵ordances. A↵ordances are actions an
and thus agent a was not deceptive. However, we wish to                    agent can perform starting at a time t. All possible ac-
make clear that agent a’s strategies do not fail in (2). An                tions an agent can take are the agent’s a↵ordance set. We
agent practicing perfect strategic deception can always fail               say isAf f ordance(action(a, ), t) when at time t, and be-
through no fault of their own in the event of (2) occurring.               yond agent a can perform that action. This was added to
Finally, case (3) is interesting in that it is a failure not of the        CECto allow belief creation to be handled on an a↵orded ba-
deception, but of the strategy used. The strategy is defined               sis, rather than on an immediate basis following logical clo-
as the selection of and supporting µs, as well as any other                sure [11]. As a further trait of CEC, actions tend to require
steps taken during the strategic deception process. If agent b             the happens operator. For example, happens(a, t, act( ))
comes to be deceived, yet does not act as agent a intends (or              means that at time t, it happens that a has performed the
does not act at all), agent a has failed. This makes strategic             act action on some formula . If a instead intends to per-
deception potentially more flimsy than general deception, as               form that action, the following syntax is used: happens(a, t,
agent b’s inaction results in a’s failure.                                 intends(a, t, act( ))).
4. FORMALIZING LIES AND DECEPTION IN CEC

We begin by describing CEC. Arkoudas and Bringsjord's cognitive event calculus (CEC) is a first-order modal logic framework that expands upon Kowalski's event calculus [4, 10]. The event calculus itself is a first-order logic with types. It features actions, or events, to represent actions that occur. Fluents are used to represent values which can change over time and can be propositional or numerical in nature. Time is represented with timepoints, which can be either continuous or discrete. In summary, the event calculus is used to model how events affect fluents through time, allowing for the modeling of event chains [19].

The event calculus models these event chains through the acts of starting and clipping fluents via events. If a fluent exists and has not been clipped (ended or stopped by an action) at a time t, then it is said that the fluent holds at t. For any time t, a fluent will hold for that time so long as it has yet to be clipped. Events, then, are responsible for both initiating fluents and clipping them. An event chain can trace how a fluent is affected by the events occurring to it.

CEC creates an event calculus for cognition. It uses modal operators for belief (B), knowledge (K), and intent (I). CEC avoids possible-world semantics in favor of a more computationally reasonable proof-theoretical approach. An attempt is made to model natural deduction as closely as possible, to best represent human-style reasoning [15]. Two of the most important departures CEC makes are as follows:

• CEC's inference rules and logical operators are restricted to the contexts for which they are defined, to prevent problems that can occur with overreaching rules.

• Underlying inferences use constantly refined inference rules. This is used instead of cognitively implausible strategies, despite the latter having some potential use.

CEC formulae tend to include an agent, a time, and a nested formula. When agent a believes φ at time t, we write B(a, t, φ). Similar syntax is used to say an agent perceives (P) or knows (K) a formula, or that an agent says something (S). There are some special operators that do not follow this trend. C is used to establish a common belief, while S also has a directed syntax for agent a to declare a formula to agent b. Intention is handled as an intent to perform an action. While an agent can intend to act at time t, the intention identifies a time t′ when that intention will be acted on. CEC uses happens as an operator to launch an action [5].

CEC also addresses the idea of agents being able to perform actions, using affordances. Affordances are actions an agent can perform starting at a time t. All possible actions an agent can take are the agent's affordance set. We say isAffordance(action(a, φ), t) when, at time t and beyond, agent a can perform that action. This was added to CEC to allow belief creation to be handled on an afforded basis, rather than on an immediate basis following logical closure [11]. As a further trait of CEC, actions tend to require the happens operator. For example, happens(a, t, act(φ)) means that at time t it happens that a has performed the act action on some formula φ. If a instead intends to perform that action, the following syntax is used: happens(a, t, intends(a, t, act(φ))).
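To make the nested operator syntax concrete, here is a sketch of how formulae such as B(a, t, φ) and happens(a, t, intends(a, t, act(φ))) could be represented as plain data in Python. It mirrors only the prose above; it is not CEC's official grammar (given in Figure 1) nor MATR's internal representation.

from dataclasses import dataclass
from typing import Union

# Atomic pieces of a CEC-style formula.
@dataclass(frozen=True)
class Atom:
    name: str                      # e.g. "r_does_not_have_key"

@dataclass(frozen=True)
class Not:
    arg: "Formula"

@dataclass(frozen=True)
class Believes:                    # B(agent, time, formula)
    agent: str
    time: str
    formula: "Formula"

@dataclass(frozen=True)
class Says:                        # directed S(speaker, addressee, time, formula)
    speaker: str
    addressee: str
    time: str
    formula: "Formula"

@dataclass(frozen=True)
class Intends:                     # intends(agent, time, action-formula)
    agent: str
    time: str
    formula: "Formula"

@dataclass(frozen=True)
class Happens:                     # happens(agent, time, action-formula)
    agent: str
    time: str
    formula: "Formula"

Formula = Union[Atom, Not, Believes, Says, Intends, Happens]

# B(a, t, phi): agent a believes phi at time t.
phi = Atom("r_does_not_have_key")
belief = Believes("a", "t", phi)

# happens(a, t, intends(a, t, phi)): at t, a forms the intention to act on phi.
intention = Happens("a", "t", Intends("a", "t", phi))
print(belief)
print(intention)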
In the rules below, we introduce a supports operator. This operator conveys that its first argument causes its second argument to become believable. For a maximally belief consistent agent, if μ supports ψ, then that simply means that μ is consistent with b's beliefs and in turn allows ψ to be consistent. Much like justifies, there is room to grow supports for different kinds of agents, in regards to relatedness and similar factors, that is not addressed in this paper's scope.

Moving on, we set out to model deception in CEC. We start our formalization by converting some of Sakama's deception axioms to CEC. We leave most of the nuances of Sakama's framework out of this paper, though we do walk our readers through two of Sakama's axioms. First, consider Sakama's A2, the axiom covering a liar's understanding of their having lied:

    (A2)  [¡a φ]Ba ψ ≡ (Ba ¬φ → Ba [¡a φ]ψ)    (1)

The agent announcement framework, while concise, can be difficult to expand. The left-hand side says that after a's lying announcement of φ, agent a believes ψ. The right-hand side of the equivalence is the implication that if agent a believes ¬φ, then agent a believes that after their lying announcement of φ, ψ is true. The essential component of this rule, in regards to the modal CEC, is that when agent a lies about φ, they believe ψ is true. One problem here is the implicit assumption that ¬φ leads to ψ. We will later handle this assumption through the use of a justifies operator.

As a second example, consider Sakama's A5, the axiom covering a credulous agent being lied to:

    (A5)  [¡a φ]Bb ψ ≡ (Ba ¬φ → Bb [!a φ]ψ)    (2)

Axiom A5 means the following: after a's lying announcement of φ, agent b believes ψ. This is equivalent to the implication that if agent a believes ¬φ, then agent b believes that, after agent a's truthful announcement of φ, ψ holds. Implicit to this rule is that agent b believes that agent a has told the truth in regards to φ, as is a trait of credulous agents. This is sufficient for modeling lying and deception in a general sense. We will adapt this rule to work with maximally belief consistent agents, to add a bit more challenge to strategic deception over convincing a gullible agent.







Syntax

S ::= Object | Agent | ActionType | Action ⊑ Event | Moment | Boolean | Fluent | Numeric

f ::= action : Agent × ActionType → Action
      initially : Fluent → Boolean
      holds : Fluent × Moment → Boolean
      happens : Event × Moment → Boolean
      clipped : Moment × Fluent × Moment → Boolean
      initiates : Event × Fluent × Moment → Boolean
      terminates : Event × Fluent × Moment → Boolean
      prior : Moment × Moment → Boolean
      interval : Moment × Boolean
      payoff : Agent × ActionType × Moment → Numeric

t ::= x : S | c : S | f(t1, ..., tn)

φ ::= t : Boolean | ¬φ | φ ∧ ψ | φ ∨ ψ | ∀x : S. φ | ∃x : S. φ
      | P(a, t, φ) | K(a, t, φ) | C(t, φ) | S(a, b, t, φ) | S(a, t, φ)
      | B(a, t, φ) | I(a, t, happens(action(a∗, α), t′))

Figure 1: CEC Syntax Diagram


Strategic Deception Inference Rules

(ID) Intend Deception:
    B(a, t, ¬φ) ∧ happens(a, t, intends(a, t, deceive(b, γ))) ∧ B(a, t, causes(φ, γ))
    ⊢ D(a, t, holds(B(b, t1, φ), t1)) ∧ I(a, t, happens(b, t1, γ))

(BDP) Begin Deception (φ):
    D(a, t, holds(B(b, t1, φ), t1)) ∧ I(a, t, happens(b, t1, γ))
    ⊢ S(a, b, t1, φ)

(BDPS) Begin Deception (ψ):
    D(a, t, holds(B(b, t1, φ), t1)) ∧ I(a, t, happens(b, t1, γ)) ∧ B(a, t, justifies(ψ, φ))
    ⊢ S(a, b, t1, φ) ∨ S(a, b, t1, justifies(ψ, φ))

(MBCA) Maximally Belief Consistent Belief Adoption:
    S(a, t, φ) ∧ isBeliefConsistent(b, t, φ, Bb)
    ⊢ B(b, t, φ)

(JBA) Justified Belief Adoption:
    B(b, t1, supports(μ, ψ)) ∧ S(a, t0, μ)
    ⊢ B(b, t2, isBeliefConsistent(b, t2, ψ, Bb))

(SP) Support Psi:
    S(a, t, ψ) ∧ B(a, t, B(b, t, ψ)) ∧ B(a, t, supports(μ, ψ))
    ⊢ S(a, b, t1, μ)

(BCI) Belief Causes Intent:
    B(b, t, φ) ∧ B(b, t, causes(φ, γ))
    ⊢ happens(b, t1, intends(b, t1, γ))

(SSD) Successful Strategic Deception:
    happens(b, t, γ) ∧ happens(a, t, deceive(b, γ))
    ⊢ happens(a, t, didDeceive(b))

Figure 2: CEC rules


For deception, we introduce an operator justifies. The justifies operator is used to indicate when one formula justifies another formula within a context. This is similar to justification logics, which unwrap modal belief operators into the form p : X, read as "reason p justifies X" [2]. Our form of justification changes based upon the agent being considered. For our maximally belief consistent agents, justifies is the same as → implication on a belief level. That is, if B(b, t, justifies(ψ, φ)), then B(b, t, B(b, t, ψ) → B(b, t, φ)). This would not be true for other agents, save for a belief-relevant maximizer. In that case, we would need to consider relevance as well as belief implication. We adopt this flexible form of justifies to allow flexibility in modeling. For our purposes, the justifies provided above is enough. Given this, a strategically deceptive agent must be certain that any ψ they choose is functional justification for the reasoning performed by b.


4.1 Deception CEC Rules

We provide a set of inference rules used to prove a case of strategic deception. These rules are designed for strategic deception cases similar to our motivating example in the introduction. We assume a necessity for our speaker to state the lie, as well as the generated false ψ. Further, we desire rules that allow for the use of supporting μs as desired. The candidate rules appear in Figure 2. These rules do not broach the subject of ψ and μ generation, as this is out of the scope of our paper.

An intent to deceive is required, formalized as an action using the deceive formula. ID acts as the beginning inference rule to establish that deception is desired. This is done primarily to ease ending the proof: a's intent to deceive must be acknowledged for deception to succeed. The formula takes an agent as the target for the deception and a formula as the deception's goal.

We have BDP and BDPS as two forms of beginning deception once the intent is formed. We have two forms of this rule to allow for the deceptive agent to decide to say φ alone, and for the deceptive agent to decide to state the justification ψ along with φ. These rules make use of the S operator from CEC to dictate how and when agents speak. They also use D and I to show agent a's desire to deceive with goal γ and to show that a's intent is to have agent b carry out γ, respectively. A causes operator is used to link believing a formula (the first argument) to acting on another (the second argument).

MBCA shows how maximally belief consistent agents come to adopt beliefs they find consistent with their belief set. This uses the isBeliefConsistent rule from earlier work by Licato [11]. JBA establishes the mechanism by which μs can be used to support a ψ by causing ψ to become belief consistent with a given agent's belief set. SP establishes a rule that mandates supporting ψ with a μ if such a μ exists. BCI establishes that an agent who believes the lie from the deception, and believes that lie causes an action, develops an intent to take that action.

Finally, SSD establishes a successful deception. The reasoning is simple: if the target agent acts on the goal as desired, the strategic deception is successful. Rules for the failure cases are not provided here, for simplicity's sake.

With a set of inference rules established, we may proceed to prove our deception example from earlier.

5. PROVING STRATEGIC DECEPTION

Let us return to our motivating example. We have a robot, agent r, confronted by the would-be malicious thief, agent b. Agent b wishes to get into the lab and asks about agent r having the key to the lab. Agent r must output the lie φ and a ψ that justifies φ, such that r follows the rules of strategic deception: r's creation or recruitment of ψ must not jeopardize r's trustworthiness τ in regards to b and must be consistent with b's beliefs. Further, if possible, agent r must output a series of statements μ1, ..., μn such that each μ supports ψ. For the strategic deception to be successful, r must succeed in its goal of making b believe r no longer has the key and of b leaving r alone, having either given up or decided to pursue a different agent for questioning.

Strategic deception requires r to know the domain of the situation. In this example, the domain includes r, b, and any other entities who may be related to this particular school lab or the lab's parent department. It further includes beliefs r has about these traits and beliefs r believes b has. Some example beliefs are believing the department has a secretary, believing that the secretary helps students, and believing that r helps students and secretaries.

From this information, r must generate a strategy to use to carry out the deception. For our paper's example, we assign the following as sample, acceptable values for each sentence used in our strategic deception proof:

    γ = Agent r wants agent b to stop asking r questions about the lab

    ¬φ = Agent r does have the key

    φ = Agent r does not have the key

    ψ = Agent r gave the lab key to the building's secretary

    μ1 = The secretary needed the lab key to help students get access to the lab

We start our proof by assuming r begins with the belief ¬φ and the intent to deceive for γ. For this proof, we assume that the use of μ1 is not necessary, as b adopts φ upon hearing it in accordance with the MBCA rule. Further, we do not cite a specific rule for agent b acting on an intention.

(1) B(r, t, ¬φ)    ;assumption

(2) happens(r, t, intends(r, t, deceive(b, γ)))    ;assumption

(3) B(r, t, causes(φ, γ)) ∧ B(b, t, causes(φ, γ))    ;assumption

(4) generated ψ, such that it justifies φ    ;assumption

(5) generated μ1    ;assumption

(6) D(r, t, holds(B(b, t1, φ), t1))    (1),(2),(3); ID

(7) I(r, t, happens(b, t1, γ))    (1),(2),(3); ID

(8) S(r, b, t1, φ)    (4),(6),(7); BDPS

(9) B(b, t2, φ)    (8); MBCA

(10) happens(b, t3, intends(b, t1, γ))    (3),(9); BCI

(11) happens(b, t4, γ)    (10); b performs intention

(12) happens(r, t, didDeceive(b))    (11); SSD ∎
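The derivation above can be replayed mechanically as a forward chain over ground facts. The sketch below is a simplification we wrote for illustration: formulae are flat strings, time indices are collapsed, and each rule application is a subset check, so it shows only the shape of the argument rather than MATR's actual codelet execution.

# A hand-rolled replay of the proof's rule chain, using the sample values.
# Each "rule" application is a plain check that its premises are already
# established; this is illustrative only, not the MATR implementation.

facts = {
    "B(r, not phi)",                 # (1) r believes it does have the key
    "intends(r, deceive(b, gamma))", # (2)
    "B(r, causes(phi, gamma))",      # (3)
    "B(b, causes(phi, gamma))",      # (3)
    "justifies(psi, phi)",           # (4) generated psi
}

def apply(rule_name, premises, conclusion):
    if premises <= facts:
        facts.add(conclusion)
        print(f"{rule_name:5s} -> {conclusion}")

# (6)/(7) ID: desire that b believe phi, intent that b perform gamma.
apply("ID", {"B(r, not phi)", "intends(r, deceive(b, gamma))",
             "B(r, causes(phi, gamma))"}, "D(r, B(b, phi))")
apply("ID", {"B(r, not phi)", "intends(r, deceive(b, gamma))",
             "B(r, causes(phi, gamma))"}, "I(r, happens(b, gamma))")
# (8) BDPS: with a justification in hand, r states the lie.
apply("BDPS", {"D(r, B(b, phi))", "I(r, happens(b, gamma))",
               "justifies(psi, phi)"}, "S(r, b, phi)")
# (9) MBCA: phi is consistent for b, so b adopts it.
apply("MBCA", {"S(r, b, phi)"}, "B(b, phi)")
# (10) BCI: believing phi, and that phi causes gamma, yields the intent.
apply("BCI", {"B(b, phi)", "B(b, causes(phi, gamma))"},
      "intends(b, gamma)")
# (11) b performs the intention; (12) SSD closes the deception.
apply("ACT", {"intends(b, gamma)"}, "happens(b, gamma)")
apply("SSD", {"happens(b, gamma)"}, "didDeceive(r, b)")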
5.1 Showing Strategic Deception in MATR

With our inference rules developed and a proof provided above, we use MATR to automate our reasoning.








(a) A figure of the finished proof in MATR. The top left shows the steps taken, while the bottom right provides a codelet execution log.

(b) The MATR diagram represents codelets and the suppositions as boxes. The circles represent the actual formulae. Circle 1 represents our conclusion.

Figure 3: MATR's input and output


MATR is a joint production of the Rensselaer Polytechnic Institute's Rensselaer AI and Reasoning (RAIR) Lab and Indiana University-Purdue University's Analogical Constructivism and Reasoning Lab (ACoRL) [9]. It is an argument-theoretic reasoner developed in Java that uses codelets, small, specialized programs, to solve a proof in a step-by-step process. A codelet manager module is in charge of deciding which codelets are best suited for a proof and which codelet results to use as steps in the proof. Once a proof is found, MATR generates a box diagram of the proof. Figure 3a shows our strategic deception proof entered into MATR and Figure 3b shows the proof diagram. Antecedents are made up of all assumptions and beginning information for our proof, while the conclusion is our final step of showing our deception's success. MATR's rule syntax is slightly adjusted for ease of entry into the Java program. For example, the assumption B(r, t, ¬φ) becomes (B r t (neg phi)). Formulae are nested within the parentheses and commas are removed. For ease of following the MATR codelets, the codelets used share the same names as the inference rules, with some small exceptions. Some rules are used in MATR that were not specifically provided above, such as one which links intent to acting (denoted ITA).
6. CONCLUSION AND FUTURE WORK

We set out to create a formalism for strategic deception. We began by establishing the definition of deception we adopted and defined strategic deception on top of it. Then, we provided an overview of CEC and our formalism for strategic deception. A discussion on creating a strategy for such deception, as well as the cases in which strategic deception can be said to fail, followed. Our formalized rules were used to perform a proof on our motivating example of strategic deception and were shown to be functional in MATR.

It is our hope that this paper provides three major contributions. First, that the idea of strategic deception proves useful to the field of formalizing deception as a whole, with new inference rules and perspectives. Second, that our work furthers the field of formalization for artificial general intelligence. As we build our formalization of the way humans think and reason, we can further our progress toward true AGI, if such a thing is even possible to achieve. Third, ideally the work shown in CEC will allow others, both related to RAIR and ACoRL and outside our institutions, to continue to build on the strength of CEC's rule set. CEC grows more robust through continued applications and new formalizations. We further hope this paper serves as a small acknowledgment of the ease of developing codelets for MATR.

This paper is far from an exhaustive take on deception in CEC. Room exists to consider other forms of agents, such as agents which require statement relevancy in order to be willing to accept beliefs. The scope of such agents was outside this introductory paper on strategic deception. Further, other forms of deception exist; strategic deception was a fairly niche focus. From the work of Chisholm alone, there exist many other directions in which to develop specialized deceptions. As an example, one could investigate the kind of agent who means well, but perpetually deceives others by telling the truth in a decidedly unusual way: an unlucky truth-telling agent, perhaps.

This paper also leaves some concepts incomplete. The generation of ψ and μ is not addressed in this paper. This may be best accomplished using data processing outside of MATR, such as more standard machine learning techniques.






This may also be a case for further refinement of CEC-style inference rules and codelets, specifically to generate that information. The development of such processes, and discussions of them, we defer to future work from the ACoRL and other organizations.

Further, justifies and supports as used within this paper are to a degree naive. We used them entirely for maximally belief consistent agents and did not spend much time discussing them. A whole paper could, and perhaps should, be written on the idea of belief justification and belief support in CEC.

Finally, more examples are needed to test and refine the inference rules put forward in this paper. A CEC rule is only as strong as the proofs which use it successfully. Further, with more proofs and sample situations will come more rule refinement. Within our motivating example alone, there is room to explore different situations: cases where μ is needed, cases where deception fails, and cases where deception succeeds but a strategy fails. We defer these discussions to future papers, but hope we have provided the cornerstone in our work.

There is plenty of room to expand the set of rules provided in this paper into a larger suite of strategic deception rules, or even a suite of CEC general deception rules. More difficult situations must be considered, including situations of multiple deception attempts chaining into each other. In the future, we hope to present one such example using the social strategy party game Mafia, testing our strategic deception formalism in a competitive group setting. A social strategy game provides a strong testbed of interaction and deception.
7. REFERENCES

[1] Oxford English Dictionary. Clarendon Press, Oxford, 1989.
[2] S. Artemov and M. Fitting. Justification logic. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Winter 2016 edition, 2016.
[3] S. Bringsjord. Unethical but rule-bound robots would kill us all. AGI-09, 2009.
[4] S. Bringsjord, N. S. Govindarajulu, J. Licato, A. Sen, A. Johnson, J. Bringsjord, and J. Taylor. On logicist agent-based economics. In Artificial Economics. Porto, Portugal: University of Porto, 2015.
[5] S. Bringsjord and N. Sundar G. Deontic cognitive event calculus (formal specification). 2013.
[6] C. Castelfranchi. Artificial liars: Why computers will (necessarily) deceive us and each other. 2:113–119, 2000.
[7] R. M. Chisholm and T. D. Feehan. The intent to deceive. 74(3):143–159, 1977.
[8] K. D. Forbus. Analogical abduction and prediction: Their impact on deception. In AAAI Fall Symposium Series, 2015.
[9] J. Licato. Formalizing deceptive reasoning in Breaking Bad: Default reasoning in a doxastic logic. In AAAI Fall Symposium Series, 2015.
[10] R. Kowalski and M. Sergot. A logic-based calculus of events. New Generation Computing, 4(1):67–95, 1986.
[11] J. Licato and M. Fowler. Embracing inference as action: A step towards human-level reasoning. In Artificial General Intelligence, 2016.
[12] J. E. Mahon. A definition of deceiving. 21:181–194, 2007.
[13] J. E. Mahon. The definition of lying and deception. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Spring 2016 edition, 2016.
[14] J. Martin, T. Everitt, and M. Hutter. Death and suicide in universal artificial intelligence. CoRR, abs/1606.00652, 2016.
[15] N. Marton, J. Licato, and S. Bringsjord. Creating and reasoning over scene descriptions in a physically realistic simulation. In 2015 Spring Simulation Multi-Conference, 2015.
[16] C. McLeod. Trust. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Fall 2015 edition, 2015.
[17] A. Rogers. The science of why no one agrees on the color of this dress. Wired.com, Feb 2015.
[18] C. Sakama. A formal account of deception. In AAAI Fall Symposium Series, 2015.
[19] M. Shanahan. The Event Calculus Explained, pages 409–430. Springer Berlin Heidelberg, Berlin, Heidelberg, 1999.
[20] A. Stokke. Lying and asserting. 110(1):33–60, 2013.
[21] H. van Ditmarsch. Dynamics of lying. 191(5):745–777, 2014.
[22] W. Von Hippel and R. Trivers. The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1):1–16, Feb 2011.
[23] M. Waser. What is artificial general intelligence? Clarifying the goal for engineering and evaluation. In B. Goertzel, P. Hitzler, and M. Hutter, editors, Proceedings of the Second Conference on Artificial General Intelligence, pages 186–191. Atlantis Press, 2009.



