<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Logical Analysis of Misleading and Trust Erosion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Haythem O. Ismail</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrick Attia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, German University in Cairo</institution>
          ,
          <addr-line>Cairo</addr-line>
          ,
          <country country="EG">Egypt</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Engineering Mathematics, Cairo University, Department of Computer Science, German University in Cairo</institution>
          ,
          <addr-line>Cairo</addr-line>
          ,
          <country country="EG">Egypt</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Misleading is, regrettably, an integral part of the commonsense world. Though lying, deception, and similar malignant variants of misleading have been thoroughly investigated in ethics and social psychology, there is rather slim related literature within the logicist AI tradition. In this paper, we present foundations for a logical theory of general misleading, with an eye on its effect on trust erosion. In particular, we define a bare-bones notion of misleading and identify four dimensions along which we distinguish eighty-one variants of misleading. Given this analysis, we suggest that a logical theory of misleading for trust erosion should include an account of belief, desire, intention, and causality. A logical language LM is sketched and used to represent the identified assortment of misleading scenarios.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Lying, deception, and other forms of misleading are,
admittedly, part and parcel of the commonsense world. Whether
malignant, harmless, good-hearted, or outright altruistic, an
instance of misleading does not measure up to the high
standards set by a logic-based agent for the reliability of its
sources of information. Such an agent can be misled by
hostile, lying agents; by cooperative, yet misinformed agents;
or even by fallible perception due to faulty sensors or
illusory environments
        <xref ref-type="bibr" rid="ref13 ref24">(Ismail and Kasrin 2010)</xref>
        . Since
commonsense reasoning is driven by observations made through
communication or perception, trust in information sources is
an important factor for directing belief revision should
contradictions arise. Said trust is, at least partially, dependent
on the information sources’ history of misleading.
      </p>
      <p>
        Studies indicate that lying alone is quite pervasive, with an
American telling an average of one to two lies every day
        <xref ref-type="bibr" rid="ref9">(DePaulo
et al. 1996)</xref>
        . Most lies, however, are told by a small percentage of
the population
        <xref ref-type="bibr" rid="ref13 ref24 ref41">(Serota, Levine, and Boster 2010)</xref>
        .
      </p>
      <p>
        Our long-term goal is to develop a theory of logic-based
agents which can reason about the erosion and recovery of
trust in information sources, where these sources may be
other agents or the reasoning agent’s own perception
processes. We believe that trust erosion in information sources
is primarily affected by incidents of misleading (where
misleading is construed in a very general sense). Our short-term
goal, in this paper, is four-fold: (i) to identify a common core
of all varieties of misleading and a limited number of
dimensions along which we can distinguish them, (ii) to propose a
ranking of varieties of misleading with respect to the extent
to which they affect trust erosion, (iii) to pinpoint the
necessary ingredients of an ontology for a logic of misleading,
and (iv) to develop a logical language for reasoning about
misleading scenarios. Anticipating the future coupling of
misleading and trust, we are guided in achieving our four
goals by how suitable our analysis and constructions are for
a theory of trust erosion in information sources.
      </p>
      <p>
        There is an abundant literature on trust analysis, with
contributions from social and managerial psychology
        <xref ref-type="bibr" rid="ref12 ref13 ref24 ref26 ref39 ref47">(Schweitzer, Hershey, and Bradlow 2006; Elangovan,
Auer-Rizzi, and Szabo 2007; Haselhuhn, Schweitzer, and Wood
2010; Levine and Schweitzer 2015, for instance)</xref>
        ,
economics
        <xref ref-type="bibr" rid="ref5">(Cox 2004, for instance)</xref>
        , social robotics
        <xref ref-type="bibr" rid="ref26 ref47">(Wagner and Robinette 2015)</xref>
        , and multi-agent systems and
e-commerce
        <xref ref-type="bibr" rid="ref32 ref38">(Schillo, Funk, and Rovatsos 2000; Sabater and
Sierra 2005, for instance)</xref>
        . Most formal theories of trust are
probabilistic or game theoretic, but some logicist approaches
exist
        <xref ref-type="bibr" rid="ref11 ref21 ref3 ref7 ref8">(Demolombe 2009; Herzig et al. 2010; Amgoud and
Demolombe 2014; Demolombe 2015; Drawel, Bentahar,
and Shashuki 2017)</xref>
        . None of the logical theories, however,
establishes a link to misleading. Research on lying and
deception (but not misleading in general) is also quite varied,
drawing interest from psychology and human
communication
        <xref ref-type="bibr" rid="ref4">(Buller and Burgoon 1996)</xref>
        , economics
        <xref ref-type="bibr" rid="ref13 ref16 ref18 ref24">(Gneezy 2005;
Ettinger and Jehiel 2010; Gneezy, Rockenbach, and
Serra-Gracia 2013)</xref>
        , social robotics
        <xref ref-type="bibr" rid="ref46">(Wagner and Arkin 2011)</xref>
        , and
is an all-time favorite of philosophy
        <xref ref-type="bibr" rid="ref27">(Mahon 2016)</xref>
        .
      </p>
      <p>
        Within the logicist framework, however, analysis of
lying and deception is (to the best of our knowledge) limited
to the work of Sakama and colleagues
        <xref ref-type="bibr" rid="ref13 ref21 ref24 ref33 ref34 ref35 ref46">(Sakama, Caminada,
and Herzig 2010; Sakama 2011a; 2011b; 2015)</xref>
        . While we
attempt to base our constructions on the foundations
established by them, we do not limit ourselves to the lying and
deception varieties of misleading and we keep our analysis
motivated by issues of trust erosion.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 A Bare-Bones Notion of Misleading</title>
      <p>
        Though a lot of work has been done on the analysis of
lying and deception, there is, much to our distress, almost no
systematic analysis of misleading in general. To identify a
bare-bones notion of misleading, we start with what is out
there: definitions of lying and deception. First, consider the
following adaptation of “the traditional definition of lying”
        <xref ref-type="bibr" rid="ref27">(Mahon 2016)</xref>
        :
      </p>
      <p>Cognitive agent S lies if and only if:
(l1) S states proposition P to A.
(l2) S believes P to be false.
(l3) A is a cognitive agent.
(l4) S states P to A with the intention that A believes P to be true.</p>
      <p>
        We can attempt to generalize this definition to one of
misleading by considering each condition and either dropping
or generalizing it. But, first, consider what it is that we are
trying to define. For starters, we cannot just replace “lies”
with “misleads”, for we are not primarily interested in agent
S and their actions, but in agent A—the one being misled—
and what happens to them and how it affects their trust in
S. It will also not do to define what it means for A to be
misled. The reason is that “mislead” is an achievement verb
(cf.
        <xref ref-type="bibr" rid="ref45">(Vendler 1957)</xref>
        ) and we do not want to imply that A
is subjected to successful misleading; a potentially
successful misleading of A is sufficient to shake their trust in S.
We propose to replace the clause “Cognitive agent S lies”
by “Event E is judged as misleading by cognitive agent A”.
There are a couple of things to note here. First, we take that
which is misleading to be neither an agent nor a statement but
an event. For example, a perception event can be
misleading though there is no misleader nor is there any form of
linguistic communication. Second, you can judge an event
to be misleading without being misled by it. For example,
a student’s untruthful claim to have spent the night
working on their dissertation is a misleading event, though their
major professor will never be misled by it, having seen
the student at a party the night before. Third, what matters is
that A judges the event to be misleading, regardless of what
anybody else thinks.
      </p>
      <p>
        We now turn to conditions (l1)–(l4). As already stated,
misleading need not involve any form of linguistic
communication as mandated by (l1). However, we still need
to confine ourselves to misleading events in which some
information source S (which is not necessarily an agent)
conveys some proposition P . Examples include having a
perception with content P , reading a statement of P in
a newspaper, and, of course, person S’s stating P . For
(l2), we have already pointed out that S need not be an
agent at all and may, thus, have no beliefs. But even
in the prototypical case when S is a person stating P , S
may believe P but use it to conversationally implicate
another proposition which they do not believe
        <xref ref-type="bibr" rid="ref1 ref44">(Adler 1997;
Stokke 2016)</xref>
        . Thus, a general misleading event involves
two propositions: P and the contextually implicated Q. If S
is a cognitive agent, misleading occurs if they do not believe
Q. This is not necessary, however; misleading may still
occur if S believes Q but Q is false. On the other hand, if S
is not a cognitive agent, we contend that there cannot be any
misleading unless Q is false. Finally, both (l3) and (l4) may
simply be dropped: (l3) is presupposed by the left-hand side
of our definition and (l4) does not make sense if S is not an
agent.
      </p>
      <p>Hence, we adopt the following bare-bones notion of misleading:
(M) Event E is judged to be misleading by cognitive agent A if and only if:
(m1) E is an event of information source S’s (directly) conveying proposition P.
(m2) S’s conveying of P together with a common ground C defeasibly implies Q.
(m3) Q is false or S does not believe Q.</p>
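      <p>
        Conditions (m1)–(m3) can be read operationally. The following minimal Python sketch checks them for a single conveying event; all names and the trivial implicature function are our illustrative choices, not notation from the text.
```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ConveyingEvent:
    """(m1): an event of information source S directly conveying proposition P."""
    source: str
    p: str

def judged_misleading(
    event: ConveyingEvent,
    implicature: Callable[[str], str],   # (m2): P plus common ground C defeasibly implies Q
    q_is_false: Callable[[str], bool],   # A's judgement of Q's falsity
    source_believes: Optional[Callable[[str], bool]] = None,  # None when S is not a cognitive agent
) -> bool:
    """(m3): Q is false, or S does not believe Q.
    A non-agent source can mislead only through Q's falsity."""
    q = implicature(event.p)
    if q_is_false(q):
        return True
    return source_believes is not None and not source_believes(q)

# The deceitful student: under the honesty common ground the claim implicates
# itself, and the professor judges it false after seeing the student at a party.
claim = ConveyingEvent("student", "worked-all-night")
print(judged_misleading(claim, lambda p: p, lambda q: True, lambda q: True))  # True
```
        Note that with source_believes set to None (a sensor, say), the sketch returns True only when Q is judged false, matching the contention above about non-cognitive sources.
      </p>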
      <p>There are a couple of points to note about (m1) and (m2).
It is out of the scope of this paper to provide a general
theoretical account of what it means for an event E to be one
of an information source S’s (directly) conveying a
proposition P . The simplest case is when E is the event of a person
S’s stating P . But other cases include sensor S’s producing
a signal interpreted as P by the sensing agent, or agent S’s
performing some action α and thereby conveying the
proposition that “S has just performed α.” We assume that
particular agent theories include statements indicating for some
relevant events that they are events of certain information
sources conveying certain propositions.</p>
      <p>
        By (m2), we model implicature
        <xref ref-type="bibr" rid="ref19">(Grice 1989)</xref>
        by
defeasible implication given some common ground C.
Following
        <xref ref-type="bibr" rid="ref43">(Stalnaker 2002)</xref>
        , we think of common ground as some
proposition which A believes to be common belief (in the
sense of
        <xref ref-type="bibr" rid="ref15">(Fagin et al. 1995)</xref>
        .) Again, we lay the
responsibility of specifying C on particular logical theories that may
choose to make use of our notion of misleading. In the
example of the deceitful graduate student, the common ground
includes the belief that speakers are honest. Thus, together
with the student’s claim of spending the night working on
their dissertation, the common ground implies that they
indeed did so. This is defeated, however, by the professor’s
witnessing the student partying all night.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 The Many Scenarios of Misleading</title>
      <p>We distinguish different types of misleading using four
parameters: (i) whether S believes P (BP), (ii) whether S
intends to deceive A (ID), (iii) whether S intends to harm
A (IH), and (iv) whether being misled has a negative effect
on A (EQ). Each of these parameters may assume one of
three values: 0, ?, and 1. Tables 1 through 4 indicate the
conditions represented by each assignment of a value to a
parameter. We note the following:
1. BP, ID, and IH can take the values 0 or 1 only if S is a
cognitive agent. A value of ? may indicate that S is not a
cognitive agent in the first place.
</p>
      <p>Table 2 (ID): 0: S intends to not deceive A; ?: S intends to neither deceive A nor to not deceive A; 1: S intends to deceive A.</p>
      <p>
3. IH and EQ are motivated by our long-term goal of
establishing a link between misleading and trust erosion.
After conducting a series of interesting studies, Levine and
Schweitzer
        <xref ref-type="bibr" rid="ref26 ref47">(Levine and Schweitzer 2015)</xref>
        conclude that
deception per se does not always harm trust, but
selfishness and willingness to harm do. This motivates including
something like IH as a dimension for classifying
misleading scenarios. Moreover, studies show that people are, in
general, less forgiving of lies which have more
damaging effects on the victim
        <xref ref-type="bibr" rid="ref18">(Gneezy 2005)</xref>
        .
        <xref ref-type="bibr" rid="ref16">(Gneezy,
Rockenbach, and Serra-Gracia 2013)</xref>
        reports on experiments
conducted to identify when people decide to lie.
One of the findings of the experiments is that the victims’
trust in the liars deteriorates more severely if, by
following the lie, they lose their monetary payoff. These results
suggest the appropriateness of EQ.
      </p>
      <p>With our four three-valued parameters, we can distinguish
eighty-one different scenarios of misleading, M0–M80.
Each scenario is characterized by eight conditions: m1
through m4 and one condition from each of Tables 1
through 4. Symbolically, we can encode the misleading
variants by using the standard ternary encoding of the
natural numbers 0–80 over the alphabet {0, ?, 1}. Table 5 tersely
displays the association between the labels (Mi) and the
strings.</p>
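      <p>
        The encoding can be sketched in a few lines of Python. The digit order (IH, ID, BP, EQ), with the values 0, ?, 1 mapped to ternary digits 0, 1, 2, is our reading of the examples given later (e.g., M80 = 1111, M41 = ???1), not notation fixed by the text.
```python
# Encode the 81 misleading scenarios M0-M80 as 4-digit strings over {0, ?, 1},
# reading each string as a standard ternary numeral with 0 -> 0, ? -> 1, 1 -> 2.
DIGITS = "0?1"

def encode(i: int) -> str:
    """Map scenario index 0..80 to its four-digit string over {0, ?, 1}."""
    assert 80 >= i >= 0
    s = ""
    for _ in range(4):
        s = DIGITS[i % 3] + s
        i //= 3
    return s

def decode(s: str) -> int:
    """Map a four-digit string over {0, ?, 1} back to its scenario index."""
    i = 0
    for ch in s:
        i = 3 * i + DIGITS.index(ch)
    return i

print(encode(80), encode(41), encode(26))  # 1111 ???1 0111
print(decode("111?"))                      # 79
```
      </p>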
      <p>[Table 5: the labels M0–M80 paired with their ternary encodings over {0, ?, 1}.]</p>
      <p>
        We rank misleading scenarios along the natural order of
the integers 0–80; the higher the number, the more
erosive-to-trust the scenario is. Thus, ceteris paribus, S’s
believing P is always better than their having no clue about it,
which is always better than their believing it to be false.
This is, in fact, the common view in ethics. (For
example, see
        <xref ref-type="bibr" rid="ref37">(Saul 2012)</xref>
        who, interestingly, argues against this
common view.) Likewise, a positive effect (on A) of a
successful misleading is, ceteris paribus, always better than a
neutral effect, which is always better than a negative
effect; this is consistent with the findings of
        <xref ref-type="bibr" rid="ref16 ref18">(Gneezy 2005;
Gneezy, Rockenbach, and Serra-Gracia 2013)</xref>
        . Similarly for
the intentions to deceive and harm. Globally, IH has the
strongest influence on trust erosion, followed by ID,
followed by BP, and finally by EQ. That EQ comes last makes
sense since the consequences of believing Q are generally
not under the control of S. On the other hand, IH comes
first, signifying the damaging effect on trust that selfishness
and willingness to harm have
        <xref ref-type="bibr" rid="ref26 ref47">(Levine and Schweitzer 2015)</xref>
        .
      </p>
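      <p>
        The claim that IH dominates ID, which dominates BP, which dominates EQ, is exactly lexicographic ordering on the four digits. A small check, assuming the digit order (IH, ID, BP, EQ) with 0, ?, 1 read as ternary digits 0, 1, 2 (this reading is ours, inferred from the examples):
```python
# Erosiveness is the scenario index itself: lexicographic order on the four
# parameter digits, with the most significant position held by IH.
RANK = {"0": 0, "?": 1, "1": 2}

def erosiveness(code: str) -> int:
    """code is a four-character string over {0, ?, 1}, ordered (IH, ID, BP, EQ)."""
    n = 0
    for ch in code:
        n = 3 * n + RANK[ch]
    return n

# Ceteris paribus, an intention to harm outweighs all other parameters combined:
assert erosiveness("1000") > erosiveness("0111")
# EQ breaks ties last:
assert erosiveness("1111") > erosiveness("111?")
```
      </p>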
      <p>Now, it might be suspected that some of the scenarios
M0–M80 are not realistic. We have successfully constructed
real-life examples of each scenario and we present some of
the interesting/exotic ones below.</p>
      <p>Example 1. In what follows, we present eleven examples
of selected entries from Table 5. All examples are about our
two protagonists: the misleading information source Steve
(S) and the sharp, trusting agent Ashley (A).</p>
      <p>M80 (1111). Steve tells his colleague Ashley that there is
no meeting the next day (P ) although he believes that
there is indeed an important meeting. Steve does so with
the intention of deceiving Ashley and of hurting her
career at the company as a result of missing the meeting.
Believing Steve and missing the meeting, Ashley gets a
deduction and a notice.</p>
      <p>M79 (111?). Same as M80 above, but the
meeting gets cancelled and nothing, good or bad, happens to
Ashley.</p>
      <p>M78 (1110). Same as M80 but, missing the meeting,
Ashley gets the chance to work more on her assigned tasks,
produces fantastic results, and ends up getting a raise.</p>
      <p>M41 (???1). Steve tells Daphne and Ashley that there is a
theory of computation quiz the next day. However, he
has no idea whether there is a quiz or not; he just wants
Daphne to be nervous and does not care about Ashley, who
just happens to be there. Believing Steve, Ashley
panics, spends the night studying theory of computation, and
forgets about the networks quiz which she, consequently,
fails.</p>
      <p>M39 (???0). Same as M41 above but it turns out there is a
pop quiz in theory of computation the next day. Ashley
does great since she spent the night studying.</p>
      <p>M26 (0111). Steve and Ashley apply for an internship. At
the interview they are told that only one person will get
the internship and will be notified by e-mail if they get
accepted. Steve gets the e-mail but, to spare her feelings, refrains from saying
so to Ashley when she asks.
Consequently, Ashley waits for the e-mail and misses the chance
of applying for another great internship.</p>
      <p>M24 (0110). Same as M26 but the internship which
Ashley misses the chance of applying for would have been a
horrible experience.</p>
      <p>M8 (0011). Steve sarcastically tells Ashley that the theory
assignment is so easy that he solved it the moment he read
it; but he means the exact opposite since the assignment
is super difficult. However, Ashley naively believes him,
waits till the last minute, and fails to finish the assignment
on time.</p>
      <p>M7 (001?). Same as M8 but, although Ashley believes
Steve, she starts working early on the assignment
anyways.</p>
      <p>M6 (0010). Same as M8 but, while Ashley, believing Steve,
spends her time finishing other important work, the
professor realizes that the assignment is too hard and cancels
it.</p>
      <p>M0 (0000). Here we adapt M41 above as follows. Steve
replaces Ashley and Sam replaces Steve. Further, right after
his encounter with Sam and Daphne, Steve meets Ashley
and good-heartedly informs her about the theory of
computation quiz. Believing Steve, Ashley panics, spends the
night studying theory of computation, and forgets about
the networks quiz which she, consequently, fails.
</p>
    </sec>
    <sec id="sec-4">
      <title>4 Foundations for a Logic of Misleading</title>
      <p>In this section, we lay the foundations for a logic of
misleading, as per the analysis presented thus far.</p>
      <sec id="sec-4-1">
        <title>4.1 Ontology</title>
        <p>
          Reasoning about misleading, construed after the analysis of
Sections 2 and 3, rests upon a rather rich ontology. We take
our ontology to at least conform to the following.
1. As mandated by (M), the ontology includes agents,
eventualities, and propositions. Agents are distinguished
individuals who can have beliefs, intentions, and desires.
(More on these below.) We follow
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          in
assuming a category of eventualities which are, intuitively,
stretches of time characterized by some proposition’s
being true (or some state’s holding
          <xref ref-type="bibr" rid="ref25">(Ismail 2013)</xref>
          .)
Propositions are taken at face value, and assumed to be first-class
inhabitants of our ontology. This simplifies the language
and facilitates quantification over propositions. Such a
notion of propositions may be modeled using reified
fluents or, more generally, states, as suggested in
          <xref ref-type="bibr" rid="ref25">(Ismail
2013)</xref>
          .
2. To accommodate BP, ID, and IH, we follow the standard
analysis of belief and intention. Hence, the ontology
includes possible worlds, with belief and intention
accessibility relations.
3. An account of causality is necessary for reasoning about
the effects of misleading, as IH and EQ mandate. We
follow the treatment of causality presented in
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          ,
which presupposes eventualities and possible worlds.
4. Whether the effect of misleading is positive, negative, or
neutral is determined by the desirability of that effect.
Likewise, an intention to hurt by misleading is an
intention that misleading causes an undesirable effect. Hence,
for IH and EQ, our ontology should accommodate a
notion of desirability. To that end, we follow the theory of
relative desire presented in
          <xref ref-type="bibr" rid="ref10">(Doyle, Shoham, and Wellman
1991)</xref>
          . That theory posits a preorder on models which are
taken to be sets of literals of the logic. We opt for having
models as secondary ingredients of our ontology, defined
in terms of possible worlds.
5. Since beliefs and intentions, in general, vary over time,
we assume a global time-line across all possible worlds.
        </p>
        <p>
          To summarize, the ontology of misleading includes
agents, eventualities, propositions, possible worlds, and a
global clock. Moreover, for every agent a, a belief- and an
intention- accessibility relation, respectively RaB and RIa,
are defined: RaB relates pairs of worlds and pairs of times
and RIa relates pairs of worlds at a time. (More on this
below.) Every world has an associated set of eventualities
holding in it
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          ; a function E maps each world
w to its associated set E (w). Finally, a function M maps
a world w to its associated model: the subset of E(w) consisting of the
eventualities of propositional literals being true. Here
we allude to a particular logical language (like the one
presented below) to fix the set of literals. A relative desire
relation ⪰, for each agent a, akin to that of
          <xref ref-type="bibr" rid="ref10">(Doyle, Shoham,
and Wellman 1991)</xref>
          , preorders the set of models.
        </p>
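        <p>
          The structures just summarized can be made concrete as a toy model. Every name below (MisleadingOntology, holds, and so on) is our illustrative choice, not notation from the text.
```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Eventuality:
    prop: str  # the proposition whose holding this eventuality is

@dataclass
class MisleadingOntology:
    worlds: set   # possible worlds
    times: list   # global time-line shared by all worlds
    E: dict       # world -> set of eventualities holding in it
    M: dict       # world -> its model, a subset of E[world]
    RB: dict = field(default_factory=dict)  # agent -> {(w1, w2, t, t2)}: belief accessibility
    RI: dict = field(default_factory=dict)  # agent -> {(w1, w2, t)}: intention accessibility

    def holds(self, e: Eventuality, w: str) -> bool:
        """e holds in w iff e is among w's associated eventualities E(w)."""
        return e in self.E[w]

# A two-world toy structure: "Meeting" holds in w1 but not in w2.
meeting = Eventuality("Meeting")
onto = MisleadingOntology(
    worlds={"w1", "w2"}, times=[0, 1],
    E={"w1": {meeting}, "w2": set()},
    M={"w1": {meeting}, "w2": set()},
)
print(onto.holds(meeting, "w1"))  # True
```
        </p>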
      </sec>
      <sec id="sec-4-2">
        <title>4.2 Sketch of a Language</title>
        <p>
          We present a sketch of a logical language LM for reasoning
about misleading scenarios. LM is a first-order language
amended with features for defeasible reasoning, symbolized
by a connective ⇝. We stay silent about exactly what those
features are; we may interpret ⇝ as in
          <xref ref-type="bibr" rid="ref29">(McCarthy 1980)</xref>
          ,
          <xref ref-type="bibr" rid="ref31">(Reiter 1980)</xref>
          , or
          <xref ref-type="bibr" rid="ref30">(Nute 1994)</xref>
          , for instance. The vocabulary
and informal semantics of LM are outlined below.
        </p>
        <p>
          Terms: a, possibly subscripted, is an agent variable; e,
possibly subscripted, is an eventuality variable; t, possibly
subscripted, is a time variable; and w, possibly subscripted, is
a possible world variable. The set of fluent/state terms is
defined recursively as follows:
1. P ∈ 𝒫 is a fluent constant, where 𝒫 is a set of
propositional constants disjoint from the rest of the alphabet.
2. p, possibly subscripted, is a fluent variable.
3. Bel(α, φ, t) is a fluent functional term denoting agent
[[α]]’s believing fluent [[φ]] to be true-at-time-[[t]].
4. Int(α, φ) is a fluent functional term denoting agent [[α]]’s
intending fluent [[φ]] to be true.
5. Conv(α, φ, t) is a fluent functional term denoting agent
[[α]]’s conveying fluent [[φ]]’s being true-at-[[t]].
6. cause(ε1, ε2) is a fluent functional term denoting
eventuality [[ε1]]’s causing eventuality [[ε2]]
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          . This is
a fluent term not because causality between event tokens
varies over time (it certainly does not), but because we
would like such terms to appear as arguments of Bel and Int.
7. DESIRE(α, φ) is a fluent functional term denoting
fluent [[φ]]’s being desirable by agent [[α]]
          <xref ref-type="bibr" rid="ref10">(Doyle, Shoham,
and Wellman 1991)</xref>
          . This roughly means that, ceteris
paribus, [[φ]] is preferred over [[¬φ]].2
8. If φ and ψ are fluent terms, then so are ¬φ and φ ∧ ψ.
Here we are overloading the sentential connectives.
        </p>
        <p>Fluent terms of the first seven forms and their negations are
the literals of the language.</p>
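        <p>
          The term grammar can be rendered as a small abstract syntax, with the literal test following the sentence above. The class layout is our illustrative choice; only the constructor names come from the text, and we write the dropped argument symbols as plain strings.
```python
from dataclasses import dataclass

# An AST for LM fluent terms, mirroring the eight grammar clauses sketched above.
@dataclass(frozen=True)
class Fluent: pass

@dataclass(frozen=True)
class Const(Fluent):   # clause 1: a propositional constant
    name: str

@dataclass(frozen=True)
class Bel(Fluent):     # clause 3: Bel(agent, fluent, time)
    agent: str
    fluent: Fluent
    time: str

@dataclass(frozen=True)
class Int(Fluent):     # clause 4: Int(agent, fluent)
    agent: str
    fluent: Fluent

@dataclass(frozen=True)
class Conv(Fluent):    # clause 5: Conv(agent, fluent, time)
    agent: str
    fluent: Fluent
    time: str

@dataclass(frozen=True)
class Not(Fluent):     # clause 8: negation of a fluent term
    fluent: Fluent

@dataclass(frozen=True)
class And(Fluent):     # clause 8: conjunction of fluent terms
    left: Fluent
    right: Fluent

def is_literal(f: Fluent) -> bool:
    """Terms of the first seven forms and their negations are literals."""
    if isinstance(f, Not):
        return not isinstance(f.fluent, (Not, And))
    return not isinstance(f, And)

# Steve's conveying that there is no meeting, as a fluent term:
term = Conv("Steve", Not(Const("Meeting")), "t0")
print(is_literal(term))  # True
```
        </p>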
      </sec>
      <sec id="sec-4-3">
        <title>Predicates</title>
        <p>
          LM has five groups of predicate symbols:
1. RB(α, ω1, ω2, t, t′) is true if ([[ω1]], [[ω2]], [[t]], [[t′]]) ∈
R[[α]]B; intuitively, at t′ in ω1, [[α]] believes ω1-at-t to be
identical to ω2-at-t.
2. RI(α, ω1, ω2, t) is true if ([[ω1]], [[ω2]], [[t]]) ∈ RI[[α]];
intuitively, [[α]]’s intentions at t in ω1 are true in ω2 at some time
no earlier than t.
3. holds(e, w, t) is true whenever [[e]] ∈ E([[w]]) at time [[t]],
and Rexists(e, t) is true whenever [[e]] ∈ E(r) at time [[t]],
where r is the real world
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          .
4. before(t1, t2) is true if [[t1]] precedes [[t2]] on the global
time-line.
5. Ev(ε, φ) is true whenever [[ε]] is an eventuality of [[φ]]’s
being true.
        </p>
        <p>
          Axioms: An LM theory contains the following groups of
axioms.
1. Appropriate axioms for RB and RI. For example, we
can borrow the axioms in (Sakama, Caminada, and Herzig
2010) as is, modulo the translation from their modal
language to our first-order LM and accounting for
temporality. These axioms restrict belief to a KD45 modality and
intention to a KD modality, with two axioms for
interaction between the two modalities.
2. Axioms requiring before to be irreflexive, asymmetric,
and transitive.
3. Axioms characterizing cause from
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          .3
4. Finally, Ev is characterized by the following axioms
which, we believe, are self-explanatory. Henceforth, we
write holds(x, y, z, t) as a shorthand for Ev(x, y) ∧
holds(x, z, t) and Rholds(x, y, t) as a shorthand for
Ev(x, y) ∧ Rexists(x, t). Unless otherwise indicated, all
variables are universally quantified with widest scope.
AEv1. ∃e[Ev(e, p)]
AEv2. ∃e[holds(e, Bel(a, p, t), w, t′)] ⇔
∀w1[RB(a, w, w1, t, t′) ⇒ ∃e1[holds(e1, p, w1, t)]]
AEv3. ∃e[holds(e, Int(a, p), w, t)] ⇔
∀w1[RI(a, w, w1, t) ⇒ ∃e1, t1[holds(e1, p, w1, t1) ∧
¬before(t1, t)]]
AEv4. ∃e, t[holds(e, cause(e1, e2), w, t)] ⇔
∀t∃e[holds(e, cause(e1, e2), w, t)]
AEv5. ∃e[holds(e, ¬p, w, t)] ⇔ ¬∃e[holds(e, p, w, t)]
AEv6. ∃e[holds(e, p1 ∧ p2, w, t)] ⇔
∃e1, e2[holds(e1, p1, w, t) ∧ holds(e2, p2, w, t)]
        </p>
        <p>
          2. A note for readers familiar with
          <xref ref-type="bibr" rid="ref10">(Doyle, Shoham, and
Wellman 1991)</xref>
          : Since the semantics of desire in that beautiful theory
is based on models, which we do not have, we replace a model m
with a possible world w. In the formal machinery, each mention of
m is replaced by M(w).
        </p>
      </sec>
      <sec id="sec-4-4">
        <title>Formalizing Misleading Scenarios</title>
        <p>As indicated in Section 3, each of M0–M80 is a
conjunction of (M) together with four statements, one from each of
Tables 1–4. Thus, it suffices to formalize (M) together with
the twelve statements in the tables. The representation of
scenario Mi with encoding γδβε has the following general
form:
∃a, p1, p2, t, t′, t″[
Rholds(E, Conv(a, p1, t″), t) ∧
[Rholds(E, Conv(a, p1, t″), t) ∧ RC(A, t) ⇝
∃e1[Rholds(e1, p2, t′)]] ∧
∃e2[Rholds(e2, ¬p2, t′) ∨ Rholds(e2, ¬Bel(a, p2, t′), t)]
∧ Φ(γ, δ, β, ε)]
Here E is a placeholder for the eventuality judged as
misleading by agent A and RC(A, t) stands for whatever A
takes to be common ground in the real world at time t.</p>
        <p>Φ(γ, δ, β, ε) = IH(γ) ∧ ID(δ) ∧ BP(β) ∧ EQ(ε)
represents the conjunction of statements corresponding to γ, δ, β,
and ε from Tables 1–4, respectively.</p>
        <p>I H( ) =def 9p3; e1; e2; e3; e4[</p>
        <p>Rholds(e1; ( ); t)^
Ev(e2; Bel(A; p2; t0)) ^ Ev(e3; p3)^
Rholds(e4; Bel(a;cause(e2; e3)</p>
        <p>
          ^DESIRE(A; :p3); t0); t)]
3Again, this requires some adjustment. Hobbs
          <xref ref-type="bibr" rid="ref23">(Hobbs 2005)</xref>
          takes worlds to be sets of eventualities. Thus, a world w in our
ontology does not correspond to a world in Hobbs’s; the set E(w)
does.
( )
( )
0
?
1
0
?
1
        </p>
        <p>Int(a; :cause(E; e2))</p>
        <p>: (0) ^ : (1)</p>
        <p>Int(a; cause(E; e2))
Example 1. As an illustration, we formalize case M80 of
Example 1 from Section 3. For readability, some variables
have been replaced by more mnemonic (Skolem) constants.
Rholds(E; Conv(Steve; :M eeting; t0); t)^
Rholds(E;Conv(Steve; :M eeting; t0); t)</p>
        <p>9e1[Rholds(e1; :M eeting; t0)]^
Rholds(e2; Bel(Steve; M eeting; t0); t)^
Rholds(e3; Int(Steve; cause(E; e4)); t)^
Ev(e4; Bel(Ashley; :M eeting; t0)) ^ Ev(e5; Deduct)^
Rholds(e6; Bel(Steve; cause(e4; e5)^</p>
        <p>DESIRE(Ashley; :Deduct); t0); t)^
Rholds(e7; Int(Steve; Bel(Ashley; :M eeting; t0)); t)^
0
?
1</p>
        <p>( )
Bel(a; p1; t00)
: (0) ^ : (1)</p>
        <p>Bel(a; :p1; t00)
We presented an account of misleading as a catalyst to trust
erosion in information sources. A suitable bare-bones
notion of what it means for an agent to judge an
eventuality as misleading is the corner stone of our account.
According to it, all misleading scenarios involve an
information source’s conveying some proposition which, given what
the agent takes to be common ground, defeasibly imply
another proposition that is either false or not believed to be
true by the information source. We have identified eighty
one variants of misleading as generated by four three-valued
parameters: whether the source believes what they convey,
whether they intend to deceive the agent, whether they
intend to harm the agent, and whether misleading results in an
undesirable effect to the agent. If this analysis is correct, a
logical theory of misleading for trust erosion necessarily
includes theories of belief, desire, intention, and causality. We
have sketched a first-order language LM to represent
scenarios of misleading.</p>
        <p>Future research can go in at least three fruitful directions.
First, we need to go to the lab and conduct various
experiments on human subjects to validate the details of our
analysis of misleading. Second, a more thorough investigation
of LM and its properties is called for. Finally, we should
turn to our long-term goal and introduce an account of trust
erosion to LM.</p>
      </sec>
    </sec>
  </body>
  <back>
  </back>
</article>