<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards a Logical Analysis of Misleading and Trust Erosion</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Haythem</forename><forename type="middle">O</forename><surname>Ismail</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Engineering Mathematics, Cairo University; Department of Computer Science</orgName>
								<orgName type="institution">German University in Cairo</orgName>
								<address>
									<settlement>Cairo</settlement>
									<country key="EG">Egypt</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Attia</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">German University in Cairo</orgName>
								<address>
									<settlement>Cairo</settlement>
									<country key="EG">Egypt</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards a Logical Analysis of Misleading and Trust Erosion</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6BAFFFE70E5FA8D23DFB51D2FB584518</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:00+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Misleading is, regrettably, an integral part of the commonsense world. Though lying, deception, and similar malignant variants of misleading have been thoroughly investigated in ethics and social psychology, there is a rather slim related literature within the logicist AI tradition. In this paper, we present foundations for a logical theory of general misleading, with an eye on its effect on trust erosion. In particular, we define a bare-bones notion of misleading and identify four dimensions along which we distinguish eighty-one variants of misleading. Given this analysis, we suggest that a logical theory of misleading for trust erosion should include an account of belief, desire, intention, and causality. A logical language L M is sketched and used to represent the identified assortment of misleading scenarios.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Lying, deception, and other forms of misleading are, admittedly, part and parcel of the commonsense world. Whether malignant, harmless, good-hearted, or outright altruistic, an instance of misleading does not measure up to the high standards set by a logic-based agent for the reliability of its sources of information. Such an agent can be misled by hostile, lying agents; by cooperative, yet mis-informed agents; or even by fallible perception due to faulty sensors or illusory environments <ref type="bibr" target="#b17">(Ismail and Kasrin 2010)</ref>. <ref type="foot" target="#foot_0">1</ref> Since commonsense reasoning is driven by observations made through communication or perception, trust in information sources is an important factor for directing belief revision should contradictions arise. Said trust is, at least partially, dependent on the history of misleading of information sources.</p><p>Our long-term goal is to develop a theory of logic-based agents which can reason about the erosion and recovery of trust in information sources, where these sources may be other agents or the reasoning agent's own perception processes. We believe that trust erosion in information sources is primarily affected by incidents of misleading (where misleading is construed in a very general sense). Our short-term goal, in this paper, is four-fold: (i) to identify a common core of all varieties of misleading and a limited number of dimensions along which we can distinguish them, (ii) to propose a ranking of varieties of misleading with respect to the extent to which they affect trust erosion, (iii) to pinpoint the necessary ingredients of an ontology for a logic of misleading, and (iv) to develop a logical language for reasoning about misleading scenarios. 
Anticipating the future coupling of misleading and trust, we are guided in achieving our four goals by how suitable our analysis and constructions are for a theory of trust erosion in information sources.</p><p>There is an abundant literature on trust analysis, with contributions from social and managerial psychology <ref type="bibr" target="#b29">(Schweitzer, Hershey, and Bradlow 2006;</ref><ref type="bibr" target="#b9">Elangovan, Auer-Rizzi, and Szabo 2007;</ref><ref type="bibr" target="#b14">Haselhuhn, Schweitzer, and Wood 2010;</ref><ref type="bibr" target="#b19">Levine and Schweitzer 2015</ref>, for instance), economics <ref type="bibr" target="#b3">(Cox 2004</ref>, for instance), social robotics (Wagner and Robinette 2015), and multi-agent systems and ecommerce <ref type="bibr" target="#b28">(Schillo, Funk, and Rovatsos 2000;</ref><ref type="bibr" target="#b23">Sabater and Sierra 2005</ref>, for instance). Most formal theories of trust are probabilistic or game theoretic, but some logicist approaches exist <ref type="bibr" target="#b4">(Demolombe 2009;</ref><ref type="bibr" target="#b15">Herzig et al. 2010;</ref><ref type="bibr" target="#b1">Amgoud and Demolombe 2014;</ref><ref type="bibr" target="#b5">Demolombe 2015;</ref><ref type="bibr" target="#b8">Drawel, Bentahar, and Shashuki 2017)</ref>. None of the logical theories, however, establishes a link to misleading. 
Research on lying and deception (but not misleading in general) is also quite varied, drawing interest from psychology and human communication <ref type="bibr" target="#b2">(Buller and Burgoon 1996)</ref>, economics <ref type="bibr" target="#b12">(Gneezy 2005;</ref><ref type="bibr" target="#b10">Ettinger and Jehiel 2010;</ref><ref type="bibr">Gneezy, Rockenbach, and Serra-Gracia 2013)</ref>, social robotics <ref type="bibr" target="#b34">(Wagner and Arkin 2011)</ref>, and is an all-time favorite of philosophy <ref type="bibr" target="#b20">(Mahon 2016)</ref>.</p><p>Within the logicist framework, however, analysis of lying and deception is (to the best of our knowledge) limited to the work of Sakama and colleagues <ref type="bibr" target="#b24">(Sakama, Caminada, and Herzig 2010;</ref><ref type="bibr" target="#b24">Sakama 2011a;</ref><ref type="bibr" target="#b25">2011b;</ref><ref type="bibr">2015)</ref>. While we attempt to base our constructions on the foundations established by them, we do not limit ourselves to the lying and deception varieties of misleading and we keep our analysis motivated by issues of trust erosion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">A Bare-Bones Notion of Misleading</head><p>Though a lot of work has been done on the analysis of lying and deception, there is, much to our distress, almost no systematic analysis of misleading in general. To identify a bare-bones notion of misleading, we start with what is out there: definitions of lying and deception. First, consider the following adaptation of "the traditional definition of lying" <ref type="bibr" target="#b20">(Mahon 2016)</ref>:</p><p>Cognitive agent S lies if and only if (l1) S states proposition P to A. (l2) S believes P to be false. (l3) A is a cognitive agent. (l4) S states P to A with the intention that A believes P to be true. We can attempt to generalize this definition to one of misleading by considering each condition and either dropping or generalizing it. But, first, consider what it is that we are trying to define. For starters, we cannot just replace "lies" with "misleads", for we are not primarily interested in agent S and their actions, but in agent A, the one being misled, and in what happens to them and how it affects their trust in S. It will also not do to define what it means for A to be misled. The reason is that "mislead" is an achievement verb (cf. <ref type="bibr" target="#b33">(Vendler 1957</ref>)) and we do not want to imply that A is subjected to successful misleading; a potentially successful misleading of A is sufficient to shake their trust in S. We propose to replace the clause "Cognitive agent S lies" by "Event E is judged as misleading by cognitive agent A". There are a couple of things to note here. First, we take that which is misleading to be not an agent or a statement, but an event. For example, a perception event can be misleading though there is no misleader, nor is there any form of linguistic communication. Second, you can judge an event to be misleading without being misled by it. 
For example, a student's untruthful claim to having spent the night working on their dissertation is a misleading event, though their major professor will never be misled by it if they saw the student at a party the night before. Third, what matters is that A judges the event to be misleading, regardless of what anybody else thinks.</p><p>We now turn to conditions (l1)-(l4). As already stated, misleading need not involve any form of linguistic communication as mandated by (l1). However, we still need to confine ourselves to misleading events in which some information source S (which is not necessarily an agent) conveys some proposition P. Examples include having a perception with content P, reading a statement of P in a newspaper, and, of course, person S's stating P. For (l2), we have already pointed out that S need not be an agent at all and may, thus, have no beliefs. But even in the prototypical case when S is a person stating P, S may believe P but use it to conversationally implicate another proposition which they do not believe <ref type="bibr" target="#b0">(Adler 1997;</ref><ref type="bibr" target="#b32">Stokke 2016</ref>). Thus, a general misleading event involves two propositions: P and the contextually implicated Q. If S is a cognitive agent, misleading occurs if they do not believe Q. This is not necessary, however; misleading may still occur if S believes Q but Q is false. On the other hand, if S is not a cognitive agent, we contend that there cannot be any misleading unless Q is false. Finally, both (l3) and (l4) may simply be dropped: (l3) is presupposed by the left-hand side of our definition and (l4) does not make sense if S is not an agent. (m1) E is an event of information source S's (directly) conveying proposition P. (m2) S's conveying of P together with a common ground</p><formula xml:id="formula_0">C defeasibly implies Q. (m3) Q is false or S does not believe Q.</formula><p>There are a couple of points to note about (m1) and (m2). 
It is beyond the scope of this paper to provide a general theoretical account of what it means for an event E to be one of an information source S's (directly) conveying a proposition P. The simplest case is when E is the event of a person S's stating P. But other cases include sensor S's producing a signal interpreted as P by the sensing agent, or agent S's performing some action α and thereby conveying the proposition that "S has just performed α." We assume that particular agent theories include statements indicating, for some relevant events, that they are events of certain information sources conveying certain propositions.</p><p>By (m2), we model implicature <ref type="bibr" target="#b13">(Grice 1989</ref>) by defeasible implication given some common ground C. Following <ref type="bibr" target="#b31">(Stalnaker 2002)</ref>, we think of common ground as some proposition which A believes to be common belief (in the sense of <ref type="bibr" target="#b11">(Fagin et al. 1995)</ref>). Again, we lay the responsibility of specifying C on particular logical theories that may choose to make use of our notion of misleading. In the example of the deceitful graduate student, the common ground includes the belief that speakers are honest. Thus, together with the student's claim of spending the night working on their dissertation, the common ground implies that they indeed did so. This is defeated, however, by the professor's witnessing the student partying all night.</p></div>
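<div xmlns="http://www.tei-c.org/ns/1.0"><p>The defeasible step in (m2) admits a simple operational reading: the implicated Q follows from the conveyed P and the common ground C unless a defeater is believed. The following Python sketch is purely illustrative; the rule format and all predicate strings are our own hypothetical encoding, not part of any formal language defined in this paper.</p><p>
```python
# Illustrative sketch of (m2): conveying P, together with common ground C,
# defeasibly implies Q -- the inference goes through only when no defeater
# is among the agent's beliefs. All names here are hypothetical.
def implicated(conveyed, common_ground, rule, beliefs):
    """rule = (premises, conclusion, defeaters).

    Returns the conclusion if every premise is established and no
    defeater is believed; returns None otherwise.
    """
    premises, conclusion, defeaters = rule
    established = conveyed | common_ground
    if premises.issubset(established) and beliefs.isdisjoint(defeaters):
        return conclusion
    return None

# The graduate-student example: the claim plus the honesty norm implicate
# that the student worked all night -- unless the professor saw them party.
rule = ({"claims(student, worked_all_night)", "speakers_are_honest"},
        "worked_all_night",
        {"saw(student, partying)"})

q1 = implicated({"claims(student, worked_all_night)"},
                {"speakers_are_honest"}, rule, set())
assert q1 == "worked_all_night"      # implicature goes through
q2 = implicated({"claims(student, worked_all_night)"},
                {"speakers_are_honest"}, rule, {"saw(student, partying)"})
assert q2 is None                    # defeated by the professor's observation
```
</p><p>Note that (m2) only requires the implicature to be defeasible; the set-based check above stands in for whichever defeasible-reasoning machinery a particular theory adopts.</p></div>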
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">The Many Scenarios of Misleading</head><p>We distinguish different types of misleading using four parameters: (i) whether S believes P (BP), (ii) whether S intends to deceive A (ID), (iii) whether S intends to harm A (IH), and (iv) whether being misled has a negative effect on A (EQ). Each of these parameters may assume one of three values: 0, ?, and 1. Tables <ref type="table" target="#tab_5">1 through 4</ref> indicate the conditions represented by each assignment of a value to a parameter. We note the following:</p><p>1. BP, ID, and IH can take the values 0 or 1 only if S is a cognitive agent. A value of ? may indicate that S is not a cognitive agent in the first place.</p><p>2. That BP and ID are parameters distinguishing varieties of misleading is already a common practice in analyses of lying and deception (Mahon 2016; Sakama, Caminada, and Herzig 2010; Sakama 2011a; 2011b; 2015).</p><p>3. IH and EQ are motivated by our long-term goal of establishing a link between misleading and trust erosion. After conducting a series of interesting studies, Levine and Schweitzer <ref type="bibr" target="#b19">(Levine and Schweitzer 2015)</ref> conclude that deception per se does not always harm trust, but selfishness and willingness to harm do. This motivates including something like IH as a dimension for classifying misleading scenarios. Moreover, studies show that people are, in general, less forgiving of lies which have more damaging effects on the victim <ref type="bibr" target="#b12">(Gneezy 2005)</ref>. (Gneezy, Rockenbach, and Serra-Gracia 2013) reports on experiments conducted to identify when people decide to lie. One of the findings of the experiments is that the victims' trust in the liars deteriorates more severely if, by following the lie, they lose their monetary payoff. These results suggest the appropriateness of EQ.</p><p>With our four three-valued parameters, we can distinguish eighty-one different scenarios of misleading, M0-M80. Each scenario is characterized by eight conditions: m1 through m4 and one condition from each of the Tables <ref type="table" target="#tab_5">1 through 4</ref>. Symbolically, we can encode the misleading variants by using the standard ternary encoding of the natural numbers 0-80 over the alphabet {0, ?, 1}. Table <ref type="table">5</ref> tersely displays the association between the labels (Mi) and the strings: each cell indicates the index i corresponding to label Mi, where the string encoding is constructed by appending the column label to the row label.</p><p>We rank misleading scenarios along the natural order of the integers 0-80; the higher the number, the more erosive to trust the scenario is. Thus, ceteris paribus, S's believing P is always better than their having no clue about it, which is always better than their believing it to be false. This is, in fact, the common view in ethics. (For example, see <ref type="bibr" target="#b27">(Saul 2012</ref>) who, interestingly, argues against this common view.) Likewise, a positive effect (on A) of a successful misleading is, ceteris paribus, always better than a neutral effect, which is always better than a negative effect; this is consistent with the findings of <ref type="bibr" target="#b12">(Gneezy 2005;</ref><ref type="bibr">Gneezy, Rockenbach, and Serra-Gracia 2013)</ref>. Similarly for the intentions to deceive and harm. Globally, IH has the strongest influence on trust erosion, followed by ID, followed by BP, and finally by EQ. That EQ comes last makes sense since the consequences of believing Q are generally not under the control of S. On the other hand, IH comes first, signifying the damaging effect on trust that selfishness and willingness to harm have <ref type="bibr" target="#b19">(Levine and Schweitzer 2015)</ref>. Now, it might be suspected that some of the scenarios M0-M80 are not realistic. We have successfully constructed real-life examples of each scenario and we present some of the interesting/exotic ones below.</p></div>
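<div xmlns="http://www.tei-c.org/ns/1.0"><p>The encoding just described is a straightforward base-3 conversion: reading a scenario's four parameter values in the order IH, ID, BP, EQ, with 0, ?, 1 as the ternary digits 0, 1, 2, yields the scenario's index, and the proposed ranking then amounts to integer comparison. The following sketch (with function names of our own choosing) assumes exactly this row-then-column construction of the strings.</p><p>
```python
# Sketch of the ternary encoding behind Table 5: each scenario Mi is a
# 4-character string over {0, ?, 1}, digits ordered IH, ID, BP, EQ, read as
# a base-3 numeral with 0 mapped to 0, ? to 1, and 1 to 2. Since higher
# indices are ranked as more erosive to trust, comparing indices compares
# erosiveness directly.
DIGITS = "0?1"

def label_to_string(i):
    """Return the 4-character encoding of scenario Mi, for i in 0..80."""
    assert i in range(81)
    s = ""
    for _ in range(4):
        s = DIGITS[i % 3] + s
        i //= 3
    return s

def string_to_label(s):
    """Return the index i such that s encodes scenario Mi."""
    i = 0
    for ch in s:
        i = i * 3 + DIGITS.index(ch)
    return i

assert label_to_string(80) == "1111"   # M80, the fully malicious lie
assert string_to_label("???1") == 41   # M41 from Example 1
assert string_to_label("0011") == 8    # M8, the sarcasm example
```
</p><p>The assertions reproduce three of the scenario labels discussed in Example 1, confirming that the base-3 reading matches the grid.</p></div>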
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example 1.</head><p>In what follows, we present eleven examples of selected entries from Table <ref type="table">5</ref>. All examples are about our two protagonists: the misleading information source Steve (S) and the sharp, trusting agent Ashley (A). M80 (1111). Steve tells his colleague Ashley that there is no meeting the next day (P) although he believes that there is indeed an important meeting. Steve does so with the intention of deceiving Ashley and of hurting her career at the company as a result of missing the meeting.</p><p>Believing Steve and missing the meeting, Ashley gets a deduction and a notice. M79 (111?). The same M80 scenario above, but the meeting gets cancelled and nothing, good or bad, happens to Ashley. M78 (1110). Same as M80 but, missing the meeting, Ashley gets the chance to work more on her assigned tasks, produces fantastic results, and ends up getting a raise. M41 (???1). Steve tells Daphne and Ashley that there is a theory of computation quiz the next day. However, he has no idea whether there is a quiz or not; he just wants Daphne to be nervous and does not care about Ashley who just happens to be there. Believing Steve, Ashley panics, spends the night studying theory of computation, and forgets about the networks quiz which she, consequently, fails. M39 (???0). Same as M41 above, but it turns out there is a pop quiz in theory of computation the next day. Ashley does great since she spent the night studying. M26 (0111). Steve and Ashley apply for an internship. At the interview they are told that only one person will get the internship and will be notified by e-mail if they get accepted. Steve gets the e-mail, but refrains from saying so to Ashley when she asks, in order to spare her feelings. Consequently, Ashley waits for the e-mail and misses the chance of applying for another great internship. M24 (0110). 
Same as M26 but the internship which Ashley misses the chance of applying for would have been a horrible experience. M8 (0011). Steve sarcastically tells Ashley that the theory assignment is so easy that he solved it the moment he read it; but he means the exact opposite since the assignment is super difficult. However, Ashley naively believes him, waits till the last minute, and fails to finish the assignment on time. M7 (001?). Same as M8 but, although Ashley believes Steve, she starts working early on the assignment anyway. M6 (0010). Same as M8 but, after Ashley believes Steve and spends the time finishing other important work, the professor realizes that the assignment is too hard and cancels it. M0 (0000). Here we adapt M41 above as follows. Steve replaces Ashley and Sam replaces Steve. Further, right after his encounter with Sam and Daphne, Steve meets Ashley and good-heartedly informs her about the theory of computation quiz. Believing Steve, Ashley panics, spends the night studying theory of computation, and forgets about the networks quiz which she, consequently, fails.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Foundations for a Logic of Misleading</head><p>In this section, we lay the foundations for a logic of misleading as per the analysis presented thus far.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Ontology</head><p>Reasoning about misleading, construed after the analysis of Sections 2 and 3, rests upon a rather rich ontology. We take our ontology to at least conform to the following. 1. As mandated by (M), the ontology includes agents, eventualities, and propositions. Agents are distinguished individuals who can have beliefs, intentions, and desires.</p><p>(More on these below.) We follow <ref type="bibr" target="#b16">(Hobbs 2005)</ref> in assuming a category of eventualities which are, intuitively, stretches of time characterized by some proposition's being true (or some state's holding <ref type="bibr" target="#b18">(Ismail 2013</ref>).) Propositions are taken at face value, and assumed to be first-class inhabitants of our ontology. This simplifies the language and facilitates quantification over propositions. Such a notion of propositions may be modeled using reified fluents or, more generally, states, as suggested in <ref type="bibr" target="#b18">(Ismail 2013)</ref>.</p><p>2. To accommodate BP, ID, and IH, we follow the standard analysis of belief and intention. Hence, the ontology includes possible worlds, with belief and intention accessibility relations.</p><p>3. An account of causality is necessary for reasoning about the effects of misleading, as IH and EQ mandate. We follow the treatment of causality presented in <ref type="bibr" target="#b16">(Hobbs 2005</ref>), which presupposes eventualities and possible worlds.</p><p>4. Whether the effect of misleading is positive, negative, or neutral is determined by the desirability of that effect.</p><p>Likewise, an intention to hurt by misleading is an intention that misleading causes an undesirable effect. Hence, for IH and EQ, our ontology should accommodate a notion of desirability. To that end, we follow the theory of relative desire presented in <ref type="bibr" target="#b7">(Doyle, Shoham, and Wellman 1991)</ref>. 
That theory posits a preorder on models, which are taken to be sets of literals of the logic. We opt for having models as secondary ingredients of our ontology, defined in terms of possible worlds.</p><p>5. Since beliefs and intentions, in general, vary over time, we assume a global time-line across all possible worlds.</p><p>To summarize, the ontology of misleading includes agents, eventualities, propositions, possible worlds, and a global clock. Moreover, for every agent a, a belief- and an intention-accessibility relation, respectively R B a and R I a , are defined: R B a relates pairs of worlds and pairs of times and R I a relates pairs of worlds at a time. (More on this below.) Every world has an associated set of eventualities holding in it <ref type="bibr" target="#b16">(Hobbs 2005)</ref>; a function E maps each world w to its associated set E(w). Finally, a function M maps a world w to its associated model, a subset of E(w) consisting of the eventualities of some propositional literals being true. Here we allude to a particular logical language (like the one presented below) to fix the set of literals. A relative desire relation ⪯ α for each agent α, akin to that of <ref type="bibr" target="#b7">(Doyle, Shoham, and Wellman 1991)</ref>, preorders the set of models.</p></div>
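<div xmlns="http://www.tei-c.org/ns/1.0"><p>The ceteris paribus reading of relative desire can be illustrated on small finite models. The sketch below is our own toy rendering of the idea in (Doyle, Shoham, and Wellman 1991), with models as finite sets of literals; it is an illustration, not the paper's formal definition, and all identifiers are hypothetical.</p><p>
```python
# Toy ceteris-paribus preference: an agent's desiring a literal p induces a
# preference between any two models (finite sets of literals) that agree on
# everything except p versus its negation ("not", p).
def differ_only_on(m1, m2, p):
    """True when models m1 and m2 agree on every literal except p."""
    return (m1 ^ m2).issubset({p, ("not", p)})

def preferred(m1, m2, desired):
    """m1 is preferred to m2 w.r.t. a single desired literal.

    Pairs of models that differ on anything else are left incomparable,
    which is why the induced relation is only a preorder.
    """
    if differ_only_on(m1, m2, desired):
        return desired in m1
    return False

# Ceteris paribus, a model where Ashley gets the raise is preferred to an
# otherwise identical model where she does not.
good = {"raise", "meeting_attended"}
bad = {("not", "raise"), "meeting_attended"}
assert preferred(good, bad, "raise")
assert not preferred(bad, good, "raise")
```
</p></div>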
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Sketch of a Language</head><p>We present a sketch of a logical language L M for reasoning about misleading scenarios. L M is a first-order language amended with features for defeasible reasoning, symbolized by a connective ⇝. We stay silent about exactly what those features are; we may interpret ⇝ as in <ref type="bibr" target="#b21">(McCarthy 1980)</ref>, <ref type="bibr" target="#b22">(Reiter 1980)</ref>, or (Nute 1994), for instance. The vocabulary and informal semantics of L M are outlined below.</p><p>Terms a, possibly subscripted, is an agent variable; e, possibly subscripted, is an eventuality variable; t, possibly subscripted, is a time variable; and w, possibly subscripted, is a possible world variable. The set of fluent/state terms is defined recursively as follows:</p><p>1. P ∈ P is a fluent constant, where P is a set of propositional constants disjoint from the rest of the alphabet.</p><p>2. p, possibly subscripted, is a fluent variable.</p><p>3. Bel(α, φ, t) is a fluent functional term denoting agent [[α]]'s believing fluent [[φ]] to be true-at-time-[[t]].</p><p>4. Int(α, φ) is a fluent functional term denoting agent [[α]]'s intending fluent [[φ]] to be true.</p><p>5. Conv(α, φ, t) is a fluent functional term denoting agent [[α]]'s conveying fluent [[φ]]'s being true-at-t.</p><p>6. cause(ε 1 , ε 2 ) is a fluent functional term denoting eventuality [[ε 1 ]]'s causing eventuality [[ε 2 ]] <ref type="bibr" target="#b16">(Hobbs 2005)</ref>. This is a fluent term not because causality between event tokens varies over time (it certainly does not) but because we would like such terms to appear as arguments of Bel and Int. Here we are overloading the sentential connectives.</p><p>Fluent terms of the first seven forms and their negations are the literals of the language.</p><p>Predicates L M has five groups of predicate symbols:</p><p>1. R B (α, ω 1 , ω 2 , t, t′) is true if ([[ω 1 ]], [[ω 2 ]], [[t]], [[t′]]) ∈ R B [[α]]; intuitively, at t in ω 1 , α believes ω 1 -at-t to be identical to ω 2 -at-t′.</p><p>2. R I (α, ω 1 , ω 2 , t) is true if ([[ω 1 ]], [[ω 2 ]], [[t]]) ∈ R I [[α]]; intuitively, α's intentions at t in ω 1 are true in ω 2 at some time no earlier than t.</p><p>3. holds(e, w, t) is true whenever [[e]] ∈ E([[w]]) at time [[t]], and Rexists(e, t) is true whenever [[e]] ∈ E(r) at time [[t]], where r is the real world <ref type="bibr" target="#b16">(Hobbs 2005)</ref>.</p><p>4. bef ore(t 1 , t 2 ) is true if [[t 1 ]] precedes [[t 2 ]] on the global time-line.</p><p>5. Ev(ε, φ) is true whenever [[ε]] is an eventuality of [[φ]]'s being true.</p><p>Axioms An L M theory contains the following groups of axioms.</p><p>1. Appropriate axioms for R B and R I . For example, we can borrow the axioms in <ref type="bibr" target="#b24">(Sakama, Caminada, and Herzig 2010)</ref> as is, modulo the translation from their modal language to our first-order L M and accounting for temporality. These axioms restrict belief to a KD45 modality and intention to a KD modality, with two axioms for interaction between the two modalities.</p><p>2. Axioms requiring bef ore to be irreflexive, asymmetric, and transitive.</p><p>3. Axioms characterizing cause from <ref type="bibr" target="#b16">(Hobbs 2005)</ref>.</p></div>
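<div xmlns="http://www.tei-c.org/ns/1.0"><p>The KD45 restriction on belief mentioned in axiom group 1 amounts, semantically, to the belief-accessibility relation (at a fixed agent and time) being serial, transitive, and euclidean. As an illustration only (the checker and its names are our own, not part of L M), these frame properties can be verified on finite world sets:</p><p>
```python
# Hypothetical helper: check that a belief-accessibility relation, given as
# a set of (w1, w2) pairs for a fixed agent and time, has the KD45 frame
# properties: serial (D), transitive (4), and euclidean (5).
def is_kd45(worlds, rel):
    serial = all(any((w, v) in rel for v in worlds) for w in worlds)
    transitive = all((u, w) in rel
                     for (u, v) in rel for (v2, w) in rel if v == v2)
    euclidean = all((v, w) in rel
                    for (u, v) in rel for (u2, w) in rel if u == u2)
    return serial and transitive and euclidean

W = {"w0", "w1", "w2"}
# All arrows lead into the fully connected cluster {w1, w2}: a standard
# KD45 "balloon" pattern.
R = {("w0", "w1"), ("w0", "w2"),
     ("w1", "w1"), ("w1", "w2"),
     ("w2", "w1"), ("w2", "w2")}
assert is_kd45(W, R)
assert not is_kd45(W, {("w0", "w1")})  # fails seriality at w1 and w2
```
</p><p>Axiomatically, the same constraints appear as the D, 4, and 5 schemata on the belief modality; the intention modality is analogously restricted to KD (seriality alone).</p></div>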
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Formalizing Misleading Scenarios</head><p>As indicated in Section 3, each of M0-M80 is a conjunction of (M) together with four statements, one from each of Tables <ref type="table" target="#tab_5">1-4</ref>. Thus, it suffices to formalize (M) together with the twelve statements in the tables. The representation of scenario Mi with encoding ηδβε has the following general form:</p><formula xml:id="formula_7">∃a, p 1 , p 2 , t, t′, t″ [Rholds(E, Conv(a, p 1 , t′), t) ∧ [Rholds(E, Conv(a, p 1 , t′), t) ∧ RC(A, t) ⇝ ∃e 1 [Rholds(e 1 , p 2 , t″)]] ∧ ∃e 2 [Rholds(e 2 , ¬p 2 , t″) ∨ Rholds(e 2 , ¬Bel(a, p 2 , t″), t)] ∧ Φ(η, δ, β, ε)]</formula><p>Here E is a placeholder for the eventuality judged as misleading by agent A and RC(A, t) stands for whatever A takes to be common ground in the real world at time t.</p><formula xml:id="formula_8">Φ(η, δ, β, ε) = IH(η) ∧ ID(δ) ∧ BP (β) ∧ EQ(ε)</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>We presented an account of misleading as a catalyst to trust erosion in information sources. A suitable bare-bones notion of what it means for an agent to judge an eventuality as misleading is the cornerstone of our account. According to it, all misleading scenarios involve an information source's conveying some proposition which, given what the agent takes to be common ground, defeasibly implies another proposition that is either false or not believed to be true by the information source. We have identified eighty-one variants of misleading as generated by four three-valued parameters: whether the source believes what they convey, whether they intend to deceive the agent, whether they intend to harm the agent, and whether misleading results in an undesirable effect on the agent. If this analysis is correct, a logical theory of misleading for trust erosion necessarily includes theories of belief, desire, intention, and causality. We have sketched a first-order language L M to represent scenarios of misleading. Future research can go in at least three fruitful directions. First, we need to go to the lab and conduct various experiments on human subjects to validate the details of our analysis of misleading. Second, a more thorough investigation of L M and its properties is called for. Finally, we should turn to our long-term goal and introduce an account of trust erosion to L M .</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>7. DESIRE(α, φ) is a fluent functional term denoting fluent [[φ]]'s being desirable by agent [[α]] (Doyle, Shoham, and Wellman 1991). This roughly means that, ceteris paribus, [[φ]] is preferred over [[¬φ]]. 8. If φ and ψ are fluent terms, then so are ¬φ and φ ∧ ψ.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Values and their meanings for BP</figDesc><table><row><cell>Value</cell><cell>Meaning</cell></row><row><cell>0</cell><cell>S believes P</cell></row><row><cell>?</cell><cell>S believes neither P nor ¬P</cell></row><row><cell>1</cell><cell>S believes ¬P</cell></row></table><note>Hence, we adopt the following bare-bones notion of misleading: (M) Event E is judged to be misleading by cognitive agent A if and only if conditions (m1)-(m3) hold.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Values and their meanings for ID</figDesc><table><row><cell>Value</cell><cell>Meaning</cell></row><row><cell>0</cell><cell>S intends to not deceive A</cell></row><row><cell>?</cell><cell>S intends to neither deceive A nor to not deceive A</cell></row><row><cell>1</cell><cell>S intends to deceive A</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 :</head><label>3</label><figDesc>Values and their meanings for IH</figDesc><table><row><cell>Value</cell><cell>Meaning</cell></row><row><cell>0</cell><cell>S intends to not harm A</cell></row><row><cell>?</cell><cell>S intends to neither harm A nor to not harm A</cell></row><row><cell>1</cell><cell>S intends to harm A</cell></row></table><note>2. That BP and ID are parameters distinguishing varieties of misleading is already a common practice in analyses of lying and deception (Mahon 2016; Sakama, Caminada, and Herzig 2010; Sakama 2011a; 2011b; 2015). 3. IH and EQ are motivated by our long-term goal of establishing a link between misleading and trust erosion. After conducting a series of interesting studies, Levine and Schweitzer <ref type="bibr" target="#b19">(Levine and Schweitzer 2015)</ref> conclude that deception per se does not always harm trust, but selfishness and willingness to harm do. This motivates including something like IH as a dimension for classifying misleading scenarios. Moreover, studies show that people are, in general, less forgiving of lies which have more damaging effects on the victim.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5 :</head><label>5</label><figDesc>The mapping between labels and string encodings of misleading scenarios. Each cell indicates the index i corresponding to label Mi; the string encoding is constructed by appending the column label to the row label.</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell>BP EQ</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell cols="9">00 0? 01 ?0 ?? ?1 10 1? 11</cell></row><row><cell></cell><cell>00</cell><cell>0</cell><cell>1</cell><cell>2</cell><cell>3</cell><cell>4</cell><cell>5</cell><cell>6</cell><cell>7</cell><cell>8</cell></row><row><cell></cell><cell>0?</cell><cell cols="9">9 10 11 12 13 14 15 16 17</cell></row><row><cell></cell><cell cols="10">01 18 19 20 21 22 23 24 25 26</cell></row><row><cell></cell><cell cols="10">?0 27 28 29 30 31 32 33 34 35</cell></row><row><cell>IH ID</cell><cell cols="10">?? 36 37 38 39 40 41 42 43 44</cell></row><row><cell></cell><cell cols="10">?1 45 46 47 48 49 50 51 52 53</cell></row><row><cell></cell><cell cols="10">10 54 55 56 57 58 59 60 61 62</cell></row><row><cell></cell><cell cols="10">1? 63 64 65 66 67 68 69 70 71</cell></row><row><cell></cell><cell cols="10">11 72 73 74 75 76 77 78 79 80</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head></head><label></label><figDesc>4. Finally, Ev is characterized by the following axioms which, we believe, are self-explanatory. Henceforth, we write holds(x, y, z, t) as a shorthand for Ev(x, y) ∧ holds(x, z, t) and Rholds(x, y, t) as a shorthand for Ev(x, y) ∧ Rexists(x, t). Unless otherwise indicated, all variables are universally quantified with widest scope. AEv1. ∃e[Ev(e, p)] AEv2. ∃e[holds(e, Bel(a, p, t), w, t )] ⇔ ∀w 1 [R B (a, w, w 1 , t, t ) ⇒ ∃e 1 [holds(e 1 , p, w 1 , t)]] AEv3. ∃e[holds(e, Int(a, p), w, t)] ⇔ ∀w 1 [R I (a, w, w 1 , t) ⇒ ∃e 1 , t 1 [holds(e 1 , p, w 1 , t 1 ) ∧ ¬before(t 1 , t)]] AEv4. ∃e, t[holds(e, cause(e 1 , e 2 ), w, t)] ⇔ ∀t∃e[holds(e, cause(e 1 , e 2 ), w, t)] AEv5. ∃e[holds(e, ¬p, w, t)] ⇔ ¬∃e[holds(e, p, w, t)] AEv6. ∃e[holds(e, p 1 ∧ p 2 , w, t)] ⇔ ∃e 1 , e 2 [holds(e 1 , p 1 , w, t) ∧ holds(e 2 , p 2 , w, t)]</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head></head><label></label><figDesc>The representation of scenario Mi with encoding ηδβε has the following general form: ∃a, p 1 , p 2 , t, t , t [ ∃e 2 [(Rholds(e 2 , ¬p 2 , t ) ∨ Rholds(e 2 , ¬Bel(a, p 2</figDesc><table /><note>Rholds(E, Conv(a, p 1 , t ), t)∧ [Rholds(E, Conv(a, p 1 , t ), t) ∧ RC(A, t) ⇒ ∃e 1 [Rholds(e 1 , p 2 , t )]]∧</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_9"><head>Table 6 :</head><label>6</label><figDesc>Different forms of Θ(η)</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_10"><head>Table 7 :</head><label>7</label><figDesc>Different forms of ∆(δ)</figDesc><table><row><cell>δ</cell><cell>∆(δ)</cell></row><row><cell>0</cell><cell>Int(a, ¬Bel(A, p 2 , t ))</cell></row><row><cell>?</cell><cell>¬∆(0) ∧ ¬∆(1)</cell></row><row><cell>1</cell><cell>Int(a, Bel(A, p 2 , t ))</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_11"><head></head><label></label><figDesc>Table 6 shows the different forms of η. IH(η) says that information source a has (or lacks) some intention regarding E's causing A to believe p 2 , as indicated by η, and that this intention (or lack thereof) is simultaneous with E. Further, A's believing p 2 is believed by a to have an undesirable effect on A. In what follows, ∆(δ), Ψ(β), and Υ(ε) are as tabulated in Tables 7-9.</figDesc><table><row><cell cols="2">Example 1. As an illustration, we formalize case M80 of Example 1 from Section 3. For readability, some variables have been replaced by more mnemonic (Skolem) constants.</cell></row><row><cell>β</cell><cell>Ψ(β)</cell></row><row><cell>0</cell><cell>Bel(a, p 1 , t )</cell></row><row><cell>?</cell><cell>¬Ψ(0) ∧ ¬Ψ(1)</cell></row><row><cell>1</cell><cell>Bel(a, ¬p 1 , t )</cell></row></table><note>ID(δ) = def ∃e 1 , e 2 , e 3 [ Rholds(e 1 , ∆(δ), t)∧ Ev(e 2 , Bel(A, p 2 , t ))∧ Rholds(e 3 , Bel(a, cause(E, e 2 ) ∧ ¬p 2 , t ), t)] BP (β) = def ∃e[Rholds(e, Ψ(β), t)] EQ(ε) = def ∃e 1 , e 2 [ Rholds(e 1 , cause(E, e 2 ), t)∧ Ev(e 2 , Bel(A, p 2 , t ))∧ Υ(ε)] Rholds(E, Conv(Steve, ¬Meeting, t ), t)∧ [Rholds(E, Conv(Steve, ¬Meeting, t ), t) ⇒ ∃e 1 [Rholds(e 1 , ¬Meeting, t )]]∧ Rholds(e 2 , Bel(Steve, Meeting, t ), t)∧ Rholds(e 3 , Int(Steve, cause(E, e 4 )), t)∧ Ev(e 4 , Bel(Ashley, ¬Meeting, t )) ∧ Ev(e 5 , Deduct)∧ Rholds(e 6 , Bel(Steve, cause(e 4 , e 5 )∧ DESIRE(Ashley, ¬Deduct), t ), t)∧ Rholds(e 7 , Int(Steve, Bel(Ashley, ¬Meeting, t )), t)∧</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_12"><head>Table 8 :</head><label>8</label><figDesc>Different forms of Ψ(β)</figDesc><table><row><cell>ε</cell><cell>Υ(ε)</cell></row><row><cell>0</cell><cell>∀p 3 , e 3 , e 4 , t [Rholds(e 3 , Cause(e 2 , e 4 ), t) ∧ Rholds(e 4 , p 3 , t ) ⇒ ∃e 5 [Rholds(e 5 , DESIRE(A, p 3 ), t )]]</cell></row><row><cell>?</cell><cell>¬Υ(0) ∧ ¬Υ(1)</cell></row><row><cell>1</cell><cell>∃p 3 , e 3 , e 4 , e 5 , t [Rholds(e 3 , Cause(e 2 , e 4 ), t) ∧ Rholds(e 4 , p 3 , t ) ∧ Rholds(e 5 , DESIRE(A, ¬p 3 ), t )]</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_13"><head>Table 9 :</head><label>9</label><figDesc>Different forms of Υ(ε)</figDesc><table /><note>Rholds(e 8 , Bel(Steve, cause(E, e 4 )), t)∧ Rholds(e 9 , DESIRE(Ashley, ¬Deduct), t )∧ Rholds(e 10 , cause(E, e 4 ), t)∧ Rholds(e 11 , cause(e 4 , e 5 ), t )</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Studies indicate that lying alone is quite pervasive, with an American telling an average of one to two lies every day <ref type="bibr" target="#b6">(DePaulo et al. 1996)</ref>. Most lies, however, are told by a small percentage of the population <ref type="bibr" target="#b30">(Serota, Levine, and Boster 2010)</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">A note for readers familiar with<ref type="bibr" target="#b7">(Doyle, Shoham, and Wellman 1991)</ref>: Since the semantics of desire in that beautiful theory is based on models, which we do not have, we replace a model m with a possible world w. In the formal machinery, each mention of m is replaced by M(w).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">Again, this requires some adjustment. Hobbs<ref type="bibr" target="#b16">(Hobbs 2005</ref>) takes worlds to be sets of eventualities. Thus, a world w in our ontology does not correspond to a world in Hobbs's; the set E(w) does.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Lying, deceiving, or falsely implicating</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Adler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Philosophy</title>
		<imprint>
			<biblScope unit="volume">94</biblScope>
			<biblScope unit="page" from="435" to="452" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An argumentation-based approach for reasoning about trust in information sources</title>
		<author>
			<persName><forename type="first">L</forename><surname>Amgoud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Demolombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Argument &amp; Computation</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="191" to="215" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Interpersonal deception theory</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Buller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Burgoon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communication Theory</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="203" to="242" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">How to identify trust and reciprocity</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Cox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Games and Economic Behavior</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="260" to="281" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Graded trust</title>
		<author>
			<persName><forename type="first">R</forename><surname>Demolombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop on Trust in Agent Societies (TRUST&apos;09)</title>
				<meeting>the Workshop on Trust in Agent Societies (TRUST&apos;09)</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Analytical decomposition of trust in terms of mental and social attitudes</title>
		<author>
			<persName><forename type="first">R</forename><surname>Demolombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Cognitive Foundations of Group Attitudes and Social Interaction</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="59" to="74" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Lying in everyday life</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Depaulo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Kashy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Kirkendol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Wyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Personality and Social Psychology</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="979" to="995" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A logic of relative desire</title>
		<author>
			<persName><forename type="first">J</forename><surname>Doyle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shoham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Wellman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Methodologies for Intelligent Systems: Proceedings of the 6th International Symposium, ISMIS &apos;91</title>
				<editor>
			<persName><forename type="first">Z</forename><forename type="middle">W</forename><surname>Ras</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Zemankova</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1991">1991</date>
			<biblScope unit="page" from="16" to="31" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Reasoning about trust and time in a system of agents</title>
		<author>
			<persName><forename type="first">N</forename><surname>Drawel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bentahar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Shakshuki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">109</biblScope>
			<biblScope unit="page" from="632" to="639" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Why don&apos;t I trust you now? An attributional approach to erosion of trust</title>
		<author>
			<persName><forename type="first">A</forename><surname>Elangovan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Auer-Rizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Szabo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Managerial Psychology</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="4" to="24" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A theory of deception</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ettinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Jehiel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Microeconomics</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Reasoning about Knowledge</title>
		<author>
			<persName><forename type="first">R</forename><surname>Fagin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Halpern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Moses</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vardi</surname></persName>
		</author>
		<imprint>
			<publisher>The MIT Press</publisher>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11a">
	<analytic>
		<title level="a" type="main">Measuring lying aversion</title>
		<author>
			<persName><forename type="first">U</forename><surname>Gneezy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Economic Behavior &amp; Organization</title>
		<imprint>
			<biblScope unit="volume">93</biblScope>
			<biblScope unit="page" from="293" to="300" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Deception: The role of consequences</title>
		<author>
			<persName><forename type="first">U</forename><surname>Gneezy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The American Economic Review</title>
		<imprint>
			<biblScope unit="volume">95</biblScope>
			<biblScope unit="page" from="384" to="394" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Grice</surname></persName>
		</author>
		<title level="m">Studies in the Way of Words</title>
				<meeting><address><addrLine>Cambridge, MA</addrLine></address></meeting>
		<imprint>
			<publisher>Harvard University Press</publisher>
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">How implicit beliefs influence trust recovery</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Haselhuhn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Schweitzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Wood</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Science</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="645" to="648" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A logic of trust and reputation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Herzig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Lorini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Hübner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vercouter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Logic Journal of the IGPL</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="214" to="244" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Toward a useful concept of causality for lexical semantics</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Hobbs</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Semantics</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="181" to="209" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Focused belief revision as a model of fallible relevance-sensitive perception</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">O</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">S</forename><surname>Kasrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 33rd German AI Conference (KI 2010)</title>
				<meeting>the 33rd German AI Conference (KI 2010)</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Stability in a commonsense ontology of states</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">O</forename><surname>Ismail</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eleventh International Symposium on Logical Formalization of Commonsense Reasoning (COMMONSENSE 2013)</title>
				<meeting>the Eleventh International Symposium on Logical Formalization of Commonsense Reasoning (COMMONSENSE 2013)</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Prosocial lies: When deception breeds trust</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">E</forename><surname>Levine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Schweitzer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organizational Behavior and Human Decision Processes</title>
		<imprint>
			<biblScope unit="volume">126</biblScope>
			<biblScope unit="page" from="88" to="106" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">The definition of lying and deception</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Mahon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Stanford Encyclopedia of Philosophy</title>
				<editor>
			<persName><forename type="first">E</forename><forename type="middle">N</forename><surname>Zalta</surname></persName>
		</editor>
		<edition>Winter 2016 edition</edition>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
		<respStmt>
			<orgName>Metaphysics Research Lab, Stanford University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Circumscription: a form of nonmonotonic reasoning</title>
		<author>
			<persName><forename type="first">J</forename><surname>McCarthy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="27" to="39" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21a">
	<analytic>
		<title level="a" type="main">Defeasible logic</title>
		<author>
			<persName><forename type="first">D</forename><surname>Nute</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 3: Nonmonotonic Reasoning and Uncertain Reasoning</title>
		<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="353" to="395" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A logic for default reasoning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Reiter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="81" to="132" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Review on computational trust and reputation models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sabater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sierra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="33" to="60" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A logical account of lying</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sakama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Caminada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herzig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th European Conference on Logics in Artificial Intelligence (JELIA 2010)</title>
				<meeting>the 12th European Conference on Logics in Artificial Intelligence (JELIA 2010)</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24a">
	<analytic>
		<title level="a" type="main">Dishonest reasoning by abduction</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sakama</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11)</title>
				<meeting>the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11)</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1063" to="1068" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Logical definitions of lying</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sakama</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th International Workshop on Trust in Agent Societies</title>
				<meeting>the 14th International Workshop on Trust in Agent Societies</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A formal account of deception</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sakama</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Fall 2015 Symposium on Deceptive and Counter-Deceptive Machines</title>
				<meeting>the AAAI Fall 2015 Symposium on Deceptive and Counter-Deceptive Machines</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="34" to="41" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Just go ahead and lie</title>
		<author>
			<persName><forename type="first">J</forename><surname>Saul</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Analysis</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="page" from="3" to="9" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Using trust for detecting deceitful agents in artificial societies</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Funk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rovatsos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="825" to="848" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Promises and lies: Restoring violated trust</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Schweitzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Hershey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">T</forename><surname>Bradlow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organizational Behavior and Human Decision Processes</title>
		<imprint>
			<biblScope unit="volume">101</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">The prevalence of lying in America: Three studies of self-reported lies</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">B</forename><surname>Serota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Levine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Boster</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Human Communication Research</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="2" to="25" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Common ground</title>
		<author>
			<persName><forename type="first">R</forename><surname>Stalnaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Linguistics and Philosophy</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="701" to="721" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Lying and misleading in discourse</title>
		<author>
			<persName><forename type="first">A</forename><surname>Stokke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Philosophical Review</title>
		<imprint>
			<biblScope unit="volume">125</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="83" to="134" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Verbs and times</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Vendler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Philosophical Review</title>
		<imprint>
			<biblScope unit="volume">66</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="143" to="160" />
			<date type="published" when="1957">1957</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Acting deceptively: Providing robots with the capacity for deception</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Arkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="5" to="26" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Towards robots that trust: Human subject validation of the situational conditions for trust</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Robinette</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interaction Studies</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="89" to="117" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
