<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Game-Theoretic Analysis on the Use of Indirect Speech Acts</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Mengyuan Zhao, School of Social Sciences</institution>
          <addr-line>Shanghai, China 200093</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Shanghai for Science and Technology</institution>
        </aff>
      </contrib-group>
      <fpage>103</fpage>
      <lpage>115</lpage>
      <abstract>
        <p>In this paper we discuss why, in some circumstances, people express their intentions indirectly: the use of indirect speech acts (ISA). Based on Parikh’s game of partial information and Franke’s IBR model, we develop a game-theoretic model of ISA, which is divided into two categories, namely non-conventional ISA and conventional ISA. We assume that non-conventional ISA involves two types of communication situations: communication under certain cooperation and communication under uncertain cooperation. We analyze the cases of ironical request and implicit bribery as typical instances of non-conventional ISA for each situation type, respectively. We then apply our model to analyze the use of conventional ISA from an evolutionary perspective, inspired by Lewisian convention theory. Our model yields the following predictions: the use of non-conventional ISA under certain cooperation relies on the sympathy between interlocutors, which blocks its evolution towards conventional ISA; in uncertain cooperation situations, people are more likely to use ISA, which helps its conventionalization.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Yesterday, my husband and I went out for lunch. I could not reach the chopstick box, so I said to my husband:
“I do like eating noodle with a spoon!” My husband stared at me, laughed, and passed me the chopsticks.</p>
      <p>I did not ask my husband to pass me the chopsticks directly, but intended to make the request in an ironical
way. And he understood my intention correctly.</p>
      <p>
        Like the example above, we often express our intentions indirectly rather than meaning what the utterance
literally says. According to speech act theory, introduced by Austin (1962) and developed by Searle
(1969, 1975), this kind of pragmatic phenomenon is called an indirect speech act (ISA). Searle (1975) proposes an
explanation for the use of ISA, namely an apparatus based on Gricean principles of cooperative conversation
        <xref ref-type="bibr" rid="ref6">(see
Grice 1975)</xref>
        . This gives rise to the puzzle of indirect speech
        <xref ref-type="bibr" rid="ref18">(Terkourafi 2011)</xref>
        : as the Gricean principles suggest,
cooperative interlocutors should communicate with informative, truthful, relevant and succinct messages, so
why is indirectness so common in our daily communication?
      </p>
      <p>According to Brown and Levinson (1987), ISA is a politeness strategy. In their politeness theory,
people adopt strategies to save each other’s face when their communication involves
face-threatening acts, such as criticism, insults, disagreement, suggestions, refusals, requests, etc. Of the four
strategies they propose, on-record and off-record indirectness roughly correspond to conventional ISA
and non-conventional ISA, respectively. Clark (1996) also argues that the main reason for the use of ISA is to
mitigate threats to face and thereby to maintain social equity between interlocutors.</p>
      <p>
        However, Pinker and his colleagues
        <xref ref-type="bibr" rid="ref13 ref8">(Pinker et al., 2008; Lee and Pinker, 2010)</xref>
        point out that neither Searle’s
theory nor politeness theory is comprehensive enough to account for the motivation for the use of ISA: both
presuppose pure cooperation in human communication, which is not always the case in instances of ISA
(e.g. sexual come-ons, veiled threats and implicit briberies). They propose the theory of the strategic speaker: in
communication games under uncertain cooperation, the speaker chooses the ISA strategy because it allows for
plausible deniability when facing an uncooperative hearer. Rather than appealing to a social ritual, the theory offers a
strategic rationale for the use of non-conventional ISA by introducing a static game model and by building decision
functions to represent the plausible deniability of ISA.
      </p>
      <p>
        In fact, Pinker’s work originates in a tradition of game-theoretic pragmatics
        <xref ref-type="bibr" rid="ref13">(see Jäger 2008 for a selective
review)</xref>
        . The idea of using games as models of language communication goes back to Wittgenstein (1953). Inspired
by this, many have attempted to construct game-theoretic models of communication, among which Lewis (1969) started
the tradition of treating communication as an at least partially cooperative effort in the model. He not only builds
signaling games to solve the coordination problem in communication, but also gives a game-theoretic interpretation of
convention: a convention is a Nash equilibrium in a special sense. Lewisian convention theory explains
how meaning is assigned to natural language expressions through their conventional use. Following the Lewisian
tradition, Parikh (2001, 2007) constructs the game of partial information, in which he introduces the literal meaning of
a message and makes two distinctions, namely, the distinction between literal meaning and speaker’s intention
and that between literal meaning and hearer’s interpretation. Van Rooij (2003, 2004, 2008) analyzes on-record
indirect requests and Horn’s strategy (see Horn 1984) in terms of signaling games. By introducing the concept of
risk dominance as an equilibrium selection standard and through the introduction of a super conventional signaling
game model, Sally (2003) and van Rooij (2006) study how sympathy between interlocutors may affect the use
of indirectness such as irony. Franke (2009) criticizes equilibrium as the traditional solution concept because it does
not correspond to the actual reasoning process during communication. He introduces iterated best response (IBR)
reasoning, which formally illustrates how interlocutors, departing from the literal meaning of messages, pick out their
strategies based on their beliefs in each other’s rationality and through a process of iterated reasoning. Blume &amp;
Board (2014) analyze off-record indirectness through an evolutionary game-theoretic model. By adopting vagueness
dynamics, they solve the game and explain why conflicts of interest may encourage speakers to use indirectness.
Mialon &amp; Mialon (2013) study analytical conditions for ISA such as terseness, irony and implicit bribery through
the construction of a signaling game and solution by perfect Bayesian equilibrium (PBE).
      </p>
      <p>This paper develops a game-theoretic model of ISA. Our model is composed of two parts, namely,
describing communication situations in terms of signaling games and solving these games through a reasoning
framework. We introduce higher-order belief as a quantification of sympathy between interlocutors, and
thus study how sympathy affects players’ choice of the ISA strategy. The next section describes our model for
two situation types: communication under certain cooperation (in the basic model) and communication under
uncertain cooperation (in the extended model). For each type, a signaling game is first built and then solved.
At the end of Section 2, we compare our model to other related models proposed in game-theoretic pragmatics.
In Section 3 we apply our model to analyze the cases of ironical request and implicit bribery. Based on an
evolutionary consideration of our model, Section 4 predicts the conventionalization of ISA under the two situation
types, respectively. Section 5 provides a summary and suggestions for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>The Model</title>
      <sec id="sec-2-1">
        <title>Game of Basic Model</title>
        <p>In the game of the basic model, we assume that the interlocutors are under certain cooperation, that is, both of
them hope that the hearer correctly understands the speaker’s intention. Given this, we further assume that a
successful communication using ISA will bring an extra gain, say ε(&gt; 0), to both interlocutors. A speaker, S, may
have two possible intentions, say T = {t1, t2}, which she would like to express to a hearer, H. When S has t1,
she may utter a direct message m1 or an indirect message m¯; when S has t2, she may send a direct message m2
or m¯. We assume that m¯ has the literal meaning of m2. We also assume that H is a sophisticated hearer whose
strategy conforms to the following rule: he performs action a(t1) upon hearing m1, a(t2) upon hearing m2, and either
a(t1) or a(t2) upon hearing m¯.</p>
        <p>S and H are under certain cooperation, where both prefer the action a(ti) to be taken with the corresponding
intention ti, where i = 1, 2. Taking this along with our assumption on ε above, we define the interlocutors’ payoff
functions as follows.</p>
        <p>Definition 1 In the basic model, let UN (ti, m, a(tj )) be the payoff of N ∈ {S, H} given ti, m and a(tj ), where i, j = 1, 2.</p>
        <p>UN (ti, m, a(tj )) =
1, if i = j, m ∈ {m1, m2}
1 + ε, if i = j, m = m¯
0, if i ̸= j, m = m¯</p>
        <p>Definition 1 suggests: both interlocutors will gain 1 using direct speech; both will earn 1 + ε if indirectness is
involved and communication succeeds; both will get 0 if the use of ISA leads to
misunderstanding. We denote by p1 ∈ (0, 1) H’s prior belief that S has the intention t1. Figure 1 illustrates the extensive
form of this signaling game.</p>
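        <p>The payoff structure of Definition 1 can be sketched in a few lines of Python. This is a toy encoding of our own for illustration; the string names of intentions, messages and the value of ε are assumptions, not part of the model itself.</p>

```python
# Toy encoding of the basic-model payoffs (Definition 1): direct messages
# yield 1; the indirect message m_bar yields 1 + eps on a correct
# interpretation and 0 on a mismatch. Actions are represented by the
# intention they serve, so a(t1) is written simply as "t1".

EPS = 0.5  # extra gain from successful indirect communication (any eps > 0)

def payoff(intention, message, action):
    """Shared payoff U_N(t_i, m, a(t_j)) for both speaker and hearer."""
    matched = (action == intention)          # i == j ?
    if message in ("m1", "m2"):              # direct speech
        return 1 if matched else 0           # (mismatch cannot arise for direct speech)
    if message == "m_bar":                   # indirect speech
        return 1 + EPS if matched else 0
    raise ValueError("unknown message")

print(payoff("t1", "m1", "t1"))     # direct speech: 1
print(payoff("t1", "m_bar", "t1"))  # successful ISA: 1.5
print(payoff("t1", "m_bar", "t2"))  # misunderstood ISA: 0
```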
      </sec>
      <sec id="sec-2-2">
        <title>P -added IBR Reasoning Framework</title>
        <p>To solve the game of basic model, we introduce the P -added IBR reasoning framework. The framework contains
two parallel reasoning sequences, namely, the S0-sequence and the H0-sequence. We will define the scaffolding
of P -added IBR reasoning framework by induction.</p>
        <sec id="sec-2-2-1">
          <title>Base: Level-Zero Players</title>
          <p>The S0-sequence starts from a naïve speaker S0. We assume that S0 arbitrarily plays an intentionally consistent
strategy, which is defined as follows.</p>
          <p>Definition 2 A speaker strategy s, with s(t) = m, is intentionally consistent iff
(I) when s(t) ∈ {m1, m2, . . . , mn}, t = [[s(t)]], where [[·]] : M → T is the denotation function that maps the
literal meaning of a message to a speaker’s intention, and mi denotes a direct message;
(II) when s(t) = m¯, t ∈ {t1, t2, . . . , tn}, where m¯ denotes the indirect message.</p>
          <p>Definition 2 suggests that an indirect message may be used to express any possible intention in the context. It
should also be noted that S0 is not rational, in the sense that she chooses her strategy not because it guarantees her a better
payoff, but because it corresponds to the general rules of language use given the literal meaning of a message and the
context.</p>
          <p>The H0-sequence starts from a naïve hearer H0. We assume that H0 chooses an arbitrary strategy that offers
him the highest expected payoff given an intentionally consistent interpretation of the message.</p>
          <p>Definition 3 An intentionally consistent interpretation is a posterior belief µ0(t|m) = P r(t|m), which results
from updating prior beliefs with the intentionally consistent meaning of the observed message. t is the
intentionally consistent meaning of m iff
(I) when m ∈ {m1, m2, . . . , mn}, t = [[m]], where mi denotes a direct message;
(II) when m = m¯, t ∈ {t1, t2, . . . , tn}, where m¯ denotes the indirect message.</p>
        </sec>
        <sec id="sec-2-2-2">
          <title>Step: Higher-Level Types</title>
          <p>We assume that a player of level k + 1 gives a best response to her belief about the opponent of level k.</p>
          <p>Let us take the S0-sequence first. After S0 sends a message m, H1, who is strategically one level higher than S0,
will act according to his posterior belief µ1(t|m). We assume that hearers adapt their posterior beliefs in a
sophisticated way, that is, µ1(t|m) is consistent with H1’s belief in the speaker’s behavior, say s(t) = m, as well as
his prior belief in t, say P r(t):</p>
          <p>µk+1(tj |mi) = (P r(tj ) × Sk(mi|tj )) / (Σt′∈T P r(t′) × Sk(mi|t′)) (1)</p>
          <p>H1 will choose the strategy h(m) that offers him the highest expected payoff EUH1 (a(t)), which depends on
his posterior belief, µ(t|m), and the corresponding payoff, UH (t, m, a(t)):</p>
          <p>EUH (a(t)|m) = Σti∈T µ(ti|m) × UH (ti, m, a(t)) (2)</p>
          <p>h(m) = BR(µ) ∈ arg maxa(t)∈A EUH (a(t)|m) (3)</p>
          <p>From (1), (2) and (3), H1’s strategy is dependent on P r(t), given s(t) = m and UH (t, m, a(t)). Let p1 = P r(t)
and let A1 (A1 ⊆ U = [0, 1]) be some interval. H1’s strategy is a mixed strategy of H11 and H12:</p>
          <p>H1 = H11, if p1 ∈ A1; H12, if p1 ∈ U ∖ A1</p>
          <p>We denote by p2 ∈ (0, 1) the probability that p1 falls in A1. S2, who is strategically one level higher than
H1, will play the s(t) that guarantees her the highest expected payoff EUS2 (s(t) = m), given her belief in H1,
ρ2 = ⟨H1, p2⟩, and the corresponding payoff, US (t, m, a(t)):</p>
          <p>EUS (s(t) = m) = p2 × Σti∈T ρ12(a(ti)|m) × US (t, m, a(ti)) + (1 − p2) × Σti∈T ρ22(a(ti)|m) × US (t, m, a(ti)) (4)</p>
          <p>s(t) = BR(ρ) ∈ arg maxm∈M EUS (s(t) = m) (5)</p>
          <p>From (4) and (5), S2’s strategy is dependent on p2, given H1 and US (t, m, a(t)):</p>
          <p>S2 = S21, if p2 ∈ A2; S22, if p2 ∈ U ∖ A2, where A2 ⊆ U = [0, 1].</p>
          <p>Similarly, H3’s strategy is dependent on p3, which denotes the probability distribution of p2. Inductively, in
S0-sequence, S2k+2 = {s ∈ S|∃ρ : ρ = ⟨H2k+1, p2k+2⟩, s ∈ BR(ρ)} and H2k+1 = {h ∈ H|∃µ : µ is consistent with
P r and σ, σ = ⟨S2k, p2k+1⟩, h ∈ BR(µ)}, where k &gt; 0.</p>
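          <p>The belief update of equation (1) is ordinary Bayesian conditioning on the lower-level speaker’s behavior. The following is a minimal sketch of our own (the dictionary encoding and the numeric prior are illustrative assumptions):</p>

```python
# Bayesian posterior of equation (1): mu_{k+1}(t_j | m_i) is proportional to
# Pr(t_j) * S_k(m_i | t_j), normalized over all intentions.

def posterior(prior, speaker_strategy, message):
    """prior: {t: Pr(t)}; speaker_strategy: {t: {m: prob of sending m given t}}."""
    weights = {t: prior[t] * speaker_strategy[t].get(message, 0.0) for t in prior}
    total = sum(weights.values())
    if total == 0:              # message unexpected under the believed strategy
        return None
    return {t: w / total for t, w in weights.items()}

# An intentionally consistent S0 (Definition 2): direct messages are literal,
# and the indirect message m_bar is possible under either intention.
prior = {"t1": 0.6, "t2": 0.4}
s0 = {"t1": {"m1": 0.5, "m_bar": 0.5}, "t2": {"m2": 0.5, "m_bar": 0.5}}
print(posterior(prior, s0, "m_bar"))  # m_bar is uninformative: posterior = prior
print(posterior(prior, s0, "m1"))     # m1 reveals t1: {'t1': 1.0, 't2': 0.0}
```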
          <p>The H0-sequence follows exactly the same rule as the S0-sequence. The only difference is that we denote
by p′2k+2 ∈ (0, 1) the probability on which S2k+1 depends and by p′2k+3 ∈ (0, 1) the probability on which H2k+2
depends. Then S2k+1 = {s ∈ S|∃ρ : ρ = ⟨H2k, p′2k+2⟩, s ∈ BR(ρ)} and H2k+2 = {h ∈ H|∃µ : µ is consistent with
P r and σ, σ = ⟨S2k+1, p′2k+3⟩, h ∈ BR(µ)}, where k &gt; 0.</p>
          <p>In the inductive steps above, pk (or p′k) represents the k-order belief in P r of N (∈ {S, H}). We define higher-order
belief in P r as follows.</p>
          <p>Definition 4 P is the higher-order belief in P r iff
(I) in the S0-sequence, H’s prior belief in T = {t1, t2, · · · , tn} is P r = p1, and p2k+2 (or p2k+1), which determines S2k+2’s
(or H2k+1’s) belief in H2k+1’s (S2k’s) behavior, represents the probability distribution of p2k+1 (or p2k);
P = {p1, p2, · · · , pn}.</p>
          <p>Definition 5 In the P -added IBR reasoning framework, S and H share sympathy λ ∈ (0, 1) towards each other.
When S has intention ti, λ = pi(P r(ti)), where pi ∈ P .</p>
          <p>Definition 5 suggests that as the interlocutors’ higher-order belief in the speaker’s real intention gets close to 1, their
sympathy towards each other increases.</p>
        </sec>
        <sec id="sec-2-2-3">
          <title>Limit</title>
          <p>Since we assume finitely many pure player strategies over the finite sets T , M and A in the game of the basic model, the
P -added IBR sequence is bound to repeat itself. We define the idealized solution of the reasoning framework
as follows.</p>
          <p>Definition 6 The idealized solution of the P -added IBR reasoning framework consists of all infinitely recurring strategies
S∗ and H∗:</p>
          <p>S∗ = {s ∈ S|∀i∃j &gt; i : s ∈ Sj }
H∗ = {h ∈ H|∀i∃j &gt; i : h ∈ Hj }</p>
          <p>The idealized solution can be understood in two senses: first, it represents the reasoning result of individual
interlocutors with idealized rationality; second, it marks the final reasoning result of an infinite group of
interlocutors after pairwise plays. The latter is related to evolution in that it assumes players improve
their strategy types level by level over repeated plays.</p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>Solution Analysis</title>
        <p>The following proposition shows a complete characterization of idealized solution to the game of basic model in
terms of P -added IBR reasoning framework. Proofs are in the Appendix.</p>
        <p>Proposition 1 Suppose ε ∈ (0, 1) and pi (or p′i) ∈ (0, 1). Interlocutors play the ISA strategy, namely s(ti) = m¯ and h(m¯) = a(ti),
if p &gt; 1/(1 + ε) or p &lt; ε/(1 + ε); otherwise they play the explicit strategy.</p>
        <p>Figure 2 illustrates the functions p = 1/(1 + ε) and p = ε/(1 + ε). Proposition 1 suggests that when the
coordinates ⟨ε, p⟩ fall in the area above the black curve or that below the red curve, interlocutors will use
ISA; otherwise they will communicate explicitly. It is shown that as ε increases, the area between the black curve
and the red curve gets smaller. Furthermore, as p gets close to 1, a smaller ε suffices to satisfy p &gt; 1/(1 + ε).</p>
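        <p>The two decision regions described by the curves of Figure 2 can be checked numerically. The sketch below is our own illustration; the function name and the sample values of ε and p are assumptions.</p>

```python
# The threshold curves of Figure 2: interlocutors use ISA when the
# higher-order belief p lies above 1/(1 + eps) (black curve) or below
# eps/(1 + eps) (red curve); between the curves they speak explicitly.

def use_isa(p, eps):
    """True if the point <eps, p> falls in an ISA region."""
    return p > 1 / (1 + eps) or p < eps / (1 + eps)

eps = 0.25
print(use_isa(0.9, eps))  # above the black curve 1/1.25 = 0.8 -> True
print(use_isa(0.5, eps))  # between the curves -> False (explicit speech)
print(use_isa(0.1, eps))  # below the red curve 0.25/1.25 = 0.2 -> True
```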
        <p>As illustrated in Figure 2, the following corollary follows immediately from Proposition 1:</p>
        <sec id="sec-2-3-1">
          <title>Corollary 2</title>
          <p>In the game of the basic model, where S and H are under certain cooperation, as N’s higher-order belief
in S’s real intention p(ti) gets close to 1, a smaller stimulation from ε suffices for N to play the ISA strategy,
namely s(ti) = m¯ and h(m¯) = a(ti).</p>
          <p>Ceteris paribus, interlocutors are more likely to use ISA when their higher-order belief in the speaker’s real
intention is more certain, which means that they know each other better and share more sympathy towards
each other.</p>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>Game of Extended Model</title>
        <p>Unlike in the game of the basic model, we here assume that S and H are totally unknown to each other. With Definition 4 and
Definition 5, we denote by pi = 1/2 their higher-order belief in t1. S may have two asymmetric intentions t1 and t2,
in the sense that she gets an extra gain, ε, if H acts in favor of t1, but not in the case of t2. We also assume that
the interlocutors are under uncertain cooperation with respect to t1, that is, S is not sure whether H acts in favor
of or adversely to her when H understands t1. Given this, we introduce that H is one of two types, α ∈ {α1, α2}:
a non-cooperative type, α1, who acts adversely to his belief in t1, i.e. chooses a¯(t1); and a cooperative type,
say α2, who acts in favor of his belief in t1, i.e. chooses a(t1). We also assume that H is cooperative with his
belief in t2. We denote by q ∈ (0, 1) S’s belief that H is of type α1. The assumption on m¯ is almost the same as in
the basic model: under both t1 and t2, S may utter m¯. The only difference is that in the extended model, we assume
that S may deny her original intention t1 at a cost, receiving payoff ε′(&lt; 0), when she utters m¯ and later finds out that H is
of type α1. However, t1 expressed by the direct message m1 is undeniable, so S’s explicit strategy s(t1) = m1 will
lead to a very poor payoff for S, −ε′′, with a¯(t1). We assume that S’s loss is greater in the case where α1 performs a¯(t1)
towards m1 than in the case where S denies t1 after uttering m¯, that is, −ε′′ &lt; ε′. α1 earns 0 when S successfully denies
t1; otherwise, α1 earns 1. α2 earns 1 + ε when he performs a(t1) under t1. α2 earns 0 when he misunderstands m¯
under either t; otherwise, α2 earns 1. Figure 3 illustrates the extensive form of this signaling game.</p>
      </sec>
      <sec id="sec-2-5">
        <title>Solution to the Game of Extended Model</title>
        <p>The following proposition shows a complete characterization of idealized solution to the game of extended model
in terms of P -added IBR reasoning framework. Proofs are in the Appendix.</p>
        <p>Proposition 3 Suppose ε ∈ (0, 1), −ε′′ &lt; ε′ &lt; 0 and q ∈ (0, 1).</p>
        <sec id="sec-2-5-1">
          <title>Corollary 4</title>
          <p>In the game of the extended model, where S and H are under uncertain cooperation, S will play the ISA
strategy with t1, and she will play the explicit strategy with t2.</p>
          <p>Ceteris paribus, the speaker is more likely to use ISA when she has an intention that may induce adverse
action from an uncooperative hearer. The non-cooperative hearer will not act adversely towards ISA, which is
plausibly deniable.</p>
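          <p>The trade-off behind this corollary can be sketched numerically. The following is a toy comparison of our own, not the paper’s formal solution: it assumes, for illustration only, that a non-cooperative α1 hearer always reads m¯ as t1 (so the speaker falls back on denial) and that a cooperative α2 hearer interprets m¯ correctly; the numeric values of ε, ε′ and ε′′ are likewise assumptions.</p>

```python
# Toy expected-utility comparison for the speaker with intention t1 in the
# extended model: a direct message m1 is undeniable (payoff -eps2 against an
# honest/non-cooperative hearer), while the indirect m_bar can be denied at a
# smaller loss (payoff eps1, with -eps2 < eps1 < 0, as the paper assumes).

EPS = 0.5     # extra gain from a successful indirect move (e.g. an accepted bribe)
EPS1 = -0.2   # payoff after plausible denial (eps' in the text, negative)
EPS2 = 1.0    # scale of the undeniable loss: direct payoff is -eps''
assert -EPS2 < EPS1 < 0  # the paper's assumption -eps'' < eps' < 0

def eu_direct(q):
    """E[U_S | s(t1) = m1]: probability q of the hearer acting adversely."""
    return q * (-EPS2) + (1 - q) * (1 + EPS)

def eu_indirect(q):
    """E[U_S | s(t1) = m_bar]: with probability q, S retreats to denial."""
    return q * EPS1 + (1 - q) * (1 + EPS)

q = 0.5
print(eu_direct(q), eu_indirect(q))  # indirectness dominates for any q > 0
```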
        </sec>
      </sec>
      <sec id="sec-2-6">
        <title>Model Comparison</title>
      </sec>
      <sec id="sec-2-7">
        <title>Parikhian Game of Partial Information</title>
        <p>The main differences between our game model and the Parikhian model (2001, 2007) are as follows. First, we assume
that t(∈ T ) represents the speaker’s intention, while Parikh assumes that t represents the game situation. Second,
we add the collection variable P as a quantification of sympathy between players. P is the higher-order belief
in the speaker’s intention, t. More specifically, p1 (∈ P ) is the hearer’s first-order belief, or his prior belief in t, p2 is
the speaker’s belief in the hearer’s prior belief, namely, the speaker’s second-order belief in t, etc. In the Parikhian
model, p denotes a probability distribution on the situation set T . Third, we introduce the idealized solution of the P -added
IBR reasoning framework as the solution to the game, while Parikh adopts equilibrium as the solution concept
and solves the game through equilibrium selection, introducing Pareto dominance as the selection standard.</p>
        <p>Compared to the Parikhian model, our model considers how sympathy may affect the use of the ISA strategy, and we
also take pragmatic reasoning to be an iterated reasoning process.</p>
      </sec>
      <sec id="sec-2-8">
        <title>Sally’s Sympathy Theory</title>
        <p>
          Sally (2003) studies pragmatic phenomena such as irony and indirectness in terms of his sympathy theory.
The core idea of Sally’s sympathy theory is that social interaction and intimacy between game players may affect
the solution by influencing their payoffs
          <xref ref-type="bibr" rid="ref14">(Sally, 2000, 2001, 2002)</xref>
          . More specifically, for players i and j, if ui⟨si, sj ⟩
indicates i’s payoff independent of j, and λij designates the sympathy between i and j, then the final payoff of i
is ui⟨si, sj ⟩ + λij uj ⟨si, sj ⟩. Sally (2001, 2003) suggests that λij depends on the physical and psychological distance
between players: λij is 0 or negative for enemies or strangers, and is close to 1 for family or close friends. However,
Sally’s approach, which models sympathy as the degree of common interest between players, does not fit signaling games.
The main reason is that signaling games involve multiple situations, which leads to multiple payoff matrices.
Different situations make equilibrium less practical as a solution, and thus block Sally’s sympathy model.
        </p>
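        <p>Sally’s payoff transformation can be written down directly. The one-liner below is our own illustration of the formula ui⟨si, sj ⟩ + λij uj ⟨si, sj ⟩; the function name and sample payoffs are assumptions.</p>

```python
# Sally's sympathy-adjusted payoff: player i maximizes her own payoff plus
# lambda_ij times her partner's payoff.

def sympathetic_payoff(u_i, u_j, lam):
    """Final payoff of i: u_i + lambda_ij * u_j."""
    return u_i + lam * u_j

print(sympathetic_payoff(2.0, 1.0, 0.0))  # strangers (lambda = 0): 2.0
print(sympathetic_payoff(2.0, 1.0, 1.0))  # close friends (lambda = 1): 3.0
```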
        <p>In our model, collection variable P as higher-order belief in speaker’s intention is used to reflect sympathy
between interlocutors. We assume that people who share more sympathy know better about each other and thus
have a greater chance to make a correct prediction of each other’s belief.</p>
      </sec>
      <sec id="sec-2-9">
        <title>Franke’s IBR Model</title>
        <p>The main differences between our P -added IBR reasoning framework and Franke’s IBR model (2009) are as
follows. First, Franke assumes that the reasoning starts from a focal point of the message’s semantic meaning. He
assumes that the naïve speaker S0 may choose an arbitrary true message to express her intention and that the naïve
hearer H0 may react according to his literal interpretation of an observed message. In our model, we assume that
S0 plays an intentionally consistent strategy (Definition 2) and H0 makes a best response to his belief updated
by an intentionally consistent interpretation of the observed message (Definition 3). Second, Franke assumes that a
player type of level k + 1 gives a best response to an unbiased belief in the opponent type of level k. Specifically,
Nk+1 (N ∈ {S, H}) will average over all possible actions she believes that Nk may take at every iterated reasoning
step. In our model, we introduce the higher-order belief, p, that represents the probability distribution underlying the
level-(k + 1) player’s belief in the level-k player’s behavior. In other words, Franke simply assumes that p = 1/2
for the corresponding p in our model.</p>
        <p>Compared to Franke’s model, our model has stronger pragmatic explanatory power in two senses: his
assumption of semantic meaning as a focal point blocks the way to analyzing pragmatic phenomena that involve
the use of messages going against their literal meaning (e.g. irony and metaphor), while our model gives up this
assumption and allows ISA to express all possible intentions consistent with the context; and unlike Franke’s model,
our introduction of higher-order belief enables our model to analyze how sympathy between interlocutors affects the
use of the ISA strategy.</p>
      </sec>
      <sec id="sec-2-10">
        <title>Mialon &amp; Mialon’s Model</title>
        <p>Mialon &amp; Mialon (2013) build a signaling game model which yields analytical conditions for ISA, and they apply it
to an analysis of terseness, irony and implicit bribery. They discuss the use of the ISA strategy in cases where
successful communication provides a greater benefit to the hearer than to the speaker. In comparison, we assume
symmetric payoffs in the certain cooperation situation (as in Definition 1), and in the uncertain cooperation case we
consider how payoffs may be affected by the plausible deniability of ISA. In addition, Mialon &amp; Mialon distinguish
two hearer types, namely a naïve type and a sophisticated type, while we simply consider the sophisticated hearer
type. Finally and most importantly, Mialon &amp; Mialon adopt the traditional solution concept of PBE, while we
use the idealized solution in terms of our P -added IBR reasoning framework, which offers a more intuitive solution,
as discussed above.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Applications</title>
      <sec id="sec-3-1">
        <title>Ironical Request</title>
        <p>We now employ the basic model to provide a systematic analysis of a typical instance of ISA, the ironical request.</p>
        <p>Example Yesterday, my husband and I went out for lunch. I could not reach the chopstick box, so I said to
my husband: “I do like eating noodle with a spoon!”</p>
      </sec>
      <sec id="sec-3-2">
        <title>Correspondence with the Basic Model</title>
        <p>My husband believes that I may have two possible intentions: I request him to pass me the chopsticks, say t1, or
I sincerely express my preference for eating noodles with a spoon, say t2. When I have t1, I may explicitly utter,
“Pass me the chopsticks”, say m1, or ironically, “I do like eating noodle with a spoon”, say m¯. When I have t2, I
may explicitly utter “I plainly like eating noodle with a spoon”, say m2, or m¯. It is obvious that m¯ has the literal
meaning of m2. My husband will pass me the chopsticks when he hears m1, say he performs a(t1), and he will
not pass me the chopsticks when he hears m2, say he performs a(t2). If he hears m¯, he may perform either a(t1)
or a(t2). My husband and I love each other, and we both prefer that he understand my real intention and act
accordingly. If I express myself explicitly, my husband will act according to my intention for sure and we both gain a
plain payoff, say 1. If I express myself implicitly and my husband interprets it correctly, both of us will gain a better
payoff, say 1 + ε, where ε &gt; 0. The values assigned are based on the following considerations: when I want my
husband to pass me the chopsticks and express this ironically, my husband’s correct interpretation makes us feel close
to each other, for he knows me well; when I just want to express my special preference and he does not interpret
it ironically, we also feel happy, for he is one of the few people who know about my preference. In contrast, if my
husband misunderstands my implicit words, neither of us is happy, and thus we gain nothing, say 0.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Analysis</title>
          <p>My husband and I are so close that we share a high degree of sympathy with each other, say λ = 1. That means
we have known each other for a long time, so we are more likely to correctly guess each other’s intention in a
given context. He has a large chance of correctly getting my intention, I have a large chance of correctly guessing
that he can correctly understand my intention, and so on. Namely, our higher-order belief in my real intention
is certain, say p = 1. Thus, according to Corollary 2, we are more likely to use the ISA strategy in the case of ironical
request.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Implicit Bribery</title>
        <p>We now employ the extended model to provide a systematic analysis of another instance of ISA, implicit bribery.
The example originally comes from Pinker et al. (2008) and Lee and Pinker (2010).</p>
        <p>Example Bob is stopped by a police officer for running a red light. When the police officer asks him to show
his driving license, Bob takes out his wallet and says, “Gee, officer, is there some way we could take care of the
ticket here?” (Pinker et al., 2008: 833)</p>
      </sec>
      <sec id="sec-3-5">
        <title>Correspondence with the Extended Model</title>
        <p>Bob never saw this police officer before, so they are totally unknown to each other. The officer guesses Bob
may have two possible intentions: Bob intends to bribe him, say t1, or Bob has no intention to bribe, say t2.
Both know that if Bob bribes successfully, he will gain more than he would pay for the ticket. Bob has no idea whether
he has been caught by an honest officer who does not accept bribery, say a type α1 officer, or by a corrupt officer, say
a type α2 officer. When Bob intends to bribe, he may offer an explicit bribe by saying, “I’ll give you $50 if you
let me go”, say m1, or he may bribe implicitly by uttering, “Gee, officer, is there some way we could take care
of the ticket here?”, say m¯. When Bob does not intend to bribe, he may simply say, “I’m sorry and I’ll be more
careful next time”, say m2, or he may use m¯. Upon hearing m1, an honest officer will arrest Bob for bribery,
say performing a¯(t1), which yields a very low payoff for Bob, say −ε′′, and a plain payoff for the officer, say 1.
But a corrupt officer will accept the bribe and let Bob go, say performing a(t1), which yields a good payoff
for both Bob and himself, say 1 + ε. Upon hearing m2, both honest and corrupt officers will ask Bob to pay the
ticket, say performing a(t2), which gives both a plain payoff. When the honest officer hears m¯, if he interprets
it as bribery, Bob will deny it, whether or not he actually intended to bribe, which not only gives Bob a
relatively lower payoff owing to the effort of denial, say −ε′, but also gives the officer a relatively lower payoff owing to
the cost of accepting the denial. If the honest officer instead interprets m¯ as non-bribery, he will simply ask Bob to
pay the ticket, which results in a plain payoff for both. If the corrupt officer correctly interprets m¯
as bribery, he will accept it, which results in a good payoff for both. If he mistakes m¯ for bribery when Bob has no
intention to bribe, he stands ready to accept a bribe that never comes, and both just get the plain payoff. If the corrupt officer correctly
interprets m¯ as non-bribery, Bob will pay the ticket and both get the plain payoff. If the corrupt officer mistakes
m¯ for non-bribery when Bob in fact intends to bribe, both gain less, for they lose the chance of getting more, say they get 0.</p>
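        <p>To make the payoff comparison concrete, Bob’s expected utilities from explicit versus implicit bribery under intention t1 can be sketched as follows. This is a minimal illustration under our reading of the payoffs above; the function names and the numeric values chosen for ε and ε′′ are our own assumptions, not part of the model’s definitions:</p>
        <preformat>
```python
# q: probability the officer is honest (type α1); ε, ε′′ as in the text.

def eu_explicit_bribe(q, eps=0.5, eps2=5.0):
    # m1: an honest officer arrests Bob (payoff −ε′′);
    # a corrupt officer accepts and lets him go (payoff 1 + ε).
    return q * (-eps2) + (1 - q) * (1 + eps)

def eu_implicit_bribe(q, eps=0.5):
    # m¯: in equilibrium the honest officer reads it as non-bribery and just
    # collects the ticket (plain payoff 1); a corrupt officer accepts (1 + ε).
    return q * 1 + (1 - q) * (1 + eps)

# implicit bribery dominates explicit bribery for any chance of an honest officer
for q in (0.2, 0.5, 0.8):
    assert eu_implicit_bribe(q) > eu_explicit_bribe(q)
```
        </preformat>
        <p>This is the plausible-deniability effect in miniature: the implicit message caps Bob’s downside at the plain payoff, while the explicit one risks arrest.</p>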
      </sec>
      <sec id="sec-3-6">
        <title>Analysis</title>
        <p>Bob and the police officer do not know each other, so with respect to his intention of bribery, Bob is not
certain whether the officer will cooperate. According to Corollary 2, Bob will play the ISA strategy when he intends
to bribe, and he will not play the ISA strategy when he does not.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>An Evolutionary View</title>
      <p>We now develop our model from an evolutionary perspective: we combine it with Lewisian convention theory to
analyze how conventional ISA evolves.</p>
      <sec id="sec-4-1">
        <title>Analysis on Convention</title>
        <p>Lewis (1969) gives a game-theoretic explanation of convention:</p>
        <p>A regularity R in the behavior of members of a population P when they are agents in a recurrent
situation S is a convention if and only if it is true that, and it is common knowledge in P that, in any
instance of S among members of P ,
(1) everyone conforms to R;
(2) everyone expects everyone else to conform to R;
(3) everyone has approximately the same preferences regarding all possible combinations of actions;
(4) everyone prefers that everyone conform to R, on condition that at least all but one conforms to R;
(5) everyone would prefer that everyone conform to R′, on condition that at least all but one conforms
to R′,
where R′ is some possible regularity in the behavior of members of P in S, such that no one in any
instance of S among members of P could conform both to R′ and to R. (Lewis, 1969:76)</p>
        <p>The Lewisian definition of convention suggests that the formation of a convention originates in people’s expectations
of one another and in reasoning driven by their own preferences. Lewis proposes that this expectation
comes from precedent: if, in previous cases, people have observed a regularity of expressing some intention by a
specific message, and they expect that others prefer to conform to this regularity with the same expectation as
they themselves have, then they are inclined to keep conforming to this regularity in order to maximize their common
interest.</p>
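        <p>Lewis’s point that either of two regularities can serve, once expectation selects it, can be illustrated by a minimal coordination game in which both R and R′ are strict Nash equilibria. The payoff numbers below are illustrative assumptions, not part of Lewis’s text:</p>
        <preformat>
```python
import itertools

# coordinating on either regularity pays 1; miscoordination pays 0
payoff = {("R", "R"): (1, 1), ("R'", "R'"): (1, 1),
          ("R", "R'"): (0, 0), ("R'", "R"): (0, 0)}

def is_strict_nash(profile):
    a, b = profile
    u_a, u_b = payoff[profile]
    # each player must strictly lose by unilaterally deviating
    dev_a = max(payoff[(x, b)][0] for x in ("R", "R'") if x != a)
    dev_b = max(payoff[(a, y)][1] for y in ("R", "R'") if y != b)
    return u_a > dev_a and u_b > dev_b

equilibria = [p for p in itertools.product(("R", "R'"), repeat=2)
              if is_strict_nash(p)]
assert equilibria == [("R", "R"), ("R'", "R'")]
```
        </preformat>
        <p>Since both equilibria are equally good, only precedent and the expectations it generates can select one of them, which is Lewis’s argument in a nutshell.</p>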
        <p>However, Lewis does not explain where precedent comes from. We propose that precedent itself arises
from people’s rationality and from their belief in one another’s rationality. In the process of iterated reasoning, people as a group
evolve towards idealized rationality. Our P-added IBR reasoning framework, which starts from
intentionally consistent use and interpretation of messages, offers an approach to modeling the formation of precedent.
Figure 4 shows the schema of how convention forms from an evolutionary perspective, combining our model with
Lewisian convention theory.</p>
        <p>In the case of conventional ISA, people generally use it without considering its literal meaning. For
instance, when I say, “Can you pass the salt?”, you take it as a request without reasoning about whether
I am asking about your ability to pass the salt. We predict that the formation of conventional use of ISA has
the following rationale: in communication games, people follow a reasoning pattern that can be modeled by our
P-added IBR reasoning framework; after repeated play, their strategies gradually evolve towards the model’s
idealized solution, which gives systematic conditions for the use of ISA under ideal rationality; once this solution
becomes a precedent, it sustains a self-perpetuating process in subsequent games, and the corresponding
ISA strategy becomes a convention. Corollary 1 and Corollary 2 of our model come from the idealized solutions to
the basic-model game and the extended-model game, respectively. The following predictions follow immediately from those
corollaries:
(I) The use of non-conventional ISA under certain cooperation relies on the sympathy between interlocutors,
which blocks its evolution towards conventional ISA.</p>
        <p>(II) In uncertain cooperation situations, people are more likely to use ISA, which helps its conventionalization.</p>
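        <p>The reasoning pattern just described, alternating best responses until the strategy pair repeats, can be sketched schematically. The toy transition table below is our stand-in for the model’s full signalling game, so its names and dynamics are illustrative assumptions only:</p>
        <preformat>
```python
def ibr_fixed_point(initial, best_response, max_rounds=20):
    # Alternate best responses from `initial` until a profile repeats;
    # the first repeated profile is the candidate convention.
    seen, current = [], initial
    while current not in seen and max_rounds > len(seen):
        seen.append(current)
        current = best_response(current)
    return current

# toy dynamics: the literal strategy is unstable, while the ISA strategy
# best-responds to itself and so can perpetuate as a precedent
toy_br = {"literal": "isa", "isa": "isa"}
assert ibr_fixed_point("literal", toy_br.get) == "isa"
```
        </preformat>
        <p>A profile that best-responds to itself survives repetition, which is exactly the self-perpetuating property a precedent needs in order to become a convention.</p>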
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Summary And Future Work</title>
      <p>In this paper, we develop a game-theoretic model to analyze the rationale of ISA. The model provides analytical
conditions for the use of ISA and predicts the conventionalization of ISA from an evolutionary perspective. We propose
that in situations of certain cooperation, interlocutors who share more sympathy are more likely to use ISA,
while in uncertain cooperation situations people are more likely to use ISA for its plausible deniability. We apply
our model to the analysis of typical instances of non-conventional ISA, namely ironical request and implicit
bribery. The solution of our model predicts that ISA used under uncertain cooperation (e.g. implicit bribery)
is more likely to be conventionalized than ISA used under certain cooperation, because the latter depends on
interlocutors’ sympathy.</p>
      <p>Our model can be further developed in at least three ways. First, it might be interesting to
compare our predictions with results from corpus studies of ISA. Second, it might be fruitful to test,
in the area of neuroscience, our assumption that the use of ISA has something to do with rationality. For
instance, fMRI experiments can be designed and performed to test whether the neuroanatomic regions related
to decision making are activated during the processing of ISA. Third, it might
be meaningful to explore computer simulation of our model within the research area of artificial intelligence.</p>
    </sec>
    <sec id="sec-6">
      <title>Appendix</title>
      <sec id="sec-6-1">
        <title>Proof of Proposition 1</title>
        <p>First look at the S0-sequence. Given Definition 2,</p>
        <p>S0 = { s(t1) = m1, m¯ ; s(t2) = m2, m¯ }.</p>
        <p>Given (1), µ1(t1| m¯) = p1 and µ1(t2| m¯) = 1 − p1. Given (2), EUH1 (a(t1)| m¯) = p1 × (1 + ε) and
EUH1 (a(t2)| m¯) = (1 − p1) × (1 + ε). Given (3), H1 obtains. Let p2 = p(p1 &gt; 1/2); given (4),
EUS2 ( m¯|t1) = p2 × (1 + ε) and EUS2 ( m¯|t2) = (1 − p2) × (1 + ε). Given (5), S2 obtains. Let
p3 = p(p2(t1) &gt; 1/(1 + ε)); given (2) and (3), H3 obtains.</p>
        <p>Notably, the S0-sequence starts repetition from H3. Then H3 = H∗ and S2 = S∗. Similarly, the H0-sequence leads
to the same solution.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Proof of Proposition 2</title>
        <p>First look at the S0-sequence, with S0 given by Definition 2.</p>
        <p>Given (1), α1(µ1(t1| m¯)) = α1(µ1(t2| m¯)) = 1/2 and α2(µ1(t1| m¯)) = α2(µ1(t2| m¯)) = 1/2. Given (2),
EUα1(H1)(a¯(t1)| m¯) = 0, EUα1(H1)(a(t2)| m¯) = 1, EUα2(H1)(a(t1)| m¯) = (1/2) × (1 + ε) and EUα2(H1)(a(t2)| m¯) = 1/2.
Given (3),</p>
        <p>α1(H∗) = { h(m1) = a¯(t1), h(m2) = a(t2), h( m¯) = a(t2) }, α2(H∗) = { h(m1) = a(t1), h(m2) = a(t2), h( m¯) = a(t1) }.</p>
        <p>Given (4), EUS2 (s(t1) = m¯) = q + (1 − q) × (1 + ε), EUS2 (s(t1) = m1) = q × (−ε′′) + (1 − q) × (1 + ε),
EUS2 (s(t2) = m¯) = q and EUS2 (s(t2) = m2) = 1. Given (5), S2 = { s(t1) = m¯, s(t2) = m2 }.</p>
        <p>Obviously, the S0-sequence starts repetition from S2. Then H1 = H∗ and S2 = S∗. Similarly, the H0-sequence leads
to the same solution.</p>
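        <p>The expected-utility comparison in this proof can be checked numerically. The sketch below only re-evaluates the four EU expressions given above for illustrative parameter values (the particular numbers are our assumptions), confirming that the implicit message is optimal under t1 and the apology under t2:</p>
        <preformat>
```python
def s2_strategy(q, eps, eps2):
    eu_t1_mbar = q + (1 - q) * (1 + eps)          # EU_S2(s(t1) = m¯)
    eu_t1_m1 = q * (-eps2) + (1 - q) * (1 + eps)  # EU_S2(s(t1) = m1)
    eu_t2_mbar = q                                # EU_S2(s(t2) = m¯)
    eu_t2_m2 = 1.0                                # EU_S2(s(t2) = m2)
    return {"t1": "m_bar" if eu_t1_mbar > eu_t1_m1 else "m1",
            "t2": "m_bar" if eu_t2_mbar > eu_t2_m2 else "m2"}

# for any q strictly between 0 and 1 and positive ε, ε′′,
# S2 plays m¯ under t1 and m2 under t2
for q in (0.1, 0.5, 0.9):
    assert s2_strategy(q, eps=0.5, eps2=5.0) == {"t1": "m_bar", "t2": "m2"}
```
        </preformat>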
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Aus62]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Austin</surname>
          </string-name>
          .
          <source>How to Do Things with Words, 2nd Edition</source>
          . Harvard University Press,
          <year>1962</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [BB14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Blume</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Board</surname>
          </string-name>
          .
          <article-title>Intentional vagueness</article-title>
          .
          <source>Erkenn</source>
          ,
          <volume>79</volume>
          (
          <issue>4 Supplement)</issue>
          :
          <fpage>855</fpage>
          -
          <lpage>899</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [BL87]
          <string-name>
            <given-names>P.</given-names>
            <surname>Brown</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Levinson</surname>
          </string-name>
          .
          <source>Politeness: Some Universals in Language Usage</source>
          . Cambridge University Press,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Cla96]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Clark</surname>
          </string-name>
          .
          <source>Using Language</source>
          . Cambridge University Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Fra09]
          <string-name>
            <given-names>M.</given-names>
            <surname>Franke</surname>
          </string-name>
          .
          <source>Signal to Act: Game Theory in Pragmatics</source>
          . PhD thesis, Institute for Logic, Language and Computation, University of Amsterdam,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Gri75]
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Grice</surname>
          </string-name>
          .
          <article-title>Logic and conversation</article-title>
          . In P. Cole &amp; J. L. Morgan (Eds.),
          <source>Syntax and Semantics, Volume 3: Speech Acts</source>
          . Academic Press,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Jäg08]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jäger</surname>
          </string-name>
          .
          <article-title>Applications of game theory in linguistics</article-title>
          .
          <source>Language and Linguistics Compass</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ):
          <fpage>406</fpage>
          -
          <lpage>421</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [LP10]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Lee</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinker</surname>
          </string-name>
          .
          <article-title>Rationales for indirect speech: The theory of the strategic speaker</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>117</volume>
          (
          <issue>3</issue>
          ):
          <fpage>785</fpage>
          -
          <lpage>807</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Lew69]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          .
          <source>Convention: A Philosophical Study</source>
          . Harvard University Press,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [MM13]
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Mialon</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Mialon</surname>
          </string-name>
          .
          <article-title>Go figure: The strategy of nonliteral speech</article-title>
          .
          <source>American Economic Journal Microeconomics</source>
          ,
          <volume>5</volume>
          (
          <issue>2</issue>
          ):
          <fpage>186</fpage>
          -
          <lpage>212</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Par01]
          <string-name>
            <given-names>P.</given-names>
            <surname>Parikh</surname>
          </string-name>
          .
          <source>The Use of Language</source>
          . CSLI Publications,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Par07]
          <string-name>
            <given-names>P.</given-names>
            <surname>Parikh</surname>
          </string-name>
          .
          <article-title>Situations, rules, and conventional meaning: Some uses of games of partial information</article-title>
          .
          <source>Journal of Pragmatics</source>
          ,
          <volume>39</volume>
          (
          <issue>5</issue>
          ):
          <fpage>917</fpage>
          -
          <lpage>933</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [PNL08]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Nowak</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>The logic of indirect speech</article-title>
          .
          <source>Proceedings of the National Academy of Sciences</source>
          ,
          <volume>105</volume>
          (
          <issue>3</issue>
          ):
          <fpage>833</fpage>
          -
          <lpage>838</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [Sal01]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sally</surname>
          </string-name>
          .
          <article-title>On sympathy and games</article-title>
          .
          <source>Journal of Economic Behavior and Organization</source>
          ,
          <volume>44</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Sal03]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sally</surname>
          </string-name>
          .
          <article-title>Risky speech: Behavioral game theory and pragmatics</article-title>
          .
          <source>Journal of Pragmatics</source>
          ,
          <volume>35</volume>
          (
          <issue>8</issue>
          ):
          <fpage>1223</fpage>
          -
          <lpage>1245</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Sea69]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Searle</surname>
          </string-name>
          .
          <source>Speech Acts: An Essay in the Philosophy of Language</source>
          . Cambridge University Press,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Sea75]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Searle</surname>
          </string-name>
          .
          <article-title>Indirect speech acts</article-title>
          . In P. Cole &amp; J. L. Morgan (Eds.),
          <source>Syntax and Semantics, Volume 3: Speech Acts</source>
          . Academic Press,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Ter11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Terkourafi</surname>
          </string-name>
          .
          <article-title>Why direct speech is not a natural default: Rejoinder to Steven Pinker's “Indirect speech, politeness, deniability, and relationship negotiation”</article-title>
          .
          <source>Journal of Pragmatics</source>
          ,
          <volume>43</volume>
          (
          <issue>11</issue>
          ):
          <fpage>2869</fpage>
          -
          <lpage>2871</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Roo03]
          <string-name>
            <given-names>R.</given-names>
            <surname>van Rooij</surname>
          </string-name>
          .
          <article-title>Being polite is a handicap: Towards a game theoretical analysis of polite linguistic behavior</article-title>
          .
          <source>Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge</source>
          . Los Angeles,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Roo04]
          <string-name>
            <given-names>R.</given-names>
            <surname>van Rooij</surname>
          </string-name>
          .
          <article-title>Signalling games select Horn strategies</article-title>
          .
          <source>Linguistics and Philosophy</source>
          ,
          <volume>27</volume>
          (
          <issue>4</issue>
          ):
          <fpage>493</fpage>
          -
          <lpage>527</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [Roo08]
          <string-name>
            <given-names>R.</given-names>
            <surname>van Rooij</surname>
          </string-name>
          .
          <article-title>Games and quantity implicatures</article-title>
          .
          <source>Journal of Economic Methodology</source>
          ,
          <volume>15</volume>
          (
          <issue>3</issue>
          ):
          <fpage>261</fpage>
          -
          <lpage>274</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Wit53]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wittgenstein</surname>
          </string-name>
          .
          <source>Philosophical Investigations</source>
          . Blackwell,
          <year>1953</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>