                  A Game-Theoretic Analysis on the Use of
                          Indirect Speech Acts

                                                 Mengyuan Zhao
                                             School of Social Sciences
                                  University of Shanghai for Science and Technology
                                             Shanghai, China 200093
                                            mengyuan.zhao@usst.edu.cn




                                                        Abstract
                       In this paper we discuss why, in some circumstances, people express
                       their intentions indirectly: the use of Indirect Speech Acts (ISA).
                       Based on Parikh's game of partial information and Franke's IBR model,
                       we develop a game-theoretic model of ISA, which we divide into two
                       categories, namely non-conventional ISA and conventional ISA. We
                       assume that non-conventional ISA involves two types of communication
                       situations: communication under certain cooperation and communication
                       under uncertain cooperation. We analyze the cases of ironical request
                       and implicit bribery as typical instances of non-conventional ISA for
                       each situation type, respectively. We then apply our model to analyze
                       the use of conventional ISA from an evolutionary perspective inspired
                       by Lewisian convention theory. Our model yields the following
                       predictions: the use of non-conventional ISA under certain cooperation
                       relies on the sympathy between interlocutors, which blocks its evolution
                       towards conventional ISA; in uncertain cooperation situations, people
                       are more likely to use ISA, which helps its conventionalization.




1    Introduction
Yesterday, my husband and I went out for lunch. I could not reach the chopstick box, so I said to my husband:
"I do like eating noodles with a spoon!" My husband stared at me, laughed, and passed me the chopsticks.
    I did not ask my husband to pass me the chopsticks directly; rather, I intended to make a request in an ironical
way, and he understood my intention correctly.
    Like the example above, we often express our intentions indirectly rather than mean what the utterance
literally says. According to speech act theory, introduced by Austin (1962) and developed by Searle
(1969, 1975), this kind of pragmatic phenomenon is called an indirect speech act (ISA). Searle (1975) proposes an
explanation of the use of ISA: an apparatus based on Gricean principles of cooperative conversation (see
Grice 1975). This raises the puzzle of indirect speech (Terkourafi 2011): as the Gricean principles suggest,
cooperative interlocutors should communicate with informative, truthful, relevant and succinct messages, so
why is indirectness so common in our daily communication?

Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: T. Ågotnes, B. Liao, Y.N. Wang (eds.): Proceedings of the first Chinese Conference on Logic and Argumentation (CLAR 2016),
Hangzhou, China, 2-3 April 2016, published at http://ceur-ws.org




   According to Brown and Levinson (1987), ISA is a politeness strategy. In their politeness theory, people
adopt strategies to save each other's face when communication involves face-threatening acts such as criticism,
insults, disagreement, suggestions, refusals and requests. Among the four strategies they distinguish, on-record
and off-record indirectness roughly correspond to conventional ISA and non-conventional ISA, respectively.
Clark (1996) also argues that the main reason for the use of ISA is to mitigate threats to face and thereby
maintain social equity between interlocutors.
   However, Pinker and his colleagues (Pinker et al., 2008; Lee and Pinker, 2010) point out that neither Searle's
theory nor politeness theory is comprehensive enough to account for the motivation behind ISA: both presuppose
purely cooperative communication, which is not always the case in instances of ISA (e.g. sexual come-ons, veiled
threats and implicit bribery). They propose the theory of the strategic speaker: in communication games under
uncertain cooperation, the speaker chooses the ISA strategy because it allows for plausible deniability when
facing an uncooperative hearer. Rather than appealing to a social ritual, the theory offers a strategic rationale
for the use of non-conventional ISA by introducing a static game model and by building decision functions to
represent the plausible deniability of ISA.
   Pinker's work belongs to a tradition of game-theoretic pragmatics (see Jäger 2008 for a selective review).
The idea of using games as a model of language communication goes back to Wittgenstein (1953). Inspired by
this, many have attempted to construct game-theoretic models of communication; among them, Lewis (1969)
started the tradition of modeling communication as an at least partially cooperative effort. He not only builds
signaling games to solve the coordination problem in communication, but also gives a game-theoretic
interpretation of convention: a convention is a Nash equilibrium in a special sense. Lewisian convention theory
explains how meaning is assigned to natural language expressions through their conventional use. Following
the Lewisian tradition, Parikh (2001, 2007) constructs the game of partial information, in which he introduces
the literal meaning of messages and draws two distinctions: between literal meaning and speaker's intention,
and between literal meaning and hearer's interpretation. Van Rooij (2003, 2004, 2008) analyzes on-record
indirect requests and Horn's strategy (see Horn 1984) in terms of signaling games. By introducing risk
dominance as an equilibrium selection standard and a super-conventional signaling game model, Sally (2003)
and van Rooij (2006) study how sympathy between interlocutors may affect the use of indirectness such as
irony. Franke (2009) criticizes equilibrium, the traditional solution concept, for failing to correspond to the
actual reasoning process during communication. He introduces iterated best response (IBR) reasoning, which
formally illustrates how interlocutors, departing from the literal meaning of messages, pick out their strategies
based on their beliefs in each other's rationality through a process of iterated reasoning. Blume & Board (2014)
analyze off-record indirectness with an evolutionary game-theoretic model; by adopting vagueness dynamics,
they solve the game and explain why conflicts of interest may encourage speakers to use indirectness. Mialon
& Mialon (2013) study analytical conditions for ISA such as terseness, irony and implicit bribery by
constructing a signaling game and solving for perfect Bayesian equilibrium (PBE).
   This paper develops a game-theoretic model of ISA. The model is composed of two parts: describing
communication situations in terms of signaling games, and solving those situations through a reasoning
framework. We introduce higher-order belief as a quantification of the sympathy between interlocutors, and
thus study how sympathy affects players' choice of the ISA strategy. The next section describes our model for
two situation types: communication under certain cooperation (the basic model) and communication under
uncertain cooperation (the extended model). For each type, a signaling game is first built and then solved.
At the end of Section 2, we compare our model to related models in game-theoretic pragmatics. In Section 3
we apply the model to the cases of ironical request and implicit bribery. Based on an evolutionary consideration
of the model, Section 4 predicts the conventionalization of ISA under the two situation types, respectively.
Section 5 provides a summary and suggestions for future work.

2     The Model
2.1   Game of Basic Model
In the game of basic model, we assume that the interlocutors are under certain cooperation, that is, both of
them hope that the hearer correctly understands the speaker's intention. Given this, we further assume that a
successful communication using ISA brings an extra gain, say ε (> 0), to both interlocutors. A speaker, S, may
have one of two possible intentions, T = {t1, t2}, which she would like to express to a hearer, H. When S has t1,
she may utter a direct message m1 or an indirect message m̄; when S has t2, she may send a direct message m2
or m̄. We assume that m̄ has the literal meaning of m2. We also assume that H is a sophisticated hearer whose
strategy conforms to the following rule: she performs action a(t1) upon hearing m1, a(t2) upon hearing m2, and
either a(t1) or a(t2) upon hearing m̄.
   S and H are under certain cooperation: both prefer the action a(ti) to be taken given the corresponding
intention ti, where i = 1, 2. Taking this together with our assumption on ε above, we define the interlocutors'
payoff functions as follows.

Definition 1 In the basic model, let U_N(t_i, m, a(t_j)) be the payoff of N ∈ {S, H} given t_i, m and a(t_j), where i, j = 1, 2:

$$U_N(t_i, m, a(t_j)) = \begin{cases} 1 & i = j,\ m \in \{m_1, m_2\} \\ 1 + \varepsilon & i = j,\ m = \bar{m} \\ 0 & i \neq j,\ m = \bar{m} \end{cases}$$

   Definition 1 says that both interlocutors gain 1 when direct speech is used; both earn 1 + ε if indirectness is
involved and communication succeeds; and both get 0 if the use of ISA leads to misunderstanding. We denote
by p1 ∈ (0, 1) H's prior belief that S has intention t1. Figure 1 illustrates the extensive form of this signaling
game.




                             Figure 1: Game of Basic Model under Certain Cooperation
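To make Definition 1 concrete, here is a minimal sketch in Python (the function and variable names are ours, not part of the paper's formal model) encoding the basic game's payoffs:

```python
EPSILON = 0.5  # extra gain from successful ISA; assumed 0 < EPSILON < 1

def payoff_basic(t: int, a: int, m: str) -> float:
    """U_N(t_i, m, a(t_j)) of Definition 1, shared by S and H.

    t, a: indices (1 or 2) of the speaker's intention and of the
    intention the hearer's action targets; m: 'm1', 'm2' or 'mbar'.
    """
    if m in ('m1', 'm2'):
        return 1.0            # direct speech is always understood: payoff 1
    if t == a:
        return 1.0 + EPSILON  # mbar correctly interpreted: extra gain
    return 0.0                # mbar misunderstood

print(payoff_basic(1, 1, 'mbar'))  # -> 1.5
```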



2.2    Solution to the Game of Basic Model
P -added IBR Reasoning Framework
To solve the game of basic model, we introduce the P-added IBR reasoning framework. The framework contains
two parallel reasoning sequences, namely the S0-sequence and the H0-sequence. We define the scaffolding of the
framework by induction.

Base: Level-Zero Players
The S0-sequence starts from a naïve speaker S0. We assume that S0 arbitrarily plays an intentionally consistent
strategy, defined as follows.

Definition 2 A speaker strategy s, with s(t) = m, is intentionally consistent iff

 (I) when s(t) ∈ {m1, m2, ..., mi, ..., mn}, t = [[s(t)]], where [[·]] : M → T is the denotation function that maps
     the literal meaning of a message to the speaker's intention, and mi denotes a direct message;

(II) when s(t) = m̄, t ∈ {t1, t2, ..., ti, ..., tn}, where m̄ denotes the indirect message.

   Definition 2 says that the indirect message may be used to express any intention possible in the context. Note
also that S0 is not rational, in the sense that she chooses her strategy not because it guarantees her a better
payoff, but because it corresponds to the general rules of language use given the literal meaning of a message
and the context.
   The H0-sequence starts from a naïve hearer H0. We assume that H0 chooses an arbitrary strategy among
those offering her the highest expected payoff given an intentionally consistent interpretation of the observed
message.




Definition 3 An intentionally consistent interpretation is a posterior belief µ0(t|m) = Pr(t|m) that results
from updating prior beliefs with the intentionally consistent meaning of the observed message. t is the
intentionally consistent meaning of m iff
    (I) when m ∈ {m1, m2, ..., mi, ..., mn}, t = [[m]], where mi denotes a direct message;
(II) when m = m̄, t ∈ {t1, t2, ..., ti, ..., tn}, where m̄ denotes the indirect message.

Step: Higher Level Types
We assume that a player of level k + 1 gives a best response to her belief about her opponent of level k.
   Let us take the S0-sequence first. After S0 sends a message m, H1, who is strategically one level higher than
S0, acts according to her posterior belief µ1(t|m). We assume that hearers adapt their posterior beliefs in a
sophisticated way, that is, µ1(t|m) is consistent with H1's belief about the speaker's behavior, say s(t) = m, as
well as with her prior belief in t, say Pr(t):

$$\mu_{k+1}(t_j \mid m_i) = \frac{\Pr(t_j) \times S_k(m_i \mid t_j)}{\sum_{t' \in T} \Pr(t') \times S_k(m_i \mid t')} \qquad (1)$$
   H1 chooses the strategy h(m) that offers her the highest expected payoff EU_H1(a(t)), which depends on her
posterior belief µ(t|m) and the corresponding payoff U_H(t, m, a(t)):

$$EU_H(a(t) \mid m) = \sum_{t_i \in T} \mu(t_i \mid m) \times U_H(t_i, m, a(t)) \qquad (2)$$

$$h(m) = BR(\mu) \in \arg\max_{a(t) \in A} EU_H(a(t) \mid m) \qquad (3)$$

   From (1), (2) and (3), H1's strategy depends on Pr(t), given s(t) = m and U_H(t, m, a(t)). Let p1 = Pr(t1)
and let A1 ⊆ U = [0, 1] be some interval. H1's strategy is then a mixed strategy over H1¹ and H1²:

$$H_1 = \begin{cases} H_1^1 & \text{if } p_1 \in A_1 \\ H_1^2 & \text{if } p_1 \in U \setminus A_1 \end{cases}$$
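As an illustration of one hearer step (a sketch building on payoff_basic above; the helper names are ours), the posterior is computed by equation (1) from a prior and a level-k speaker strategy, and the action is chosen by (2) and (3):

```python
def posterior(prior: dict, sender: dict, m: str) -> dict:
    """Equation (1): mu(t|m) proportional to Pr(t) * S_k(m|t)."""
    weights = {t: prior[t] * sender[t].get(m, 0.0) for t in prior}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def hearer_best_response(mu: dict, m: str) -> int:
    """Equations (2)-(3): pick a(t) maximizing expected payoff."""
    return max((1, 2), key=lambda a: sum(
        mu[t] * payoff_basic(t, a, m) for t in mu))

# Naive speaker S0 (Definition 2): each intention is expressed by its
# direct message or by mbar, with arbitrary (here: equal) probability.
S0 = {1: {'m1': 0.5, 'mbar': 0.5}, 2: {'m2': 0.5, 'mbar': 0.5}}
prior = {1: 0.7, 2: 0.3}                  # p1 = 0.7 > 1/2
mu1 = posterior(prior, S0, 'mbar')        # mbar reveals nothing: mu1 = prior
print(hearer_best_response(mu1, 'mbar'))  # -> 1, i.e. a(t1), matching H1
```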
   We denote by p2 ∈ (0, 1) the probability that p1 falls in A1. S2, who is strategically one level higher than
H1, plays the s(t) that guarantees her the highest expected payoff EU_S2(s(t) = m), given her belief about H1,
ρ2 = ⟨H1, p2⟩, and the corresponding payoff U_S(t, m, a(t)):

$$EU_S(s(t) = m) = p_2 \sum_{t_i \in T} \rho_2^1(a(t_i) \mid m) \times U_S(t, m, a(t_i)) + (1 - p_2) \sum_{t_i \in T} \rho_2^2(a(t_i) \mid m) \times U_S(t, m, a(t_i)) \qquad (4)$$

$$s(t) = BR(\rho) \in \arg\max_{m \in M} EU_S(s(t) = m) \qquad (5)$$

   From (4) and (5), S2's strategy depends on p2, given H1 and U_S(t, m, a(t)):

$$S_2 = \begin{cases} S_2^1 & \text{if } p_2 \in A_2 \\ S_2^2 & \text{if } p_2 \in U \setminus A_2 \end{cases} \qquad \text{where } A_2 \subseteq U = [0, 1].$$
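A matching sketch of the speaker step (again our own encoding): equation (4) weights the two candidate hearer behaviors H_1^1 and H_1^2 by p2, and (5) picks the message with the highest expected utility.

```python
def speaker_best_response(t: int, p2: float, epsilon: float) -> str:
    """Equations (4)-(5) for intention t in the basic game.

    With probability p2 the hearer is H1^1 (reads mbar as a(t1));
    otherwise she is H1^2 (reads mbar as a(t2)). Direct messages
    are always read correctly and yield payoff 1.
    """
    p_match = p2 if t == 1 else 1 - p2   # chance mbar is read as a(t)
    eu_mbar = p_match * (1 + epsilon)    # equation (4) for m = mbar
    return 'mbar' if eu_mbar > 1.0 else ('m1' if t == 1 else 'm2')

# With epsilon = 0.5 the ISA threshold is 1/(1 + epsilon) = 2/3:
print(speaker_best_response(1, 0.8, 0.5))  # 'mbar' (0.8 * 1.5 = 1.2 > 1)
print(speaker_best_response(1, 0.6, 0.5))  # 'm1'   (0.6 * 1.5 = 0.9 < 1)
```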
   Similarly, H3's strategy depends on p3, which denotes the probability distribution of p2. Inductively, in the
S0-sequence, S2k+2 = {s ∈ S | ∃ρ : ρ = ⟨H2k+1, p2k+2⟩, s ∈ BR(ρ)} and H2k+1 = {h ∈ H | ∃µ : µ is consistent
with Pr and σ, σ = ⟨S2k, p2k+1⟩, h ∈ BR(µ)}, where k > 0.
   The H0-sequence follows exactly the same rule as the S0-sequence. The only difference is that we denote
by p′2k+2 ∈ (0, 1) the probability on which S2k+1 depends and by p′2k+3 ∈ (0, 1) the probability on which
H2k+2 depends. Then S2k+1 = {s ∈ S | ∃ρ : ρ = ⟨H2k, p′2k+2⟩, s ∈ BR(ρ)} and H2k+2 = {h ∈ H | ∃µ : µ is
consistent with Pr and σ, σ = ⟨S2k+1, p′2k+3⟩, h ∈ BR(µ)}, where k > 0.
   In the inductive steps above, pk (or p′k) represents the k-order belief in Pr of N ∈ {S, H}. We define
higher-order belief in Pr as follows.




Definition 4 P is the higher-order belief in Pr iff

 (I) in the S0-sequence, H's prior belief in T = {t1, t2, ..., tn} is Pr = p1; p2k+2 (or p2k+1), which determines
     S2k+2's (or H2k+1's) belief about H2k+1's (S2k's) behavior, represents the probability distribution of p2k+1
     (or p2k); and P = {p1, p2, ..., pn};

(II) in the H0-sequence, H's prior belief in T = {t1, t2, ..., tn} is Pr = p1; p′2k+3 (or p′2k+2), which determines
     S2k+1's (or H2k+2's) belief about H2k's (S2k+1's) behavior, represents the probability distribution of p′2k+2
     (or p′2k+1); and P = {p1, p′2, ..., p′n}.

   We further define players’ sympathy towards each other as follows.

Definition 5 In the P-added IBR reasoning framework, S and H share sympathy λ ∈ (0, 1) towards each other.
When S has intention ti, λ = pi(Pr(ti)), where pi ∈ P.

   Definition 5 says that as the interlocutors' higher-order belief in the speaker's real intention gets close to 1,
their sympathy towards each other increases.

Limit
Since we assume finitely many pure player strategies over the finite sets T, M and A in the game of basic model,
the P-added IBR sequence is bound to repeat itself. We define the idealized solution of the reasoning framework
as follows.

Definition 6 The idealized solution of the P-added IBR reasoning framework is the set of all infinitely repeated
strategies S∗ and H∗:

$$S^* = \{s \in S \mid \forall i\, \exists j > i : s \in S_j\}$$
$$H^* = \{h \in H \mid \forall i\, \exists j > i : h \in H_j\}$$

   The idealized solution can be read in two ways: first, it represents the reasoning result of individual
interlocutors with idealized rationality; second, it marks the final reasoning result of an infinite group of
interlocutors after pairwise plays. The latter bears on evolution, in that it assumes players upgrade their
strategy types level by level over repeated plays.
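Operationally, Definition 6 picks out whatever strategies the level sequence eventually cycles through. A generic sketch (our own helper; it assumes each reasoning level can be summarized as a hashable value and that `step` computes the next level):

```python
def idealized_solution(step, start, max_iter: int = 50):
    """Iterate a level-k reasoning step until a profile repeats (Definition 6)."""
    seen = []
    current = start
    while current not in seen and len(seen) < max_iter:
        seen.append(current)
        current = step(current)
    return current  # the first profile the sequence revisits

# Toy demo mirroring the proof of Proposition 1, where the S0-sequence
# starts repeating from H3 (so S2 and H3 form the idealized solution):
levels = {'S0': 'H1', 'H1': 'S2', 'S2': 'H3', 'H3': 'S2'}
print(idealized_solution(levels.get, 'S0'))  # -> 'S2'
```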

Solution Analysis
The following proposition gives a complete characterization of the idealized solution to the game of basic model
in terms of the P-added IBR reasoning framework. Proofs are in the Appendix.

Proposition 1 Suppose ε ∈ (0, 1) and pi (or p′i) ∈ (0, 1). Then

$$S^* = \begin{cases} \{s(t_1) = \bar{m},\ s(t_2) = m_2\} & \text{if } p_i(t_1) > \frac{1}{1+\varepsilon} \text{ and } p'_i(t_1) > \frac{1}{1+\varepsilon}, \\ \{s(t_1) = m_1,\ s(t_2) = \bar{m}\} & \text{if } p_i(t_1) < \frac{\varepsilon}{1+\varepsilon} \text{ and } p'_i(t_1) < \frac{\varepsilon}{1+\varepsilon}, \\ \{s(t_1) = m_1,\ s(t_2) = m_2\} & \text{if } \frac{\varepsilon}{1+\varepsilon} < p_i(t_1) < \frac{1}{1+\varepsilon} \text{ and } \frac{\varepsilon}{1+\varepsilon} < p'_i(t_1) < \frac{1}{1+\varepsilon}. \end{cases}$$

$$H^* = \begin{cases} \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_1)\} & \text{if } p_i(t_1) > \frac{1}{2} \text{ and } p'_i(t_1) > \frac{1}{2}, \\ \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_2)\} & \text{if } p_i(t_1) < \frac{1}{2} \text{ and } p'_i(t_1) < \frac{1}{2}. \end{cases}$$
                             ⎧ ⎧
                             ⎪
                             ⎪    ⎨ h(m1 ) = a(t1 ),
                             ⎪
                             ⎪                                                  ′
                             ⎪
                             ⎪       h(m2 ) = a(t2 ),     if pi (t1 ) > 12 and pi (t1 ) > 12 ,
                             ⎨ ⎩
                      H∗ =        ⎧ h(m̄) = a(t1 ),
                             ⎪
                             ⎪    ⎨ h(m1 ) = a(t1 ),
                             ⎪
                             ⎪                                                  ′
                             ⎪
                             ⎪       h(m2 ) = a(t2 ),     if pi (t1 ) < 12 and pi (t1 ) < 12 .
                             ⎩ ⎩
                                     h(m̄) = a(t2 ),




                  Figure 2: The curves p = 1/(1 + ε) (black) and p = ε/(1 + ε) (red)


  Figure 2 plots the functions p = 1/(1 + ε) and p = ε/(1 + ε). Proposition 1 says that when the point ⟨ε, p⟩
falls in the area above the black curve or below the red curve, the interlocutors use ISA; otherwise they
communicate explicitly. As ε increases, the area between the two curves shrinks. Furthermore, as p gets close
to 1, a smaller ε suffices to satisfy p > 1/(1 + ε).
  As illustrated in Figure 2, the following corollary follows immediately from Proposition 1:

Corollary 2 In the game of basic model, where S and H are under certain cooperation, as N's higher-order belief
in S's real intention, p(ti), gets close to 1, a smaller stimulus ε suffices for N to play the ISA strategy, namely
s(ti) = m̄ and h(m̄) = a(ti).

   Ceteris paribus, interlocutors are more likely to use ISA when their higher-order belief in the speaker's real
intention is more certain, which means that they know each other better and share more sympathy towards
each other.
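A small numeric check of Proposition 1 and Corollary 2 (a sketch with assumed values for ε and a common higher-order belief p in t1):

```python
def isa_region(p: float, epsilon: float) -> str:
    """Classify <epsilon, p> against the two curves of Figure 2."""
    if p > 1 / (1 + epsilon):
        return 'ISA for t1: s(t1) = mbar, h(mbar) = a(t1)'
    if p < epsilon / (1 + epsilon):
        return 'ISA for t2: s(t2) = mbar, h(mbar) = a(t2)'
    return 'explicit speech: s(t1) = m1, s(t2) = m2'

print(isa_region(0.95, 0.2))  # belief near 1: even a small epsilon yields ISA
print(isa_region(0.60, 0.2))  # between the curves: explicit speech
```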

2.3   Game of Extended Model
In the game of extended model, we assume that S and H are totally unknown to each other. With Definition 4
and Definition 5, their higher-order belief in t1 is pi = 1/2. S may have two asymmetric intentions t1 and t2,
in the sense that she gets an extra gain, ε, if H acts in favor of t1, but not in the case of t2. We also assume
that the interlocutors are under uncertain cooperation with respect to t1, that is, S is not sure whether H will
act in her favor or against her when H understands t1. Given this, we let H be one of two types, α ∈ {α1, α2}:
a non-cooperative type, α1, who acts against her belief in t1, i.e. chooses ā(t1); and a cooperative type, α2, who
acts in favor of her belief in t1, i.e. chooses a(t1). We also assume that H cooperates with her belief in t2. We
denote by q ∈ (0, 1) S's belief that H is of type α1. The assumption on m̄ is almost the same as in the basic
model: under both t1 and t2, S may utter m̄. The only difference is that in the extended model, S may deny
her original intention t1, at a cost, when she utters m̄ and later finds out that H is of type α1; denial yields
the payoff ε′ (where ε′ < 0). However, t1 expressed by the direct message m1 is undeniable, so S's explicit
strategy s(t1) = m1 leads to a poor payoff, −ε′′, against ā(t1). We assume that S's loss is greater when α1
performs ā(t1) in response to m1 than when S denies t1 after uttering m̄, that is, −ε′′ < ε′. α1 earns 0 when
S successfully denies t1; otherwise α1 earns 1. α2 earns 1 + ε when she performs a(t1) under t1; she earns 0
when she misunderstands m̄ under either t, and otherwise earns 1. Figure 3 illustrates the extensive form of
this signaling game.

2.4   Solution to the Game of Extended Model
The following proposition shows a complete characterization of idealized solution to the game of extended model
in terms of P -added IBR reasoning framework. Proofs are in the Appendix.

Proposition 3 Suppose ε ∈ (0, 1), −ε′′ < ε′ < 0 and q ∈ (0, 1). Then

$$S^* = \{s(t_1) = \bar{m},\ s(t_2) = m_2\},$$
$$\alpha_1(H^*) = \{h(m_1) = \bar{a}(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_2)\},$$
$$\alpha_2(H^*) = \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_1)\}.$$




                        Figure 3: Game of Extended Model under Uncertain Cooperation.

   The following corollary follows immediately from Proposition 3:

Corollary 4 In the game of extended model, where S and H are under uncertain cooperation, S plays the ISA
strategy with t1 and the explicit strategy with t2.

   Ceteris paribus, the speaker is more likely to use ISA when she has an intention that may induce adverse
action from an uncooperative hearer. The non-cooperative hearer will not act adversely towards ISA, which is
plausibly deniable.
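The plausible-deniability logic behind Corollary 4 can be checked numerically. A sketch (parameter names are ours; the expected utilities follow the proof of Proposition 3): against the equilibrium hearers, the worst case under m̄ is the plain payoff 1, whereas the worst case under m1 is the loss −ε′′, so m̄ strictly dominates for any q > 0.

```python
def eu_speaker_t1(m: str, q: float, epsilon: float, eps2: float) -> float:
    """Speaker's expected utility for t1 against the equilibrium hearers.

    q: probability that the hearer is uncooperative (alpha_1), who reads
    mbar as a(t2) (speaker keeps payoff 1) but punishes the explicit m1
    with abar(t1) (speaker loses eps2, i.e. epsilon'').
    """
    good = (1 - q) * (1 + epsilon)        # cooperative hearer plays a(t1)
    return good + (q * 1.0 if m == 'mbar' else q * -eps2)

q, epsilon, eps2 = 0.5, 0.5, 1.0
print(eu_speaker_t1('mbar', q, epsilon, eps2))  # 1.25
print(eu_speaker_t1('m1',   q, epsilon, eps2))  # 0.25: ISA strictly better
```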

2.5   Model Comparison
Parikhian Game of Partial Information
The main differences between our game model and the Parikhian model (2001, 2007) are as follows. First, we
assume that t (∈ T) represents the speaker's intention, while Parikh assumes that t represents the game
situation. Second, we add the collection variable P as a quantification of the sympathy between players. P is
the higher-order belief in the speaker's intention t: more specifically, p1 (∈ P) is the hearer's first-order (prior)
belief in t, p2 is the speaker's belief about the hearer's prior belief, i.e. the speaker's second-order belief in t,
and so on. In the Parikhian model, p simply denotes a probability distribution over the situation set T. Third,
we take the idealized solution of the P-added IBR reasoning framework as the solution to the game, while
Parikh adopts equilibrium as the solution concept and solves the game through equilibrium selection, with
Pareto dominance as the selection standard.
   Compared to the Parikhian model, ours considers how sympathy may affect the use of the ISA strategy, and
it treats pragmatic reasoning as an iterated reasoning process.

Sally’s Sympathy Theory
Sally (2003) studies pragmatic phenomena such as irony and indirectness in terms of his sympathy theory. Its
core idea is that social interaction and intimacy between game players may affect the solution by influencing
their payoffs (Sally, 2000, 2001, 2002). More specifically, for players i and j, if ui⟨si, sj⟩ denotes i's payoff
independent of j, and λij designates the sympathy between i and j, then i's final payoff is ui⟨si, sj⟩ + λij uj⟨si, sj⟩.
Sally (2001, 2003) suggests that λij depends on the physical and psychological distance between players: λij is
0 or negative for enemies or strangers, and close to 1 for family or close friends. However, Sally's approach of
modeling sympathy as the degree of common interest between players does not fit signaling games. The main
reason is that signaling games involve multiple situations, which leads to multiple payoff matrices. The different
situations make equilibrium less practical as a solution concept, and thus block Sally's sympathy model.
   In our model, the collection variable P, as higher-order belief in the speaker's intention, is used to reflect
sympathy between interlocutors. We assume that people who share more sympathy know each other better and
thus have a greater chance of correctly predicting each other's beliefs.
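For concreteness, Sally's sympathy-adjusted payoff is easy to state in code (a sketch with illustrative numbers of our own):

```python
def sally_payoff(u_i: float, u_j: float, lam: float) -> float:
    """Sally's final payoff for player i: u_i + lambda_ij * u_j."""
    return u_i + lam * u_j

# The same outcome evaluated by strangers (lam = 0) and by close friends
# (lam = 0.9): sympathy shifts i's ranking toward outcomes that favor j.
print(sally_payoff(1.0, 2.0, 0.0))  # 1.0
print(sally_payoff(1.0, 2.0, 0.9))  # 2.8
```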

Franke’s IBR Model
The main differences between our P-added IBR reasoning framework and Franke's IBR model (2009) are as
follows. First, Franke assumes that the reasoning starts from the focal point of a message's semantic meaning:
the naïve speaker S0 may arbitrarily choose any true message to express her intention, and the naïve hearer
H0 reacts to her literal interpretation of an observed message. In our model, S0 plays an intentionally consistent
strategy (Definition 2) and H0 best-responds to her belief updated by the intentionally consistent interpretation
of the observed message (Definition 3). Second, Franke assumes that a player of level k + 1 gives a best response
to her unbiased belief about her opponent of level k; specifically, Nk+1 (N ∈ {S, H}) averages over all possible
actions she believes Nk may take at every step of iterated reasoning. In our model, we introduce the higher-order
belief p, which represents the probability distribution on the level-(k + 1) player's belief about the level-k
player's behavior. In other words, Franke simply assumes p = 1/2 for the corresponding p in our model.
   Compared to Franke's model, ours has stronger pragmatic explanatory power in two senses. His assumption
of semantic meaning as a focal point blocks the analysis of pragmatic phenomena in which messages are used
against their literal meaning (e.g. irony and metaphor), while our model gives up this assumption and allows
ISA to express all possible intentions consistent with the context. Moreover, unlike Franke's model, our
introduction of higher-order belief enables the model to analyze how sympathy between interlocutors affects
the use of the ISA strategy.

Mialon & Mialon’s Model
Mialon & Mialon (2013) build a signaling game model that yields analytical conditions for ISA, and they apply
it to an analysis of terseness, irony and implicit bribery. They discuss the use of the ISA strategy in cases where
successful communication benefits the hearer more than the speaker. In comparison, we assume symmetric
payoffs in the certain cooperation situation (as in Definition 1), and in the uncertain cooperation case we
consider how payoffs are affected by the plausible deniability of ISA. In addition, Mialon & Mialon distinguish
two hearer types, naïve and sophisticated, while we consider only the sophisticated hearer type. Finally and
most importantly, Mialon & Mialon adopt the traditional solution concept of PBE, while we use the idealized
solution of our P-added IBR reasoning framework, which offers a more intuitive solution as discussed above.

3     Applications
3.1   Ironical Request
We now employ the basic model to provide a systematic analysis of a typical instance of ISA, the ironical request.

Example Yesterday, my husband and I went out for lunch. I could not reach the chopstick box, so I said to
my husband: "I do like eating noodles with a spoon!"

Correspondence with the Basic Model
My husband believes that I may have one of two possible intentions: I am requesting him to pass me the
chopsticks, say t1, or I am sincerely expressing my preference for eating noodles with a spoon, say t2. When I
have t1, I may explicitly utter "Pass me the chopsticks", say m1, or ironically, "I do like eating noodles with a
spoon!", say m̄. When I have t2, I may explicitly utter "I plainly like eating noodles with a spoon", say m2, or
m̄. Obviously, m̄ has the literal meaning of m2. My husband will pass me the chopsticks when he hears m1,
i.e. he performs a(t1), and he will not pass them when he hears m2, i.e. he performs a(t2). If he hears m̄, he
may perform either a(t1) or a(t2). My husband and I love each other, and we both prefer that he understand
my real intention and act accordingly. If I speak explicitly, my husband will act according to my intention for
sure and we both gain a plain payoff, say 1. If I speak implicitly and my husband interprets me correctly, both
of us gain a better payoff, say 1 + ε, where ε > 0. The values are assigned on the following grounds: when I
want my husband to pass me the chopsticks and express this ironically, his correct interpretation makes us feel
close to each other, for he knows me well; when I just want to express my special preference and he does not
interpret it ironically, we also feel happy, for he is one of the few people who know about my preference. In
contrast, if my husband misunderstands my implicit words, neither of us is happy, and we gain nothing, say 0.

Analysis
My husband and I are so close that we share a high degree of sympathy, say λ ≈ 1. We have known each other
for a long time, so we are likely to guess each other's intentions correctly in a given context: he has a good
chance of correctly getting my intention, I have a good chance of correctly guessing that he can correctly
understand my intention, and so on. That is, our higher-order belief in my real intention is near certainty, say
p ≈ 1. Thus, according to Corollary 2, we are likely to use the ISA strategy in the case of the ironical request.

3.2    Implicit Bribery
We now employ the extended model to provide a systematic analysis of another instance of ISA, implicit bribery.
The example originally comes from Pinker et al. (2008) and Lee and Pinker (2010).

Example Bob is stopped by a police officer for running a red light. When the police officer asks him to show
his driving license, Bob takes out his wallet and says, “Gee, officer, is there some way we could take care of the
ticket here?" (Pinker et al., 2008: 833)

Correspondence with the Extended Model
Bob has never seen this police officer before, so they are totally unknown to each other. The officer guesses
that Bob may have one of two possible intentions: Bob intends to bribe him, say t1, or Bob has no intention
to bribe, say t2. Both know that if Bob bribes successfully, he will gain more than the ticket costs. Bob has
no idea whether he has been caught by an honest officer who does not accept bribery, say a type α1 officer, or
by a corrupt officer, say a type α2 officer. When Bob intends to bribe, he may offer an explicit bribe by saying
"I'll give you $50 if you let me go", say m1, or he may bribe implicitly by uttering "Gee, officer, is there some
way we could take care of the ticket here?", say m̄. When Bob does not intend to bribe, he may just say "I'm
sorry and I'll be more careful next time", say m2, or he may use m̄. On hearing m1, an honest officer will arrest
Bob for bribery, i.e. perform ā(t1), which leads to a very low payoff for Bob, say −ε′′, and a plain payoff for
the officer, say 1. A corrupt officer, however, will accept the bribe and let Bob go, i.e. perform a(t1), which
yields a good payoff for both Bob and himself, say 1 + ε. On hearing m2, both honest and corrupt officers ask
Bob to pay the ticket, i.e. perform a(t2), which gives both a plain payoff. When the honest officer hears m̄ and
interprets it as a bribe, Bob will deny it, whether or not he actually intended to bribe; this gives Bob a relatively
low payoff for the effort of denial, say ε′, and also gives the officer a relatively low payoff for the cost of accepting
the denial. If the honest officer interprets m̄ as non-bribery, he simply asks Bob to pay the ticket, which results
in a plain payoff for both. If the corrupt officer correctly interprets m̄ as a bribe, he accepts it, which results in
a good payoff for both. If he misreads a non-bribe m̄ as a bribe, then although he is ready to accept it, he gets
nothing, and both just get the plain payoff. If the corrupt officer correctly interprets m̄ as non-bribery, Bob pays
the ticket and both get the plain payoff. If the corrupt officer misreads a bribe m̄ as non-bribery, both gain less,
for they lose the chance of getting more, say they get 0.

Analysis
Bob and the police officer do not know each other, so given Bob's intention to bribe, he is not certain whether
the officer will cooperate or not. According to Corollary 4, Bob will play the ISA strategy when he intends to
bribe, and he will not play it when he does not.

4     An Evolutionary View
We now develop our model from an evolutionary perspective: we combine it with Lewisian convention theory
to analyze how conventional ISA evolves.

4.1    Analysis of Convention
Lewis (1969) gives a game-theoretic explanation of convention:

          A regularity R in the behavior of members of a population P when they are agents in a recurrent
      situation S is a convention if and only if it is true that, and it is common knowledge in P that, in any
      instance of S among members of P ,
      (1) everyone conforms to R;
      (2) everyone expects everyone else to conform to R;
      (3) everyone has approximately the same preferences regarding all possible combinations of actions;
      (4) everyone prefers that everyone conform to R, on condition that at least all but one conforms to R;




       (5) everyone would prefer that everyone conform to R′ , on condition that at least all but one conforms
           to R′ ,
      where R′ is some possible regularity in the behavior of members of P in S, such that no one in any
      instance of S among members of P could conform both to R′ and to R. (Lewis, 1969:76)

   The Lewisian definition of convention suggests that the formation of a convention originates in people's
expectations towards each other and in reasoning driven by their own preferences. Lewis proposes that these
expectations come from precedent: if, in previous cases, people respect a regularity of expressing some intention
by a specific message, and they expect that others prefer to conform to this regularity with the same expectation
as they themselves have, then they are prone to keep conforming to this regularity in order to maximize their
common interest.
   However, Lewis does not explain where precedent comes from. We propose that even this precedent comes
from people's rationality and their beliefs about rationality. In the process of iterated reasoning, people as a
group evolve towards idealized rationality. Our P-added IBR reasoning framework, which starts from the
intentionally consistent use and interpretation of messages, offers an approach to modeling the formation of
precedent. Figure 4 shows the schema of how convention forms from the evolutionary perspective of our model
combined with Lewisian convention theory.




                             Figure 4: Schema of the formation of language convention.
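As a toy illustration of this schema (a simulation sketch under our own assumptions, not part of the paper's formal model): if speakers imitate the better-earning strategy for t1 in the extended game, the ISA strategy m̄ spreads until it is the regularity everyone expects, i.e. a precedent in Lewis's sense.

```python
def evolve_isa(rounds: int = 2000, q: float = 0.5, epsilon: float = 0.5,
               eps2: float = 1.0, share_mbar: float = 0.1) -> float:
    """Replicator-style dynamics for s(t1) in the extended game (toy model)."""
    # Expected payoffs against the equilibrium hearers (proof of Prop. 3):
    u_mbar = q * 1.0 + (1 - q) * (1 + epsilon)
    u_m1 = q * -eps2 + (1 - q) * (1 + epsilon)
    for _ in range(rounds):
        # The better-earning strategy gains adherents round by round.
        share_mbar += 0.01 * share_mbar * (1 - share_mbar) * (u_mbar - u_m1)
        share_mbar = min(max(share_mbar, 0.0), 1.0)
    return share_mbar

print(round(evolve_isa(), 3))  # -> 1.0: mbar becomes the population regularity
```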


4.2     Predictions
In the case of conventional ISA, people generally use it without considering its literal meaning. For instance,
when I say "Can you pass the salt", you take it as a request without reasoning about whether I am asking about
your ability to pass the salt. We predict that the formation of conventional ISA use has the following rationale:
in communication games, people follow a reasoning pattern that can be modeled by our P-added IBR reasoning
framework; after repeated play, their strategies gradually evolve towards the model's idealized solution, which
gives systematic conditions for the use of ISA under ideal rationality; once the solution becomes a precedent,
it sustains a self-perpetuating process in subsequent games, and the corresponding ISA strategy becomes a
convention. Corollary 2 and Corollary 4 come from the idealized solutions to the game of basic model and the
game of extended model, respectively. The following predictions follow immediately from those corollaries:

    (I) The use of non-conventional ISA under certain cooperation relies on the sympathy between interlocutors,
        which blocks its evolution towards conventional ISA.

    (II) In uncertain cooperation situations, people are more likely to use ISA, which helps its conventionalization.

5     Summary And Future Work
In this paper we develop a game-theoretic model to analyze the rationale of ISA. The model provides analytical
conditions for the use of ISA and predicts the conventionalization of ISA from an evolutionary perspective. We
propose that in situations of certain cooperation, interlocutors who share more sympathy are more likely to use
ISA, while in uncertain cooperation situations, people are more likely to use ISA for its plausible deniability.
We apply our model to the analysis of typical instances of non-conventional ISA, namely ironical request and
implicit bribery. The solution of our model predicts that ISA used under uncertain cooperation (e.g. implicit
bribery) is more likely to be conventionalized than ISA used under certain cooperation, because the latter
relies on the interlocutors' sympathy.
   Our model can be further developed in at least three ways. First, it would be interesting to compare our
predictions with results from corpus studies of ISA. Second, it might be fruitful to test, in the area of
neuroscience, our assumption that the use of ISA has something to do with rationality; for instance, fMRI
experiments could be designed to test whether the neuroanatomical regions related to decision making are
activated during the processing of ISA. Third, it might be worthwhile to explore computer simulations of our
model within artificial intelligence research.

Appendix
Proof of Proposition 1
First look at the S0-sequence. Given Definition 2,

$$S_0 = \begin{cases} s(t_1) = m_1, \bar{m} \\ s(t_2) = m_2, \bar{m} \end{cases}$$
   Given (1), µ1(t1|m̄) = p1 and µ1(t2|m̄) = 1 − p1. Given (2), EU_H1(a(t1)|m̄) = p1 × (1 + ε) and
EU_H1(a(t2)|m̄) = (1 − p1) × (1 + ε). Given (3),

$$H_1 = \begin{cases} \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_1)\} & \text{if } p_1 > \frac{1}{2} \\ \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_2)\} & \text{if } p_1 < \frac{1}{2} \end{cases}$$
    Let p2 = p(p1 > 1/2). Given (4), EU_S2(m̄|t1) = p2 × (1 + ε) and EU_S2(m̄|t2) = (1 − p2) × (1 + ε), while
either direct message yields payoff 1; hence m̄ is preferred under t1 iff p2 > 1/(1 + ε) and under t2 iff
p2 < ε/(1 + ε). Given (5),

$$S_2 = \begin{cases} \{s(t_1) = \bar{m},\ s(t_2) = m_2\} & \text{if } p_2(t_1) > \frac{1}{1+\varepsilon} \\ \{s(t_1) = m_1,\ s(t_2) = \bar{m}\} & \text{if } p_2(t_1) < \frac{\varepsilon}{1+\varepsilon} \\ \{s(t_1) = m_1,\ s(t_2) = m_2\} & \text{if } \frac{\varepsilon}{1+\varepsilon} < p_2(t_1) < \frac{1}{1+\varepsilon} \end{cases}$$
    Let $p_3 = p\left(\frac{p\left(p_2(t_1) > \frac{1}{1+\varepsilon}\right)}{p\left(p_2(t_1) < \frac{\varepsilon}{1+\varepsilon}\right)} > 1\right)$. Given (2) and (3),

$$H_3 = \begin{cases} \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_1)\} & \text{if } p_3 > \frac{1}{2} \\ \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_2)\} & \text{if } p_3 < \frac{1}{2} \end{cases}$$
   Notably, the S0-sequence starts repeating from H3. Then H3 = H∗ and S2 = S∗. Similarly, the H0-sequence
leads to the same solution. □
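The thresholds in this proof are easy to verify numerically (a brute-force sketch with our own names):

```python
def s2_choice(t: int, p2: float, eps: float) -> str:
    """S2's best response in the basic game, per equations (4)-(5)."""
    p_match = p2 if t == 1 else 1 - p2       # chance mbar is read as a(t)
    if p_match * (1 + eps) > 1.0:
        return 'mbar'
    return 'm1' if t == 1 else 'm2'

eps = 0.5
for p2 in (0.1, 0.5, 0.9):
    assert (s2_choice(1, p2, eps) == 'mbar') == (p2 > 1 / (1 + eps))
    assert (s2_choice(2, p2, eps) == 'mbar') == (p2 < eps / (1 + eps))
print('thresholds of Proposition 1 confirmed for eps =', eps)
```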

Proof of Proposition 3
First look at the S0-sequence. Given Definition 2,

$$S_0 = \begin{cases} s(t_1) = m_1, \bar{m} \\ s(t_2) = m_2, \bar{m} \end{cases}$$




   Given (1), α1(µ1(t1|m̄)) = α1(µ1(t2|m̄)) = 1/2 and α2(µ1(t1|m̄)) = α2(µ1(t2|m̄)) = 1/2. Given (2),
EU_α1(H1)(ā(t1)|m̄) = 0, EU_α1(H1)(a(t2)|m̄) = 1, EU_α2(H1)(a(t1)|m̄) = (1 + ε)/2 and EU_α2(H1)(a(t2)|m̄) = 1/2.
Given (3),

$$\alpha_1(H_1) = \{h(m_1) = \bar{a}(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_2)\},$$
$$\alpha_2(H_1) = \{h(m_1) = a(t_1),\ h(m_2) = a(t_2),\ h(\bar{m}) = a(t_1)\}.$$
   Given (4), EU_S2(s(t1) = m̄) = q + (1 − q) × (1 + ε), EU_S2(s(t1) = m1) = q × (−ε′′) + (1 − q) × (1 + ε),
EU_S2(s(t2) = m̄) = q and EU_S2(s(t2) = m2) = 1. Since 1 > −ε′′ and 1 > q, given (5),

$$S_2 = \begin{cases} s(t_1) = \bar{m} \\ s(t_2) = m_2 \end{cases}$$
   Obviously, the S0-sequence starts repeating from S2. Then H1 = H∗ and S2 = S∗. Similarly, the H0-sequence
leads to the same solution. □

References
[Aus62] J. L. Austin. How to Do Things with Words– 2nd Edition. Harvard University Press, 1962.

[BB14] A. Blume and O. Board. Intentional vagueness. Erkenn, 79(4 Supplement):855–899, 2014.

[BL87] P. Brown and S. C. Levinson. Politeness: Some Universals in Language Usage. Cambridge University
       Press, 1987.

[Cla96] H. H. Clark. Using Language. Cambridge University Press, 1996.

[Fra09] M. Franke. Signal to Act: Game Theory in Pragmatics. PhD thesis, Institute for Logic, Language and
        Computation, University of Amsterdam, 2009.

[Gri75] H. P. Grice. Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics – Volume
        3 / Speech Acts. Academic Press, 1975.

[Jäg08] G. Jäger. Applications of game theory in linguistics. Language and Linguistics Compass, 2(3):406–421,
        2008.

[LP10] J. J. Lee and S. Pinker. Rationales for indirect speech: The theory of the strategic speaker. Psychological
       Review, 117(3):785–807, 2010.

[Lew69] D. Lewis. Convention: A Philosophical Study. Harvard University Press, 1969.

[MM13] H. M. Mialon and S. H. Mialon. Go figure: The strategy of nonliteral speech. American Economic
       Journal Microeconomics, 5(2):186–212, 2013.

[Par01] P. Parikh. The Use of Language. CSLI Publications, 2001.

[Par07] P. Parikh. Situations, rules, and conventional meaning: Some uses of games of partial information.
        Journal of Pragmatics, 39(5):917–933, 2007.

[PNL08] S. Pinker, M. A. Nowak and J. J. Lee. The logic of indirect speech. Proceedings of the National
       Academy of Sciences, 105(3):833–838, 2008.

[Sal01] D. Sally. On sympathy and games. Journal of Economic Behavior and Organization, 44(1):1–30, 2001.

[Sal03] D. Sally. Risky speech: Behavioral game theory and pragmatics. Journal of Pragmatics, 35(8):1223–
        1245, 2003.

[Sea69] J. R. Searle. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, 1969.




[Sea75] J. R. Searle. Indirect speech acts. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics – Volume
        3 / Speech Acts. Academic Press, 1975.
[Ter11] M. Terkourafi. Why direct speech is not a natural default: Rejoinder to Steven Pinker’s “Indirect
        speech, politeness, deniability, and relationship negotiation”. Journal of Pragmatics, 43(11):2869–2871,
        2011.

[Roo03] R. van Rooij. Being polite is a handicap: Towards a game theoretical analysis of polite linguistic
        behavior. In Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge.
        Los Angeles, 2003.
[Roo04] R. van Rooij. Signalling games select Horn strategies. Linguistics and Philosophy, 27(4):493–527, 2004.

[Roo08] R. van Rooij. Games and quantity implicatures. Journal of Economic Methodology, 15(3):261–274,
        2008.

[Wit53] L. Wittgenstein. Philosophical Investigations. Blackwell, 1953.



