<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Logic for Social Influence through Communication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zoe Christoff</string-name>
          <email>zoe.christoff@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute for Logic, Language and Computation, University of Amsterdam</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>We propose a two-dimensional "social network plausibility framework" to model doxastic influence through communication in a social network. To do so, we combine two approaches: on the one hand, a hybrid logic setting to model the social network itself (who is related to whom), and on the other hand, dynamic epistemic logic to model the distribution of beliefs among agents (who believes what) and the belief changes induced by communication events (what is said to whom, and how the hearers revise their beliefs). Combining both, we show how to design particular communication protocols in this new framework to represent some level of social doxastic influence, assuming that the communicating agents are sincere and trust each other.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Agents involved in a social network typically interact with the agents they are
related to. By exchanging information, they influence those related agents and
are influenced by them. If we consider the example of online social networks,
agents communicate mainly with their "friends" or "followers". The same seems
to apply to other examples of social relationships, such as being colleagues or
family members: people are influenced by the people they interact with, and the
structure of who can interact with whom specifies a social network structure. In
other words, agents who communicate are typically related by some social
relationship (and hence are part of a social network structure), and agents in a social
network typically communicate with the agents they are related to according to
the structure of that network. Therefore, it seems quite natural to try and
combine explicitly a social network structure with communication events in a
unique framework to model the effects of social influence.</p>
      <p>
        We propose a logical setting in which different scenarios of "social influence
via communication" can be modeled. We are interested in reasoning about the
doxastic states of agents in a social network and focus in particular on the
communication protocols that can express different levels of social doxastic influence.
Formally, we use the tools and techniques of dynamic epistemic logic1, combining
1 See for instance [
        <xref ref-type="bibr" rid="ref1 ref7">1, 7</xref>
        ].
the work of Baltag and Smets on communication protocols for belief merge [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]
with the work of Seligman, Liu and Girard on modeling social influence and peer
pressure effects [8-10].
      </p>
      <p>
        First, we build on the hybrid logic framework of Liu, Seligman and Girard [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
in the "Facebook logic" style of [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This setting combines a static social network
model with an influence operator to represent how belief states change in a
community according to the following peer pressure principle: every agent tends
to align her beliefs with the ones of her friends. It is assumed that there are
two situations in which an agent is pressured into changing her belief state. The
first one is strong influence, the situation of maximal pressure to align, where
all of my friends believe that ϕ, leading me to revise my beliefs so that I also
believe that ϕ (after the successful revision with ϕ). The second one is weak
influence, defined as follows: whenever I believe that ϕ but none of my friends
believes that ϕ and some of my friends even believe that ¬ϕ, I (successfully)
contract my belief in ϕ. Moreover, while I am being influenced by my friends'
doxastic states, my friends are being influenced by mine, so that everybody
is influenced by their friends' opinions all the time and at the same time. An
important simplifying feature of this framework is that agents are influenced
directly by their friends' beliefs, which corresponds to assuming that friends
have access to each other's mental states, that they are in a sense transparent
to each other. Given this definition of influence, it can be shown that in some
configurations all agents will keep switching their opinions with their friends
forever, while the other configurations will always reach a stable state at some
point, after a finite number of repetitions of the influence operator. The language
of the framework allows one to characterize the stable configurations and the ones
which will stabilize. A sufficient (but non-necessary) condition for stability is
that the beliefs of everybody in the community are identical: once all friends
agree, nothing changes anymore, since there is no pressure to align anymore.
      </p>
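      <p>The peer-pressure dynamics just described can also be sketched computationally. The following simulation is our own minimal illustration, not the paper's formal semantics: each agent's attitude towards a fixed sentence ϕ is one of three states, all agents are influenced by all their friends synchronously, and we iterate until the profile either reaches a fixed point or revisits an earlier configuration.</p>
      <preformat>
```python
# A minimal illustrative simulation (our own sketch, not the paper's formal
# semantics) of the synchronous peer-pressure dynamics. Each agent's attitude
# towards a fixed sentence phi is 'B' (believes phi), 'Bn' (believes not-phi)
# or 'U' (undecided); everybody influences their friends while being
# influenced by them, all at the same time.

def step(beliefs, friends):
    """One synchronous round of strong and weak influence."""
    new = {}
    for agent, state in beliefs.items():
        fr = [beliefs[f] for f in friends[agent]]
        if fr and all(s == 'B' for s in fr):
            new[agent] = 'B'    # strong influence: all friends believe phi
        elif fr and all(s == 'Bn' for s in fr):
            new[agent] = 'Bn'   # strong influence with not-phi
        elif state == 'B' and 'B' not in fr and 'Bn' in fr:
            new[agent] = 'U'    # weak influence: contract belief in phi
        elif state == 'Bn' and 'Bn' not in fr and 'B' in fr:
            new[agent] = 'U'    # weak influence, symmetric case
        else:
            new[agent] = state
    return new

def run(beliefs, friends, max_rounds=50):
    """Iterate until a stable state (fixed point) or a repeating cycle."""
    seen = [dict(beliefs)]
    for _ in range(max_rounds):
        beliefs = step(beliefs, friends)
        if beliefs == seen[-1]:
            return beliefs, 'stable'
        if beliefs in seen:
            return beliefs, 'cycle'
        seen.append(dict(beliefs))
    return beliefs, 'undetermined'

# Two friends with opposite beliefs keep switching their opinions forever ...
_, verdict_cycle = run({'a': 'B', 'b': 'Bn'}, {'a': ['b'], 'b': ['a']})
# ... while a community that already agrees never changes again.
profile, verdict_stable = run({'a': 'B', 'b': 'B'}, {'a': ['b'], 'b': ['a']})
```
      </preformat>
      <p>The two example runs illustrate both behaviours mentioned above: a two-agent disagreement oscillates forever, while unanimity is a stable state.</p>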
      <p>
        Second, we adopt the perspective of Baltag and Smets in [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ], investigating
how a group of agents has to communicate in order to reach a state in which all
agents "agree" on all their plausibility orderings, i.e., a state in which they have
completely merged their opinions, in a way which reflects the relative importance
of each agent. For instance, when an agent publicly announces a sentence which
she believes to be true, she may convince the others, i.e., she may influence
them into revising their beliefs with the announced sentence. The central idea
of the belief merge protocols is that agents speak in turn, according to a
given rank of expertise, and announce to all the other agents all of what they
privately believe to be the case. If the hearers trust the speaker enough, they
will be influenced into revising their beliefs with the announced sentences, i.e.,
they will come to agree on what has been announced so far. This process can
continue until a stable state is reached, a state in which nothing that any agent
believes would change anything anymore if it were announced to the others.
The reachability of such a globally agreed stable state depends on how much
the agents trust each other (do they revise with the announced sentences, and
if so, how exactly?), on their sincerity (do they announce only sentences that
they actually believe to be true?), and on the exhaustivity of the communication
process (do they announce all non-redundant sentences which they believe?).2
      </p>
      <p>Both of the above-mentioned logical frameworks are concerned with
representing the belief revision induced by what the other agents believe and with the
reachability of a certain type of stable state. Moreover, in both settings, once
all agents of a group agree, nothing changes anymore. However, while the first
setting assumes direct (i.e., without explicit communication), bilateral and
synchronic influence between all related agents, the second one assumes unilateral
and diachronic influence on all agents through sequential public communication
events. Moreover, while only the first setting allows one to model agents and the
social network explicitly, only the second setting allows one to model beliefs and belief
revision explicitly in terms of their underlying plausibility structures. We aspire
to design a framework which can incorporate both aspects.</p>
      <p>
        Combining both approaches, we propose a unified general "social network
plausibility framework" in which the agents, their plausibility orderings and their
social relationships are modeled explicitly, and on which different communication
protocols can be defined to represent different types of belief change under peer
pressure. In the next sections, we introduce the logical tools needed to build
this new framework. In section 2 we introduce the two-dimensional social network
plausibility static framework. In section 3 we add the dynamics to our framework
and show how the notion of strong influence from [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] can be redefined as a
particular case of communication protocols in our framework, assuming a certain
degree of trust between friends, exhaustivity of the communication, and success
of the revision process.
      </p>
      <p>2 A two-dimensional social network plausibility framework</p>
      <p>
        Our formal setting consists of two dimensions: a doxastic dimension and a social
network dimension. To model the beliefs of agents, we follow [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]: we include in
our models (epistemically) possible worlds and a subjective plausibility ordering
relative to each agent, and we include in our language the modal operator B for an
agent's (simple) belief, defined as truth in all the states that the agent considers
to be the most plausible.3 To model the social relationships of the agents we
2 [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] investigate which communication protocols lead the agents to merge different
doxastic attitudes, and how they can merge their entire plausibility orderings. In
this paper, we restrict ourselves to the case of belief merge, since belief is the
only attitude considered in the influence setting of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
3 For the time being we limit ourselves to considering only simple belief. In future
work, we will also consider other doxastic attitudes from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], namely conditional
belief B^ψ: belief under the condition that ψ; safe belief (or "defeasible knowledge") □:
belief stable under revision with any new true information; strong (or "robust")
belief Sb: belief stable under revision with any (true or false) new information which
is not known to contradict it; and irrevocable knowledge K.
follow [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]: we represent agents and their social relationships explicitly in our
model. To express things about this social dimension in an indexical way, our
language contains a modality F quantifying over friends, reading "all of my
friends", nominals (each nominal is true at exactly one agent) as rigid designators,
and the operator @n, which switches the evaluation point to the unique agent
satisfying the nominal n.
      </p>
      <p>Definition 1 (Syntax). The social network plausibility static language is the
following, where p ∈ Φ is an atomic proposition and n ∈ N is an agent nominal:
ϕ ::= p | n | ¬ϕ | (ϕ ∧ ϕ) | F ϕ | @n ϕ | Bϕ</p>
      <p>Our two-dimensional models are simply the result of embedding one
dimension into the other: a social network plausibility model is a (finite,
multi-agent, pointed) plausibility model in which a social network frame is associated
to each possible state.</p>
      <p>Definition 2 (Social network plausibility model). A social network
plausibility model is a tuple M = (S, A, (≤a)a∈A, ∥·∥, s0, (≍s)s∈S ), such that:
– S is a (finite) set of possible states,
– A is a (finite) set of agents,
– ≤a ⊆ S × S is a locally connected preorder, interpreted as the subjective
plausibility relation of agent a, for each agent a ∈ A,
– s0 ∈ S is a designated state, interpreted as the actual state,
– ≍s ⊆ A × A is an irreflexive and symmetric relation, interpreted as friendship,
for each state s ∈ S,
– ∥·∥ : N ∪ Φ → P(S × A) is a valuation, assigning a set ∥p∥ ⊆ S × A to
every element p of some given set Φ of "atomic sentences" and assigning a
set ∥n∥ = S × {a} for some unique a ∈ A to every element n of some given
set N of "nominals".</p>
      <p>
        We inherit from [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ] an indexical semantics where every formula is
evaluated both at a state w ∈ S and at an agent a ∈ A. For instance, assuming
that p means "I am blonde", then BF p means "I believe that all my friends are
blonde" and F Bp means "all of my friends believe that they are blonde".
Definition 3 (Semantic clauses). Let M = (S, A, (≤a)a∈A, ∥·∥, s0, (≍s)s∈S ),
a, b ∈ A, w, v ∈ S, p ∈ Φ and n ∈ N . We denote by n the unique agent at which
the nominal n holds, by s(a) the comparability class of state s relative to agent
a: for t ∈ S, t ∈ s(a) iff s ≤a t or t ≤a s, and we use the abbreviation besta ϕ
to denote the most plausible ϕ-states according to a: besta ϕ := {s ∈ ∥ϕ∥ : t ≤a
s for all t ∈ ∥ϕ∥}.
      </p>
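      <p>As an illustration, the clauses of Definition 3 can be prototyped directly. The toy model and all names below are our own illustrative assumptions, not part of the paper: formulas are encoded as nested tuples, and each plausibility preorder is given as a set of pairs (s, t) read as "t is at least as plausible as s for this agent", so that besta w(a) comes out as the set of maximal states in a's comparability class of w.</p>
      <preformat>
```python
# An executable sketch of the indexical semantics (Definition 3) on a toy
# model; the model, the tuple encoding of formulas and all names here are
# illustrative assumptions, not part of the paper.

def holds(M, w, a, f):
    """M, w, a |= f for formulas built from atoms, nominals, not, and, F, @, B."""
    op = f[0]
    if op == 'atom':                       # p: true at the pair (state, agent)
        return f[1] in M['val'].get((w, a), set())
    if op == 'nom':                        # n: true exactly at the named agent
        return M['nominal'][f[1]] == a
    if op == 'not':
        return not holds(M, w, a, f[1])
    if op == 'and':
        return holds(M, w, a, f[1]) and holds(M, w, a, f[2])
    if op == 'F':                          # "all of my friends ..."
        return all(holds(M, w, b, f[1])
                   for b in M['agents'] if (a, b) in M['friends'][w])
    if op == 'at':                         # @n: jump to the named agent
        return holds(M, w, M['nominal'][f[1]], f[2])
    if op == 'B':                          # truth at a's most plausible states
        leq = M['leq'][a]
        cell = {t for t in M['states'] if (w, t) in leq or (t, w) in leq}
        best = {s for s in cell if all((t, s) in leq for t in cell)}
        return all(holds(M, v, a, f[1]) for v in best)
    raise ValueError(f)

# Toy model: two states w, v; agents a, b; both agents find v most plausible,
# and p ("I am blonde") holds at v for both agents.
M = {'states': {'w', 'v'},
     'agents': {'a', 'b'},
     'leq': {x: {('w', 'w'), ('v', 'v'), ('w', 'v')} for x in ('a', 'b')},
     'val': {('v', 'a'): {'p'}, ('v', 'b'): {'p'}},
     'friends': {'w': {('a', 'b'), ('b', 'a')}, 'v': {('a', 'b'), ('b', 'a')}},
     'nominal': {'na': 'a'}}

Bp = ('B', ('atom', 'p'))
# At (w, a): a believes p, and all of a's friends believe p, i.e. F Bp holds.
```
      </preformat>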
      <p>M, w, a ⊨ p iff ⟨w, a⟩ ∈ ∥p∥
M, w, a ⊨ n iff ⟨w, a⟩ ∈ ∥n∥ iff a = n
M, w, a ⊨ ¬ϕ iff M, w, a ⊭ ϕ
M, w, a ⊨ ϕ ∧ ψ iff M, w, a ⊨ ϕ and M, w, a ⊨ ψ
M, w, a ⊨ F ϕ iff M, w, b ⊨ ϕ for all b such that a ≍w b
M, w, a ⊨ @n ϕ iff M, w, n ⊨ ϕ
M, w, a ⊨ Bϕ iff M, v, a ⊨ ϕ for all v ∈ S such that v ∈ besta w(a)</p>
      <p>[Figure: two example states, w and v, of a social network plausibility model over the agents a, b, c, d, indicating at which agents the atomic sentence p holds in each state.]</p>
      <p>
Let us now consider how models change under social influence through
communication. We start by imposing some simplifying assumptions. First, only friends
communicate.4 What change is induced by communication between friends? In
general, depending on how much an agent trusts the source of new information,
she can transform her belief state in different ways.5 We assume that friends trust
their friends: whatever any of my friends announces, I revise my beliefs with it.6
More precisely, we assume that friends trust their friends strongly enough to
perform a radical revision with any formula ϕ announced. The operation on
plausibility models corresponding to this strong level of trust is radical upgrade
or "lexicographic upgrade" ⇑ ϕ [
        <xref ref-type="bibr" rid="ref2 ref5">2, 5</xref>
        ], which promotes all ∥ϕ∥-worlds so that they
become more plausible than all ∥¬ϕ∥-worlds (in each of the agent's information
cells), while keeping everything else the same. We define the operation on social
network plausibility models resulting from public communication in the obvious
way: each agent revises her plausibility ordering (within each information cell)
and everything else stays unchanged.
4 To simplify, we consider here only cases of public communication, since these are
the cases considered in the protocols proposed in [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. This is only a starting point:
we will later relax this assumption and introduce a distinction between the insiders
of some private communication (the friends of the announcer) and the outsiders
(everybody else).
5 See [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] for the definition of different types of upgrades corresponding to different
levels of trust.
6 Unlike [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], we do not restrict our setting of influence to formulas for which revision is
successful.
      </p>
      <p>Definition 4 (Joint radical upgrade). ⇑ ϕ is a model transformer which
takes as input M = (S, A, (≤a)a∈A, ∥·∥, s0, (≍s)s∈S ) and outputs M′ = (S, A, (≤′a)a∈A,
∥·∥, s0, (≍s)s∈S ) such that:
s ≤′a t iff either (s, t ∉ ∥ϕ∥ and s ≤a t) or (s, t ∈ ∥ϕ∥ and s ≤a t) or (t ∈
s(a) and s ∉ ∥ϕ∥ and t ∈ ∥ϕ∥).</p>
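      <p>For a single agent, Definition 4 can be rendered as follows. We again encode the preorder as a set of pairs (s, t), read "t is at least as plausible as s", and phi_states stands in for the restriction of ∥ϕ∥ to that agent; this encoding and all names are our own illustrative assumptions.</p>
      <preformat>
```python
# An illustrative single-agent rendering of joint radical upgrade
# (Definition 4): within each comparability class, all phi-states are promoted
# above all non-phi-states, and the old order is kept inside each zone.
# 'leq' encodes the preorder as pairs (s, t), read "t is at least as plausible
# as s"; 'phi_states' plays the role of the truth set of phi for this agent.

def radical_upgrade(leq, states, phi_states):
    new = set()
    for s in states:
        for t in states:
            same_cell = (s, t) in leq or (t, s) in leq    # t is in s's class
            if not same_cell:
                continue                                  # upgrade is cell-local
            s_phi, t_phi = s in phi_states, t in phi_states
            keep_old = (s_phi == t_phi) and (s, t) in leq # both zones keep order
            promote = (not s_phi) and t_phi               # phi beats non-phi
            if keep_old or promote:
                new.add((s, t))
    return new

# Three totally ordered states, u most plausible; after upgrading with
# phi = {w}, w becomes the single most plausible state.
states = {'w', 'v', 'u'}
leq = {('w', 'w'), ('v', 'v'), ('u', 'u'), ('w', 'v'), ('v', 'u'), ('w', 'u')}
upgraded = radical_upgrade(leq, states, {'w'})
best = {s for s in states if all((t, s) in upgraded for t in states)}
```
      </preformat>
      <p>Note that the relative order of the non-ϕ states (v below u) survives the upgrade, exactly as the definition requires.</p>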
      <p>
        How do agents with such a level of trust have to communicate to reach a state
where they all have the same beliefs? The assumption in [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] that agents speak
in turn, given some expertise ranking, allows one to define the following lexicographic
belief merge protocol: first, the agent with the highest rank announces that ϕ,
for every (non-equivalent) ϕ that she believes. Then, the agent with the second
highest rank does the same, and so on, until a state of global agreement is
reached, i.e., a state such that nothing any agent could sincerely announce would
change anything anymore.7
      </p>
      </p>
      <p>We adapt this protocol to accommodate the indexicality of our setting, i.e.,
we make sure that when an agent a announces ϕ, the hearers revise with @aϕ
and not directly with ϕ. This seems to reflect in a natural way the indexicality
of real-life communication. Note also that this protocol trivially requires that all
agents are friends with each other.</p>
      <p>Definition 5 (Belief lexicographic merge indexical protocol).
ρa := ∏{⇑ @aϕ : ∥@aϕ∥ ⊆ S × A such that M, w, a |= Bϕ}</p>
      <p>etc., for all c ∈ A,
where ∏ is a sequential composition operator and M[a] is the new model after all
agents have performed a radical upgrade with each formula announced by a.</p>
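      <p>To convey the flavour of the protocol, here is a simplified propositional simulation, which is our own construction rather than the paper's formal protocol: announcements are restricted to "located" atomic facts of the form (agent, property), which already play the role of @a-formulas; agents speak in expertise order and sincerely announce every such fact they believe; and every hearer performs a radical upgrade, here rendered on ranked levels of worlds.</p>
      <preformat>
```python
# A simplified propositional sketch (our own construction) of the indexical
# belief merge protocol: agents speak in expertise order, sincerely announce
# every "located" fact (agent, property) they believe, i.e. the analogue of an
# @a-formula, and all hearers perform a radical upgrade. Plausibility is a
# list of levels of worlds, most plausible level last; worlds are frozensets
# of the facts true in them.

W = [frozenset(),                                          # nobody is blonde
     frozenset({('a', 'blonde')}),                         # only a is blonde
     frozenset({('a', 'blonde'), ('b', 'blonde')})]        # a and b are blonde

def upgrade(levels, announced):
    """Radical upgrade: announced-worlds move above the rest, order kept inside."""
    non = [{w for w in lv if w not in announced} for lv in levels]
    yes = [{w for w in lv if w in announced} for lv in levels]
    return [lv for lv in non + yes if lv]

def believes(levels, fact):
    """Simple belief: the fact holds at every most plausible world."""
    return all(fact in w for w in levels[-1])

def run_protocol(profile, ranks, facts):
    for speaker in ranks:                          # highest expertise first
        for fact in facts:
            if believes(profile[speaker], fact):   # sincerity
                announced = {w for w in W if fact in w}
                profile = {ag: upgrade(lv, announced)
                           for ag, lv in profile.items()}
    return profile

# Agent a initially believes she is blonde; b is initially agnostic about a.
profile = {'a': [{W[0], W[2]}, {W[1]}],
           'b': [{W[1], W[2]}, {W[0]}]}
final = run_protocol(profile, ['a', 'b'], [('a', 'blonde'), ('b', 'blonde')])
```
      </preformat>
      <p>After the run, both agents believe the fact announced by a, i.e. the analogue of @ap, while neither has acquired any belief that was never announced.</p>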
      <p>
        In the remainder, we will show how social influence as given in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] can be
reinterpreted in our framework as particular communication protocols. Strong
influence (Is) is defined in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] as the situation where all of my friends believe
that ϕ: Isϕ := F Bϕ. This situation causes me to revise my beliefs with ϕ:
assuming that revision is successful, whatever my initial state was, I will come to
7 Here we consider a protocol to merge only beliefs, since belief is the only attitude
modeled in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], but [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] actually consider how to merge the entire plausibility
relations of all agents.
believe that ϕ and (assuming they did not change their minds in the meantime)
agree with my friends.
      </p>
      <p>Let us consider the simplest example of strong influence: some agent a is the
other agents' only friend, has the highest expertise rank, and believes that ϕ.
Translating this into a communication setting, a will announce that ϕ and the
others will be (strongly) influenced into revising their beliefs with ϕ, evaluated
at agent a. For instance, if a announces "I am blonde" (p), which is equivalent
to her announcing "a is blonde" (@ap), her friends come to believe that a is
blonde (@ap). They now agree on whatever was announced (they all believe
@ap). This is, therefore, at the same time, the simplest case of belief merge
and the simplest case of strong influence (as long as we only consider the case of
a's friends and ignore what happens to agent a for now). This can be represented
by a one-step version of the belief lexicographic merge indexical protocol given
above:
Definition 6 (One-to-the-others unilateral strong influence protocol).</p>
      <p>where ∏ is a sequential composition operator.</p>
      <p>Assume now that all of agent a's friends are friends with each other. Agent
a is in a state of strong influence with ϕ if and only if everybody else agrees on
believing ϕ, in which case she will revise her beliefs with ϕ. But before she revises
her beliefs, a first needs to be aware of what her friends believe. We define the
following protocol, in which, unlike in the above, agents have to announce that
they believe something whenever they did initially believe it. This will result
in a believing that all of her friends believe something if they actually did. We
still need a to revise her beliefs with ϕ. So we add the last step of the protocol,
fundamentally different from the rest, representing the reasoning
of agent a, the conclusion she reaches, announcing to herself (and thereby to her
friends) that ϕ.</p>
      <p>Definition 7 (The-others-to-one unilateral strong influence protocol).</p>
      <p>etc., for all d ∈ A such that M, w, d |= ⟨F⟩a</p>
      <p>M[b; c; ...], w, a |= BF Bϕ}
where ∏ is a sequential composition operator and M[b; c; ...] is the model resulting
from the successive revisions (by all friends) with each of the formulas announced by
each of them.8
8 This definition of strong influence has the counterintuitive consequence that if ϕ
means "I am blonde" and if all of my friends believe that they (themselves) are
blonde, I end up believing that I am blonde too. One solution is to restrict the
above protocol to formulas ϕ whose truth does not depend on the evaluation agent,
characterized by making the following valid: @aϕ ⇔ @bϕ.</p>
    </sec>
    <sec id="sec-2">
      <title>Conclusion and further research</title>
      <p>
        So far we have considered only the case in which agents are under maximal
and unilateral influence. We have ignored weaker cases where not all but only
a significant proportion of friends believe something, and we have ignored the
fact that while I am being influenced by my friends, I influence them too. This is a
crucial difference between the way the communication protocols from [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] are
defined (agents speak in turn) and the way the social influence operator from [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
is defined "globally" and synchronically (all agents influence their friends while
being influenced by them). To faithfully translate this global notion into the present
communication setting, we would need to allow all friends to have the same
expertise rank and therefore speak at the same time, and we would need to define
a corresponding "parallel" action operator. We have also restricted ourselves to
examples with public communication. In future work, we will consider private
forms of communication (public announcement restricted to the group of the
announcer's friends).
      </p>
      <p>
        As mentioned in the introduction, in the setting of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], friends are influenced
directly by each other's beliefs, as if they had direct access to other agents' minds,
as if they were "transparent" to each other. Our first goal here has been to show
that this direct influence notion corresponds to a particular case of
communication, where sincerity, trust and exhaustivity replace transparency, allowing
agents to get access to each other's mental states. However, this assumption
prevents such a framework from modeling some particular social phenomena where
I am influenced not directly by what the others actually believe, but by what I
believe that they believe, leaving some space for error or uncertainty. Our
communication setting for influence should therefore be generalized beyond sincerity
to properly distinguish between what an agent privately believes and what she
shares with others (what she announces to others, what she expresses, what she
seems to believe according to her behaviour, etc.9). In other words, in addition
to adding communication to regain a transparent doxastic influence setting as
we have started doing here, it would be interesting to focus on what agents can
typically access (observe) of each other: their behaviour, whether it reflects their
private beliefs or not.
      </p>
      <p>
        Acknowledgments I would like to thank Johan van Benthem, Jens Ulrik
Hansen, Emiliano Lorini, and Sonja Smets for their suggestions and comments
during the elaboration of this short paper. An early version of this work was
presented at the Seventh Workshop in Decisions, Games &amp; Logic, and I would
like to thank the anonymous referees of both DGL and EUMAS/LAMAS for
their valuable feedback.
9 For a dynamic hybrid framework allowing such a discrepancy between what an agent
believes and what she seems to believe, see [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>The research leading to these results has received funding from the
European Research Council under the European Community's Seventh Framework
Programme (FP7/2007-2013) / ERC Grant agreement no. 283963.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Moss</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Solecki</surname>
          </string-name>
          .
          <article-title>The logic of public announcements, common knowledge and private suspicions</article-title>
          . In
          <source>Proceedings of TARK'98 (Seventh Conference on Theoretical Aspects of Rationality and Knowledge)</source>
          , pages
          <fpage>43</fpage>
          –
          <lpage>56</lpage>
          . Morgan Kaufmann Publishers,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          .
          <article-title>A qualitative theory of dynamic interactive belief revision</article-title>
          . In G. Bonanno, W. van der Hoek, and M. Wooldridge, editors,
          <source>Logic and the Foundations of Game and Decision Theory</source>
          , volume
          <volume>3</volume>
          of Texts in Logic and Games, pages
          <fpage>9</fpage>
          –
          <lpage>58</lpage>
          . Amsterdam University Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          .
          <article-title>Protocols for belief merge: Reaching agreement via communication</article-title>
          . In
          <source>CEUR Workshop Proceedings</source>
          , volume
          <volume>494</volume>
          , pages
          <fpage>129</fpage>
          –
          <lpage>141</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          .
          <article-title>Protocols for belief merge: Reaching agreement via communication</article-title>
          .
          <source>Logic Journal of the IGPL</source>
          ,
          <volume>21</volume>
          (
          <issue>3</issue>
          ):
          <fpage>468</fpage>
          –
          <lpage>487</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>J.</given-names>
            <surname>van Benthem</surname>
          </string-name>
          .
          <article-title>Dynamic logic for belief revision</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          ,
          <volume>14</volume>
          :
          <fpage>129</fpage>
          –
          <lpage>155</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Christoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. U.</given-names>
            <surname>Hansen</surname>
          </string-name>
          .
          <article-title>A two-tiered formalization of social influence</article-title>
          . In D. Grossi,
          <string-name>
            <given-names>O.</given-names>
            <surname>Roy</surname>
          </string-name>
          , and H. Huang, editors, Logic, Rationality, and Interaction, volume
          <volume>8196</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>68</fpage>
          –
          <lpage>81</lpage>
          . Springer Berlin Heidelberg,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>H.</given-names>
            <surname>van Ditmarsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>van der Hoek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kooi</surname>
          </string-name>
          .
          <source>Dynamic Epistemic Logic</source>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liang</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Seligman</surname>
          </string-name>
          .
          <article-title>A logical model of the dynamics of peer pressure</article-title>
          .
          <source>Electronic Notes in Theoretical Computer Science</source>
          ,
          <volume>278</volume>
          (
          <issue>0</issue>
          ):
          <fpage>275</fpage>
          –
          <lpage>288</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Seligman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Girard</surname>
          </string-name>
          .
          <article-title>Logical dynamics of belief change in the community</article-title>
          .
          <source>Synthese</source>
          , Special Issue on Social Epistemology, C. Proietti and F. Zenker, editors, to appear.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>J.</given-names>
            <surname>Seligman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Girard</surname>
          </string-name>
          .
          <article-title>Logic in the community</article-title>
          . In M. Banerjee and A. Seth, editors,
          <source>Logic and Its Applications</source>
          , volume
          <volume>6521</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>178</fpage>
          –
          <lpage>188</lpage>
          . Springer Berlin Heidelberg,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>