<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Rationality and Randomness</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Università di Roma "Tor Vergata"</institution>
          ,
          <addr-line>Roma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Imagine you are sitting in a room with a large number of people attending a conference, and the organizers hand you and everyone else in the room a small piece of paper, asking each person to write down a number between 0 and 100. All the numbers will then be collected and a prize will be given to the person whose number is closest to half of the average of all the numbers. What number would you write on your piece of paper? Game theory [16] provides a powerful and elegant framework to predict the outcome of situations like the one described above. Intuitively speaking, in order to choose a number to write on her piece of paper, a person in the room could reason as follows: Since the numbers are constrained to be between 0 and 100, their average will be a number between 0 and 100 as well, so half of the average will be a number between 0 and 50; hence, if I am rational I would definitely not write a number larger than 50. Now, if everyone in the room is rational and writes a number not larger than 50, then the average itself will be not larger than 50 and half of the average will be not larger than 25; so, if I am rational and I believe that all other people in the room are rational, I would not write a number larger than 25. If everyone writes a number not larger than 25, . . . A few more steps of reasoning along these lines and we have the game-theoretic prediction of the outcome: Every person in the room writes 0 on her piece of paper. This is indeed the unique Nash equilibrium [14] of the game. However, if you try it out in any real scenario, you will see for yourself how far such a prediction is from the truth. The main reason is that it relies on the very strong assumption that rationality is common knowledge [6] among the agents (everyone is rational, everyone knows that everyone is rational, everyone knows that everyone knows that everyone is rational, and so forth).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Several branches of game theory try to relax the rationality assumption in order
to obtain more reliable predictions. For example, evolutionary game theory [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
applies the game-theoretic framework to repeated games in biological contexts
and introduces new solution concepts (evolutionarily stable strategies); behavioral
game theory [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] takes a more experimental approach: Observe the behavior
of real players and extend the theory appropriately to explain the observed
outcomes.
      </p>
      <p>
        A simple and elegant approach to modeling players with limited rationality in
repeated games is the Logit dynamics, introduced in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and inspired by
statistical mechanics: Starting from some initial strategy profile, at each round
a player is selected uniformly at random and she updates her current strategy
according to a probability distribution biased toward strategies promising higher
payoffs. More formally, let n be the number of players; for i = 1, …, n let S<sub>i</sub>
be the set of strategies for player i and let u<sub>i</sub> : S<sub>1</sub> × ⋯ × S<sub>n</sub> → ℝ be her
utility function. If x = (x<sub>1</sub>, …, x<sub>n</sub>) ∈ S<sub>1</sub> × ⋯ × S<sub>n</sub> is the current profile of
strategies and player i is selected for the update, then she will play strategy
y ∈ S<sub>i</sub> with probability proportional to e<sup>β u<sub>i</sub>(x<sub>−i</sub>, y)</sup>, where (x<sub>−i</sub>, y) is the vector
obtained from x by replacing the i-th entry x<sub>i</sub> with y and β &gt; 0 is a parameter
measuring the rationality level of the players (it is indeed easy to see that,
for β = 0, players play one of their strategies uniformly at random, i.e., they
have no rationality at all, while as β → ∞ players tend to best respond to
the current strategies of the other players, i.e., they are fully rational). Logit
dynamics defines an ergodic Markov chain over the space of strategy profiles
(more precisely, a family of ergodic Markov chains indexed by β) whose unique
stationary distribution π we proposed in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] as the equilibrium solution concept
for the game. The "prediction" of such a solution concept is thus that, for any
subset A ⊆ S<sub>1</sub> × ⋯ × S<sub>n</sub> of the state space, the fraction of time that the system
spends in A in the long run is given by its stationary probability π(A). This
is certainly a rougher prediction than that of a Nash equilibrium, but it is
likely much more realistic in several cases (much as a statement like "The next
economic crisis will happen in 2059" cannot be considered a serious prediction
today, while a statement like "We should expect at least two global economic
crises per century" is certainly weaker but also much more credible, while
still containing some non-negligible information).
      </p>
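<p>As a small illustration (a sketch of the update rule described above, not the authors' code; the function names are our own), a single logit update can be written down directly:</p>

```python
import math
import random

def logit_choice_probs(payoffs, beta):
    # Probability of each candidate strategy y of the selected player i,
    # proportional to exp(beta * u_i(x_-i, y)).
    weights = [math.exp(beta * u) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

def logit_update(payoffs, beta, rng=random):
    # Sample the selected player's new strategy from the logit distribution.
    probs = logit_choice_probs(payoffs, beta)
    return rng.choices(range(len(payoffs)), weights=probs, k=1)[0]
```

<p>For β = 0 the choice is uniform over the strategies, while for large β almost all the probability mass sits on the best response, matching the two extremes discussed above.</p>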
      <p>
        Once we model an evolving system of agents as an ergodic Markov chain, the
unique stationary distribution of the chain becomes a powerful solution concept
for the system, since it allows us to make predictions about its behavior in the
long run, regardless of the starting state. However, the strength of this solution
concept is also its weakness: The predictive appeal of the stationary distribution
essentially vanishes if the mixing time [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] (the time it takes the chain to get
close to its stationary distribution, starting from an arbitrary state) is too long with
respect to the time-scale we are interested in. In [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] we classified some classes of
strategic games with respect to the mixing time of their Logit dynamics, distinguishing
between polynomial and exponential (in the number of players) mixing. For the
games whose Logit dynamics has exponential mixing time, in order to be able to
make predictions at polynomial time-scales, in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] we introduced a generalization
of the concept of stationary distribution of a Markov chain, which we called a
metastable probability distribution.
      </p>
      <p>
        Metastability is a word that can have slightly different meanings in different
scientific disciplines. Nevertheless, all the meanings are somehow related to
evolving systems hanging around "persistent" configurations that are out of their
main equilibrium. Our definition in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] is no exception: Roughly speaking, we
define a probability distribution μ to be metastable for a Markov chain with
transition matrix P if μP is close (with respect to an appropriate distance function)
to μ. Among all possible metastable distributions, the interesting ones are those
quickly reached from some subset of the state space. As a simple example,
consider the following process: Start with a sequence of n bits (x<sub>1</sub>, …, x<sub>n</sub>) ∈ {0, 1}<sup>n</sup>;
at each round pick one index i ∈ [n] uniformly at random and replace x<sub>i</sub> with
either 0 or 1, each with probability 1/2; stop when you reach either the sequence with
all zeros, 0 = (0, …, 0), or the sequence with all ones, 1 = (1, …, 1). This is a
lazy random walk on the hypercube (see, e.g., [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) modified by making 0 and
1 absorbing states. Clearly, starting from any state x ∈ {0, 1}<sup>n</sup>, eventually the
process will end up in one of the two absorbing states. However, for every initial
state x ≠ 0, 1, the process will run for an exponential number of rounds, in
expectation, before being absorbed; can we say anything about the distribution P<sup>t</sup>(x, ·)
at a shorter time-scale? Yes, indeed. It is easy to see that, after t = Θ(n log n)
rounds, P<sup>t</sup>(x, ·) will get close to the uniform distribution U over {0, 1}<sup>n</sup>, and it will
stay close to U for an exponential number of rounds.
      </p>
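<p>A quick simulation (a sketch with our own naming) shows this metastable behavior: started anywhere, the modified walk soon looks like a uniform random point of the hypercube, with about half of its bits equal to 1, and stays that way long before absorption:</p>

```python
import random

def absorbing_hypercube_walk(x, rounds, rng):
    # Lazy walk on {0,1}^n: each round pick a uniform coordinate and
    # resample it uniformly; stop if all-zeros or all-ones is reached.
    x = list(x)
    n = len(x)
    for _ in range(rounds):
        s = sum(x)
        if s == 0 or s == n:  # absorbed
            break
        i = rng.randrange(n)
        x[i] = rng.randrange(2)
    return x

rng = random.Random(0)
n = 100
# After Theta(n log n) rounds, roughly half of the bits are 1 and the
# walk is still very far from either absorbing state.
x = absorbing_hypercube_walk([0] * (n // 2) + [1] * (n // 2), 2000, rng)
```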
      <p>
        In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] we introduced and studied metastable probability distributions as
solution concepts for strategic games. However, the usefulness of looking at the
\metastable phase" of evolving systems goes beyond the domains of game
theory and Markov chains. In the next section we briefly describe an example of a
simple dynamics whose metastable phase analysis allowed us to solve an
intriguing computational task in a distributed way [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Community detection via averaging</title>
      <p>
        Consider the following simple rules, executed synchronously, in discrete rounds,
by the nodes of an undirected graph: Each node initially holds a value (say, a
rational number) and then, at each round, it updates its value to the average
of the values held by its neighbors. It comes as no surprise that, under mild
assumptions on the graph (namely, it has to be connected and non-bipartite), all
nodes will converge to the same value in the long run. It is perhaps much less
obvious that, in its metastable regime, this simple dynamics can be effectively
used to efficiently solve the community detection problem [
        <xref ref-type="bibr" rid="ref1 ref11 ref12">12, 11, 1</xref>
        ] in a
fully-distributed way.
      </p>
      <p>Let us consider a graph formed by two good expanders connected by a sparse
cut. For the sake of concreteness and simplicity, let us assume we have a "very
regular" graph G = (V, E) with an even number n of nodes partitioned into two
blocks of equal size, V<sub>1</sub> and V<sub>2</sub>, and that each node has exactly a neighbors in its
own block and exactly b neighbors in the other block, for some positive integers
a and b with a ≫ b. Let μ<sub>1</sub> and μ<sub>2</sub> be the averages of the initial values of the
nodes in V<sub>1</sub> and V<sub>2</sub>, respectively. Intuitively speaking, by running the averaging
dynamics on such a graph the following happens: In a first phase, the values
of all nodes in V<sub>1</sub> quickly converge to a value close to μ<sub>1</sub>, while those of the nodes
in V<sub>2</sub> converge to a value close to μ<sub>2</sub>. After that initial phase, the system enters a
metastable regime in which all values slowly converge toward the global average
(μ<sub>1</sub> + μ<sub>2</sub>)/2; if the two averages μ<sub>1</sub> and μ<sub>2</sub> are different, say μ<sub>1</sub> &lt; μ<sub>2</sub>, in this
second phase the value of each node in V<sub>1</sub> increases at every round and the value
of each node in V<sub>2</sub> decreases. Thus, if we augment the averaging dynamics with
a "clustering criterion" asking each node to raise, at each round, a blue flag or a
red flag depending on whether its value has increased or decreased since the last
round, then as soon as the system enters the metastable regime, all nodes in one
of the blocks raise the red flag and all nodes in the other block raise the blue
one. We thus have a fully-distributed protocol that allows the nodes to perform
a perfect reconstruction of the two clusters of the graph.</p>
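<p>The protocol sketched above can be simulated directly. The toy construction below is our own illustrative choice (cliques inside the two blocks, so a = m − 1 intra-block neighbors and b cross-block neighbors per node, giving a ≫ b); it runs the averaging dynamics and reports each node's last flag:</p>

```python
def two_block_graph(m, b):
    # Blocks V1 = {0..m-1} and V2 = {m..2m-1}; each block is a clique, and
    # node i of V1 is also joined to nodes m+i, ..., m+((i+b-1) mod m) of V2.
    adj = {u: set() for u in range(2 * m)}
    for i in range(m):
        for j in range(i + 1, m):
            adj[i].add(j); adj[j].add(i)                    # clique on V1
            adj[m + i].add(m + j); adj[m + j].add(m + i)    # clique on V2
        for j in range(b):
            u, v = i, m + (i + j) % m                       # sparse cut
            adj[u].add(v); adj[v].add(u)
    return adj

def averaging_round(values, adj):
    # Synchronous update: every node moves to the average of its neighbors.
    return {u: sum(values[v] for v in nbrs) / len(nbrs) for u, nbrs in adj.items()}

def run_with_flags(values, adj, rounds):
    # The "clustering criterion": after each round a node flags +1 if its
    # value increased and -1 if it decreased; the last flags are returned.
    flags = {}
    for _ in range(rounds):
        new = averaging_round(values, adj)
        flags = {u: (1 if new[u] > values[u] else -1) for u in adj}
        values = new
    return flags
```

<p>With ±1 initial values whose block averages differ, after a handful of rounds the flags are constant inside each block and opposite across the two blocks, i.e., a perfect reconstruction.</p>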
      <p>To make the intuitions in the above paragraph a bit more concrete, we can use
some linear algebra: Let x<sup>(t)</sup> = (x<sub>u</sub><sup>(t)</sup> : u ∈ V) be the vector where x<sub>u</sub><sup>(t)</sup> is the
value of node u at round t. It is easy to see that the averaging dynamics is defined
by the recursion x<sup>(t+1)</sup> = P x<sup>(t)</sup>, where P = A/(a + b) and A is the adjacency
matrix of the graph. The vector of values at round t is thus x<sup>(t)</sup> = P<sup>t</sup> x<sup>(0)</sup>. Since
P is a symmetric matrix with real entries, all its eigenvalues are real. Since P is
stochastic (the entries of each row sum up to 1), all its eigenvalues are between
−1 and 1, and the vector 1 = (1 : u ∈ V), i.e., the vector with all entries equal to 1,
is an eigenvector with eigenvalue 1. Moreover, for our "very regular" graph G,
the partition indicator vector χ, taking value +1 on
all the nodes of one of the blocks and −1 on all the nodes of the other block, is
also an eigenvector, and its eigenvalue is (a − b)/(a + b). Under the assumption
a ≫ b, which formalizes the fact that the two blocks are internally well-connected
with respect to the cut between them, (a − b)/(a + b) is the second-largest
eigenvalue, λ<sub>2</sub>, of P. The vector of values at round t can thus be written as
x<sup>(t)</sup> = α<sub>1</sub> 1 + α<sub>2</sub> λ<sub>2</sub><sup>t</sup> χ + e<sup>(t)</sup>, where α<sub>1</sub> and α<sub>2</sub> are suitable coefficients depending
only on the initial values (namely, α<sub>1</sub> is the global average (μ<sub>1</sub> + μ<sub>2</sub>)/2 of the
initial values and α<sub>2</sub> is half of the difference of the initial averages, (μ<sub>1</sub> − μ<sub>2</sub>)/2)
and e<sup>(t)</sup> is a vector orthogonal to 1 and χ. When we consider the difference
between the vectors in two consecutive rounds, the component in the direction
of 1 cancels, and what is left is x<sup>(t−1)</sup> − x<sup>(t)</sup> = α<sub>2</sub> λ<sub>2</sub><sup>t−1</sup>(1 − λ<sub>2</sub>) χ + e<sup>(t−1)</sup> − e<sup>(t)</sup>.
Since the e<sup>(t)</sup>, for t = 0, 1, …, are vectors orthogonal to 1 and χ, the norm of
e<sup>(t)</sup> goes to zero at least as fast as λ<sub>3</sub><sup>t</sup>, where λ<sub>3</sub> is the third-largest
eigenvalue of P in absolute value. Hence, if the
coefficient α<sub>2</sub> is non-zero, after a number of rounds depending on 1/(λ<sub>2</sub> − λ<sub>3</sub>) it
holds that |α<sub>2</sub> λ<sub>2</sub><sup>t−1</sup>(1 − λ<sub>2</sub>)| &gt; |e<sup>(t−1)</sup>(u) − e<sup>(t)</sup>(u)| for all nodes u. This implies
that, for each node u, the sign of x<sub>u</sub><sup>(t−1)</sup> − x<sub>u</sub><sup>(t)</sup> is equal to the sign of α<sub>2</sub> χ<sub>u</sub>: In
other words, it is positive for all the nodes in one of the blocks and negative for
all the nodes in the other block. Finally, in order to ensure in a distributed way
that α<sub>2</sub> = (μ<sub>1</sub> − μ<sub>2</sub>)/2 is non-zero, we can use randomness: If each node chooses
its initial value as +1 or −1 with probability 1/2, then the two initial averages
differ by Ω(1/√n) with high probability.</p>
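<p>The spectral claims above are easy to check numerically on a concrete "very regular" graph; here, as an illustrative assumption of our own, two cliques of size m joined by a b-regular cut:</p>

```python
import numpy as np

m, b = 20, 2
a = m - 1                       # clique inside each block
n = 2 * m

A = np.zeros((n, n))
for i in range(m):
    for j in range(m):
        if i != j:
            A[i, j] = A[m + i, m + j] = 1                  # intra-block cliques
    for j in range(b):
        A[i, m + (i + j) % m] = A[m + (i + j) % m, i] = 1  # sparse cut

P = A / (a + b)                 # row-stochastic since the graph is regular
chi = np.concatenate([np.ones(m), -np.ones(m)])            # partition indicator
lam2 = (a - b) / (a + b)

# chi is an eigenvector of P with eigenvalue (a - b)/(a + b), and this is
# the second-largest eigenvalue of P.
print(np.allclose(P @ chi, lam2 * chi))                    # True
```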
    </sec>
    <sec id="sec-3">
      <title>Some final remarks</title>
      <p>Evolving systems governed by simple local rules that produce observable global
phenomena can be fruitfully studied from a computational perspective. On the
one hand, they seem suitable for modeling natural phenomena emerging in
different fields (physics, economics, biology); on the other hand, they can be used
as design principles and building blocks for lightweight distributed protocols for
relevant computational tasks.</p>
      <p>Randomness plays a fundamental role in these kinds of processes, either in a
Bayesian sense, to model the intrinsic uncertainty about the local rules executed
by the agents of the system, or as a distributed symmetry-breaking tool.</p>
      <p>The most interesting phenomena often cannot be studied by means of limiting
behaviors and equilibria, because they appear at shorter time-scales, i.e., when
systems are in their metastable phases.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Emmanuel</given-names>
            <surname>Abbe</surname>
          </string-name>
          and
          <string-name>
            <given-names>Colin</given-names>
            <surname>Sandon</surname>
          </string-name>
          .
          <article-title>Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery</article-title>
          .
          <source>In Proc. of the 56th IEEE Ann. Symp. on Foundations of Computer Science (FOCS'15)</source>
          , pages
          <fpage>670</fpage>
          –
          <fpage>688</fpage>
          . IEEE,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Vincenzo</given-names>
            <surname>Auletta</surname>
          </string-name>
          , Diodato Ferraioli, Francesco Pasquale, Paolo Penna, and
          <string-name>
            <given-names>Giuseppe</given-names>
            <surname>Persiano</surname>
          </string-name>
          .
          <article-title>Logit dynamics with concurrent updates for local interaction potential games</article-title>
          .
          <source>Algorithmica</source>
          ,
          <volume>73</volume>
          (
          <issue>3</issue>
          ):
          <volume>511</volume>
          –
          <fpage>546</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Vincenzo</given-names>
            <surname>Auletta</surname>
          </string-name>
          , Diodato Ferraioli, Francesco Pasquale, Paolo Penna, and
          <string-name>
            <given-names>Giuseppe</given-names>
            <surname>Persiano</surname>
          </string-name>
          .
          <article-title>Convergence to equilibrium of Logit dynamics for strategic games</article-title>
          .
          <source>Algorithmica</source>
          ,
          <volume>76</volume>
          (
          <issue>1</issue>
          ):
          <volume>110</volume>
          –
          <fpage>142</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Vincenzo</given-names>
            <surname>Auletta</surname>
          </string-name>
          , Diodato Ferraioli, Francesco Pasquale, and
          <string-name>
            <given-names>Giuseppe</given-names>
            <surname>Persiano</surname>
          </string-name>
          .
          <article-title>Metastability of Logit dynamics for coordination games</article-title>
          .
          <source>In Proc. of the ACM-SIAM Symp. on Discrete Algorithms (SODA'12)</source>
          , pages
          <fpage>1006</fpage>
          –
          <fpage>1024</fpage>
          .
          SIAM
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Vincenzo</given-names>
            <surname>Auletta</surname>
          </string-name>
          , Diodato Ferraioli, Francesco Pasquale, and
          <string-name>
            <given-names>Giuseppe</given-names>
            <surname>Persiano</surname>
          </string-name>
          .
          <article-title>Mixing time and stationary expected social welfare of Logit dynamics</article-title>
          .
          <source>Theory of Computing Systems</source>
          ,
          <volume>53</volume>
          (
          <issue>1</issue>
          ): 3–
          <fpage>40</fpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Robert J.</given-names>
            <surname>Aumann</surname>
          </string-name>
          .
          <article-title>Agreeing to disagree</article-title>
          .
          <source>The Annals of Statistics</source>
          , pages
          <volume>1236</volume>
          –
          <fpage>1239</fpage>
          ,
          <year>1976</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Luca</given-names>
            <surname>Becchetti</surname>
          </string-name>
          , Andrea Clementi, Emanuele Natale, Francesco Pasquale, and
          <string-name>
            <given-names>Luca</given-names>
            <surname>Trevisan</surname>
          </string-name>
          .
          <article-title>Find your place: Simple distributed algorithms for community detection</article-title>
          .
          <source>In Proc. of the ACM-SIAM Symp. on Discrete Algorithms (SODA'17)</source>
          , pages
          <fpage>940</fpage>
          –
          <fpage>959</fpage>
          .
          SIAM
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Lawrence E.</given-names>
            <surname>Blume</surname>
          </string-name>
          .
          <article-title>The statistical mechanics of strategic interaction</article-title>
          .
          <source>Games and Economic Behavior</source>
          ,
          <volume>5</volume>
          (
          <issue>3</issue>
          ):
          <volume>387</volume>
          –
          <fpage>424</fpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Colin</given-names>
            <surname>Camerer</surname>
          </string-name>
          .
          <article-title>Behavioral game theory: Experiments in strategic interaction</article-title>
          . Princeton University Press,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>Persi</given-names>
            <surname>Diaconis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ronald L.</given-names>
            <surname>Graham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John A.</given-names>
            <surname>Morrison</surname>
          </string-name>
          .
          <article-title>Asymptotic analysis of a random walk on a hypercube with many dimensions</article-title>
          .
          <source>Random Structures &amp; Algorithms</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <volume>51</volume>
          –
          <fpage>72</fpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>Santo</given-names>
            <surname>Fortunato</surname>
          </string-name>
          .
          <article-title>Community detection in graphs</article-title>
          .
          <source>Physics reports</source>
          ,
          <volume>486</volume>
          (
          <issue>3</issue>
          ):
          <volume>75</volume>
          –
          <fpage>174</fpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Michelle</given-names>
            <surname>Girvan</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mark E.J.</given-names>
            <surname>Newman</surname>
          </string-name>
          .
          <article-title>Community structure in social and biological networks</article-title>
          .
          <source>Proc. of the National Academy of Sciences</source>
          ,
          <volume>99</volume>
          (
          <issue>12</issue>
          ):
          <volume>7821</volume>
          –
          <fpage>7826</fpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>David A.</given-names>
            <surname>Levin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yuval</given-names>
            <surname>Peres</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Elizabeth L.</given-names>
            <surname>Wilmer</surname>
          </string-name>
          .
          <article-title>Markov chains and mixing times</article-title>
          .
          <source>American Mathematical Society</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. John Nash.
          <article-title>Non-cooperative games</article-title>
          .
          <source>Annals of Mathematics</source>
          , pages
          <volume>286</volume>
          –
          <fpage>295</fpage>
          ,
          <year>1951</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>J. Maynard</given-names>
            <surname>Smith</surname>
          </string-name>
          .
          <article-title>The theory of games and the evolution of animal conflicts</article-title>
          .
          <source>Journal of Theoretical Biology</source>
          ,
          <volume>47</volume>
          (
          <issue>1</issue>
          ):
          <volume>209</volume>
          –
          <fpage>221</fpage>
          ,
          <year>1974</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>John</given-names>
            <surname>von Neumann</surname>
          </string-name>
          and
          <string-name>
            <given-names>Oskar</given-names>
            <surname>Morgenstern</surname>
          </string-name>
          .
          <article-title>Theory of games and economic behavior</article-title>
          . Princeton University Press,
          <year>1944</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>