<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Convention and Innovation in Social Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eberhard Karls Universitat Tubingen</string-name>
        </contrib>
      </contrib-group>
      <abstract>
        <p>To depict the mechanisms that have enabled the emergence of semantic meaning, philosophers and researchers frequently draw on a game-theoretic model: the signaling game. In this article I argue that this model is appropriate for analyzing not only the emergence of semantic meaning, but also semantic change. In other words, signaling games might help to depict mechanisms of language change. For that purpose the signaling game will be i) combined with innovative reinforcement learning and ii) conducted repeatedly as simulation runs in a multi-agent account, where agents are arranged in social network structures: scale-free networks with small-world properties. The results give a deeper understanding of the role of environmental variables that might promote semantic change or support the stability of semantic conventions.</p>
      </abstract>
      <kwd-group>
        <kwd>signaling game</kwd>
        <kwd>reinforcement learning</kwd>
        <kwd>multi-agent account</kwd>
        <kwd>scale-free networks</kwd>
        <kwd>small-world properties</kwd>
        <kwd>mechanisms of language change</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        "What are the mechanisms that can explain the emergence of semantic meaning?" Philosophers have long been concerned with this question. Russell once said: "[w]e can hardly suppose a parliament of hitherto speechless elders meeting together and agreeing to call a cow a cow and a wolf a wolf." [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. With this sentence Russell pointed to a particular paradox of the evolution of human language: language (as a tool to make verbal agreements) is needed for language (in the form of semantic meaning) to emerge.
      </p>
      <p>
        Lewis found a very elegant solution to this paradox: he showed that semantic meaning can arise without previous agreements, merely through regularities in communicative behavior. He showed this with a game-theoretic model: the signaling game [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This game basically models a communicative situation between a speaker and a hearer, and just by playing this game repeatedly and using simple update mechanisms to adjust subsequent behavior, both participants might finally agree on semantic conventions without making an overt verbal agreement in advance [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. In other words: semantic meaning can arise automatically and "unconsciously" just by repeated communication and simple adaptation mechanisms;1 a signaling game is an elegant way to formalize these dynamics.
1 "Unconsciousness" of participants means here that they do not choose to use a specific expression for an object, but rather learn it by optimizing behavior.
      </p>
      <p>Apparently, quite similar mechanisms can be assumed for language change, or to be more precise: for semantic innovation, semantic shift and semantic loss. Just as we cannot assume that speechless elders made agreements to call a wolf a "wolf", we cannot assume that people in the 1970s made a public announcement to use the word "groovy" to express that something is really nice, and another announcement in the 1980s that people should stop using this word. Just as semantic meaning can emerge in an unconscious and automatic way, expressions arise, change their meaning, or get lost in the same way. It seems plausible that a signaling game might also be an appropriate model to explain general mechanisms of semantic change.</p>
      <p>
        A number of studies have analyzed how semantic meaning arises in realistic population structures by conducting multi-agent simulations: applying repeated signaling games between connected agents placed in social network structures, cf. lattice structures [
        <xref ref-type="bibr" rid="ref18 ref31">31, 18</xref>
        ] and small-world networks [
        <xref ref-type="bibr" rid="ref20 ref28">28, 20</xref>
        ]. Besides the signaling game, another line of research uses the so-called naming game
[
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] to analyze the emergence of semantic conventions in realistic population
structures [
        <xref ref-type="bibr" rid="ref27 ref5">27, 5</xref>
        ]. It can be shown that both accounts imply similar mechanisms and reveal similar resulting dynamics. However, as mentioned before, all these studies analyze how semantic meaning 'arises', not how it 'changes'.
      </p>
      <p>
        Another line of research uses multi-agent simulations to analyze language
change in social network structures, but without applying signaling games or
similar models of communication [
        <xref ref-type="bibr" rid="ref14 ref22 ref9">22, 14, 9</xref>
        ]. In these studies agents i) do not communicate, but just choose among (linguistic) variants they are aware of, and ii) make explicit decisions about which variant to use. Because of the first point these studies miss the quintessence of language change, namely that it happens through repeated communication. Because of the second point these studies let agents behave in a much too "conscious" way: language change is usually the result of much more unconscious decisions and hidden dynamics.
      </p>
      <p>
        In this study I use repeated signaling games in combination with an update mechanism that depicts unconscious decision-making behavior. This mechanism is called reinforcement learning [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Applying reinforcement learning to repeated signaling games is not new, but in fact one of the most popular dynamics in this field [
        <xref ref-type="bibr" rid="ref25 ref3 ref4">3, 4, 25</xref>
        ]. What is new in this study is that the account is applied to analyze the change rather than the emergence of semantic conventions. For that purpose signaling games and reinforcement learning will be employed to conduct simulation experiments with communicating agents in social network structures, with the goal of evaluating the environmental factors that might or might not support semantic change or stability.
      </p>
      <p>
        This article is structured in the following way: in Section 2 some basic notions of repeated signaling games, reinforcement learning dynamics and network theory will be introduced. Furthermore, I will discuss a noteworthy extension of reinforcement learning, called innovation [
        <xref ref-type="bibr" rid="ref1 ref25">25, 1</xref>
        ]. It can be shown that this additional feature realizes an interesting interplay between stabilizing and renewing effects
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]; I will adopt it for my experiments, which are described and analyzed in Section 3. A final conclusion will be presented in Section 4.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Signaling Games, Learning and Networks</title>
      <p>This section gives a coarse technical and theoretical background for the important concepts of this article: the signaling game, reinforcement learning with innovation, and some basic notions of network theory.</p>
      <sec id="sec-2-1">
        <title>Signaling Games</title>
        <p>A signaling game SG = ⟨{S, R}, T, M, A, Pr, U⟩ is a game played between a sender S and a receiver R. T is a set of information states, M is a set of messages and A is a set of interpretation states (or actions). Pr(t) ∈ Δ(T)2 is a probability distribution over T and describes the probability that an information state is the topic of communication. U : T × A → ℝ is a utility function that basically determines how well an interpretation state matches an information state.</p>
        <p>Let us take a look at the simplest variant of the game, with two states, two messages and two actions: T = {t1, t2}, M = {m1, m2}, A = {a1, a2}, a flat probability distribution Pr(t) = 1/|T| ∀t ∈ T, and a simple utility function that yields a positive value iff the interpretation state a matches the information state t, marked by the same index: U(ti, aj) = 1 iff i = j, else 0. Such a game is played as follows: an information state t is chosen with prior probability Pr,3 which the sender wants to communicate to the receiver by choosing a message m. The receiver tries to decode this message by choosing an interpretation state a. Communication is successful iff the information state matches the interpretation state. In this study only a subset of all possible signaling games is considered, which I call n×k-games, as defined in Definition 1.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Definition 1 (n×k-game)</title>
        <p>An n×k-game is a signaling game SG with: |T| = |A| = n, |M| = k, ∀t ∈ T : Pr(t) = 1/|T|, and U(ti, aj) = 1 if i = j, else 0.</p>
        <p>Note that messages are initially meaningless in this game, but meaningfulness can arise from regularities in behavior. Behavior is here defined in terms of strategies. A behavioral sender strategy is a function σ : T → Δ(M), and a behavioral receiver strategy is a function ρ : M → Δ(A). A behavioral strategy can be interpreted as a single agent's probabilistic choice.</p>
        <p>Now, what circumstances can tell us that a message is attributed with a meaning? The answer is: this can be indicated by the combination of sender and receiver strategy, called a strategy profile. A message has a meaning between a sender and a receiver if both use pure strategies that constitute a specific isomorphic strategy profile. For the 2×2-game there are exactly 2 such strategy profiles, as depicted in Figure 1. In profile L1 the message m1 has the meaning of state t1/a1 and message m2 has the meaning of state t2/a2. For profile L2 it is exactly the other way around.
2 Δ(X) : X → ℝ denotes a probability distribution over the random variable X.
3 Informally, the information state came to the sender's mind. In game theory we say that the state is chosen by an invisible participant, called nature N.</p>
        <p>[Figure 1: the two signaling systems of the 2×2-game. In L1, t1 → m1 → a1 and t2 → m2 → a2; in L2, t1 → m2 → a1 and t2 → m1 → a2.]</p>
        <p>
          Lewis called such strategy profiles signaling systems [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], which have
interesting properties. It can be shown that signaling systems i) ensure perfect
communication and maximal utility, ii) are Nash equilibria over expected
utilities [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and iii) are evolutionarily stable states [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ][
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Furthermore, note that the number of signaling systems increases strongly with the number of states and/or messages: an n×k-game has k!/(k−n)! possible signaling systems.
        </p>
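        <p>The count k!/(k−n)! is simply the number of ways to assign a distinct message to each of the n states; a quick check (the function name is illustrative):</p>

```python
from math import factorial

def num_signaling_systems(n, k):
    """An n×k-game has k!/(k-n)! signaling systems: each of the n states
    is paired with a distinct one of the k messages."""
    return factorial(k) // factorial(k - n)

# the 2×2-game has exactly the two systems L1 and L2
assert num_signaling_systems(2, 2) == 2
# the 3×9-game used in Section 3 already has 9*8*7 = 504 systems
assert num_signaling_systems(3, 9) == 504
```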
        <p>
          At this point it has been explained how semantic meaning can be expressed by participants' communicative behavior: a message has a meaning if sender and receiver communicate according to a signaling system. However, this does not explain at all how participants arrive at such a signaling system in the first place, given that messages are initially meaningless. To explore the paths that might lead from a meaningless to a meaningful message, it is necessary to explore the process that leads from participants' arbitrary communicative behavior to a behavior that constitutes a signaling system. Such a process can be simulated by repeated signaling games, where the participants' behavior is guided by update dynamics. One popular dynamics is called reinforcement learning [
          <xref ref-type="bibr" rid="ref25 ref3 ref4">3, 4, 25</xref>
          ].
Reinforcement learning can be captured by a simple model based on urns, also known as Pólya urns [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. An urn models a behavioral strategy, in the sense that the probability of making a particular decision is proportional to the number of balls in the urn that correspond to that choice. By adding or removing balls from an urn after each access, an agent's behavior is gradually adjusted. For signaling games, the sender has an urn f_t for each state t ∈ T, which contains balls for different messages m ∈ M. The number of balls of type m in urn f_t is designated with m(f_t), the overall number of balls in urn f_t with |f_t|. If the sender is faced with a state t she draws a ball from urn f_t and sends message m if the ball is of type m. The same holds in the same way for the receiver. The resulting sender response rule σ and receiver response rule ρ are given in Equations 1 and 2, respectively.
        </p>
        <p>σ(m|t) = m(f_t) / |f_t|   (1)
ρ(a|m) = a(f_m) / |f_m|   (2)
The learning dynamics is realized by changing the urn content depending on communicative success. The standard account works as follows: if communication via t, m and a is successful, the number of balls of type m in urn f_t is increased by α ∈ ℕ. Similarly for the receiver. In this way successful communicative behavior is more probable to reappear in subsequent rounds.</p>
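        <p>The urn-based response rules (Equations 1 and 2) together with the basic reward step can be sketched as follows (a simplified sender-side sketch with hypothetical names; the receiver side works analogously):</p>

```python
import random

class Urn:
    """A Polya urn over options: the probability of drawing an option is
    proportional to its ball count, as in Equations 1 and 2."""
    def __init__(self, options, balls_per_option=1):
        self.balls = {o: balls_per_option for o in options}

    def draw(self, rng=random):
        total = sum(self.balls.values())          # |f|
        r = rng.uniform(0, total)
        for option, count in self.balls.items():  # P(option) = count / |f|
            r -= count
            if r <= 0:
                return option
        return option

    def reward(self, option, alpha=1):
        """Successful communication: add alpha balls of the drawn type."""
        self.balls[option] += alpha

# sender urn f_t1 for state t1 in a 2x2-game
f_t1 = Urn(["m1", "m2"])
m = f_t1.draw()
f_t1.reward(m)  # success: m becomes more probable in subsequent rounds
```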
        <p>
          This mechanism can be intensified by lateral inhibition: if communication via t, m and a is successful, not only is the number of balls of type m in urn f_t increased, but the number of all other ball types m′ ∈ M \ {m} is decreased by γ ∈ ℕ. Similarly for the receiver. Franke and Jäger [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] introduced
the concept of lateral inhibition for reinforcement learning in signaling games
and showed that it leads the system more speedily towards pure strategies.
        </p>
        <p>Furthermore, negative reinforcement can be used to punish unsuccessful behavior. It changes the urn contents in the case of unsuccessful communication in the following way: if communication via t, m and a is unsuccessful, the number of balls of type m in the sender's urn f_t is decreased by β ∈ ℕ; and the number of balls of type a in the receiver's urn f_m is decreased likewise.</p>
        <p>
          Note that reinforcement learning can slow down over time: if the total number of balls in an urn increases while the reward value stays fixed, the learning effect mitigates. A way to prevent learning from slowing down is to keep the overall number of balls |f| at a fixed value by scaling the urn content appropriately after each round of play. Such a setup is a variant of so-called Bush-Mosteller reinforcement [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
        <p>All in all, a reinforcement learning setup for a signaling game can be captured by RL = ⟨(σ, ρ), α, β, γ, Φ, ι⟩, where σ and ρ are the participants' response rules, α is the reward value, β the punishment value, γ the lateral inhibition value and Φ the urn size. Finally, ι is a function that defines the initial urn settings.</p>
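        <p>A complete update step of such a setup can be sketched as follows; here the reward, punishment, lateral inhibition and urn-size values are written as alpha, beta, gamma and phi (my naming), and an urn is a dict from options to (fractional) ball counts:</p>

```python
def update_urn(urn, option, success, alpha=1, beta=1, gamma=1, phi=20):
    """One learning step on an urn. On success the drawn option is
    rewarded and all other options are laterally inhibited; on failure
    the drawn option is punished (negative reinforcement). Afterwards
    the urn is rescaled to the fixed size phi (Bush-Mosteller variant),
    so the learning effect does not slow down over time."""
    if success:
        urn[option] += alpha
        for other in urn:
            if other != option:
                urn[other] = max(0.0, urn[other] - gamma)  # lateral inhibition
    else:
        urn[option] = max(0.0, urn[option] - beta)         # punishment
    total = sum(urn.values())
    if total > 0:
        for o in urn:
            urn[o] *= phi / total                          # keep |f| = phi
    return urn
```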
        <p>
          With the goal of analyzing issues of language change, a particularly interesting additional feature for reinforcement learning is innovation. The basic idea
stems from Skyrms [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] and works as follows: each sender urn contains, next to
the balls for each message, an additional ball type, which Skyrms calls black ball.
Whenever the sender draws a black ball from an urn, he sends a completely new
message that was never sent before. In other words, the sender invents a new
message. Further experiments with this setup were made for 2-players games [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
as well as for multi-agent accounts [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
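        <p>The black-ball mechanism can be added to the sender's draw in a few lines (a sketch with my own names; drawing the black ball invents a fresh message and turns it into an ordinary ball type):</p>

```python
import random
from itertools import count

_fresh = count(1)  # counter producing message names never used before

def send(urn, rng=random):
    """Draw from a sender urn that contains a 'black' ball type.
    A black-ball draw invents a completely new message (Skyrms's
    innovation) and adds it to the urn as a regular ball type."""
    total = sum(urn.values())
    r = rng.uniform(0, total)
    for option, n in urn.items():
        r -= n
        if r <= 0:
            break
    if option == "black":
        option = "m_new_%d" % next(_fresh)  # a never-sent message
        urn[option] = 1                     # now an ordinary ball type
    return option
```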
        <p>
          The second study [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] used a reinforcement learning setup with negative reinforcement and lateral inhibition. In such a setup the black balls of the agents' sender urns can increase and decrease depending on communicative success. Naming the total number of an agent's black balls her force of innovation, the study revealed an interesting relationship between society-wide force of innovation and communicative success: increasing communicative success leads to decreasing force of innovation, and vice versa.4 Note that this relationship between both values implies two things: i) once a population has learned one unique signaling convention and reaches perfect communication, the force of innovation has dropped to zero: the society has reached a stable state without any spirit of innovation; ii) if the society contains multiple conventions and communication is therefore not perfectly successful society-wide, the force of innovation stays at a positive level and produces new strategies that might finally manifest as new conventions; in other words: language change can be realized.
4 It was shown for experiments with 3-agent populations that the force of innovation and communicative success reveal a significant negative correlation.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>Basic Notions of Network Theory</title>
        <p>
          To ensure that a network structure resembles a realistic interaction structure of human populations, it should have small-world properties; cf. Jackson, who found that these properties show up in the analysis of human friendship networks [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
According to this line of studies, the two essential properties of small-world networks are i) a short characteristic path length, and ii) a high clustering coefficient
[
          <xref ref-type="bibr" rid="ref30">30</xref>
          ].5 Additionally, human networks most often display a third property, namely being scale-free: the frequency of agents with ever larger numbers of connections roughly follows a power-law distribution. Accordingly, I consider a special kind of network, which is both scale-free and has small-world properties
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. This network type is constructed by a preferential attachment algorithm that takes two parameters: m, which controls the network density, and p, which controls the clustering coefficient [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In my experiments I used a scale-free network with 500 nodes, m = 2 and p = 0.8, which ensures small-world properties.
        </p>
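        <p>The construction can be sketched in pure Python as follows (a sketch of the Holme-Kim idea; in practice a library implementation such as networkx's powerlaw_cluster_graph can be used instead):</p>

```python
import random

def holme_kim(n, m, p, seed=0):
    """Sketch of the Holme-Kim algorithm: each new node attaches m edges;
    edges are created by preferential attachment, but with probability p
    an edge is instead created by triad formation (linking to a neighbor
    of the previously chosen target), which raises the clustering
    coefficient while keeping the degree distribution scale-free."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    pa_pool = list(range(m))  # nodes repeated proportionally to degree
    for v in range(m, n):
        targets, last = set(), None
        while len(targets) < m:
            candidates = (adj[last] - targets - {v}) if last is not None else set()
            if candidates and rng.random() < p:
                t = rng.choice(sorted(candidates))  # triad formation
            else:
                t = rng.choice(pa_pool)             # preferential attachment
                if t in targets:
                    continue
            targets.add(t)
            last = t
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
            pa_pool.extend([v, t])
    return adj

# the experimental setting of Section 3: 500 agents, m = 2, p = 0.8
net = holme_kim(500, 2, 0.8)
```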
        <p>A main goal of this work is to investigate the relationship between the change
of meaning and the structural properties of the network and its members. As
the experiments will show, there seems to be an explanatory value of network
properties that express an agent's connectivity and embeddedness. In order to
capture these properties more adequately, suitable notions from social network
theory will be considered: degree centrality (DC) describes the local connectivity
of an agent, closeness centrality (CC) and betweenness centrality (BC) her global
centrality, and individual clustering (CL) her local embeddedness.5</p>
        <p>
          As I will argue later, also the strength of ties between agents might play
an important role in language change. Easley and Kleinberg [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] showed that the strength of a tie between two agents has basically a strong linear correlation with the overlap of both agents' neighborhoods. To keep things simple I will define the strength of a tie by this neighborhood overlap. Furthermore, since my analysis deals with agents rather than with ties between them, I calculate an agent's ties strength TS as the average strength value of all ties of this agent:
Definition 2 (Ties Strength). For a given network the ties strength of agent n is defined as follows (where N(i) is the set of neighbors of agent i):
        </p>
        <p>
          TS(n) = (1/|N(n)|) · Σ_{m ∈ N(n)} |N(n) ∩ N(m)| / |N(n) ∪ N(m)|   (3)
        </p>
        <p>
          Note: the notions of DC, CC, BC, CL and TS describe static network properties of an agent, since they do not change during a simulation run and are determined by the network structure and the agent's position inside it.
5 For the definition of these network properties I refer to Jackson's Social and Economic Networks [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], Chapter 2.
        </p>
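        <p>Definition 2 can be computed directly from the adjacency structure; a small sketch (the function name is mine):</p>

```python
def ties_strength(adj, n):
    """Ties strength TS(n): the neighborhood overlap
    |N(n) & N(m)| / |N(n) | N(m)|, averaged over all neighbors m of n."""
    nbrs = adj[n]
    overlaps = [len(nbrs & adj[m]) / len(nbrs | adj[m]) for m in nbrs]
    return sum(overlaps) / len(nbrs)

# in a triangle every tie has overlap 1/3, so TS(0) = 1/3;
# on a path 0-1-2 the tie between 0 and 1 has no overlap, so TS(0) = 0
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```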
      <p>Finally, as the experiments will show, agents in a social network agree on signaling systems as groups, which constitute connected components.6 Such a group-wide signaling system is called a signaling convention (Definition 3).
Definition 3 (Signaling Convention). For a given network structure of agents that play the repeated signaling game with their connected neighbors, a signaling convention is a signaling system that is used by a group of agents that constitutes a connected component of the network structure.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Simulating Language Change</title>
      <p>
        A fascinating puzzle in the theory of language change is the threshold problem [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]:
how can a new linguistic variant spread and reach a particular threshold of speakers that enables it to replace a concurrent old variant? Reaching such a threshold is rather improbable considering that i) the new variant is expected to be initially used by a minority and ii) the old variant is expected to be a society-wide linguistic convention that serves for perfect communication. Therefore, sociolinguists expect that new variants mostly do not disseminate but remain in small social groups, often with short durability [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Now, what enables new
variants in rare cases to spread and establish a new linguistic convention?
      </p>
      <p>
        Some sociolinguists expect particular environmental patterns of the social
network structure to be source and engine for language change [
        <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Their weak-ties theory posits that new (innovative) variants i) emerge most often along edges that constitute weak ties in the social network, and ii) disseminate via
central nodes. According to the theory, exactly the combination of weak ties and
central nodes supports new variants to overcome the threshold problem [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        In my experiments agents in a social network communicate via signaling games and update by innovative reinforcement learning. This leads to the effect that i) multiple local conventions emerge (see e.g. [
        <xref ref-type="bibr" rid="ref18 ref20 ref28 ref31">31, 18, 28, 20</xref>
        ]), and ii) agents invent new messages from time to time, since communication is not perfectly successful in a society with multiple conventions and the force of innovation therefore stays at a positive level. As my experiments will show, while most invented messages disappear as fast as they appear, from time to time new variants can spread and establish new regional conventions. Therefore, I want to analyze whether particular structural features support the emergence and spread of innovation. Do the results support the weak-ties theory? Is it possible to detect other network properties that support language change?
      </p>
      <sec id="sec-3-1">
        <title>Experimental Settings</title>
        <p>
          I conducted simulation runs of agents placed in a social network structure. Per simulation step the agents communicate by playing a signaling game with each of their direct neighbors. They update their behavior by innovative reinforcement learning. The concrete settings of the experiments were as follows:
6 A connected component of a network is a subgraph in which any two nodes are connected to each other by at least one path.
– network structure: a scale-free network with 500 agents (Holme-Kim algorithm [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] with m = 2 and p = 0.8)
– signaling game: a 3×9-game
– reinforcement learning: Bush-Mosteller reinforcement with negative reinforcement and lateral inhibition (α = 1, β = 1, γ = 1, Φ = 20)
– stop condition: reaching 100,000 simulation steps
– initiation condition: the network is initially divided into 8 connected components, and agents communicate only with neighbors of the same component with a given signaling system for the first 100 simulation steps
– number of simulation runs: 10
        </p>
      <p>Since I am interested in the mechanisms that show how and why semantic conventions change, not how they evolve from scratch, the simulation runs were started with the given initiation condition, which ensures that already established local signaling conventions are given from the beginning. In the following, the results of the simulation runs will be presented.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Global Values</title>
        <p>To get a good impression of how the population behaves during a simulation run, two global values were measured: i) the average communicative success: the utility value of a played game averaged over all plays during a simulation step, and ii) the global number of signaling conventions: the total number of signaling conventions agents have learned7 at the given simulation step.</p>
        <p>In all simulation runs similar results were observed: after around 1,000 simulation steps the average communicative success increased to a value of around 0.85, and the number of signaling conventions to a value of around 25. Furthermore, while both values show no tendency to increase or decrease in the long run, they oscillate quite strongly: the communicative success oscillates between 0.8 and 0.9 and the number of signaling conventions oscillates between 20 and 30. This result reveals a global interaction dynamics that shows long-term stability and short-term reactivity at the same time.</p>
        <p>
          Especially the oscillation of the number of signaling conventions is an indicator of local reactivity. To get a better understanding of what is actually happening, Figure 2 shows a sequence of the first 10,000 simulation steps for the number of learners of 6 different signaling conventions: here regions of new conventions emerge, grow to a specific size and possibly become extinct. This pattern shows quite nicely how language change is realized: an innovation is made at one point in time and place, then it spreads, its number of speakers increases to a specific size, and it constitutes a region of a new signaling convention. The next step is to detect agents that tend to contribute to innovation and spread, and to investigate whether specific structural patterns support such behavior.
7 Since agents generally do not learn a totally pure strategy, an agent is attributed to have learned a signaling convention when her behavioral strategy profile and a signaling system reveal a so-called Hellinger similarity of &gt; 0.7. For a formal definition see [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], Definition 2.11.
Considering the dynamic picture of language change in the simulation runs, I was interested in whether it is possible to detect specific roles of agents that might support language change or strengthen local conventions. Following the study of Mühlenbernd and Franke [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], I was particularly interested in the way an agent's static structural features and dynamic behavioral features might correlate. Static features are given by an agent's network properties: ties strength TS, degree centrality DC, closeness centrality CC, betweenness centrality BC and clustering coefficient CL, as introduced in Section 2.3.
        </p>
        <p>
          Dynamic features of an agent can be measured through her behavior or position during a simulation run. Since I was interested in the way agents are involved in the spread of a new variant, I defined and measured the dynamic features innovation skill and impact. To compare these values with a number of further dynamic features, I also defined and measured loyalty, interiority and mutual intelligibility. For an agent n these features are defined as follows:
– innovation skill INV(n): the proportion of simulation steps at which agent n switched to a new convention that no neighbor had yet learned
– impact IMP(n): the proportion of simulation steps at which a neighbor of agent n switched to agent n's convention
– loyalty LOY(n): the proportion of simulation steps at which agent n played her favorite strategy (most often played strategy)
– interiority INT(n): the proportion of simulation steps for which agent n has exclusively neighbors with the same convention
– mutual intelligibility MI(n): the average MI8 value of agent n to her neighborhood at a given simulation step, averaged over all simulation steps
8 The mutual intelligibility value MI reproduces the expected utility for two different strategy pairs. For the definition see [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], Definition 3.
        </p>
        <p>[Figure 3: correlation plot of the features TS, DC, CC, BC, CL and INV.]</p>
        <p>In my analysis I measured the correlation over all 5,000 data points9 for each possible pair of features. The resulting plot is shown in Figure 3: here correlations are depicted as circles, where the size represents the strength of the correlation, and the brightness represents the direction of the relationship (positive: light; negative: dark).</p>
        <p>The results show first of all that the data support the weak-ties theory, since i) INV has a strong negative correlation with TS, and ii) IMP reveals a strong positive correlation with all three centrality properties DC, CC and BC. Thus, innovation mostly starts at weak ties and spreads via central nodes.</p>
        <p>
          But there are further interesting correlations. INV has a strong negative correlation with LOY, MI and INT. This shows that innovative agents i) hardly stay with their favorite convention, ii) are not very intelligible to their neighbors, and iii) are positioned rather at the border of a convention region. Note: the expectation that innovation emerges at the periphery of societies has also been supported by field studies and computational work [
          <xref ref-type="bibr" rid="ref17 ref9">9, 17</xref>
          ].
9 Data points are the agents' features; for 10 simulation runs with 500 agents each.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>
        In this study I used the signaling game { a model that is generally used to deal
with issues of language evolution { to analyze the dynamics of language change.
At rst this is an ambitious challenge by considering that signaling games are
designed in a way that players are generally attracted to convention and
stability. For all that, I was particularly interested in the way environmental variables
in terms of network structure might describe characteristics that promote or
mitigate semantic change. For that purpose I made experiments on social
network structures of agents that play the signaling game repeatedly with connected
neighbors and update their behavior by a simple dynamics: reinforcement
learning. I extended this learning account by an additional feature { innovation {
that supports the changing nature of the population's dynamics. In my analysis
I compared di erent features of agents { static network properties and dynamic
behavioral properties of agents { to extract the characteristics of di erent roles
that might be involved in language change. The results support the weak
tiestheory: innovation start at weak ties and spreads via central nodes [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
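The learning dynamics summarized above can be sketched minimally: each agent keeps urns of accumulated rewards for state-message (sender) and message-state (receiver) choices, samples proportionally to those weights, and reinforces a choice after successful communication. This is a bare Roth-Erev-style sketch without the innovation extension; all identifiers are illustrative.

```python
import random

random.seed(42)

STATES, MESSAGES = ["t1", "t2"], ["m1", "m2"]

# Urns of accumulated rewards, initialised to 1 so behavior starts uniform.
sender = {t: {m: 1.0 for m in MESSAGES} for t in STATES}
receiver = {m: {t: 1.0 for t in STATES} for m in MESSAGES}

def draw(urn):
    # Sample a key with probability proportional to its accumulated reward.
    return random.choices(list(urn), weights=list(urn.values()))[0]

for _ in range(5000):
    t = random.choice(STATES)      # nature picks a state
    m = draw(sender[t])            # sender chooses a message
    guess = draw(receiver[m])      # receiver interprets the message
    if guess == t:                 # success: reinforce both choices
        sender[t][m] += 1.0
        receiver[m][guess] += 1.0

# After learning, each state is typically dominated by one message.
for t in STATES:
    print(t, "->", max(sender[t], key=sender[t].get))
```

Over repeated play this dynamics tends to lock agents into a signaling convention, which is precisely why an extra innovation mechanism is needed to model change.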
      <p>
        Since this study gives only a first impression of where to look for the forces of
language change, at least two further steps are necessary to reveal more insightful
results. First of all, the current data should be further analyzed using
regression models to find out whether there are non-trivial interactions (e.g. non-linear
dependencies) between static network properties and the role of agents in
language change dynamics. Second, my current results suggest analyzing further
i) static properties, like information flow measures [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] or closeness vitality [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ];
and ii) dynamic features, like the individual force of innovation, the number of
known messages, or the growth magnitude of an agent's newly innovated
signaling system. These two additional steps are currently being investigated and will
hopefully enrich subsequent work by delivering deeper insights into the role of
innovation in the dynamics of semantic change.
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Alexander</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skyrms</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zabell</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          :
          <article-title>Inventing New Signals</article-title>
          .
          <source>Dynamic Games and Applications</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>129</fpage>
          -
          <lpage>145</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Barabási</surname>
            ,
            <given-names>A.-L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albert</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Emergence of Scaling in Random Networks</article-title>
          .
          <source>Science</source>
          <volume>286</volume>
          ,
          <fpage>509</fpage>
          -
          <lpage>512</lpage>
          (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Barrett</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>The Evolution of Coding in Signaling Games</article-title>
          .
          <source>Theory and Decision</source>
          <volume>67</volume>
          ,
          <fpage>223</fpage>
          -
          <lpage>237</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Barrett</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zollman</surname>
            ,
            <given-names>K.J.S.</given-names>
          </string-name>
          :
          <article-title>The Role of Forgetting in the Evolution and Learning of Language</article-title>
          .
          <source>Journal of Experimental and Theoretical Artificial Intelligence</source>
          <volume>21</volume>
          (
          <issue>4</issue>
          ),
          <fpage>293</fpage>
          -
          <lpage>309</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dall'Asta</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baronchelli</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barrat</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Loreto</surname>
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Agreement Dynamics on Small-World Networks</article-title>
          .
          <source>Europhys. Lett</source>
          .
          <volume>73</volume>
          ,
          <fpage>969</fpage>
          -
          <lpage>975</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bush</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mosteller</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Stochastic Models of Learning</article-title>
          . New York: John Wiley &amp; Sons (
          <year>1955</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Crawford</surname>
            <given-names>V.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sobel</surname>
            <given-names>J</given-names>
          </string-name>
          .:
          <article-title>Strategic Information Transmission</article-title>
          .
          <source>Econometrica</source>
          <volume>50</volume>
          ,
          <fpage>1431</fpage>
          -
          <lpage>1451</lpage>
          (
          <year>1982</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Easley</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kleinberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Networks, Crowds, and Markets: Reasoning about a Highly Connected World</source>
          . Cambridge University Press, Cambridge (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Fagyal</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swarup</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Escobar</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasser</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lakkaraju</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Centers and Peripheries: Network Roles in Language Change</article-title>
          .
          <source>Lingua</source>
          <volume>120</volume>
          ,
          <fpage>2061</fpage>
          -
          <lpage>2079</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Franke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Jager, G.:
          <article-title>Bidirectional Optimization from Reasoning and Learning in Games</article-title>
          .
          <source>Journal of Logic, Language and Information</source>
          <volume>21</volume>
          (
          <issue>1</issue>
          ),
          <fpage>117</fpage>
          -
          <lpage>139</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Holme</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>B.J.</given-names>
          </string-name>
          :
          <article-title>Growing Scale-free Networks with Tunable Clustering</article-title>
          .
          <source>Physical Review E</source>
          <volume>65</volume>
          (
          <issue>2</issue>
          ),
          <fpage>026107</fpage>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Huttegger</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          :
          <article-title>Evolution and the Explanation of Meaning</article-title>
          .
          <source>Philosophy of Science</source>
          <volume>74</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Jackson</surname>
            ,
            <given-names>M.O.</given-names>
          </string-name>
          :
          <source>Social and Economic Networks</source>
          . Princeton: Princeton University Press (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Ke</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gong</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>W.S.-Y.</given-names>
          </string-name>
          :
          <article-title>Language Change and Social Networks</article-title>
          .
          <source>Communications in Computational Physics</source>
          <volume>3</volume>
          (
          <issue>4</issue>
          ),
          <fpage>935</fpage>
          -
          <lpage>949</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. Koschutzki
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Lehmann</surname>
          </string-name>
          <string-name>
            <given-names>K.A.</given-names>
            ,
            <surname>Peeters</surname>
          </string-name>
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Richter</surname>
          </string-name>
          <string-name>
            <given-names>S.</given-names>
            , Tenfelde- Podehl S.,
            <surname>Zlotowski</surname>
          </string-name>
          <string-name>
            <surname>O.</surname>
          </string-name>
          :
          <article-title>Centrality Indices</article-title>
          . In:
          <source>Network Analysis. Lecture Notes in Computer Science</source>
          <volume>3418</volume>
          ,
          <fpage>16</fpage>
          -
          <lpage>61</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <source>Convention: A Philosophical Study</source>
          . Harvard University Press, Cambridge (
          <year>1969</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Milroy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Milroy</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Linguistic change, social network and speaker innovation</article-title>
          .
          <source>Journal of Linguistics</source>
          <volume>21</volume>
          (
          <issue>02</issue>
          ),
          <fpage>339</fpage>
          -
          <lpage>384</lpage>
          (
          <year>1985</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18. Muhlenbernd, R.:
          <article-title>Learning with Neighbours: Emergence of Convention in a Society of Learning Agents</article-title>
          .
          <source>Synthese</source>
          <volume>183</volume>
          (
          <issue>S1</issue>
          ),
          <fpage>87</fpage>
          -
          <lpage>109</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19. Muhlenbernd R.:
          <article-title>Signals and the Structure of Societies</article-title>
          .
          <source>Ph.D. Thesis</source>
          . University of Tubingen, TOBIAS-Lib Online Publication http://nbn-resolving.de/urn:nbn: de:bsz:
          <fpage>21</fpage>
          -opus-
          <volume>70046</volume>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20. Muhlenbernd, R.,
          <string-name>
            <surname>Franke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          : Signaling Conventions:
          <article-title>Who Learns What Where and When in a Social Network?</article-title>
          <source>Proceedings of EvoLang IX</source>
          ,
          <fpage>242</fpage>
          -
          <lpage>249</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21. Muhlenbernd R.,
          <source>Nick J.D.: Language Change and the Force of Innovation. Pristine Perspectives on Logic, Language, and Computation - Lecture Notes in Computer Science</source>
          <volume>8607</volume>
          ,
          <issue>194</issue>
          {
          <fpage>213</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Nettle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Using Social Impact Theory to Simulate Language Change</article-title>
          .
          <source>Lingua</source>
          <volume>108</volume>
          ,
          <fpage>95</fpage>
          -
          <lpage>117</lpage>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Erev</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Learning in Extensive-Form Games: Experimental Data and Simple Dynamic Models in the Intermediate Term</article-title>
          .
          <source>Games and Economic Behaviour</source>
          <volume>8</volume>
          ,
          <fpage>164</fpage>
          -
          <lpage>212</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <source>The Analysis of Mind</source>
          . Unwin Brothers Ltd (
          <year>1921</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Skyrms</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <source>Signals: Evolution, Learning &amp; Information</source>
          . Oxford University Press, Oxford (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Steels</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A Self-Organizing Spatial Vocabulary</article-title>
          .
          <source>Artificial Life</source>
          <volume>2</volume>
          (
          <issue>3</issue>
          ),
          <fpage>319</fpage>
          -
          <lpage>332</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Steels</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McIntyre</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Spatially Distributed Naming Games</article-title>
          .
          <source>Advances in Complex Systems</source>
          <volume>1</volume>
          (
          <issue>4</issue>
          ),
          <fpage>301</fpage>
          -
          <lpage>323</lpage>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Communication and Structured Correlation</article-title>
          .
          <source>Erkenntnis</source>
          <volume>71</volume>
          (
          <issue>3</issue>
          ),
          <fpage>377</fpage>
          -
          <lpage>393</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Wärneryd</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Cheap Talk, Coordination, and Evolutionary Stability</article-title>
          .
          <source>Games and Economic Behaviour</source>
          <volume>5</volume>
          ,
          <fpage>532</fpage>
          -
          <lpage>546</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Watts</surname>
            <given-names>D.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strogatz</surname>
            <given-names>S.H.</given-names>
          </string-name>
          :
          <article-title>Collective Dynamics of Small-World Networks</article-title>
          .
          <source>Nature</source>
          <volume>393</volume>
          ,
          <fpage>440</fpage>
          -
          <lpage>442</lpage>
          (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Zollman</surname>
            ,
            <given-names>K.J.S.</given-names>
          </string-name>
          :
          <article-title>Talking to Neighbors: The Evolution of Regional Meaning</article-title>
          .
          <source>Philosophy of Science</source>
          <volume>72</volume>
          (
          <issue>1</issue>
          ),
          <fpage>69</fpage>
          -
          <lpage>85</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>