<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Rationality and Randomness</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
							<email>pasquale@mat.uniroma2.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Università di Roma &quot;Tor Vergata&quot;</orgName>
								<address>
									<settlement>Roma</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Rationality and Randomness</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">0330CBD613C4E465886DE7F461F16257</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Imagine you are sitting in a room with a large number of people attending a conference, and the organizers give you and everyone else in the room a small piece of paper, asking each person to write down a number between 0 and 100. All the numbers will then be collected, and a prize will be given to the person whose number is closest to half of the average of all the numbers. What number would you write on your piece of paper?</p><p>Game theory <ref type="bibr" target="#b15">[16]</ref> provides a powerful and elegant framework for predicting the outcome of situations like the one described above. Intuitively speaking, in order to choose a number to write on her piece of paper, a person in the room could reason as follows: Since the numbers are constrained to lie between 0 and 100, their average will also be between 0 and 100, so half of the average will be between 0 and 50; hence, if I am rational, I will certainly not write a number larger than 50. Now, if everyone in the room is rational and writes a number not larger than 50, then the average itself will be not larger than 50 and half of the average not larger than 25; so, if I am rational and believe that all the other people in the room are rational, I will not write a number larger than 25. If everyone writes a number not larger than 25, . . . A few more steps of reasoning along these lines yield the game-theoretic prediction of the outcome: Every person in the room writes 0 on her piece of paper. This is indeed the unique Nash equilibrium <ref type="bibr" target="#b13">[14]</ref> of the game. However, if you try it out in any real scenario, you will see for yourself how far from the truth such a prediction is. 
The main reason is that it relies on the very strong assumption that rationality is common knowledge <ref type="bibr" target="#b5">[6]</ref> among the agents (everyone is rational, everyone knows that everyone is rational, everyone knows that everyone knows that everyone is rational, and so forth).</p></div>
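As a toy illustration (our own sketch, not part of the article), the chain of reasoning above can be simulated: at level k of reasoning, no rational player writes more than 100/2^k, so the surviving strategies collapse toward the unique Nash equilibrium, 0.

```python
# Illustrative sketch: iterated elimination of dominated strategies in the
# "half the average" game. The function name and the number of levels are
# our own choices for the demo.

def rationality_bounds(levels):
    """Upper bound on a rational guess after each level of reasoning."""
    bounds = []
    bound = 100.0
    for _ in range(levels):
        bound = bound / 2.0   # half of the largest possible average
        bounds.append(bound)
    return bounds

print(rationality_bounds(10))   # 50, 25, 12.5, ... shrinking toward 0
```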
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Bounded rationality and the Logit dynamics</head><p>Several branches of game theory try to relax the rationality assumption in order to obtain more reliable predictions. For example, Evolutionary game theory <ref type="bibr" target="#b14">[15]</ref> applies the game-theoretic framework to repeated games in biological contexts and introduces new solution concepts (evolutionarily stable strategies); Behavioral game theory <ref type="bibr" target="#b8">[9]</ref> proposes a more experimental approach: Observe the behavior of real players and appropriately extend the theory to explain the observed outcomes.</p><p>A simple and elegant way to model players with limited rationality in repeated games is the Logit dynamics, introduced in <ref type="bibr" target="#b7">[8]</ref> and inspired by statistical mechanics: Starting from some initial strategy profile, at each round a player is selected uniformly at random and she updates her current strategy according to a probability distribution biased toward strategies promising higher payoffs. More formally, let n be the number of players; for i = 1, . . . , n, let S i be the set of strategies of player i and let u i :</p><formula xml:id="formula_0">S 1 × • • • × S n → R be her utility function. If x = (x 1 , . . . , x n ) ∈ S 1 × • • • × S n</formula><p>is the current strategy profile and player i is selected for the update, then she plays strategy y ∈ S i with probability proportional to e βui(x−i,y) , where (x −i , y) is the vector obtained from x by replacing the i-th entry x i with y, and β ≥ 0 is a parameter measuring the rationality level of the players (it is indeed easy to see that for β = 0 players pick one of their strategies uniformly at random, i.e., they have no rationality at all, while for β → ∞ players tend to best respond to the current strategies of the other players, i.e., they are fully rational). 
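The update rule just defined can be sketched in a few lines of Python (an illustrative example of ours; the strategy names and payoffs are hypothetical):

```python
import math
import random

# One Logit-dynamics update step: the selected player plays strategy y with
# probability proportional to exp(beta * u_i(x_{-i}, y)), so beta = 0 gives
# the uniform distribution and large beta concentrates on best responses.

def logit_probabilities(utilities, beta):
    """Return the logit update distribution over the player's strategies."""
    weights = {y: math.exp(beta * u) for y, u in utilities.items()}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def logit_update(utilities, beta, rng=random):
    """Sample the player's next strategy from the logit distribution."""
    probs = logit_probabilities(utilities, beta)
    strategies = list(probs)
    return rng.choices(strategies, weights=[probs[y] for y in strategies])[0]

# Hypothetical payoffs of one player, given the others' current strategies:
payoffs = {"A": 1.0, "B": 0.0}
print(logit_probabilities(payoffs, beta=0.0))   # uniform: {'A': 0.5, 'B': 0.5}
print(logit_probabilities(payoffs, beta=5.0))   # mass concentrated on 'A'
```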
Logit dynamics defines an ergodic Markov chain over the space of strategy profiles (more precisely, a family of ergodic Markov chains indexed by β) whose unique stationary distribution π β we proposed <ref type="bibr" target="#b4">[5]</ref> as the equilibrium solution concept for the game. The "prediction" of such a solution concept is thus that, for any subset A ⊆ S 1 × • • • × S n of the state space, the fraction of time that the system spends in A in the long run is given by its stationary probability π β (A). This is certainly a rougher prediction than that of a Nash equilibrium, but it is likely much more realistic in several cases (much as a statement like "The next economic crisis will happen in 2059" cannot be considered a serious prediction today, while a statement like "We should expect at least two global economic crises per century" is certainly weaker but also much more credible, while still carrying some non-negligible information).</p><p>Once we model an evolving system of agents as an ergodic Markov chain, the unique stationary distribution of the chain becomes a powerful solution concept for the system, since it allows us to make predictions about its behavior in the long run, regardless of the starting state. However, the strength of this solution concept is also its weakness: The predictive appeal of the stationary distribution essentially vanishes if the mixing time <ref type="bibr" target="#b12">[13]</ref> (the time it takes the chain to get close to its stationary distribution, starting from an arbitrary state) is too long with respect to the time-scale we are interested in. In <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref> we classified some classes of strategic games with respect to the mixing time of their Logit dynamics, distinguishing between polynomial and exponential (in the number of players) mixing. 
For the games whose Logit dynamics has exponential mixing time, in order to be able to make predictions at polynomial time-scales, in <ref type="bibr" target="#b3">[4]</ref> we introduced a generalization of the concept of stationary distribution for a Markov chain, which we called metastable probability distribution.</p><p>Metastability is a word that can have slightly different meanings in different scientific disciplines. Nevertheless, all the meanings are somehow related to evolving systems hanging around "persistent" configurations that are out of their main equilibrium. Our definition in <ref type="bibr" target="#b3">[4]</ref> is no exception: Roughly speaking, we define a probability distribution µ to be metastable for a Markov chain with transition matrix P if µP is close (with respect to an appropriate distance function) to µ. Among all possible metastable distributions, the interesting ones are those quickly reached from some subset of the state space. As a simple example, consider the following process: Start with a sequence of n bits (x 1 , . . . , x n ) ∈ {0, 1} n ; at each round pick an index i ∈ [n] uniformly at random and replace x i with 0 or 1, each with probability 1/2; stop when you reach either the all-zeros sequence 0 = (0, . . . , 0) or the all-ones sequence 1 = (1, . . . , 1). This is a lazy random walk on the hypercube (see, e.g., <ref type="bibr" target="#b9">[10]</ref>) modified by making 0 and 1 absorbing states. Clearly, starting from any state x ∈ {0, 1} n , the process will eventually end up in one of the two absorbing states. However, for every initial state x ≠ 0, 1, the process will run for an exponential number of rounds, in expectation, before being absorbed; can't we say anything about the distribution xP t at a shorter time-scale? Yes, indeed. 
It is easy to see that, after t = Θ(n log n) rounds, xP t gets close to the uniform distribution U over {0, 1} n , and it stays close to U for an exponential number of rounds.</p><p>In <ref type="bibr" target="#b3">[4]</ref> we introduced and studied metastable probability distributions as solution concepts for strategic games. However, the usefulness of looking at the "metastable phase" of evolving systems goes beyond the domains of game theory and Markov chains. In the next section we briefly describe an example of a simple dynamics whose metastable-phase analysis allowed us to solve an intriguing computational task in a distributed way <ref type="bibr" target="#b6">[7]</ref>.</p></div>
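A quick simulation (our own sketch; the values of n, the number of rounds, and the seed are arbitrary choices) illustrates this metastable behavior: at a time-scale of a few times n log n, the walk is essentially never absorbed and its Hamming weight is concentrated around n/2, as under the uniform distribution.

```python
import random

# Lazy random walk on the n-dimensional hypercube with the all-zeros and
# all-ones states made absorbing. Absorption takes exponential time in
# expectation, but after order n log n rounds the walk is near uniform.

def run_walk(n, rounds, rng):
    x = [rng.randrange(2) for _ in range(n)]   # arbitrary starting state
    for _ in range(rounds):
        if sum(x) in (0, n):                   # absorbed at 0 or 1
            break
        i = rng.randrange(n)
        x[i] = rng.randrange(2)                # resample one coordinate
    return x

rng = random.Random(0)
n, rounds, trials = 20, 500, 2000              # rounds is a few times n log n
absorbed = 0
weight_sum = 0
for _ in range(trials):
    x = run_walk(n, rounds, rng)
    absorbed += sum(x) in (0, n)
    weight_sum += sum(x)
print(absorbed)                # absorption is very rare at this time-scale
print(weight_sum / trials)     # average Hamming weight near n/2 = 10
```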
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Community detection via averaging</head><p>Consider the following simple rules executed synchronously, in discrete rounds, by the nodes of an undirected graph: Each node initially holds a value (say, a rational number) and then, at each round, it updates its value to the average of the values held by its neighbors. It comes as no surprise that, under mild assumptions on the graph (namely, it has to be connected and non-bipartite), all nodes will converge to the same value in the long run. Perhaps it is much less obvious that, in its metastable regime, this simple dynamics can be effectively used to efficiently solve the community detection problem <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b0">1]</ref> in a fully-distributed way.</p><p>Let us consider a graph formed by two good expanders connected by a sparse cut. For the sake of concreteness and simplicity, let us assume we have a "very regular" graph G = (V, E) with an even number n of nodes partitioned into two blocks of equal size, V 1 and V 2 , and that each node has exactly a neighbors in its own block and exactly b neighbors in the other block, for some positive integers a and b with a ≫ b. Let µ 1 and µ 2 be the averages of the initial values of the nodes in V 1 and V 2 , respectively. Intuitively speaking, when running the averaging dynamics on such a graph, the following happens: In a first phase, the values of all nodes in V 1 quickly converge to a value close to µ 1 , while those of the nodes in V 2 converge to a value close to µ 2 . After that initial phase, the system enters a metastable regime in which all values slowly converge toward the global average (µ 1 + µ 2 )/2; if the two averages µ 1 and µ 2 are different, say µ 1 &lt; µ 2 , then in this second phase the value of each node in V 1 increases at every round and the value of each node in V 2 decreases. 
Thus, if we augment the averaging dynamics with a "clustering criterion" asking each node to raise, at each round, a blue flag or a red flag depending on whether its value has increased or decreased since the last round, then as soon as the system enters the metastable regime, all nodes in one of the blocks raise the red flag and all nodes in the other block raise the blue one. We thus have a fully-distributed protocol that allows the nodes to perform a perfect reconstruction of the two clusters of the graph.</p><p>To make the intuitions in the above paragraph a bit more concrete, we can use some linear algebra: Let us write x (t) = (x (t) u , u ∈ V ) for the vector where x (t) u is the value of node u at round t. It is easy to see that the averaging dynamics is defined by the recursion x (t+1) = P x (t) , where P = A/(a + b) and A is the adjacency matrix of the graph. The vector of values at round t is thus x (t) = P t x (0) . Since P is a symmetric matrix with real entries, all its eigenvalues are real. Since P is stochastic (the entries of each row sum up to 1), all its eigenvalues lie between −1 and 1, and the vector 1 = (1, u ∈ V ), i.e., the vector with all entries equal to 1, is an eigenvector with eigenvalue 1. Moreover, for our "very regular" graph G, the partition indicator vector χ = (1, u ∈ V 1 ; −1, u ∈ V 2 ), taking value +1 on all the nodes of one of the blocks and −1 on all the nodes of the other block, is also an eigenvector, and its eigenvalue is (a − b)/(a + b). Under the assumption a ≫ b, which formalizes the fact that the two blocks are internally well-connected with respect to the cut between them, (a − b)/(a + b) is the second-largest eigenvalue, λ 2 , of P . 
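The two eigenvector claims above can be checked numerically on a small instance of the "very regular" graph (the concrete construction is our own choice for the demo: two cliques of size m joined by a perfect matching, so that a = m − 1 and b = 1):

```python
import numpy as np

# Numerical check that the all-ones vector and the partition indicator chi
# are eigenvectors of P = A/(a+b), with eigenvalues 1 and (a-b)/(a+b).

m = 4
n = 2 * m
A = np.zeros((n, n))
A[:m, :m] = 1.0 - np.eye(m)        # clique on block V1 = {0, ..., m-1}
A[m:, m:] = 1.0 - np.eye(m)        # clique on block V2 = {m, ..., n-1}
for i in range(m):                  # perfect matching across the cut
    A[i, m + i] = A[m + i, i] = 1.0
a, b = m - 1, 1
P = A / (a + b)                     # every node has degree a + b = m

ones = np.ones(n)
chi = np.concatenate([np.ones(m), -np.ones(m)])
print(np.allclose(P @ ones, ones))                     # eigenvalue 1
print(np.allclose(P @ chi, (a - b) / (a + b) * chi))   # eigenvalue (a-b)/(a+b)
```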
The vector of values at round t can thus be written as</p><formula xml:id="formula_1">x (t) = α 1 1 + α 2 λ t 2 χ + e (t)</formula><p>, where α 1 and α 2 are suitable coefficients depending only on the initial values (namely, α 1 is the global average (µ 1 + µ 2 )/2 of the initial values and α 2 is half of the difference of the initial averages, (µ 1 − µ 2 )/2) and e (t) is a vector orthogonal to 1 and χ. When we consider the difference between the vectors in two consecutive rounds, the component in the direction of 1 cancels, and what is left is x (t−1) − x (t) = α 2 λ t−1 2 (1 − λ 2 )χ + e (t−1) − e (t) . Since the vectors e (t) , for t = 0, 1, . . . , are orthogonal to 1 and χ, the norm of e (t) goes to zero at least as fast as λ t 3 , where λ 3 is the third-largest eigenvalue of P . Hence, if the coefficient α 2 is non-zero, after a number of rounds depending on 1/(λ 2 − λ 3 ) it holds that |α 2 λ t−1 2 (1 − λ 2 )| &gt; |e (t−1) (u) − e (t) (u)| for all nodes u. This implies that, for each node u, the sign of <formula xml:id="formula_2">x (t−1) u − x (t) u</formula> is equal to the sign of α 2 χ u : In other words, it is positive for all the nodes in one of the blocks and negative for all the nodes in the other block. Finally, in order to ensure in a distributed way that α 2 = (µ 1 − µ 2 )/2 is non-zero, we can use randomness: If each node chooses its initial value as +1 or −1 with probability 1/2, then the two initial averages differ by Θ(1/ √ n) with high probability.</p></div>
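Putting the pieces together, here is a self-contained sketch of the averaging dynamics with the flag-raising clustering rule (our own illustration; the graph construction, its size, the number of rounds, and the seed are arbitrary choices, and we redraw the random initial values in the rare event that the two block averages coincide):

```python
import numpy as np

# Averaging dynamics plus the sign-of-change clustering rule, on a toy
# "very regular" graph: two cliques of size m (a = m-1 intra-block
# neighbors) joined by a perfect matching (b = 1), so a is much larger
# than b and chi is the second eigenvector of P.

m = 50
n = 2 * m
A = np.zeros((n, n))
A[:m, :m] = 1.0 - np.eye(m)         # clique on block V1
A[m:, m:] = 1.0 - np.eye(m)         # clique on block V2
for i in range(m):                   # perfect matching across the cut
    A[i, m + i] = A[m + i, i] = 1.0
P = A / m                            # each node has degree a + b = m

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=n)  # random +/-1 initial values
while x[:m].sum() == x[m:].sum():    # redraw if block averages coincide
    x = rng.choice([-1.0, 1.0], size=n)

for _ in range(50):                  # enough rounds for the metastable phase
    x_prev, x = x, P @ x

flags = np.sign(x_prev - x)          # did my value go up or down last round?
print(flags[:m])                     # one block raises one flag...
print(flags[m:])                     # ...the other block raises the other
```

Each node needs only its own consecutive values to compute its flag, which is what makes the protocol fully distributed.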
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Some final remarks</head><p>Evolving systems governed by simple local rules that produce observable global phenomena can be fruitfully studied from a computational perspective. On the one hand, they are well suited to modeling natural phenomena emerging in different fields (physics, economics, biology); on the other hand, they can serve as design principles and building blocks for lightweight distributed protocols for relevant computational tasks.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Randomness plays a fundamental role in these kinds of processes, either in a Bayesian sense, to model the intrinsic uncertainty about the local rules executed by the agents of the system, or as a distributed symmetry-breaking tool.</p><p>The most interesting phenomena often cannot be studied by means of limiting behaviors and equilibria, because they appear at shorter time-scales, i.e., when systems are in their metastable phases.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery</title>
		<author>
			<persName><forename type="first">Emmanuel</forename><surname>Abbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Colin</forename><surname>Sandon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 56th IEEE Ann. Symp. on Foundations of Computer Science (FOCS&apos;15)</title>
				<meeting>of the 56th IEEE Ann. Symp. on Foundations of Computer Science (FOCS&apos;15)</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="670" to="688" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Logit dynamics with concurrent updates for local interaction potential games</title>
		<author>
			<persName><forename type="first">Vincenzo</forename><surname>Auletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diodato</forename><surname>Ferraioli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Penna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giuseppe</forename><surname>Persiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Algorithmica</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="511" to="546" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Convergence to equilibrium of Logit dynamics for strategic games</title>
		<author>
			<persName><forename type="first">Vincenzo</forename><surname>Auletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diodato</forename><surname>Ferraioli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Penna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giuseppe</forename><surname>Persiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Algorithmica</title>
		<imprint>
			<biblScope unit="volume">76</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="110" to="142" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Metastability of Logit dynamics for coordination games</title>
		<author>
			<persName><forename type="first">Vincenzo</forename><surname>Auletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diodato</forename><surname>Ferraioli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giuseppe</forename><surname>Persiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the ACM-SIAM Symp. on Discrete Algorithms (SODA&apos;12)</title>
				<meeting>of the ACM-SIAM Symp. on Discrete Algorithms (SODA&apos;12)</meeting>
		<imprint>
			<publisher>SIAM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1006" to="1024" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Mixing time and stationary expected social welfare of Logit dynamics</title>
		<author>
			<persName><forename type="first">Vincenzo</forename><surname>Auletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diodato</forename><surname>Ferraioli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giuseppe</forename><surname>Persiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory of Computing Systems</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="40" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Agreeing to disagree</title>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">J</forename><surname>Aumann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Annals of Statistics</title>
				<imprint>
			<date type="published" when="1976">1976</date>
			<biblScope unit="page" from="1236" to="1239" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Find your place: Simple distributed algorithms for community detection</title>
		<author>
			<persName><forename type="first">Luca</forename><surname>Becchetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrea</forename><surname>Clementi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Emanuele</forename><surname>Natale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Pasquale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luca</forename><surname>Trevisan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the ACM-SIAM Symp. on Discrete Algorithms (SODA&apos;17)</title>
				<meeting>of the ACM-SIAM Symp. on Discrete Algorithms (SODA&apos;17)</meeting>
		<imprint>
			<publisher>SIAM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="940" to="959" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The statistical mechanics of strategic interaction</title>
		<author>
			<persName><forename type="first">Lawrence</forename><forename type="middle">E</forename><surname>Blume</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Games and Economic Behavior</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="387" to="424" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Behavioral game theory: Experiments in strategic interaction</title>
		<author>
			<persName><forename type="first">Colin</forename><surname>Camerer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>Princeton University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Asymptotic analysis of a random walk on a hypercube with many dimensions</title>
		<author>
			<persName><forename type="first">Persi</forename><surname>Diaconis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ronald</forename><forename type="middle">L</forename><surname>Graham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><forename type="middle">A</forename><surname>Morrison</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Random Structures &amp; Algorithms</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="51" to="72" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Community detection in graphs</title>
		<author>
			<persName><forename type="first">Santo</forename><surname>Fortunato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physics reports</title>
		<imprint>
			<biblScope unit="volume">486</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="75" to="174" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Community structure in social and biological networks</title>
		<author>
			<persName><forename type="first">Michelle</forename><surname>Girvan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><forename type="middle">E J</forename><surname>Newman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proc. of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">99</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="7821" to="7826" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Markov chains and mixing times</title>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">A</forename><surname>Levin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuval</forename><surname>Peres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elizabeth</forename><forename type="middle">L</forename><surname>Wilmer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>American Mathematical Society</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Non-cooperative games</title>
		<author>
			<persName><forename type="first">John</forename><surname>Nash</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics</title>
		<imprint>
			<biblScope unit="page" from="286" to="295" />
			<date type="published" when="1951">1951</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The theory of games and the evolution of animal conflicts</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Maynard</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Theoretical Biology</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="209" to="221" />
			<date type="published" when="1974">1974</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">John</forename><surname>von Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oskar</forename><surname>Morgenstern</surname></persName>
		</author>
		<title level="m">Theory of games and economic behavior</title>
				<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="1944">1944</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
