<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Artificial Prediction Markets as a tool for Syndromic Surveillance</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fatemeh</forename><surname>Jahedpari</surname></persName>
							<email>f.jahedpari@bath.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bath</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Julian</forename><surname>Padget</surname></persName>
							<email>j.a.padget@bath.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bath</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marina</forename><surname>De Vos</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bath</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Benjamin</forename><surname>Hirsch</surname></persName>
							<email>benjamin.hirsch@kustar.ac.ae</email>
							<affiliation key="aff1">
								<orgName type="laboratory">EBTIC</orgName>
								<orgName type="institution">Khalifa University</orgName>
								<address>
									<country key="AE">United Arab Emirates</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Artificial Prediction Markets as a tool for Syndromic Surveillance</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">85F002009A4739C5DF80B738A3BA6EBB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T10:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A range of data sources across the internet, such as Google search terms, Twitter topics and Facebook messages, amongst others, can be viewed as kinds of sensors from which information might be extracted about trends in the expression of matters of concern to people. We focus on the problem of how to identify emerging trends after the original textual data has been processed into a quantitative form suitable for the application of machine learning techniques. We present some preliminary ideas, including an agent-based implementation and some early results, about the application of artificial prediction markets to such data, taking the specific domain of syndromic surveillance (early-stage recognition of epidemics) as an example, using publicly available data sets.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>This paper outlines some early-stage research into the application of prediction markets to syndromic surveillance. Prediction markets are a mechanism for forecasting the outcome of future events by aggregating public opinion: market participants trade so-called securities that represent different probabilities for the (expected) outcome of a scenario. We describe prediction markets in more detail in section 2 and compare them with alternative approaches in section 5.</p><p>Syndromic surveillance monitors population health indicators that are apparent before confirmatory diagnostic tests become available, in order to predict a disease outbreak within a society at the earliest possible moment, with the aim of protecting community health. Clearly, the earlier a health threat within a population is detected, the lower the morbidity and the higher the number of lives that may be saved. Syndromic surveillance data sources include, but are not limited to, the coding of diagnoses at emergency department admission or discharge, chief complaints, pre-diagnostic data from medical encounters, absentee rates at schools and workplaces, over-the-counter pharmacy sales, and Internet and open-source information such as what people post on social media. Each of these types of data can generate a signal as a disease develops. Given the vast number of such data sources, a mechanism is therefore needed to integrate them as soon as their data become available.</p><p>In this research, we focus on developing a novel syndromic surveillance technique that integrates different data sources, inspired by the crowd-sourcing behaviour of prediction markets. To achieve our goal, we train a multiagent system in an artificial prediction market in a semi-supervised manner.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Prediction Markets</head><p>Prediction markets have been used to forecast accurately the outcome of political contests, sporting events, and economic outcomes <ref type="bibr" target="#b18">[19]</ref>. In this research, we use an artificial prediction market as a mechanism to integrate several syndromic surveillance data sources to predict the level of disease activity within a population on a specific date. This section briefly explains the preliminaries of prediction markets.</p><p>The prediction market, also known as an information market, originated at the Iowa Electronic Markets (IEM) in 1988 as a means to bet on presidential elections. A prediction market aims to utilise the aggregated wisdom of the crowd in order to predict the outcome of a future event <ref type="bibr" target="#b15">[16]</ref>. In these markets, traders' behaviour has the effect of externalising their private information and beliefs about the possible outcomes, and can hence be used to forecast an event accurately <ref type="bibr" target="#b11">[12]</ref>. Prediction markets are increasingly being considered by governments and corporations as approaches for collecting, summarising and aggregating dispersed information <ref type="bibr" target="#b9">[10]</ref>.</p><p>In prediction markets, traders bet on the outcome of future events by trading securities. A security is a financial instrument, like a financial stock, that pays a profit (or makes a loss) based on the outcome of the event. Each outcome of an event has a security associated with it. Traders can buy or sell any number of securities before the expiry time of the security; a security expires when the outcome of the event is realised. To illustrate a simple case, a prediction market can be used to predict whether candidate 'X' will win an election by offering two securities, 'Yes' and 'No'. 
Assuming the market finally closes with candidate 'X' winning the election, all traders receive a $1 payoff for each 'Yes' security they own and $0 for their 'No' securities, losing the money they spent on buying them.</p><p>The aggregated monetary bets made by market traders dynamically determine the price of each security before the market ends. The market price of a security represents the price at which the security can be bought or sold. It can also be interpreted as the probability of that outcome occurring, fusing the beliefs of all the market participants. Arguably, the price that an agent would pay to buy a security indicates how confident s/he is in the outcome of the event. For example, if a trader believes that the chance of candidate 'X' winning is 80%, s/he would be willing to buy a 'Yes' security at any price up to $0.80.</p><p>A prediction market is run by a market-maker: the company or individual that interacts with traders to buy and sell securities. The market-maker determines the market price using a market trading protocol. The logarithmic market scoring rule (LMSR) designed by <ref type="bibr">Hanson [9]</ref> is an automated market maker. Using LMSR, the cost function and the price of security i are calculated as</p><formula xml:id="formula_0">C(q) = b · log(Σ_{i=1}^{m} e^{q_i/b}) and P(q_i) = e^{q_i/b} / Σ_{j=1}^{m} e^{q_j/b}</formula><p>respectively, where m is the number of securities the market offers, one for each possible outcome, and q_i, the i-th component of q = (q_1, q_2, ..., q_m), is the number of units of security i held by market traders. The larger the value of b, the more money the market-maker can lose, and the more securities traders can purchase without causing significant price swings. Note that the price of a security applies only to buying an infinitesimal quantity; the price changes as soon as traders start trading. To calculate the cost of trading X units of a security, the market-maker computes C(q + X) − C(q).</p></div>
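As a concrete illustration, the LMSR cost and price functions above, and the cost of a finite trade, can be sketched in Python (a minimal sketch; the liquidity parameter b and the function names are ours):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of security i: exp(q_i/b) / sum_j exp(q_j/b)."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, i, x, b=100.0):
    """Cost of buying x units of security i: C(q + x e_i) - C(q)."""
    q_after = list(q)
    q_after[i] += x
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)
```

The prices of all securities sum to 1, so they can be read as outcome probabilities; buying pushes the price of the purchased security up, which is why the quoted price holds only for an infinitesimal quantity.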
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Agent-Based Simulation Architecture</head><p>In order to explore empirically the application of artificial prediction markets to syndromic surveillance, we have developed an agent-based simulation, which we now describe, followed by some preliminary results in section 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Agents and Strategies</head><p>Our model integrates the data and beliefs of different data streams by simulating an artificial prediction market to predict the outcome of an event, which in this case is the disease activity level on a specific date. Each data stream comprises quantitative values of a particular disease activity level for a specific place over different periods of time. Each agent is responsible for one data stream and trades securities in various prediction markets based on its capital and its belief about the disease activity level at the market event date. Trading agents will in due course (see below) learn from each market based on the revenue they receive and the losses they make when the market closes. Consequently, they can update their strategy, beliefs and confidence for future markets.</p><p>The system comprises a market-maker that uses a scoring rule to calculate the market price for each security in the market, a data distributor that provides each agent with the data stream for which it is responsible, and a set of trading agents. The simulation mechanism, specified in Algorithm 1, is as follows. At the beginning of the simulation, all the trading agents are awarded an equal amount of initial capital. For each training example, a prediction market (say, the prediction market for week T) is established. At this time, the Data Distributor Agent provides the available data to the trading agents, according to their roles. The trading agents then participate in the market according to their available capital, beliefs, and trading strategies. Agents can trade any number of securities before the deadline for the closing of the market. Once the market deadline is reached, the market-maker reveals the winning security and rewards the winning security holders with $1 for each winning security they own. These revenues are added to their capital. 
However, agents who own losing securities lose capital equal to the amount spent on purchasing them.</p><p>During the simulation, agents with superior data, strategies and analysis algorithms are likely to accumulate greater capital and hence affect market prices and, eventually, the outcome. In other words, agents that are important - by these metrics - are identifiable and have greater influence in predicting the outcome of the event. This increased influence of the more successful agents should increase the accuracy of the system overall: the agents are not in competition per se, so we do not care which agents are better, but we do want the better ones to have more effect on the prediction mechanism.</p><p>The first agent strategy is based on zero intelligence <ref type="bibr" target="#b6">[7]</ref>, and so has no scope for learning: buy-sell and security choices are random, subject to the constraint of not trading at a loss. For the second strategy, we add a basic learning mechanism, following the design of zero intelligence plus <ref type="bibr" target="#b2">[3]</ref>, in which agents update their trading strategy and beliefs based on the reward they received from the market in order to improve their reward in future markets. This is achieved by incorporating a simple machine learning mechanism (Widrow-Hoff) through which they adapt their individual behaviour to the market trend.</p></div>
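The settlement step at the close of each market - $1 per winning security, with money spent on losing securities forfeited - can be sketched as follows (a minimal sketch; the data structures and names are our assumptions, not taken from the implementation):

```python
def settle_market(holdings, winning_security, spent):
    """Settle a closed market: each unit of the winning security pays $1;
    money spent on losing securities is forfeited.

    holdings: dict mapping agent name to {security: quantity}
    spent:    dict mapping agent name to total spent in this market
    Returns each agent's net change in capital."""
    delta = {}
    for agent, agent_holdings in holdings.items():
        revenue = 1.0 * agent_holdings.get(winning_security, 0)
        delta[agent] = revenue - spent.get(agent, 0.0)
    return delta
```

An agent holding 3 winning securities after spending $2.00 in total nets +$1.00, while an agent holding only losing securities loses everything it spent, which is what drives capital towards the better-informed agents.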
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Market Instantiation</head><p>Different data sources have different timeliness in detecting a disease outbreak. For example, some data sources, such as social media, can signal a disease activity level perhaps two weeks earlier than physician data. Therefore, for the system to be capable of forecasting the outcome at the earliest possible moment, rather than waiting for all the agents' data to arrive before starting to predict, we run multiple concurrent markets for consecutive prediction weeks. For example, if the simulation week number is 1, then 4 further markets, for weeks 2 to 5, are also open. Once the deadline of the first market (week 2) is reached, that market closes and a new market is opened after the last one (week 6). Consequently, agents who have data for those later markets can start trading in them earlier and take advantage of cheaper prices, which leads to market prices being updated as early as possible. In addition, all data, with timeliness ranging from 4 weeks to one day before the event date, are incorporated in each market, and at the same time agents can use the knowledge gained from previous markets when predicting the outcome of a given market. For the sake of simplicity, Algorithm 1 considers just one market, as does our current implementation.</p></div>
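The rolling window of concurrent weekly markets described above can be sketched as follows (a hypothetical sketch; the window size and the bookkeeping are our assumptions, and the actual implementation, as noted, currently runs one market at a time):

```python
from collections import deque

def run_rolling_markets(n_weeks, window=5):
    """Maintain a sliding window of concurrently open weekly markets.

    Each simulated week, the oldest open market reaches its deadline and
    is settled, and a new market for the next future week is opened, so
    agents whose data arrives early can trade in later markets ahead of
    slower data sources."""
    open_markets = deque(range(1, window + 1))  # e.g. weeks 1..5 open
    closed = []
    for _week in range(1, n_weeks + 1):
        closed.append(open_markets.popleft())      # settle oldest market
        open_markets.append(open_markets[-1] + 1)  # open next future week
    return closed, list(open_markets)
```

With a window of 5, after 3 simulated weeks the markets for weeks 1-3 have been settled and the markets for weeks 4-8 are open.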
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Preliminary Results</head><p>The configuration of the controlling parameters of our system needs thorough investigation through a large number of simulation experiments. These settings include, but are not limited to, constraints on the number of market participants, the time required for each market, the initial capital for each agent, the type of diseases monitored, the constraints and requirements for agents to trade, and the minimum number of training examples required. We have only just begun to explore this parameter space.</p><p>As discussed in Section 1, there is a vast number of syndromic surveillance data sources. Much research has been done to compare these data sources with the actual disease activity level for a specific disease in a particular place. For example, Culotta <ref type="bibr" target="#b4">[5]</ref> stated that he could track influenza rates in the United States using Twitter messages with 95% correlation. Corley <ref type="bibr" target="#b3">[4]</ref> could track flu rates in the United States with a correlation of 76% by examining the proportion of blogs containing the two keywords "influenza" and "flu". Google Flu Trends <ref type="bibr" target="#b5">[6]</ref> can predict flu activity levels with a 97% correlation by analysing queries sent to the Google search engine.</p><p>In the first two batches of experiments, we have tried two well-known trading strategies for the trading agents: Zero Intelligence (ZI) <ref type="bibr" target="#b6">[7]</ref> and Zero Intelligence Plus (ZIP) <ref type="bibr" target="#b2">[3]</ref>, the former to provide a baseline behaviour and the latter to investigate the effect of a simple learning mechanism on the trading decision. We now discuss each of these in more detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Preliminary Results with ZI</head><p>In the first set of experiments, agents use a Zero Intelligence strategy (adapted from <ref type="bibr" target="#b6">[7]</ref>) when trading securities in the market. In this model, agents assign a limit price, according to their data, to each security in the market. On each day of the market, they choose one security at random and purchase a random quantity of it if its limit price is higher than its market price, or sell a random quantity of it if its limit price is lower than its market price. In both situations, the agents respect the maximum number of securities that can be traded, based on their available capital and the securities they own. LMSR, described in Section 2, is used as the market scoring rule, as it provides infinite liquidity <ref type="bibr" target="#b7">[8]</ref> and does not suffer in thin markets where the number of traders is small. Each agent in the experiment is awarded $10 at start-up, and one market is established for each training example. The winning security is chosen based on the United States influenza-like illness rate from 30 September 2002 to 01 September 2003<ref type="foot" target="#foot_0">3</ref>. Each market offers eight securities, one for each standard deviation from the mean, covering from −4 to +4 standard deviations.</p><p>Figure <ref type="figure">1</ref> shows how extending the period of a prediction market can help agents to predict the outcome of the event better. It demonstrates that accuracy rises and the mis-classification rate falls as the duration of the market increases. As can be seen from the figure, accuracy increased from 82% for 10-day markets to 93% for 90-day markets, after which it is almost flat. 
From this, we conclude that 110 days is a sufficient period for each market, and our subsequent experiments at this stage therefore use this market length. Longer-duration markets give agents sufficient time to trade enough of the desired securities and to approach the equilibrium price more closely.</p><p>The purpose of the experiments reported in this paper is primarily to establish confidence in the behaviour of the simulation, by providing tailored data feeds with known properties and then observing whether the agents achieve their expected level of performance, given that data and their (known) strategy. These experiments have a population of 20 agents, each receiving data from data streams with a specified correlation with the United States influenza-like illness rate. The agent names in the following figures represent the type of data the agent is receiving. For instance, a95 denotes an agent that receives data with a 95% correlation with the United States influenza-like illness rate.</p><p>Since all the agents in these experiments are essentially identical in terms of strategy, the differences in their data sources should lead to them obtaining different amounts of revenue in each market. Figure <ref type="figure">2</ref> shows their revenue when each market ends. As the figure shows, agent a100, which has complete information about all the events of the experiment, earns high revenue in all markets with the exception of 31/07/2003. In that case, it was not making the wrong choice - it cannot - but it tried to purchase a large number of securities and, since the price of a security increases as a result of its purchase (as explained in Section 2), the agent did not have sufficient capital to complete the deal. In other words, for agents without perfect information, the enforced random choice of security to trade (the ZI strategy) means the agent cannot select the most appropriate one, but rather the one that chance dictates, and hence it makes a loss.</p><p>Figure <ref type="figure" target="#fig_2">3</ref> shows the capital held by each agent at the end of each market in one experiment, and Figure <ref type="figure" target="#fig_3">4</ref> shows the average capital of the agents over 50 runs. The main observation from these figures is that, as expected, agents with higher-quality data achieve higher levels of revenue. As can be seen, agent a100 accumulates more capital than the other agents, even those with high-quality data such as a99. The reason is that a100 never makes a mistake while the other agents do, and as soon as an agent predicts an outcome, it dedicates most of its capital to purchasing the corresponding security. Therefore, once an agent predicts a wrong outcome, it loses all its capital, while agent a100 keeps earning revenue in each market, accumulates more capital, invests more in upcoming markets and earns more revenue again. Agent a100 also causes the price of the correct security to increase rapidly, as it purchases a large quantity of it, and therefore makes it difficult for other agents to buy significant quantities of that security, due to its high price.</p><p>Clearly, more comprehensive experimentation is necessary, backed up with appropriate statistical confidence tests. In this section, we have only used the most basic of strategies, and one that has known flaws <ref type="bibr" target="#b2">[3]</ref>. However, it provides both a useful baseline performance and a setting in which initial hypotheses about the effectiveness of the prediction market model can be validated (such as the agent with 100% correlated data dominating the market and all others losing all their investments).</p></div>
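The ZI decision rule described at the start of this section - choose a security at random, buy below the limit price, sell above it - can be sketched as follows (a hypothetical sketch; for simplicity the affordable quantity is estimated at the current quoted price, ignoring the LMSR price movement during the purchase):

```python
import random

def zi_trade(limit_prices, market_prices, capital, holdings, rng=random):
    """One zero-intelligence step: pick a security at random, then buy a
    random affordable quantity if the agent's limit price exceeds the
    market price, or sell a random part of its holding if the limit price
    is below the market price."""
    i = rng.randrange(len(limit_prices))
    if limit_prices[i] > market_prices[i]:
        affordable = int(capital // market_prices[i])  # budget constraint
        return ("buy", i, rng.randint(0, affordable))
    if market_prices[i] > limit_prices[i] and holdings[i] > 0:
        return ("sell", i, rng.randint(1, holdings[i]))
    return ("hold", i, 0)
```

Because both the security and the quantity are random, the only "intelligence" is the budget constraint and the refusal to trade at a loss relative to the agent's limit price, which is exactly what makes ZI a useful baseline.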
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Preliminary Results with ZIP</head><p>In the second set of experiments, we changed the agent strategy from ZI to ZIP, whereby agents use data about trends in the market to adjust their behaviour so that it is less random (as in ZI) and more in line with the market valuation of a given security. Although ZIP and its variants have been shown to be effective strategies in terms of profit-making, there are two reasons why this approach is likely to be ineffective in the context of prediction markets:</p><p>1. The ZIP strategy depends upon both buyer and seller employing it, but in the prediction market the two parties are the buyer and the market-maker, of which the latter has no interest in profit and no strategy as such. Consequently, only one party in the market is 'learning'. This still has a positive effect, as discussed below, but starts to underline the difference between trading markets (with bilateral strategies) and prediction markets (with unilateral strategies). 2. The ZIP strategy aims at trading for profit regardless of the (financial) instrument being traded, leading to the establishment of an equilibrium price, whereas the point of a prediction market is to choose the right instrument, rather than the currently most profitable one.</p><p>The experiments are run with the same data as for ZI. Figures <ref type="figure" target="#fig_5">6 and 7</ref> show the results from a randomly chosen 110-day market, as was done for ZI. It is notable that ZIP achieved higher accuracy - nearly 97% - with market durations of more than 30 days than in the ZI experiment. However, as before, agent a100 dominates the market.</p><p>The story in terms of revenue (Figure <ref type="figure">6</ref>) is much the same as for ZI, although a99 stops making a profit much sooner with ZIP. 
As can be seen from the figure, the agent obtains revenue until 04/11/2002, the first time that it makes a mistake and loses the majority of its capital. The agent then has little remaining capital; it continues earning money in the following two markets, but its second mistake (on 25/11/2002) bankrupts it, after which it cannot invest further. This scenario applies to all the other agents and causes a100 to dominate the market, as it never makes a mistake.</p></div>
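The dampening learning effect noted here comes from the Widrow-Hoff (delta) rule mentioned in Section 3.1, by which an agent nudges its beliefs towards the observed market trend. A minimal sketch (the learning rate, feature encoding and function name are our assumptions, not taken from the implementation):

```python
def widrow_hoff_update(weights, features, target, lr=0.05):
    """Widrow-Hoff (delta) rule: w := w + lr * (target - prediction) * x.

    weights:  the agent's current belief weights
    features: the input signal (e.g. recent market prices)
    target:   the observed outcome or market valuation
    Returns the updated weights."""
    prediction = sum(w * x for w, x in zip(weights, features))
    error = target - prediction
    return [w + lr * error * x for w, x in zip(weights, features)]
```

Repeated updates shrink the prediction error geometrically for a fixed input, which is the smoothing behaviour observed in the ZIP results relative to ZI.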
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Related Work</head><p>Many syndromic surveillance systems exist worldwide, each designed for a specific country, region or state <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b16">17]</ref>. We refer to them as traditional, since they do not utilise internet-based data. While these systems can detect an outbreak with high accuracy, they suffer from slow response times. For example, the Centers for Disease Control and Prevention (CDC) publishes USA national and regional data typically with a 1-2 week reporting lag. It monitors over 3,000 health providers nationwide to report the proportion of patients seen that exhibit influenza-like illness (ILI)<ref type="foot" target="#foot_1">4</ref> <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>.</p><p>On the other hand, modern syndromic surveillance systems draw on internet-based data, such as search engine queries, health news, and people's posts on social networks, to predict an outbreak earlier <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b3">4]</ref>, albeit with necessarily lower precision. While some of them claim to achieve high accuracy, they are vulnerable to false alarms <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b0">1]</ref> because they depend on a single data stream and forgo the benefits of fusing different data sources. Ginsberg et al <ref type="bibr" target="#b5">[6]</ref> state, regarding Google Flu Trends, that "Despite strong historical correlations, our system remains susceptible to false alerts caused by a sudden increase in ILI-related queries. 
An unusual event, such as a drug recall for a popular cold or flu remedy, could cause such a false alert".</p><p>To the best of our knowledge, there is no system that fuses both traditional and internet-based data sources. This could be due to the different timescales of these data sources and the consequent issues of appropriate synchronisation. Prediction markets can overcome this problem, as traders can trade securities as soon as they receive new information, affecting the price and hence the probability of an event outcome. It is interesting to note that Polgreen et al <ref type="bibr" target="#b13">[14]</ref> report on the use of a prediction market with human health-care expert participants to forecast infectious disease activity 2-4 weeks in advance.</p><p>Moreover, internet-based systems are only suitable for places where sufficient source data is available. For example, Twitter-based systems cannot achieve high accuracy in places where Twitter use is uncommon, if it is accessible at all. In addition, even where sufficient data is available, system accuracy cannot be guaranteed worldwide, since people's behaviour changes from place to place, reflecting differing (digital) cultures. For example, people in one city may visit a physician as soon as they encounter the symptoms of a disease and not trust online information, while people in another city may defer visiting a doctor and seek out online information in order to treat themselves in the early stages of their sickness. Furthermore, people's behaviour may change over time. For example, a particular social medium may become less popular and cede its role to newer technology over time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Discussion</head><p>Since we are at an early stage of this research, a substantial part of the work is still to come. We have numerous ideas that have yet to be implemented, including: (i) the learning capability of agents, (ii) consideration of the confidence of agents, (iii) consideration of the different timeliness of data streams, and (iv) the effect of a heterogeneous population of agents with different trading strategies and risk prediction models, among other characteristics. The very preliminary results we have meet our broad expectations for the behaviour of prediction markets, but it is too early to say whether they can be a general-purpose tool with useful levels of precision and recall across a range of domains. The ZI strategy, being essentially random under the constraint of not making a loss, establishes a useful performance baseline, as well as a framework against which to validate the basic system hypotheses. The ZIP strategy, while appropriate for bilateral markets seeking to establish equilibrium prices, is inappropriate - at least, as conventionally formulated - for prediction markets, although the dampening effect of the learning mechanism does lead to higher prediction rates and smoother overall behaviour.</p><p>We welcome feedback on the appropriateness of the approach and the above directions for development, as well as on alternative mechanisms that might be incorporated in the prediction market setting.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Algorithm 1</head><label>1</label><figDesc>Agent-Based Simulation. 1 Give start-up capital to each agent; 2 Simulation-Current-Week C; 3 Market-Date T; 4 for T ← 1 to end do 5 Data Distributor disseminates data accessible by week C to each agent according to agent expertise; 6 Start Prediction Market for Week T; 7 while market deadline is not reached do 8 Wait(); 9 here, agents decide the level of disease activity in week T and trade securities according to their beliefs and strategy; 10 end 11 End Prediction Market; 12 Reveal the winning security (based on the label of the training example); 13 Each agent: new capital ← previous capital + revenue gained in this market − amount spent on purchasing securities; 14 According to the utility received in this market, agents update their trading strategy and beliefs;</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 .Fig. 2 .</head><label>12</label><figDesc>Fig. 1. ZI: Comparing accuracy (s.d.: 0.017-0.047) and mis-classification (s.d.: 0.101-0.289) on the y-axis vs. duration of the prediction market (x-axis). Each data point is the average of 50 experiments with same parameter settings</figDesc><graphic coords="8,134.77,134.91,190.21,106.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. ZI: Comparing capital of agents (y-axis) at the end of each market (x-axis), for an example run chosen at random.</figDesc><graphic coords="8,134.77,385.04,190.21,147.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. ZI: Comparing capital of agents (y-axis) at the end of the experiment (averaged over 50 runs).</figDesc><graphic coords="8,134.77,533.12,190.20,110.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 5 .Fig. 6 .</head><label>56</label><figDesc>Fig. 5. ZIP: Comparing accuracy (s.d.: 0.010-0.018) and mis-classification (s.d.: 0.062-0.109) on the y-axis vs. duration of the prediction market (x-axis). Each data point is the average of 50 experiments with same parameter settings</figDesc><graphic coords="10,134.77,152.32,224.79,125.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. ZIP: Comparing capital of agents (y-axis) at the end of each market (x-axis) for an example run chosen at random.</figDesc><graphic coords="10,134.77,391.16,224.78,105.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 8 .</head><label>8</label><figDesc>Fig. 8. ZIP: Comparing capital of agents (y-axis) at the end of the experiment (averaged over 50 runs).</figDesc><graphic coords="10,134.77,497.86,224.78,128.57" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">http://www.cdc.gov/flu/weekly/fluviewinteractive.htm</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">http://www.cdc.gov/flu/weekly/fluactivity.htm</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Predicting flu trends using twitter data</title>
		<author>
			<persName><forename type="first">Harshavardhan</forename><surname>Achrekar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Avinash</forename><surname>Gandhe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ross</forename><surname>Lazarus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ssu-Hsin</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Benyuan</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="702" to="707" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Google trends: a web-based tool for real-time surveillance of disease outbreaks</title>
		<author>
			<persName><forename type="first">Herman</forename><forename type="middle">Anthony</forename><surname>Carneiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eleftherios</forename><surname>Mylonakis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Clinical infectious diseases</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="1557" to="1564" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Minimal-intelligence agents for bargaining behaviours in market-based environments</title>
		<author>
			<persName><forename type="first">Dave</forename><surname>Cliff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Janet</forename><surname>Bruten</surname></persName>
		</author>
		<idno>HPL-97-91</idno>
		<ptr target="retrieved20071115" />
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
		<respStmt>
			<orgName>Hewlett-Packard Laboratories</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Text and structural data mining of influenza mentions in web and social media</title>
		<author>
			<persName><forename type="first">Courtney</forename><forename type="middle">D</forename><surname>Corley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diane</forename><forename type="middle">J</forename><surname>Cook</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Armin</forename><forename type="middle">R</forename><surname>Mikler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Karan</forename><forename type="middle">P</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International journal of environmental research and public health</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="596" to="615" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Detecting influenza outbreaks by analyzing twitter messages</title>
		<author>
			<persName><forename type="first">Aron</forename><surname>Culotta</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1007.4748</idno>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Detecting influenza epidemics using search engine query data</title>
		<author>
			<persName><forename type="first">Jeremy</forename><surname>Ginsberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthew</forename><forename type="middle">H</forename><surname>Mohebbi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rajan</forename><forename type="middle">S</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lynnette</forename><surname>Brammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><forename type="middle">S</forename><surname>Smolinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Larry</forename><surname>Brilliant</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">457</biblScope>
			<biblScope unit="issue">7232</biblScope>
			<biblScope unit="page" from="1012" to="1014" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality</title>
		<author>
			<persName><forename type="first">Dhananjay</forename><forename type="middle">K</forename><surname>Gode</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shyam</forename><surname>Sunder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of political economy</title>
		<imprint>
			<biblScope unit="volume">101</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="119" to="137" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Combinatorial information market design</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Hanson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Systems Frontiers</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="107" to="119" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Logarithmic market scoring rules for modular combinatorial information aggregation</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Hanson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Prediction Markets</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="15" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Information aggregation and manipulation in an experimental market</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Hanson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ryan</forename><surname>Oprea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Porter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Economic Behavior &amp; Organization</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="449" to="459" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Disease outbreak detection system using syndromic data in the greater Washington DC area</title>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">D</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julie</forename><forename type="middle">A</forename><surname>Pavlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jay</forename><forename type="middle">L</forename><surname>Mansfield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sheilah</forename><surname>O'Brien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Louis</forename><forename type="middle">G</forename><surname>Boomsma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yevgeniy</forename><surname>Elbert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><forename type="middle">W</forename><surname>Kelley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American journal of preventive medicine</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="180" to="186" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A strategic model for information markets</title>
		<author>
			<persName><forename type="first">Evdokia</forename><surname>Nikolova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rahul</forename><surname>Sami</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th ACM conference on Electronic commerce</title>
				<meeting>the 8th ACM conference on Electronic commerce</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="316" to="325" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Experimental surveillance using data on sales of over-the-counter medications: Japan, November 2003</title>
		<author>
			<persName><forename type="first">Yasushi</forename><surname>Ohkusa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mika</forename><surname>Shigematsu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kiyosu</forename><surname>Taniguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nobuhiko</forename><surname>Okabe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">MMWR Morb Mortal Wkly Rep</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="47" to="52" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Use of prediction markets to forecast infectious disease activity</title>
		<author>
			<persName><forename type="first">Philip</forename><forename type="middle">M</forename><surname>Polgreen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Forrest</forename><forename type="middle">D</forename><surname>Nelson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><forename type="middle">R</forename><surname>Neumann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Clinical Infectious Diseases</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="272" to="279" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m">Emergency department syndromic surveillance system: England &amp; Northern Ireland</title>
				<imprint>
			<date type="published" when="2013-09">September 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Prediction markets and the financial &quot;wisdom of crowds&quot;</title>
		<author>
			<persName><forename type="first">Russ</forename><surname>Ray</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Behavioral Finance</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="2" to="4" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="retrieved20131216" />
		<title level="m">Communicable and respiratory disease report for England &amp; Wales</title>
				<imprint>
			<date type="published" when="2013-10">October 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">The use of twitter to track levels of disease activity and public concern in the U.S. during the Influenza A H1N1 Pandemic</title>
		<author>
			<persName><forename type="first">Alessio</forename><surname>Signorini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><forename type="middle">Maria</forename><surname>Segre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Philip</forename><forename type="middle">M</forename><surname>Polgreen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">e19467</biblScope>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">Erik</forename><surname>Snowberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Justin</forename><surname>Wolfers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eric</forename><surname>Zitzewitz</surname></persName>
		</author>
		<ptr target="http://www.nber.org/papers/w18222" />
		<title level="m">Prediction markets for economic forecasting</title>
				<imprint>
			<date type="published" when="2012-07">July 2012</date>
		</imprint>
		<respStmt>
			<orgName>National Bureau of Economic Research</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Working Paper 18222</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Technical description of rods: a real-time public health surveillance system</title>
		<author>
			<persName><forename type="first">Fu-Chiang</forename><surname>Tsui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremy</forename><forename type="middle">U</forename><surname>Espino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Virginia</forename><forename type="middle">M</forename><surname>Dato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Per</forename><forename type="middle">H</forename><surname>Gesteland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judith</forename><surname>Hutman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">M</forename><surname>Wagner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="399" to="408" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
