<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Saliency Model Predicts Fixations in Web Interfaces</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Jeremiah</forename><forename type="middle">D</forename><surname>Still</surname></persName>
							<email>jstill2@missouriwestern.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Psychology</orgName>
								<orgName type="institution">Missouri Western State University</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christopher</forename><forename type="middle">M</forename><surname>Masciocchi</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Psychology</orgName>
								<orgName type="institution">Iowa State University</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">A Saliency Model Predicts Fixations in Web Interfaces</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B33188386FB707EDA814122D1CCE24BA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Saliency</term>
					<term>Interface Development</term>
					<term>Design</term>
					<term>Model</term>
					<term>H.1.2 User/Machine Systems</term>
					<term>I.2.10 Vision and Scene Understanding</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>User interfaces are visually rich and complex. Consequently, it is difficult for designers to predict which locations will be attended to first within a display. Designers currently depend on eye tracking data to determine fixated locations, which are naturally associated with the allocation of attention. A computational saliency model can make predictions about where individuals are likely to fixate. Thus, we propose that the saliency model may facilitate successful interface development during the iterative design process by providing information about an interface's stimulus-driven properties. To test its predictive power, the saliency model was used to render 50 web page screenshots; eye tracking data were gathered from participants on the same images. We found that the saliency model predicted fixated locations within web page interfaces. Thus, using computational models to determine regions high in visual saliency during web page development may be a cost effective alternative to eye tracking.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>INTRODUCTION Saliency, Search and Design</head><p>Some visual designs guide users to the locations of important information, while others mislead users. Visual saliency, inherent in a complex interface, cues users to certain spatial regions over others. If employed correctly by designers, salient cues may reduce information search times and facilitate task completion <ref type="bibr">[cf. 18]</ref> by implicitly communicating to users where they ought to start their visual search <ref type="bibr" target="#b15">[16]</ref>. In order to be considered salient, a feature must be visually unique relative to its surroundings. For example, text that is underlined amongst nonunderlined text "pulls" the reader's attention to it. However, many interfaces, like web pages, are rich with visual media, such as text, pictures, logos and bullets, making the determination of salient features a complicated task. Given this complexity, designers are often left making best guesses about which spatial regions are salient within an interface. Previous research on visual search in web pages defines entry points as regions within a page where users typically begin their visual search. In this article, we will argue that these entry points are heavily influenced by visual saliency, that is, users will often begin searching web pages at the location of highest saliency. In related research examining cognitive processing these implicit and low level cues that guide a viewer's visual search are referred to as stimulus-driven properties -certain characteristics of the stimulus quickly "drive", or direct attention to certain locations over others. Currently, no consensus has been reached as to which visual characteristics, or stimulusdriven properties, make for effective entry points.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Measuring Overt Attention through Eye Tracking</head><p>Given the over abundance of visual information in our environments and our working memory limitations, attention must be selective, only allowing a limited amount of information into consciousness, for our cognitive system to function properly <ref type="bibr" target="#b7">[8]</ref>. It has been suggested that the programming of eye movements has a direct and natural relationship with visual attention in that attention is often directed to whichever item is fixated <ref type="bibr" target="#b9">[10]</ref>. Only information that falls directly on the fovea during a fixation is encoded with high resolution and only a limited amount of this high resolution information is processed, while the rest falls into rapid decay [see 4]. Thus, it is critical that users fixate on relevant visual information or that content will not reach users' awareness.</p><p>It is no surprise then, that designers often monitor eye movements to evaluate a web page's saliency, or entry points. Eye tracking systems allow designers to test whether their web pages actually guide users' fixations to important locations. However, eye tracking has a number of recognized costs. Eye tracking systems are often expensive, not easily accessible, time consuming to employ and they gradually lose calibration <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b14">15]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stimulus and Goal Driven Searches</head><p>In this article we investigate the influence of stimulusdriven saliency on attention within the context of a web page. Stimulus-driven saliency guides attention quickly and without explicit intention, thus some might question its role during a purposeful search on a web page. There is ample evidence to suggest that goals do influence the guidance of attention. For example, web page eye tracking research has shown that changing the task (or goal) during a search, or seeking navigational or informational indicators, changes observers' fixation patterns <ref type="bibr" target="#b2">[3]</ref>. Additional research has shown that, given enough time, expectations can cause a consistent pattern of fixations -F-shaped pattern or reading patterns (e.g., left-right/top-bottom) <ref type="bibr" target="#b13">[14]</ref>. However, these goal-driven effects interact with stimulus-driven effects, making the stimulus-driven influences more difficult to examine <ref type="bibr">[cf. 11]</ref>. Also, it is often the case that only a few seconds are spent on a web page (even with a goal in mind) making the understanding of stimulus-driven processing, which is believed to influence attention very rapidly, critical. For instance, when searching for information observers often only skim through approximately 18 words, and spend 4 to 9 seconds, per web page <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b11">12]</ref>. One way to investigate the pure influence of stimulus-driven guidance is to use a computational saliency model designed to make predictions about what properties or features of a web page attention ought to select within complex media, or scenes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Predicting Fixations through a Saliency Model</head><p>Visually salient items often draw observers' attention. To better understand the influences of saliency, or stimulusdriven selection, on attention, <ref type="bibr" target="#b8">Koch and Ullman (1985)</ref> developed a model to compute an image's visual saliency without any semantic input (i.e., meaning of objects). Their model is based on the assumption that eye movement programming is driven by local image contrast leading to logical serial searches through complex spatial environments. These serial searches are guided by low level primitives extracted from a scene. The saliency model was developed under the pretense that low level visual features (i.e., color, light intensity, orientation) are processed preattentively in humans and, in turn, rapidly influence overt attention. Thus, the underlying assumption is that visual saliency is used to guide the fovea to unique areas within a scene that might provide the most efficient processing <ref type="bibr" target="#b4">[5]</ref>.</p><p>The computational model is implemented on a computer using digital pictures as stimuli to produce a pre-attentional or "saliency" map <ref type="bibr" target="#b8">[9]</ref>. To create a saliency map, the model receives input from pixels within a digital picture. Then, it extracts three feature channels -color, intensity, orientation -at eight different spatial scales. These three channels are normalized and differences of center-surround are calculated for each separate channel. The separate channels are additively combined to form a single saliency map. An image's saliency map provides predictions of where spatial attention should be deployed [for detailed explanations refer to 6, 13]. In essence, the model makes predictions about which regions in an image have the most and least likely chance to be attended based purely on stimulusdriven properties. The saliency model is available for download from &lt;SaliencyToolbox.net&gt; as a collection of Matlab functions and scripts <ref type="bibr" target="#b16">[17]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Testing a Saliency Model within Web Pages</head><p>Designers recognize the need to predict and identify where users' attention will be guided on a web page. For example, it is well known that one should avoid using poor designs that increase the likelihood of users missing important interface features such as branding, navigational or informational symbols. But, using an eye tracking system to monitor guidance of attention -as is traditional -can be expensive, difficult to employ and time consuming within the context of a practical iterative design process. Thus, we investigated the utility of a computational saliency model in predicting the guidance of attention in web page screenshots. This new method is benchmarked and compared to another set of data in which participants' eye movements were tracked while they viewed the same web page screenshots.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>METHOD Participants</head><p>The data from eight undergraduate participants are examined. All participants reported extensive web site experience.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stimuli and Equipment</head><p>The images were 50 screenshots of various web pages. Each participant saw each screenshot only once.</p><p>Participants' eye movements were recorded by an ASL eye tracker with a sampling rate of 120 Hz. Screenshots were shown on a Samsung LCD monitor, which had a viewing area of approximately 38.0 cm × 30.0 cm. A chin rest maintained a viewing distance of approximately 80 cm. Images subtended approximately 26.7 0 x 21.2 0 visual angle.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Procedure</head><p>Participants first read and signed an informed consent document, and were then seated in front of the monitor with their chin in the chin rest. The experiment began and concluded with a 9-point calibration sequence to calibrate the eye tracker and estimate the amount of tracking error.</p><p>Participants were told that they would view a series of web page screenshots, and that they should, "look around the image like you normally would if you were surfing the internet." A fixation cross was presented at the center of the screen to signal the beginning of a trial. After a delay of approximately 1 second, a randomly selected web page screenshot was presented for 5 seconds. The fixation cross then reappeared to signal the beginning of the next trial.</p><p>The experiment took approximately 15 minutes to complete. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>RESULTS</head><p>We used a similar technique to Parkhurst, Law, and Niebur (2002) to determine whether salient regions in web pages were fixated more often than would be expected by chance. Specifically, the values of the saliency map at the location of each participant's first ten fixations were extracted. For example, the x, y coordinates of the first fixation for each participant was determined for every screenshot and the value at the same location in the corresponding saliency map was extracted. This process was repeated for fixations two through ten. These values formed the Observed Distribution of participant responses (Figure <ref type="figure" target="#fig_1">2</ref>).</p><p>To determine the likelihood that salient regions would be fixated by chance, we repeated the process used to find the Observed Distribution after rearranging the fixations and saliency maps for all screenshots. For example, the values from the saliency map for screenshots 2 to 50 were extracted at the fixated locations from screenshot 1. The saliency values of all other screenshots were extracted at the location of the first ten fixations for all subjects for each screenshot. These values formed the Shuffled Distribution. Figure <ref type="figure" target="#fig_1">2</ref> shows the means for the Observed and Shuffled Distributions of the first ten fixations for each screenshot. An analysis of variance was conducted with fixation number (1-10) as a within-subjects variable and distribution (observed, shuffled) as a between-subjects variable, to determine whether any differences between the distributions varied as a function of fixation number. The main effect of fixation number was reliable, F(9, 882) = 6.39, MSE = 19.03, p &lt; .001. Pairwise comparisons revealed that the values for the first fixation were higher than all other values, and that the values of the tenth fixation were lower than all other values. This indicates that early fixations tend to occur at regions of higher salience than those of later fixations. More importantly, the main effect of distribution was also reliable, F(1, 98) = 4.86, MSE = 397.95, p &lt; .05, indicating that the values of Observed Distribution were larger than those of the Shuffled Distribution. This difference confirms that participants fixated regions higher in saliency than would be expected by chance, showing that the saliency model is effective at predicting fixations. Distribution x Fixation number was not significant, F &lt; 1. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Two examples of web page screenshots and their corresponding saliency maps. Creation of saliency maps Saliency maps were created using the algorithms developed by Itti, Koch, and Niebur (1998). The model was run on each image individually and the output was normalized by dividing all values by the maximum value for that map, and multiplying all values by 100. To simplify data analysis, the size of the saliency maps was increased to be identical to the size of the screenshots (1024 x 768 pixels). As described in the Introduction, these saliency maps are 2-D representations of areas in the screenshot that show the relative saliency of locations in the image. Figure 1 shows an example of two web page screenshots and their corresponding saliency maps. 
Low values (dark areas in the image) indicate regions of the image that are low in saliency, while high values (light areas in the image) indicate regions high in saliency.</figDesc><graphic coords="3,45.59,107.95,242.95,193.99" type="bitmap" /></figure>
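<div xmlns="http://www.tei-c.org/ns/1.0"><p>The normalization and resizing steps described in the Creation of Saliency Maps subsection can be sketched as follows. This is an illustrative reconstruction, not the original analysis script; the function name rescale_map and its parameters are ours.</p><p><code>
# Normalize a raw saliency map (a 2-D NumPy array) to a 0-100 scale and
# enlarge it to the screenshot size (1024 x 768 pixels), as described above.
from scipy.ndimage import zoom

def rescale_map(raw_map, out_width=1024, out_height=768):
    normalized = 100.0 * raw_map / raw_map.max()      # percent of the map's peak value
    zoom_factors = (out_height / raw_map.shape[0],    # rows
                    out_width / raw_map.shape[1])     # columns
    return zoom(normalized, zoom_factors, order=1)    # bilinear enlargement
</code></p></div>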
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Mean saliency values for the observed ('X') and shuffled ('o') distributions for the first ten fixations.</figDesc><graphic coords="3,308.39,516.37,268.09,175.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The method used to create this distribution controls for spatial biases that may inflate correlations between fixations and salient regions. If the values of the Shuffled Distribution are larger than those of the Observed Distribution, it would indicate that participants fixated on regions that are lower in saliency than what is expected by chance. If, however, the values of the Observed Distribution are larger than those in the Shuffled Distribution, it would indicate that participants fixated regions that are higher in saliency than what is expected by chance.</figDesc><table /></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>DISCUSSION</head><p>Eye tracking is a commonly employed method for examining the guidance of overt attention within interfaces (e.g., web pages). However, it has several drawbacks. We propose that a web page's saliency, stimulus-driven properties, may be revealed through the use of a computational saliency model. Therefore, we compared the performance of the model to eye tracking data collected from human observers. We were able to demonstrate that, indeed, the saliency model predicts the deployment of overt attention within a web page interface.</p><p>Previous research has shown a modest correlation between saliency and eye fixations in natural and artificial scenes <ref type="bibr" target="#b12">[13]</ref>. We have extended this research by showing that even in web pages, which may contain more semantic information (e.g., meaningful: text or images) than nature scenes, fixations are correlated with saliency. Specifically, participants were more likely to fixate on regions in the web pages with a higher saliency value than predicted by chance.</p><p>Our data suggest that saliency maps alone can provide reasonable predictions of overt attention. In addition, saliency maps can be generated quickly, and require no additional equipment or participants. Even with these positive attributes, one may be hesitant to abandon eye tracking altogether. Our recommendation to designers is to choose the method most appropriate for your project given your constraints and needs. It is often the case that developing effective interfaces requires many levels of analysis. For example, during the early formative testing process it would be appropriate to begin by using the saliency model to ensure that regions identified as being important are also visually salient. Then, during the 'final' prototype development stage, employ the eye tracking method to verify that your participants are actually looking at the critical elements in the design.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Usability tool for analysis of web designs using mouse tracks</title>
		<author>
			<persName><forename type="first">E</forename><surname>Arroyo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Selker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer-Human Interaction extended abstracts on human factors in computing systems</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="484" to="489" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">What can a mouse cursor tell us more?: Correlation of eye/mouse movements on web browsing</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sohn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer-Human Interactions extended abstracts on human factors in computing systems</title>
				<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="281" to="282" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">What are you looking for? An eye-tracking study of information usage in web search</title>
		<author>
			<persName><forename type="first">E</forename><surname>Cutrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Guan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI conference on human factors in computing systems</title>
				<meeting>the SIGCHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="407" to="416" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Visual attention: Control representation and time course</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">E</forename><surname>Egeth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yantis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annual Review of Psychology</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="269" to="297" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A saliency-based search mechanism for overt and covert shifts of visual attention</title>
		<author>
			<persName><forename type="first">L</forename><surname>Itti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vision Research</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="issue">10-12</biblScope>
			<biblScope unit="page" from="1489" to="1506" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A model of saliencybased fast visual attention for rapid scene analysis</title>
		<author>
			<persName><forename type="first">L</forename><surname>Itti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Niebur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="1254" to="1259" />
			<date type="published" when="1998-11">November 1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Do we need eye trackers to tell where people look?</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Hansen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Computer-Human Interaction extended abstracts on human factors in computing systems</title>
				<meeting>Computer-Human Interaction extended abstracts on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="923" to="928" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Selective attention</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">A</forename><surname>Johnston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">J</forename><surname>Dark</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annual Review of Psychology</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="43" to="75" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Shifts in selective visual attention: Towards the underlying neural circuitry</title>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ullman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Human Neurobiology</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="219" to="227" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The role of attention in the programming of saccades</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kowler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Dosher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Blaser</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vision Research</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="1897" to="1916" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Can I have the menu please? An eyetracking study of design conventions</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Mccarthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Sasse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Riegelsberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Human-Computer Interaction</title>
				<meeting>Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="401" to="414" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">How little do users read? Retrieved</title>
		<author>
			<persName><forename type="first">J</forename><surname>Nielsen</surname></persName>
		</author>
		<ptr target="http://www.useit.com/alertbox/percent-text-read.html" />
		<imprint>
			<date type="published" when="2008-05">2008. May. May 12, 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Modeling the role of salience in the allocation of overt visual attention</title>
		<author>
			<persName><forename type="first">D</forename><surname>Parkhurst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Law</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Niebur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vision Research</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="107" to="123" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Eye movements in reading and information processing: 20 years of research</title>
		<author>
			<persName><forename type="first">K</forename><surname>Rayner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Bulletin</title>
		<imprint>
			<biblScope unit="volume">124</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="372" to="422" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The enhanced restricted focus viewer</title>
		<author>
			<persName><forename type="first">P</forename><surname>Tarasewich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pomplun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fillion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Broberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="35" to="54" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Perceptual grouping and attention in visual search for features and for objects</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Treisman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Experimental Psychology: Human Perception and Performance</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="194" to="214" />
			<date type="published" when="1982">1982</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Modeling attention to salient proto-objects</title>
		<author>
			<persName><forename type="first">D</forename><surname>Walther</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="1395" to="1407" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Guided Search 4.0: Current Progress with a model of visual search</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Wolfe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Integrated Models of Cognitive Systems</title>
				<editor>
			<persName><forename type="first">W</forename><surname>Gray</surname></persName>
		</editor>
		<meeting><address><addrLine>New York; Oxford</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="99" to="119" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
