<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modelling auditory spatial attention with constraints</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Edward J. Golob</string-name>
          <email>egolob@tulane.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>K. Brent Venable</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maxwell T. Anderson</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jesse A. Benzell</string-name>
          <email>jbenzell@tulane.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jaelle Scheuerman</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science Tulane University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Psychology Tulane University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>It is well established that spatial attention can be allocated as a gradient that diminishes from a central focus. In this paper we consider auditory attention and develop a model of how it is distributed in space, following basic ideas of top-down and bottom-up attentional control from verbal models [6, 12]. There are three main components in our model: a goal map, a saliency map, and a priority map. The goal map models the distribution of attention that is allocated by choice (the top-down component). The saliency map, as the name suggests, models attention related to the saliency of auditory stimuli (the bottom-up component), and the priority map synthesizes the other two maps into an overall distribution of attentional bias. We model the three maps and their interaction using the well-established AI framework of constraint satisfaction problems. We study several hypotheses on the maps and contrast the results with data obtained from different kinds of experiments. Our computational model is, to the best of our knowledge, the first that specifically targets the auditory system. Our constraint-based approach is very flexible in terms of embedding and testing different hypotheses on the components, and constraint propagation techniques allow us both to focus on single components and to consider the system dynamics as a whole. The predictions arising from our model fit the experimental data well, are cognitively plausible, and provide interesting new insights into the mechanisms of attention control.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction and motivation</title>
      <p>Audition is distinguished from the other senses by the ability to panoramically monitor
the environment for things happening at a distance, behind obstructions, and out of
sight. These considerations make the auditory system particularly useful for shifting
attention to events that are important to survival and reproduction. For example, hearing
the sound of a snapped twig from a predator hiding in the brush can quickly elicit a fight
or flight response. The overall goal of this project is to better understand at the cognitive
and neural levels of analysis how auditory attention is allocated over space. The focus
is on the interplay between top-down and bottom-up spatial attention biases that govern
shifting attention to distractors during performance of a simple spatial attention task. We
take an interdisciplinary approach by using behavioral and neural stimulation methods
to test and refine a computational model of auditory spatial attention.</p>
      <p>
        Previous work has established the idea of an auditory attention gradient [
        <xref ref-type="bibr" rid="ref27 ref33">27, 33</xref>
        ].
Both reports gave subjects a cue on where to attend that changed across trials, unlike
natural situations where attention is engaged for longer times. Our task mimics
everyday life and connects to the ecological significance of the auditory system in orienting
attention to occasional unexpected, but potentially important, environmental events. We
have strong preliminary data showing the novel result that when spatial attention shifts
are examined over a wide range (180°), reaction time slows following spatial shifts but
then speeds up at the most distant location tested (180° from the currently attended
location).
      </p>
      <p>
        Our recent work has defined behavioral and neural measures of auditory attention
gradients [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The behavioral tasks in this work mirror everyday life by having
subjects attend to one location for at least several minutes, as when conversing or listening
to music. Auditory processing of distractors at different regions of space was probed,
and attentional gradients using EEG measures were defined relative to the current
focus of attention. Variables such as task demands, stimulus properties, normal aging,
and neural stimulation of important cortical nodes of the hypothesized network were
examined. We are now in a position to construct an explicit computational model of
auditory spatial attention, which will be used to test existing hypotheses and make new
predictions subject to experimental testing.
      </p>
      <p>Our aim is to develop a rigorous theory of auditory spatial attention that relates to
current work on dorsal and ventral neural attention systems. We foresee that our work
will help advance the understanding of basic issues in attention regardless of modality,
such as top-down and bottom-up interactions, capacity limitations, and vigilance, and will
inform debate on supramodal attention processing in the brain. There are also multiple
applications, such as topics in human factors, improved audio communication systems,
and brain-computer interface control using spatial attention.</p>
      <p>
        Our lab studies aspects of hearing that are particularly important to humans, such
as sound location, speech, and music, and how auditory processing is affected by
attention, short-term memory, and action planning and execution. The common
denominator is that these studies contribute to an emerging framework termed cognitive hearing
science, which examines the role of the auditory system in higher-level cognition and
action [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In addition to addressing basic science issues we also use the auditory
system as a model to better understand the cognitive and neurobiological changes that
accompany normal aging, Alzheimer’s disease, and speech fluency disorders.
      </p>
      <p>
We combine these novel parametric behavioral measures to map out auditory
attention over space with a computational model to explain how specific top-down and
bottom-up mechanisms jointly determine the shape of auditory spatial attention
gradients. Recent modeling work focuses on saliency, particularly when there is more than
one sound happening at the same time [
        <xref ref-type="bibr" rid="ref13 ref29 ref38">38, 13, 29</xref>
        ]. Kayser and colleagues developed a model
of acoustic saliency based on non-spatial features (e.g. intensity, envelope), but did not
examine spatial features. In contrast, we use soft constraint AI computational methods
to model auditory spatial processing and overlapping top-down and bottom-up
interactions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <p>In this section we provide a brief background on the psychology literature related to spatial
attention and its computational models. We also give some fundamental information
concerning constraints.</p>
      <p>
        <italic>Spatial attention.</italic> Almost all attention models from the inception of Psychology as a formal science have
distinguished attention that is directed by personal choice from attention that is directed
to an event by virtue of it having a salient property, such as a loud sound [
        <xref ref-type="bibr" rid="ref21 ref30">21, 30</xref>
        ]. This
dichotomy is intuitive and has many names in the literature (e.g. top-down/bottom-up,
endogenous/exogenous, controlled/automatic [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]). Here we use the terms top-down and
bottom-up. Top-down control regulates information flow based on the current situation
and goals in short-term memory by generating a task set to bias processing towards
information useful for goal attainment. Bottom-up refers to attention capture that is
not guided by the top-down task set. Although the top-down and bottom-up distinction
is meaningful, as a practical matter they are highly interactive [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The difficulty of
cleanly separating the two processes motivates us to use a computational model, which
can examine top-down and bottom-up functions in isolation. Next, we briefly review
work on auditory spatial attention at a cognitive level of analysis, and draw from the
visual literature when needed to present major points relevant to auditory spatial
attention.
      </p>
      <p>
        Attention can be expressed as a spatial gradient relative to an attended location
[
        <xref ref-type="bibr" rid="ref10 ref26">26, 10</xref>
        ]. Gradients are presumably a byproduct of limited perceptual input capacity,
although limitations in behavioral output may also be relevant [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The spatial extent of
attentional processing is variable [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ], and can be modified by directly cuing different
size areas [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], or manipulations of perceptual or memory loads [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Splitting
attention between locations and multi-object tracking are also possible [
        <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
        ]. The ability to
deliver attentional benefits rapidly diminishes over time, a phenomenon called the
vigilance decrement [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. This is important because in everyday life attention is commonly
deployed over relatively long time periods (e.g. conversation, listening to music).
      </p>
      <p>
        Auditory spatial cuing decreases reaction times to subsequent targets at a cued
location relative to uncued locations [
        <xref ref-type="bibr" rid="ref32 ref33 ref36 ref39">32, 36, 39, 33</xref>
        ]. Both Mondor and Zatorre (1995)
and Rorden and Driver (2001) found that target reaction times increased monotonically
with greater distance between the cued and target locations. Visual studies suggest that
gradients may have a more complex shape, with reaction times increasing and then
decreasing away from the cued location [
        <xref ref-type="bibr" rid="ref28 ref8">28, 8</xref>
        ] (a Mexican-hat shape). This is, as we will
see, similar to our preliminary findings in the auditory modality, although the auditory results
span a much larger spatial range.
      </p>
      <p>
        <italic>Constraints and computational models of auditory attention.</italic> Computational models of cognitive processes are beneficial because they require an
explicit theory, can reveal hidden assumptions or logical inconsistencies, and
simulations can establish proof-of-principle much faster than pilot experiments [
        <xref ref-type="bibr" rid="ref20 ref23">20, 23</xref>
        ]. Our
model uses basic ideas of top-down and bottom-up attention control from prominent
verbal models [
        <xref ref-type="bibr" rid="ref12 ref6">6, 12</xref>
        ]. The novelty of our approach is the application to auditory spatial
attention, which is not dealt with in detail in the general models. Our model is
distinguished by focusing on auditory spatial attention and how it emerges from top-down
and bottom-up interactions, which has general importance because a balance must be
struck between top-down goal focus and bottom-up receptivity to unexpected events or
ideas (stability-flexibility dilemma, [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]). Moreover, the models of attention mentioned
above are designed as ad hoc mathematical descriptions of the considered phenomena,
while we opt to cast our model into a more general artificial intelligence setting.
      </p>
      <p>
        Constraint programming [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] is a powerful paradigm for modeling and solving
combinatorial search problems currently applied with success to many domains, such
as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics.
The basic idea in constraint programming is that the user states the constraints and a
general-purpose constraint solver is used to solve them. Constraint solvers take a
real-world problem, represented in terms of decision variables and constraints, and find an
assignment to all the variables that satisfies the constraints. Constraints concern subsets
of variables and define which simultaneous assignments to those variables are allowed.
For example, in scheduling activities in a company, the decision variables might be the
starting times and the durations of the activities and the resources needed to perform
them, and the constraints might be on the availability of the resources and on their use
by a limited number of activities at a time.
      </p>
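      <p>As an illustration of the workflow just described (state the variables, domains, and constraints, then let a general-purpose solver search), the following sketch is our own toy Python example, not code from the paper; the two-activity scheduling instance and all names are invented:</p>

```python
# Minimal sketch of constraint solving by backtracking (illustrative only).
# Variables map to domains; constraints are predicates over named variables.

def solve(domains, constraints, assignment=None):
    """Return one assignment satisfying all constraints, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    # Pick the next unassigned variable.
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check only constraints whose variables are all assigned.
        ok = all(pred(assignment)
                 for scope, pred in constraints
                 if all(s in assignment for s in scope))
        if ok:
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy scheduling instance: two activities of duration 2 sharing one machine.
domains = {"start_a": [0, 1, 2, 3], "start_b": [0, 1, 2, 3]}
constraints = [
    # The two activities must not overlap on the shared machine.
    (("start_a", "start_b"),
     lambda a: abs(a["start_a"] - a["start_b"]) >= 2),
]
print(solve(domains, constraints))  # {'start_a': 0, 'start_b': 2}
```

      <p>Real constraint solvers combine such search with inference and far stronger heuristics.</p>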
      <p>Solutions are found by searching the solution space either systematically, as with
backtracking or branch and bound algorithms, or by forms of local search, which may
be incomplete, that is, there is no guarantee that they will return a solution. Systematic
methods often interleave search and inference, where inference consists of propagating the
information contained in one constraint to other constraints via shared variables.</p>
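      <p>Propagation can be sketched as domain filtering: values of one variable that have no support in a neighboring variable's domain, under the constraint linking them, are pruned before search. The two-variable instance below is our own illustration:</p>

```python
import operator

def revise(dom_x, dom_y, pred):
    """Prune values of x lacking any supporting value in y's domain
    under the binary constraint pred(x, y)."""
    return [x for x in dom_x if any(pred(x, y) for y in dom_y)]

# Binary constraint: x strictly less than y.
smaller = operator.lt

dom_x = [1, 2, 3, 4]
dom_y = [1, 2]
# Only x = 1 has a supporting y (namely 2); the rest are pruned.
print(revise(dom_x, dom_y, smaller))  # [1]
```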
      <p>The rich variety of finely-tuned algorithms available for constraint problems has
made translating real-world problems into this framework an efficient
solving approach.</p>
      <p>
        Constraints have been used before in the context of human cognition, for example
to model skilled behavior [
        <xref ref-type="bibr" rid="ref1 ref7">1, 7</xref>
        ] and learning [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Recently an implementation of the
cognitive architecture ACT-R [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] based on constraint handling rules, which are closely
related to constraints, has been proposed in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. To the best of our knowledge, however, this is
the first time constraints are employed at this level of cognitive modeling and in the
context of attention. Casting our model into a well-established AI framework will, on
the one hand, facilitate future generalization to other aspects of attention and, on the other,
enable an easy embedding in cognitive architectures such as ACT-R.
      </p>
    </sec>
    <sec id="sec-3">
      <title>The computational model</title>
      <p>Figure 1 depicts the overall hypothesis on the interplay between top-down and
bottom-up spatial attention processing. There are three main components: goal map, saliency
map, and priority map. Each map is a 1-D vector of attentional bias in normalized units
(0-1) across the semicircular horizontal frontal plane (from -90° on the far left to +90° on
the far right, in 2° increments, as shown in Figure 2). The given inputs to the model are (1)
the attended location, which is a goal map parameter, and (2) the sound location, which is input
to the saliency map. The output is a priority map representation of attentional bias across
the 180° semicircle (in 2° increments). Areas of greater attentional bias are assumed
to relate to measurable data by having faster reaction times, more sensitive sensory
thresholds, and increased accuracy relative to locations with less bias. We emphasize
that this is a model of information processing at the cognitive level. It is designed to help
interpret behavioral results and inspire new experiments to test and refine the model. It
is not intended to model how neural activity relates to attention. The gray boxes show
inputs and outputs that interface with other cognitive functions.</p>
      <p>
        Our computational model adopts a constraint-based approach to cast the interactions
among the three maps into a constraint solving problem that can be efficiently solved
with the rich algorithmic machinery that has been developed for constraints [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
Constraint models have three main components: variables, a domain of possible values for
each variable, and the constraints. Constraints are defined by the relevant variables and
by specifying the simultaneous variable assignments that satisfy the constraint. In our
model, there is one input variable corresponding to the attended location (A), with the
domain being the locations (2° increments) in the semicircle {-90, -88, ..., 0, ..., 88, 90}. In Figure
3 we depict (partially) the constraint graph of our model, where variables correspond to
nodes and constraints to edges.
      </p>
      <p>We remark that this constraint-based representation is very flexible in terms of
modeling different hypotheses on the attentional bias distributions and on the interactions of
the maps. In our initial setting, for example, there is no interaction between the
goal and the saliency map. However, such an interaction could be modeled in a straightforward way,
simply by defining new constraints that connect variables of the two maps.</p>
      <p>Moreover, this highly decoupled model, which has one variable per location, allows for the
implementation of local phenomena, as, for example, we foresee may occur in the case of habituation
to stimuli coming from a fixed location. More broadly, the ability to model various types
of interactions among these subprocesses is relevant to the issue of modularity in human
cognitive systems.</p>
      <p>In what follows we denote by VGi, VSi and VPi, respectively, the i-th variable of the
goal, saliency and priority map. We note that i ranges over {-90, ..., +90}, and the domain
used to quantify attentional bias consists of normalized units (0-1, in 0.01 increments). The goal
map indexes top-down attention bias, and is a function of the central executive in verbal
models. It models the top-down, voluntary focus of attention to a location, and has a
progressive, symmetrical decrease in attentional bias away from the attended location. We
currently consider three options for modeling the attentional bias in the goal map given
that location A = a is (voluntarily) attended. We express them as sets of constraints,
each of which is defined over the variables A and VGi. In what follows we indicate the tuples
of values which are allowed by the constraint.</p>
      <p>– Standard Gaussian distribution:
(A = a; VGi = GG e^(-|a-i|^2 / (2 dG^2)))
where dG is the standard deviation of the goal map and GG is the height of its peak.</p>
      <p>– Modified Gaussian distribution with inhibition:
(A = a; VGi = GG e^(-|a-i|^2 / (2 dG1^2)) + (GG - GG e^(-|a-i|^2 / (2 dG2^2))));
notice that this is obtained as the sum of a Gaussian and an inverted Gaussian. GG is
the maximum of the two component functions and is a parameter that we use to weight the
components. We also have two standard deviations for the components, which are
denoted by dG1 and dG2. In this way we obtain the desired shape, which has a peak
at the attended location, then dips down to an area of lower attentional bias and
then increases and stabilizes as we move far away from the attended location (see
Figure 4 (a)).</p>
      <p>– Constant function:
(A = a; VGi = k),
where k is a constant value.</p>
      <p>The different shapes for the goal map when the attended location is 0° are shown in
Figure 4 (a). We note that in Figure 3 the standard Gaussian distribution is shown on
top of the variables corresponding to the goal map.</p>
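      <p>As a concrete sketch, the three goal-map options can be written as functions over the 2° location grid. This is an illustrative Python rendering of the constraints above; the parameter values (peak height, standard deviations) are arbitrary choices for illustration, not fitted values from the paper:</p>

```python
import math

# Location grid: -90 to +90 in 2-degree increments, as in the model.
LOCATIONS = list(range(-90, 91, 2))

def goal_standard_gaussian(a, G=1.0, dG=30.0):
    """Standard Gaussian goal map centered on attended location a."""
    return [G * math.exp(-abs(a - i) ** 2 / (2 * dG ** 2)) for i in LOCATIONS]

def goal_gaussian_with_inhibition(a, G=1.0, dG1=15.0, dG2=45.0):
    """Sum of a Gaussian and an inverted Gaussian: a peak at a,
    a dip nearby, then a rise that stabilizes at far locations."""
    return [G * math.exp(-abs(a - i) ** 2 / (2 * dG1 ** 2))
            + (G - G * math.exp(-abs(a - i) ** 2 / (2 * dG2 ** 2)))
            for i in LOCATIONS]

def goal_constant(a, k=0.5):
    """Flat attentional bias at every location."""
    return [k for _ in LOCATIONS]

# The peak of the standard Gaussian sits at the attended location.
bias = goal_standard_gaussian(0)
print(max(bias) == bias[LOCATIONS.index(0)])  # True
```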
      <p>Similarly, we consider two options for the saliency map, which models how attention
is allocated to a stimulus given how salient its characteristics are. Again, in our model,
this amounts to defining constraints between the variable A and each saliency map variable.
The two options are:</p>
      <p>– Inverted Gaussian distribution:
(A = a; VSi = GS - GS e^(-|a-i|^2 / (2 dS^2)))
where dS is the standard deviation for the saliency map, and GS is its maximum
value.</p>
      <p>– Constant function:
(A = a; VSi = k),
where k is a constant between 0 and 1.</p>
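      <p>Under the same illustrative conventions, the inverted-Gaussian saliency map, and a priority map obtained as a weighted synthesis of the goal and saliency maps, might be sketched as follows; the equal 0.5/0.5 weighting mirrors the equal-contribution assumption discussed in Section 5, and all parameter values are again arbitrary:</p>

```python
import math

LOCATIONS = list(range(-90, 91, 2))  # -90 to +90 in 2-degree steps

def saliency_inverted_gaussian(a, GS=1.0, dS=40.0):
    """Bottom-up bias: lowest at the attended location a, rising with distance."""
    return [GS - GS * math.exp(-abs(a - i) ** 2 / (2 * dS ** 2))
            for i in LOCATIONS]

def priority(goal, sal, w_goal=0.5, w_sal=0.5):
    """Priority map as a weighted sum of goal and saliency maps."""
    return [w_goal * g + w_sal * s for g, s in zip(goal, sal)]

# Standard Gaussian goal map centered at 0 (illustrative parameters).
goal = [math.exp(-abs(0 - i) ** 2 / (2 * 30.0 ** 2)) for i in LOCATIONS]
sal = saliency_inverted_gaussian(0)
prio = priority(goal, sal)
# Saliency is zero at the attended location and grows toward the edges.
print(sal[LOCATIONS.index(0)])  # 0.0
```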
      <p>We elaborate on the cognitive interpretation of these hypotheses in Section 5.</p>
    </sec>
    <sec id="sec-4">
      <title>Behavioral experiments</title>
      <p>
        We developed a behavioral task to map attentional gradients and to test the above model.
It is a hybrid of our spatial target detection task [
        <xref ref-type="bibr" rid="ref17 ref31">31, 17</xref>
        ] and work on distraction from
changing a task-irrelevant stimulus feature [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. White noise is presented from 5
locations in the frontal plane (-90°, -45°, 0°, +45°, +90°), and subjects respond on
each trial by discriminating a non-spatial feature (amplitude modulation (AM) rate, 25
or 75 Hz). The slow AM rate sounds like a deck of cards being shuffled, while the faster
rate is perceived as a buzz. Most stimuli come from a standard location (p = .84) but
sometimes shift to a distractor location (p = .04 per location). Separate blocks have the
standard at -90°, 0°, or +90° (counterbalanced).
      </p>
      <p>Figure 5 plots reaction times by location for each standard condition in absolute space
(A), as well as by deviant location relative to the standard location (B). There were
two main results. First, all conditions had slower responses to distractors vs. standards
(p &lt; .001), indicating attention shift costs. The reaction time by location function is more
prominent for the left vs. the right standard (p &lt; .01), suggesting that it is faster to shift
auditory attention from right-to-left than from left-to-right. The 0° standard has an
increase at the near 45° locations, similar to the left standard, but a decrease for the 90°
locations, similar to the right standard (p &lt; .001). Accuracy was very high (&gt; 95%).
The basic results were replicated in new subjects (n = 12, p &lt; .01) (C). Second, in
each condition reaction times sped up at the farthest distractor location (p &lt; .001).
This was seen in each subject's first block, so it is not due to carry-over effects from
previous standard locations. The faster responses at far distractors cannot be accounted for
by a graded reduction in bias from the attended location (goal map alone). Instead, the
heightened bias to far distractors is modeled by the saliency map. A control condition
with new subjects and equal probability at all locations had no differences in reaction
time (n = 20, p = .83), ruling out accounts based on perceptual differences among
locations. In (D) the inverse of the reaction times is given to show how attention bias is
theorized to relate to reaction time (greater bias corresponds to faster reaction times).</p>
      <p>Fig. 5. Basic Attention Task: Reaction Time Results &amp; Modeling</p>
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>We have considered two combinations of options for the goal and saliency maps, and
we have compared how well the emerging priority map fits the behavioral experimental
data. The combinations that we have considered are:
– Hypothesis 1: standard Gaussian for the goal map and inverted Gaussian
for the saliency map, both centered at the attended location;
– Hypothesis 2: modified Gaussian with inhibition for the goal map and inverted
Gaussian for the saliency map, both centered at the attended location.</p>
      <p>We recall that the goal map models the voluntary focus of attention. Both options
that we consider model a peak of attention around the attended location and then a
symmetrical region of lower attentional bias away from the attended location. In addition,
the modified Gaussian assumes an area of inhibited attention around the peak. As for
the saliency map, the inverted Gaussian shape is consistent with the results which we
have observed in our experiments above, which suggest that bottom-up attentional bias
progressively increases away from the attended location.</p>
      <p>We have, in addition, assumed the same maximum level G = GG = GS for the
goal and the saliency map. This corresponds to saying that peak attention levels
generated by the top-down and bottom-up components are the same, which is reasonable,
in particular if the components are thought of as independent of each other. Another
similar constraint which we plan to consider in the future is fixing the overall amount
of attentional bias to be constant across the maps.</p>
      <p>We also assume that the weights of the two maps in the priority map are equal, thus taking the view that the top-down and bottom-up
components contribute equally to the overall attentional bias; we treat this common weight as a single parameter. We
have fitted the data by using a stochastic local search approach on the parameters of the
functions, which we recall are: G, dG, dS and the weight for Hypothesis 1, and G, dG1, dG2, dS
and the weight for Hypothesis 2.</p>
      <p>The evaluation function we have used is the sum of squared errors:</p>
      <p>E(p) = Σx (dx - p(x))^2,
where dx is the bias associated to location x in the experimental data and p(x) is
the value associated to x by the priority map. We performed the fit for all three attended
locations, that is, -90°, 0° and +90°.</p>
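      <p>The evaluation function and a toy version of the stochastic local search can be sketched as follows; the perturbation scheme, iteration count, synthetic target data, and the notation (G, dG, dS and a weight w) are illustrative assumptions, not the paper's actual fitting procedure or measurements:</p>

```python
import math
import random

def sse(data, pred):
    """Sum of squared errors E(p) between observed biases and priority values."""
    return sum((d - q) ** 2 for d, q in zip(data, pred))

def fit(data, locations, iters=2000, seed=0):
    """Toy stochastic local search over (G, dG, dS, w) for Hypothesis 1,
    with the attended location fixed at 0 for simplicity."""
    rng = random.Random(seed)
    params = [1.0, 30.0, 30.0, 0.5]  # initial guess for G, dG, dS, w

    def predict(G, dG, dS, w):
        # Priority: w * goal (standard Gaussian) + w * saliency (inverted).
        return [w * G * math.exp(-abs(i) ** 2 / (2 * dG ** 2))
                + w * (G - G * math.exp(-abs(i) ** 2 / (2 * dS ** 2)))
                for i in locations]

    best = sse(data, predict(*params))
    for _ in range(iters):
        # Perturb every parameter by up to +/-10%; keep non-worsening moves.
        cand = [p * (1 + rng.uniform(-0.1, 0.1)) for p in params]
        err = sse(data, predict(*cand))
        if not err > best:
            params, best = cand, err
    return params, best

locations = list(range(-90, 91, 2))
# Synthetic target generated from known parameters (illustration only).
target = [0.5 * math.exp(-abs(i) ** 2 / (2 * 25.0 ** 2))
          + 0.5 * (1 - math.exp(-abs(i) ** 2 / (2 * 50.0 ** 2)))
          for i in locations]
params, err = fit(target, locations)
print(round(err, 4))
```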
      <p>The best results for all three locations have been obtained by Hypothesis 1, with
fitting values (1 - E(p)) equal to: 0.943 for 0°, 0.739 for +90° and 0.904 for -90°.
The corresponding fitting values for Hypothesis 2 are instead: 0.903 for 0°, 0.501 for
+90° and 0.850 for -90°. As can be seen, Hypothesis 1 outperforms Hypothesis 2, in
particular in the +90° case. This is in part due to the asymmetry between the +90° and
-90° data. However, Hypothesis 1 fits better in both cases despite being symmetric. In
Figure 6 we show all the maps corresponding to the best fit superposed with the data.</p>
    </sec>
    <sec id="sec-6">
      <title>Future directions</title>
      <p>We have a rich agenda of future directions.</p>
      <p>Impact on goal map of short-term memory load. We hypothesize that, relative to no
load, the addition of a short-term memory load, such as first memorizing three words
and then performing several trials of the attention task, increases reaction times to
distractors near the attended location, decreases reaction times at far locations, and
increases inter-trial variability. Future experiments will assess the role of short-term
memory loads on the goal map. We conjecture that memory load should impair
top-down control, as task-specific information and the load both rely on short-term memory.
The rationale is that the predicted reaction time effects would be in range of the goal
map but not at far locations (relative to the attended location) which are mediated by
the saliency map. Load effects will be modeled by introducing a probability distribution
over the goal map options and assuming that memory load results in some trials with
equal attentional biases across locations (constant function). This should also increase
reaction time variability.</p>
      <p>Loudness and attention gradient. We plan to investigate whether intensity changes
decrease reaction times near the standard location, with progressively smaller decreases
at more distant locations. Loud sounds induce automatic orienting, which our model
would represent with the saliency map attentional bias. Saliency, and attentional
orienting, can also follow from the absence of an expected sound, such as an engine
unexpectedly stopping. The study will test whether attentional bias due to changes in
stimulus intensity follows from saliency (both increases and decreases in intensity bias
attention) or loudness (only increases in intensity bias attention). Behavioral experiments will
distinguish this from an alternative hypothesis that intensity changes slow responding
due to shifting attention. We expect locations near the standard to receive little
benefit from saliency, due to proximity to the goal map focus, and to be enhanced
by intensity-based saliency bias. At far locations the saliency map is already tuned to
bottom-up inputs, and has less to gain from intensity changes. Intensity effects are
expected to vary by standard location (-90° &gt; 0° &gt; +90°). The basic task will be used
with manipulations of stimulus intensity. Intensity effects will be modeled by the
introduction of a new variable having as values the intensity levels of the stimulus. We will
modify the inverted Gaussian option for the saliency map with a new parameter K,
obtaining K (GS - GS e^(-|a-i|^2 / (2 dS^2))). Moreover, we will
add new constraints connecting the location variable A, the new intensity variable and
the saliency map variables.</p>
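      <p>The proposed intensity modification amounts to scaling the inverted Gaussian by K; in the sketch below, the mapping from an intensity level to K is a placeholder assumption invented for illustration:</p>

```python
import math

LOCATIONS = list(range(-90, 91, 2))

def saliency_with_intensity(a, K, GS=1.0, dS=40.0):
    """Inverted Gaussian scaled by an intensity-dependent factor:
    K * (GS - GS * exp(-|a - i|^2 / (2 * dS^2)))."""
    return [K * (GS - GS * math.exp(-abs(a - i) ** 2 / (2 * dS ** 2)))
            for i in LOCATIONS]

# Placeholder mapping from a stimulus intensity level to K (our assumption).
def intensity_to_K(level, base=1.0, gain=0.02):
    return base + gain * level

quiet = saliency_with_intensity(0, intensity_to_K(0))
loud = saliency_with_intensity(0, intensity_to_K(20))
# A louder stimulus uniformly amplifies the bottom-up bias.
print(loud[0] > quiet[0])  # True
```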
      <p>
        Location probability and attention gradients. Finally, we plan to understand whether, as we
conjecture, the probability of a stimulus at a given location is negatively associated
with reaction time. It has been shown that attentional bias is strongly dependent on
expectations (Itti and Baldi, 2009) and that the degree of expectation depends on base
rate, with unlikely events having large evoked potentials [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Our preliminary data
show that if distractors are improbable reaction times increase and then decrease across
locations, but if equiprobable they do not differ among locations. Two experiments
will test the role of stimulus probability in attention gradients. One will manipulate
distractor base rate in equal increments (p = .04, .12, .20) in separate blocks. The other
experiment will maintain the usual standard probability (p = .84) and test whether
increasing distractor probability near the standard location (nearest 45° p = .07, other 3
distractors p = .03) shifts the reaction time curve away from the standard location,
and whether decreased probability near 45° (p = .01, other 3 distractors p = .05) shifts it
toward the standard. In terms of the computational model we will have to incorporate a
way to handle sequences of stimuli. This could be done in different ways. For example,
we might introduce dedicated new location and intensity variables indexed by the
position of the stimulus in the sequence. Another option would be to have a unique
“stimulus” variable with a more structured domain, for example comprising triples
(position in the stimulus sequence, location, intensity).
      </p>
    </sec>
    <sec id="sec-7">
      <title>Conclusions</title>
      <p>We have presented a constraint-based model of auditory spatial attention. Our model
is based on a well-established decomposition into top-down and bottom-up components
and is, to the best of our knowledge, the first to focus specifically on the auditory system.
Constraints allow a high degree of flexibility in hypothesis testing. Our initial results
in fitting experimental data are very promising and offer interesting insights into the
role and interplay of the two components in the spatial distribution of auditory
attention. Acknowledgements. This work is supported by NIH under grant
number R01-DC015736.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Vera</surname>
            <given-names>AH</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howes</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McCurdy</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lewis</surname>
            <given-names>RL</given-names>
          </string-name>
          .
          <article-title>A constraint satisfaction approach to predicting skilled interactive cognition</article-title>
          .
          <source>In Proceedings of the 2004 Conference on Human Factors in Computing Systems (CHI 2004)</source>
          , pages
          <fpage>121</fpage>
          -
          <lpage>128</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Allport</surname>
          </string-name>
          .
          <source>Foundations of Cognitive Science</source>
          . pages
          <fpage>631</fpage>
          -
          <lpage>682</lpage>
          . MIT Press, Cambridge, MA, USA,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          .
          <article-title>Rules of the Mind</article-title>
          .
          <source>Lawrence Erlbaum Assoc</source>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Stig</given-names>
            <surname>Arlinger</surname>
          </string-name>
          , Thomas Lunner, Björn Lyxell, and
          <string-name>
            <given-names>M. Kathleen</given-names>
            <surname>Pichora-Fuller</surname>
          </string-name>
          .
          <article-title>The emergence of cognitive hearing science</article-title>
          .
          <source>Scandinavian Journal of Psychology</source>
          ,
          <volume>50</volume>
          (
          <issue>5</issue>
          ):
          <fpage>371</fpage>
          -
          <lpage>384</lpage>
          ,
          <year>October 2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Awh</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Pashler</surname>
          </string-name>
          .
          <article-title>Evidence for split attentional foci</article-title>
          .
          <source>Journal of Experimental Psychology. Human Perception and Performance</source>
          ,
          <volume>26</volume>
          (
          <issue>2</issue>
          ):
          <fpage>834</fpage>
          -
          <lpage>846</lpage>
          ,
          <year>April 2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Baddeley</surname>
          </string-name>
          .
          <article-title>Working memory</article-title>
          .
          <source>Current Biology</source>
          ,
          <volume>20</volume>
          (
          <issue>4</issue>
          ):
          <fpage>R136</fpage>
          -
          <lpage>R140</lpage>
          ,
          <year>February 2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Duncan P.</given-names>
            <surname>Brumby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Howes</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Dario D.</given-names>
            <surname>Salvucci</surname>
          </string-name>
          .
          <article-title>A Cognitive Constraint Model of Dual-task Trade-offs in a Highly Dynamic Driving Task</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07</source>
          , pages
          <fpage>233</fpage>
          -
          <lpage>242</lpage>
          , New York, NY, USA,
          <year>2007</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Serge</given-names>
            <surname>Caparos</surname>
          </string-name>
          and
          <string-name>
            <given-names>Karina J.</given-names>
            <surname>Linnell</surname>
          </string-name>
          .
          <article-title>The spatial focus of attention is controlled at perceptual and cognitive levels</article-title>
          .
          <source>Journal of Experimental Psychology. Human Perception and Performance</source>
          ,
          <volume>36</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1080</fpage>
          -
          <lpage>1107</lpage>
          ,
          <year>October 2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Patrick</given-names>
            <surname>Cavanagh</surname>
          </string-name>
          and
          <string-name>
            <given-names>George A.</given-names>
            <surname>Alvarez</surname>
          </string-name>
          .
          <article-title>Tracking multiple targets with multifocal attention</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          ,
          <volume>9</volume>
          (
          <issue>7</issue>
          ):
          <fpage>349</fpage>
          -
          <lpage>354</lpage>
          ,
          <year>July 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Cave</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. P.</given-names>
            <surname>Bichot</surname>
          </string-name>
          .
          <article-title>Visuospatial attention: beyond a spotlight model</article-title>
          .
          <source>Psychonomic Bulletin &amp; Review</source>
          ,
          <volume>6</volume>
          (
          <issue>2</issue>
          ):
          <fpage>204</fpage>
          -
          <lpage>223</lpage>
          ,
          <year>June 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Marvin M.</given-names>
            <surname>Chun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Julie D.</given-names>
            <surname>Golomb</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Nicholas B.</given-names>
            <surname>Turk-Browne</surname>
          </string-name>
          .
          <article-title>A taxonomy of external and internal attention</article-title>
          .
          <source>Annual Review of Psychology</source>
          ,
          <volume>62</volume>
          :
          <fpage>73</fpage>
          -
          <lpage>101</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Cowan</surname>
          </string-name>
          .
          <article-title>Evolving conceptions of memory storage, selective attention, and their mutual constraints within the human information-processing system</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>104</volume>
          (
          <issue>2</issue>
          ):
          <fpage>163</fpage>
          -
          <lpage>191</lpage>
          ,
          <year>September 1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Bert</given-names>
            <surname>De Coensel</surname>
          </string-name>
          and
          <string-name>
            <given-names>Dick</given-names>
            <surname>Botteldooren</surname>
          </string-name>
          .
          <article-title>A model of saliency-based auditory attention to environmental sound</article-title>
          .
          <source>In Proceedings of the 20th International Congress on Acoustics (ICA - 2010)</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Susan L.</given-names>
            <surname>Epstein</surname>
          </string-name>
          .
          <article-title>For the right reasons: The FORR architecture for learning in a skill domain</article-title>
          .
          <source>Cognitive Science</source>
          ,
          <volume>18</volume>
          (
          <issue>3</issue>
          ):
          <fpage>479</fpage>
          -
          <lpage>511</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Folk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Remington</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Johnston</surname>
          </string-name>
          .
          <article-title>Involuntary covert orienting is contingent on attentional control settings</article-title>
          .
          <source>Journal of Experimental Psychology. Human Perception and Performance</source>
          ,
          <volume>18</volume>
          (
          <issue>4</issue>
          ):
          <fpage>1030</fpage>
          -
          <lpage>1044</lpage>
          ,
          <year>November 1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Gall</surname>
          </string-name>
          and Thom W. Frühwirth.
          <article-title>Exchanging conflict resolution in an adaptable implementation of ACT-R</article-title>
          .
          <source>TPLP</source>
          ,
          <volume>14</volume>
          (
          <issue>4-5</issue>
          ):
          <fpage>525</fpage>
          -
          <lpage>538</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Edward J.</given-names>
            <surname>Golob</surname>
          </string-name>
          and
          <string-name>
            <given-names>John L.</given-names>
            <surname>Holmes</surname>
          </string-name>
          .
          <article-title>Cortical mechanisms of auditory spatial attention in a target detection task</article-title>
          .
          <source>Brain Research</source>
          ,
          <volume>1384</volume>
          :
          <fpage>128</fpage>
          -
          <lpage>139</lpage>
          ,
          <year>April 2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Greenwood</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Parasuraman</surname>
          </string-name>
          .
          <article-title>Scale of attentional focus in visual search</article-title>
          .
          <source>Perception &amp; Psychophysics</source>
          ,
          <volume>61</volume>
          (
          <issue>5</issue>
          ):
          <fpage>837</fpage>
          -
          <lpage>859</lpage>
          ,
          <year>July 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Sabine</given-names>
            <surname>Grimm</surname>
          </string-name>
          and
          <string-name>
            <given-names>Carles</given-names>
            <surname>Escera</surname>
          </string-name>
          .
          <article-title>Auditory deviance detection revisited: evidence for a hierarchical novelty system</article-title>
          .
          <source>International Journal of Psychophysiology: Official Journal of the International Organization of Psychophysiology</source>
          ,
          <volume>85</volume>
          (
          <issue>1</issue>
          ):
          <fpage>88</fpage>
          -
          <lpage>92</lpage>
          ,
          <year>July 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Itti</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Koch</surname>
          </string-name>
          .
          <article-title>Computational modelling of visual attention</article-title>
          .
          <source>Nature Reviews. Neuroscience</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ):
          <fpage>194</fpage>
          -
          <lpage>203</lpage>
          ,
          <year>March 2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>William</given-names>
            <surname>James</surname>
          </string-name>
          .
          <article-title>The principles of psychology</article-title>
          . New York : Holt,
          <year>1890</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Nilli</given-names>
            <surname>Lavie</surname>
          </string-name>
          .
          <article-title>Distracted and confused?: selective attention under load</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          ,
          <volume>9</volume>
          (
          <issue>2</issue>
          ):
          <fpage>75</fpage>
          -
          <lpage>82</lpage>
          ,
          <year>February 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Stephan</given-names>
            <surname>Lewandowsky</surname>
          </string-name>
          and
          <string-name>
            <given-names>Simon</given-names>
            <surname>Farrell</surname>
          </string-name>
          .
          <article-title>Computational Modeling in Cognition: Principles and Practice</article-title>
          .
          <source>SAGE Publications</source>
          ,
          <year>November 2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Hans</given-names>
            <surname>Liljenström</surname>
          </string-name>
          .
          <article-title>Neural Stability and Flexibility: A Computational Approach</article-title>
          .
          <source>Neuropsychopharmacology</source>
          ,
          <volume>28</volume>
          (
          <issue>S1</issue>
          ):
          <fpage>S64</fpage>
          -
          <lpage>S73</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>N. H.</given-names>
            <surname>Mackworth</surname>
          </string-name>
          .
          <article-title>The breakdown of vigilance during prolonged visual search</article-title>
          .
          <source>The Quarterly Journal of Experimental Psychology</source>
          ,
          <volume>1</volume>
          :
          <fpage>6</fpage>
          -
          <lpage>21</lpage>
          ,
          <year>1948</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Mangun</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hillyard</surname>
          </string-name>
          .
          <article-title>Spatial gradients of visual attention: behavioral and electrophysiological evidence</article-title>
          .
          <source>Electroencephalography and Clinical Neurophysiology</source>
          ,
          <volume>70</volume>
          (
          <issue>5</issue>
          ):
          <fpage>417</fpage>
          -
          <lpage>428</lpage>
          ,
          <year>November 1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Mondor</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Zatorre</surname>
          </string-name>
          .
          <article-title>Shifting and focusing auditory spatial attention</article-title>
          .
          <source>Journal of Experimental Psychology. Human Perception and Performance</source>
          ,
          <volume>21</volume>
          (
          <issue>2</issue>
          ):
          <fpage>387</fpage>
          -
          <lpage>409</lpage>
          ,
          <year>April 1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Notger G.</given-names>
            <surname>Müller</surname>
          </string-name>
          , Maas Mollenhauer, Alexander Rösler, and
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Kleinschmidt</surname>
          </string-name>
          .
          <article-title>The attentional field has a Mexican hat distribution</article-title>
          .
          <source>Vision Research</source>
          ,
          <volume>45</volume>
          (
          <issue>9</issue>
          ):
          <fpage>1129</fpage>
          -
          <lpage>1137</lpage>
          ,
          <year>April 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Damiano</given-names>
            <surname>Oldoni</surname>
          </string-name>
          , Bert De Coensel, Michiel Boes, Michaël Rademaker, Bernard De Baets, Timothy Van Renterghem, and
          <string-name>
            <given-names>Dick</given-names>
            <surname>Botteldooren</surname>
          </string-name>
          .
          <article-title>A computational model of auditory attention for use in soundscape research</article-title>
          .
          <source>The Journal of the Acoustical Society of America</source>
          ,
          <volume>134</volume>
          (
          <issue>1</issue>
          ):
          <fpage>852</fpage>
          -
          <lpage>861</lpage>
          ,
          <year>July 2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>W. B.</given-names>
            <surname>Pillsbury</surname>
          </string-name>
          .
          <article-title>Attention</article-title>
          . Library of Philosophy
          . Ed. by
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Muirhead</surname>
          </string-name>
          . S. Sonnenschein &amp; Co., ltd.
          <source>The Macmillan co.</source>
          , London, New York,
          <year>1908</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Rader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Holmes</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Golob</surname>
          </string-name>
          .
          <article-title>Auditory event-related potentials during a spatial working memory task</article-title>
          .
          <source>Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology</source>
          ,
          <volume>119</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1176</fpage>
          -
          <lpage>1189</lpage>
          , May
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Gillian</given-names>
            <surname>Rhodes</surname>
          </string-name>
          .
          <article-title>Auditory attention and the representation of spatial information</article-title>
          .
          <source>Perception &amp; Psychophysics</source>
          ,
          <volume>42</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          ,
          <year>January 1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rorden</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Driver</surname>
          </string-name>
          .
          <article-title>Spatial deployment of attention within and across hemifields in an auditory task</article-title>
          .
          <source>Experimental Brain Research</source>
          ,
          <volume>137</volume>
          (
          <issue>3-4</issue>
          ):
          <fpage>487</fpage>
          -
          <lpage>496</lpage>
          ,
          <year>April 2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>Francesca</given-names>
            <surname>Rossi</surname>
          </string-name>
          , Peter van Beek, and Toby Walsh, editors.
          <source>Handbook of Constraint Programming</source>
          . Elsevier Science Inc., New York, NY, USA,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schröger</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Wolff</surname>
          </string-name>
          .
          <article-title>Attentional orienting and reorienting is indicated by human event-related brain potentials</article-title>
          .
          <source>Neuroreport</source>
          ,
          <volume>9</volume>
          (
          <issue>15</issue>
          ):
          <fpage>3355</fpage>
          -
          <lpage>3358</lpage>
          ,
          <year>October 1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Charles</given-names>
            <surname>Spence</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jon</given-names>
            <surname>Driver</surname>
          </string-name>
          .
          <article-title>Audiovisual links in exogenous covert spatial orienting</article-title>
          .
          <source>Perception &amp; Psychophysics</source>
          ,
          <volume>59</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          ,
          <year>January 1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Wachtel</surname>
          </string-name>
          .
          <article-title>Conceptions of broad and narrow attention</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>68</volume>
          (
          <issue>6</issue>
          ):
          <fpage>417</fpage>
          -
          <lpage>429</lpage>
          ,
          <year>December 1967</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>Stuart N.</given-names>
            <surname>Wrigley</surname>
          </string-name>
          and
          <string-name>
            <given-names>Guy J.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>A computational model of auditory selective attention</article-title>
          .
          <source>IEEE Transactions on Neural Networks</source>
          ,
          <volume>15</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1151</fpage>
          -
          <lpage>1163</lpage>
          ,
          <year>September 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Zatorre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Mondor</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Evans</surname>
          </string-name>
          .
          <article-title>Auditory attention to space and frequency activates similar cerebral systems</article-title>
          .
          <source>NeuroImage</source>
          ,
          <volume>10</volume>
          (
          <issue>5</issue>
          ):
          <fpage>544</fpage>
          -
          <lpage>554</lpage>
          ,
          <year>November 1999</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>