<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Neural Network Models to Model Cerebral Hemispheric Differences in Processing Ambiguous Words</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Larry Manevitz</string-name>
          <email>manevitz@cs.haifa.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hananel Hazan</string-name>
          <email>hhazan01@cs.haifa.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Haifa</institution>
          ,
          <addr-line>Mount Carmel, Haifa 31905, Israel</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Orna Peleg and Zohar Eviatar, Institute of Information Processing and Decision Making, University of Haifa</institution>
          ,
          <addr-line>Mount Carmel, Haifa 31905, Israel</addr-line>
        </aff>
      </contrib-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Abstract</title>
      <p>In our laboratory, we have attempted to measure subtle
differences in human subjects partially by using the richness
of Hebrew in both homophonic and heterophonic
homographs (in standard orthography Hebrew is written without
vowels) and measuring the difference in response when
presenting homographs directly to one hemisphere or the other.
To compare our human results with computational ones, we
designed and present here a connectionist (neural network)
model of each hemisphere for lexical disambiguation based
on the well-known Kawamoto [1993] model.</p>
      <p>Our model includes two separate networks, one for
each hemisphere. One network incorporates Kawamoto's
version in which the entire network is completely
connected. (Thus orthographic, phonological and semantic
"neurons" are not distinguished architecturally.) This
network successfully simulated the time course of lexical
disambiguation in the Left Hemisphere. In the other network,
direct connections between orthographic and phonological
units are removed. The speed of convergence in resolving
ambiguities was studied in these two networks under a
variety of conditions simulating various kinds of priming. The
comparative results presented are analogous to the results
obtained from our human subject testing, thereby
strengthening our belief in the correctness of our psychological
explanation of the processing.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Background</title>
      <p>Neuropsychological studies have shown that both cerebral
hemispheres process orthographic, phonological and
semantic aspects of written words, albeit in different ways.
Behavioral studies have shown that the LH is more influenced
by the phonological aspect of written words whereas lexical
processing in the RH is more sensitive to visual form. In
addition, semantically ambiguous words (e.g., "bank") were
found to result in different time-lines of meaning activation
in the two hemispheres. However, computational models of
reading in general and of lexical ambiguity resolution in
particular, have not incorporated this asymmetry into their
architecture.</p>
      <p>A large amount of psycholinguistic literature indicates
that readers utilize both frequency and context to resolve
lexical ambiguity [e.g., Duffy, Morris &amp; Rayner 1988;
Titone 1998; Peleg, Giora &amp; Fein 2001, 2004]. The idea that
multiple sources of evidence (relative frequency as well as
context) affect the degree to which a particular meaning is
activated and the eventual outcome of the resolution, as well
as the process, can be nicely captured within a neural
network (connectionist) approach to language processing. In
connectionist terminology, the computation of meaning is a
constraint satisfaction problem: the computed meaning is
that which satisfies the multiple constraints represented by
the weights on connections between units in different parts
of the network.
</p>
    </sec>
    <sec id="sec-3">
      <title>2.1 Kawamoto Model</title>
      <p>A connectionist account of lexical ambiguity resolution
was presented by Kawamoto [1993]. In his fully recurrent
network, ambiguous and unambiguous words are
represented as distributed patterns of activity over a set of simple
processing units. Each lexical entry is represented over a
216-bit vector divided into separate sub-vectors
representing the “spelling”, “pronunciation”, “part of speech” and
“meaning”. The network is trained with a simple error
correction algorithm by presenting it with the pattern to be
learned. The result is that these patterns (the entire word
including its orthographic, phonological and semantic
features) become attractors in the 216-dimensional
representational space. The network is tested by presenting it with just
part of the lexical entry (e.g., its spelling pattern) and testing
how long various parts of the network take to settle into a
pattern corresponding to a particular lexical entry.
Kawamoto trained his network in such a way that the more
frequent combination for a particular orthographic
representation was the "deeper" attractor; i.e. the completion of the
other features (semantic and phonological) would usually
fall into this attractor. (This was accomplished by biasing
the learning process of the network.) However, using a
technological analogy of "priming" to bias the appropriate
completion, the resulting attractor could in fact be the less
frequent combination – which corresponds nicely to human
behavioral data. Indeed, consistent with human empirical
results, after the network was trained, the resolution process
was affected by the frequency of the different lexical entries
(reflected in the strength of the connections in the network)
and by the context.</p>
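      <p>As a concrete illustration of this attractor behavior, the following sketch (ours, not Kawamoto's code; the toy vector sizes, learning rate, decay value and iteration counts are assumptions) trains a small fully recurrent network with the same kind of error-correction rule and then completes a whole pattern from its "spelling" slice alone:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for lexical entries: bipolar vectors whose first slice
# plays the role of "spelling" (sizes are illustrative, not the
# paper's 216-feature vectors).
N = 60
SPELLING = slice(0, 30)
entries = rng.choice([-1.0, 1.0], size=(3, N))

# Error-correction training: dW_ij = eta * (t_i - net_i) * t_j
W = np.zeros((N, N))
eta = 0.01
for _ in range(3000):
    t = entries[rng.integers(len(entries))]
    W += eta * np.outer(t - W @ t, t)

# Cue the network with only the spelling features and let it settle.
a = np.zeros(N)
a[SPELLING] = 0.25 * entries[0, SPELLING]
for _ in range(50):
    a = np.clip(0.9 * a + W @ a, -1.0, 1.0)

# The un-cued features should fall into the same attractor,
# recovering the whole stored entry.
recovered = np.sign(a) == np.sign(entries[0])
```

      <p>Presenting only the spelling slice lets the dynamics fill in the remaining features of the stored entry, which is the sense in which whole lexical entries are attractors.</p>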
      <p>Kawamoto’s model uses perhaps the simplest architecture
that can suffice for LH processing during reading in general
and ambiguity resolution in particular. Thivierge, Titone and
Schultz (2005) recently presented a connectionist model of
LH involvement during ambiguity resolution, in which the
representations of the words were identical to the vectors
used by Kawamoto. (Other computational models of reading
have included interconnections between orthographic,
phonological, and semantic representations [e.g., Seidenberg &amp;
McClelland 1989]). The model proposed below incorporates
two networks, the first architecturally identical to
Kawamoto’s original model, and the second architecturally
modified in order to account for RH language processing.</p>
      <p>Note that Kawamoto's network, however, does not model
hemispheric differences.</p>
    </sec>
    <sec id="sec-4">
      <title>2.2 Two-Hemisphere Model</title>
      <p>In this paper, we present a preliminary model for lexical
disambiguation in the two cerebral hemispheres that is
based on the above work of Kawamoto. The model includes
two separate networks. One network incorporates
Kawamoto’s version, and successfully simulates the time course
of lexical disambiguation in the LH. In the other network
based on the behavior of the disconnected RH of split brain
patients [Zaidel &amp; Peters, 1982], we made a change in
Kawamoto's architecture, removing the direct connections
between orthographic and phonological units. Taken together,
the two networks produce processing asymmetries
comparable to those found in the behavioral studies.</p>
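      <p>The architectural difference between the two networks can be expressed as a connectivity mask over a single weight matrix. A minimal sketch (ours; the 45 spelling and 60 pronunciation features follow the simulations described later in the paper, while the exact split of the remaining features is our assumption):</p>

```python
import numpy as np

# Field layout over the 270-feature entries used in the simulations:
# 45 spelling features, 60 pronunciation features, and the remaining
# 165 for part of speech and meaning (that split is not specified).
N = 270
ORTH = slice(0, 45)
PHON = slice(45, 105)

def connection_mask(hemisphere: str) -> np.ndarray:
    """0/1 mask over the N x N weight matrix.

    LH: every unit connected to every other unit (Kawamoto's
    original architecture).  RH: identical, except the direct
    orthography-phonology connections are zeroed, so phonological
    units can only be reached via the remaining (semantic) units.
    """
    mask = np.ones((N, N))
    if hemisphere == "RH":
        mask[ORTH, PHON] = 0.0
        mask[PHON, ORTH] = 0.0
    return mask

lh_mask = connection_mask("LH")
rh_mask = connection_mask("RH")
```

      <p>Multiplying the weight matrix elementwise by the RH mask after every learning step is one simple way to keep the removed connections at zero.</p>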
    </sec>
    <sec id="sec-5">
      <title>2.3 The effect of frequency and context on semantic ambiguity resolution in the two cerebral hemispheres</title>
      <p>In Latin orthographies (such as English), the orthographic
representation (the spelling) of a word is usually associated
with one phonological representation. Thus, most studies of
lexical ambiguity have used homophonic homographs
(homonyms - a single orthographic and phonological
representation associated with two meanings). As a result,
models of hemispheric differences in lexical processing have
focused mainly on semantic organization [e.g., Beeman
1998]. We suggest that this reliance on homonyms may
have limited our understanding of hemispheric involvement
in meaning activation, neglecting the contribution of
phonological asymmetries to hemispheric differences in semantic
activation and has limited the range of models proposed to
describe the process of reading in general.</p>
      <p>Visual word recognition studies demonstrate that, even
though both hemispheres have access to orthographic and
phonological representations of words, the LH is more
influenced by the phonological aspects of a written word [e.g.,
Zaidel, 1982; Zaidel &amp; Peters 1981; Lavidor and Ellis
2003], whereas lexical processing in the RH is more
sensitive to the visual form of a written word [e.g., Marsolek,
Kosslyn &amp; Squire, 1992; Marsolek, Schacter &amp; Nicholas
1996; Lavidor and Ellis 2003]. Given that many
psycholinguistic models suggest that silent reading always includes a
phonological factor [e.g., Berent &amp; Perfetti, 1995; Frost
1998; Van Orden, Pennington &amp; Stone, 1990; Lukatela and
Turvey 1994], it is conceivable that such asymmetries may
also impact the assignment of meaning to written words
during on-line sentence comprehension.</p>
      <p>This study takes advantage of Hebrew orthography, which, in
contrast to less opaque Latin orthographies, offers an
opportunity to compare different types of ambiguities within the
same language [e.g., Frost and Bentin 1992].</p>
      <p>In Hebrew, letters represent mostly consonants, and
vowels can optionally be superimposed on consonants as
diacritical marks. Since the vowel marks are usually omitted,
readers frequently encounter words with more than one
possible interpretation. Thus, in addition to semantic
ambiguities (a single orthographic and phonological form associated
with multiple meanings), the relationship between the
orthographical and the phonological forms of a word is also
frequently ambiguous. For example, the printed letter string
"מלח" in Hebrew has two different pronunciations (/melach/
or /malach/), each of which has a different meaning (‘salt’
or ‘sailor’).</p>
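      <p>The two ambiguity types can be made concrete with a hypothetical mini-lexicon (the dictionary layout and the helper below are ours, for illustration only; the Hebrew example and its glosses follow the text):</p>

```python
# Hypothetical mini-lexicon (our illustrative layout, not the model's
# feature vectors).  Each spelling maps to its possible senses.
lexicon = {
    # heterophonic homograph: one spelling, two pronunciations,
    # each pronunciation tied to its own meaning
    "מלח": [
        {"pronunciation": "melach", "meaning": "salt"},
        {"pronunciation": "malach", "meaning": "sailor"},
    ],
    # homophonic homograph (homonym): one spelling, one pronunciation,
    # two meanings (English example, as with "bank" in the text)
    "bank": [
        {"pronunciation": "bank", "meaning": "river side"},
        {"pronunciation": "bank", "meaning": "financial institution"},
    ],
}

def is_heterophonic(word: str) -> bool:
    """A homograph is heterophonic when its senses disagree on
    pronunciation, homophonic when they share a single one."""
    pronunciations = {sense["pronunciation"] for sense in lexicon[word]}
    return len(pronunciations) > 1
```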
    </sec>
    <sec id="sec-6">
      <title>3 The Model</title>
      <p>We propose a model that incorporates a right hemisphere
structure (i.e. network) and a left hemisphere structure (i.e.
network) that differ in the coordination and relationships
between orthographic, phonological and semantic processes.
The two structures are homogeneous in the sense that all
computations involve the same sources of information.
However, the time course of meaning activation and the
relative influence of different sources of information at
different points in time during this process are different, because
these sources of information relate to each other in different
ways. A graphic representation of the model is presented
below:</p>
    </sec>
    <sec id="sec-7">
      <title>3.1 The Split Reading Model</title>
      <sec id="sec-7-1">
        <title>Model Diagram</title>
        <p>[Figure: the two structures. LH: orthography, phonology and
semantics fully interconnected. RH: the same units, with the direct
orthography-phonology connections removed, so phonology is reached
only via semantics.]</p>
      </sec>
      <sec id="sec-7-2">
        <title>LH and RH Structures</title>
        <p>LH Structure: Orthographic, phonological and semantic
codes are fully connected. The connections between these
different sources of information are bi-directional and the
different processes may very well run in parallel. However,
the model incorporates a sequential ordering of events that
results from some processes occurring faster than others.
For example, in the LH, orthographic codes are directly
related to both phonological and semantic codes. However,
because orthography is more systematically related to
phonology than to semantics, the phonological computation of
orthographic representations is faster than the semantic
computation of these same representations. As a result,
meaning activation in the LH is initially influenced
primarily by phonology [e.g., Lavidor &amp; Ellis, 2003] resulting in
immediate exhaustive activation of all meanings related to a
given phonological form, regardless of frequency or
contextual information [e.g., Burgess &amp; Simpson 1988; Titone
1998; Swinney &amp; Love, 2002].</p>
        <p>RH Structure: Phonological codes are not directly
related to orthographic codes and are activated indirectly via
semantic codes. This organization predicts a different
sequential ordering of events in which the phonological
computation of orthographic representations begins later than
the semantic computation of these same representations. As
a result, lexical access in the RH is initially influenced by
orthography [e.g., Lavidor &amp; Ellis, 2003] and by semantic
information, so that less frequent or contextually
inappropriate meanings are not immediately activated.
Nevertheless, these meanings can be activated later when
phonological information becomes available [e.g., Burgess &amp; Simpson
1988; Titone 1998].
</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>4 Testing the Model</title>
      <p>This model is tested, according to the philosophy described
in the abstract, in two complementary ways:
(i) By psychophysical experiments with human subjects.
(ii) By a computational neural network model.
(In this paper we mainly describe the computation network
and its results).</p>
      <p>If our ideas are correct and orthographic codes activate
phonological codes directly in the LH and indirectly in the
RH, we should observe that the distinction in processing the
two kinds of word types (i.e. homophonic and heterophonic
homographs) should occur at different stages of processing in
the LH and RH.</p>
      <p>Specifically, within the LH these differences will be seen
in the early stage of lexical access, whereas in the RH, these
differences will only be seen at a later point in time.</p>
    </sec>
    <sec id="sec-9">
      <title>4.1 Brief Description of Preliminary Human Results</title>
      <p>In our lab, we have recently investigated the role
phonology plays in silent reading by examining the activation of
dominant and subordinate meanings of homophonic and
heterophonic homographs (a single orthographic
representation associated with two phonological representations, each
associated with a different meaning) in the two
hemispheres. We used a divided visual field paradigm that
allows the discernment of differential hemispheric processing
of tachistoscopically presented stimuli. Heterophonic and
homophonic homographs were used as primes in a lexical
decision task, where the target words were either related to
the dominant meaning or to the subordinate meaning of the
ambiguous word, or were unrelated. We measured semantic
facilitation by response times. A significant interaction
between visual field of presentation (right or left), type of
stimulus (heterophonic or homophonic homograph) and
type of target words suggested that heterophonic and
homophonic homographs were disambiguated differently in the
two visual fields, and by implication, in the two
hemispheres. With homophonic homographs, targets related to
both dominant and subordinate meanings were activated in
the RVF/LH, while in the LVF/RH only dominant meanings
evoked facilitated responses (panel A in Figure 1).
Alternatively, with heterophonic homographs only dominant
meanings evoked facilitated responses, and only in the LVF/RH
(panel B in Figure 1).</p>
      <p>[Figure 1: semantic facilitation in priming (ms) for targets
related to dominant (dom) and subordinate (sub) meanings, by visual
field (LVF/RH vs. RVF/LH), with one panel for homophonic
homographs and one for heterophonic homographs.]</p>
      <p>The units in the LH and RH networks were implemented
as described by Kawamoto [1993] with the following
changes: (a) the original 48 4-letter words were replaced
with 48 patterns representing 24 pairs of polarized Hebrew
3-letter homographs, half heterophonic and half
homophonic. (b) 45 features (instead of 48) represented the
word's spelling and 60 features (instead of 48) represented
its pronunciation. This is because the pronunciation includes
the vowels that were omitted from the spelling. The
representation for "part of speech" (all nouns) and "meaning"
remains the same as in the original model. Overall, each
entry is represented as a vector of 270 binary-valued
features. Both networks were trained on the same input with a
simple error correction algorithm [1, 2]:</p>
      <p>ΔWij = η(ti − ii)tj   [1]</p>
      <p>ii = Σj Wij tj   [2]</p>
      <p>where η is a scalar learning constant fixed to 0.0015, ti and
tj are the target activation levels of units i and j, and ii is the
net input to unit i. The magnitude of the change in
connection strength is determined by the magnitude of the learning
constant and the magnitude of the error (ti − ii).</p>
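      <p>Equations [1] and [2] translate into roughly one line of code each. The sketch below is ours: the toy two-entry lexicon of random bipolar vectors is an assumption, while η = 0.0015, the 270-feature width, the 1300 learning trials and the 5:3 dominant-to-subordinate sampling ratio come from the text:</p>

```python
import numpy as np

def train_step(W, t, eta=0.0015):
    """One error-correction update, per equations [1]-[2]:
    net input i = W @ t, then dW = eta * outer(t - i, t)."""
    net = W @ t                              # eq. [2]
    return W + eta * np.outer(t - net, t)    # eq. [1]

# Toy two-entry lexicon: random bipolar vectors stand in for the
# dominant and subordinate entries of one homograph (our assumption).
rng = np.random.default_rng(1)
lexicon_vecs = rng.choice([-1.0, 1.0], size=(2, 270))

# 1300 learning trials, sampling dominant vs. subordinate entries
# 5:3 as in the text, so the dominant entry becomes the deeper attractor.
W = np.zeros((270, 270))
for _ in range(1300):
    idx = int(rng.random() >= 5 / 8)  # 0 (dominant) w.p. 5/8, else 1
    W = train_step(W, lexicon_vecs[idx])
```

      <p>After training, presenting a stored entry yields a net input close to the entry itself, i.e. the error term of equation [1] has been driven toward zero.</p>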
      <p>The activity of a single unit in both networks is represented
as a real value ranging between -1.0 and +1.0.</p>
      <p> 1

LIMIT =  − 1
The activity of a unit is computed from three different
sources: the 1st is the sum of all outputs of other units in the
net; the 2nd is the direct input from the external
environment; and the 3rd is the output of the unit in the previous
iteration multiplied by the decay rate.</p>
      <p>Since all units are mutually connected these influences lead
to changes in the activity of a unit as a function of time
(where time changes in discrete steps). That is, the activity
of a unit (a) at time t + 1 is:</p>
      <p>ai(t + 1) = LIMIT(δai(t) + Σj wij(t)aj(t) + si(t))   [4]</p>
      <p>where δ is a decay variable that changes from 0.7 to 1, si(t) is
the influence of the input stimulus on unit ai at time (t + 1), and
LIMIT bounds the activity to the range from −1.0 to +1.0.</p>
      <p>In the absence of an external stimulus, the update is:</p>
      <p>ai(t + 1) = LIMIT(δai(t) + Σj wij(t)aj(t))   [5]</p>
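      <p>Equations [4] and [5] amount to a single clipped update step per discrete time step. A minimal sketch (ours; the function names and the single-unit demo values are illustrative):</p>

```python
import numpy as np

def limit(x):
    # LIMIT bounds unit activity to the range [-1.0, +1.0]
    return np.clip(x, -1.0, 1.0)

def update(a, W, s=None, delta=0.7):
    """One discrete time step.  With an external stimulus s this is
    equation [4]; with s absent it reduces to equation [5]:
    a(t+1) = LIMIT(delta * a(t) + W @ a(t) [+ s(t)])."""
    net = delta * a + W @ a
    if s is not None:
        net = net + s  # stimulus term s_i(t) of equation [4]
    return limit(net)

# Single-unit example: a strong self-connection drives the unit to
# the positive bound; a strong negative stimulus overrides it.
a0, W0 = np.array([0.5]), np.array([[2.0]])
saturated = update(a0, W0)                        # 0.35 + 1.0, clipped
suppressed = update(a0, W0, s=np.array([-3.0]))   # 1.35 - 3.0, clipped
```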
      <p>In each simulation, 12 identical LH and RH networks
were used to simulate 12 subjects in an experiment. Each
network was trained on 1300 learning trials. On each
learning trial an entry was selected randomly from the lexicon.
Dominant and subordinate meanings were selected with a
ratio of 5 to 3. After the networks were trained they were
tested by presenting just the spelling part of the entry as the
input (to simulate neutral context) or by presenting part of
the semantic sub-vector together with the spelling (to
simulate prior contextual bias). In each simulation the input sets
the initial activation of the units. The level was set to +0.25
if the corresponding input feature was positive, -0.25 if it
was negative and 0 otherwise. In order to assess lexical
access, we measured the number of iterations through the network
needed for all the units in the spelling, pronunciation or meaning
fields to become saturated. A response was considered
an error if the pattern of activity did not correspond with the
input, or if all the units did not saturate after 50 iterations.</p>
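      <p>The test procedure just described can be sketched as follows (ours; the saturation threshold and the tiny hand-made demo weight matrix are assumptions, while the ±0.25 initialization and the 50-iteration error criterion come from the text; for simplicity the whole entry, rather than only its spelling slice, is presented here):</p>

```python
import numpy as np

def iterations_to_saturation(W, entry, delta=0.7, max_iters=50,
                             thresh=0.99):
    """Run one test trial: initialize activations to +/-0.25 from the
    presented features, iterate the clipped update, and count the
    steps until every unit saturates.  Returns (iterations, error);
    error is True if the net fails to saturate within max_iters or
    settles into a pattern that disagrees with the entry."""
    a = 0.25 * np.sign(entry)
    for it in range(1, max_iters + 1):
        a = np.clip(delta * a + W @ a, -1.0, 1.0)
        if np.all(np.abs(a) >= thresh):
            return it, bool(np.any(np.sign(a) != np.sign(entry)))
    return max_iters, True

# Tiny demo with a hand-made stable weight matrix (not a trained one):
W_demo = 2.0 * np.eye(4)
iters, err = iterations_to_saturation(W_demo, np.ones(4))
```

      <p>Measuring the iteration count separately over the spelling, pronunciation or meaning slices, as in the simulations, only requires restricting the saturation test to the corresponding sub-vector.</p>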
    </sec>
    <sec id="sec-10">
      <title>4.2.1 Results and Discussion</title>
      <p>[Tables 1 and 2: mean iterations to saturation; only fragmentary
values survive in the source (e.g., 17.69 and 18.58 for heterophonic
homographs).]</p>
      <sec id="sec-10-1">
        <title>Discussion</title>
        <p>When homographs are presented without a biasing
context, only the dominant meaning is accessed in both
networks. However, in the LH network, meanings are accessed
faster. This is consistent with LH advantage for lexical
processing reported in the literature. More importantly,
homophonic and heterophonic homographs are processed
differently in the two networks. In the LH network, lexical
access is longer for heterophonic homographs than for
homophonic homographs (Table 1) due to the time-consuming
competition between the two phonological
representations. Indeed, more iterations were needed for the
phonological units to become saturated in the case of
heterophonic homographs than for homophonic homographs
(Table 2). This is consistent with the idea that in the LH,
phonological information guides early stages of meaning
activation. Alternatively, in the RH network, phonological
differences are less pronounced (Table 2) and processing
times of homophonic and heterophonic homographs are
similar (Table 1). This is consistent with the idea that in the
RH, orthographic and semantic sources of information exert
their influence earlier than phonological information.</p>
        <p>When homographs are presented with a biasing context,
only the contextually compatible meaning is accessed in
both networks. In addition, dominant meanings in dominant
contexts are accessed faster than subordinate meanings in
subordinate contexts (Table 1). Interestingly, in the LH
network, the homophonic advantage in processing time disappears
when a biasing context is provided. Moreover, when
homographs are presented with a subordinate context, it
takes longer to access the subordinate meaning of
homophonic homographs compared to heterophonic homographs
(Table 1). In both cases, as predicted, phonological
disambiguation precedes meaning disambiguation (Table 2).</p>
        <p>Because heterophonic homographs have different
pronunciations, these homographs involve the mapping of a
single orthographic code onto two phonological codes. As a
result, when no context is presented, the speed of lexical
access is slower for heterophonic homographs than for
homophonic homographs. On the other hand, when context
is provided, the single phonological code of homophonic
homographs is still associated with both meanings, whereas
the phonological representation of heterophonic
homographs is associated with only one meaning. As a result,
when homographs are presented in a subordinate context, a
longer period of competition between dominant and
subordinate meanings is observed in the case of homophonic
homographs. In contrast, in the case of heterophonic
homographs, meanings are accessed immediately after a
phonological representation is computed.
</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>5 Summary</title>
      <p>These results have important implications for the role
phonology plays in accessing the meaning of words in silent
reading. One class of models suggests that printed words
activate orthographic codes that are directly related to
meanings in semantic memory. An alternative class of models
asserts that access to meaning is mediated by phonology [for
reviews see Frost 1998; Van Orden and Kloos 2005]. Our
results support the idea that in the LH words are read more
phonologically (from orthography to phonology to
meaning), whereas in the RH, words are read more visually (from
orthography to meaning).</p>
      <p>Overall, the two networks produce processing
asymmetries comparable to those found in behavioral studies. In
the LH network, orthographic units are directly related to
both phonological and semantic units. However, because
orthography is more systematically related to phonology
than to semantics, the phonological computation of
orthographic representations is faster than the semantic
computation of these same representations. As a result, meaning
activation in the LH is initially influenced primarily by
phonology. In the RH network, phonological codes are not
directly related to orthographic codes and are activated
indirectly via semantic codes. This organization results in a
different sequential ordering of events in which the phonological
computation of orthographic representations begins later
than the semantic computation of these same
representations. As a result, lexical access in the RH is initially more
influenced by orthography and by semantics.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Beeman</surname>
            <given-names>M 1998</given-names>
          </string-name>
          <article-title>Coarse semantic coding and discourse comprehension</article-title>
          . In:
          <string-name>
            <surname>Beeman</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiarello</surname>
            <given-names>C</given-names>
          </string-name>
          , editors.
          <article-title>Right hemisphere language comprehension: Perspectives from cognitive neuroscience. Mahwah (N</article-title>
          .J.): Lawrence Erlbaum Associates.
          <fpage>255</fpage>
          -
          <lpage>284</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Berent</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Perfetti</surname>
            ,
            <given-names>C. A.</given-names>
          </string-name>
          <year>1995</year>
          <article-title>A rose is a REEZE: The two-cycles of phonology assembly in reading English</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>102</volume>
          ,
          <fpage>146</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          Burgess,
          <string-name>
            <given-names>C.</given-names>
            &amp;
            <surname>Simpson</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. B.</surname>
          </string-name>
          ,
          <year>1988</year>
          <article-title>Cerebral hemispheric mechanisms in the retrieval of ambiguous word meanings</article-title>
          .
          <source>Brain and Language</source>
          ,
          <volume>33</volume>
          ,
          <fpage>86</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          Frost,
          <string-name>
            <surname>R.</surname>
          </string-name>
          <year>1998</year>
          <article-title>Toward a strong phonological theory of visual word recognition: True issues and false trails</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>123</volume>
          ,
          <fpage>71</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          Frost,
          <string-name>
            <given-names>R.</given-names>
            &amp;
            <surname>Bentin</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          <year>1992</year>
          <article-title>Processing phonological and semantic ambiguity: Evidence from semantic priming at different SOAs</article-title>
          .
          <source>Journal of Experimental Psychology: Learning, Memory, and Cognition</source>
          ,
          <volume>18</volume>
          ,
          <fpage>58</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Hopfield</surname>
            <given-names>J. J</given-names>
          </string-name>
          <year>1982</year>
          <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>
          .
          <source>Proceedings of the National Academy of Science</source>
          , USA,
          <volume>79</volume>
          ,
          <fpage>2554</fpage>
          -
          <lpage>2558</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Kawamoto</surname>
            ,
            <given-names>A. H.</given-names>
          </string-name>
          <year>1993</year>
          <article-title>Nonlinear dynamics in the resolution of lexical ambiguity: A parallel distributed processing account</article-title>
          .
          <source>Journal of Memory and Language</source>
          ,
          <volume>32</volume>
          ,
          <fpage>474</fpage>
          -
          <lpage>516</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          Lavidor,
          <string-name>
            <given-names>M .</given-names>
            &amp;
            <surname>Ellis</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. W.</surname>
          </string-name>
          <year>2003</year>
          <article-title>Orthographic and phonological priming in the two cerebral hemispheres</article-title>
          .
          <source>Laterality</source>
          ,
          <volume>8</volume>
          ,
          <fpage>201</fpage>
          -
          <lpage>223</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          Lukatela,
          <string-name>
            <given-names>G.</given-names>
            , &amp;
            <surname>Turvey</surname>
          </string-name>
          , M.T. 1994a
          <article-title>Visual access is initially phonological. 1: Evidence from associative priming by words, homophones, and pseudohomophones</article-title>
          .
          <source>Journal of Experimental Psychology: General</source>
          ,
          <volume>123</volume>
          ,
          <fpage>107</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          Lukatela,
          <string-name>
            <given-names>G.</given-names>
            , &amp;
            <surname>Turvey</surname>
          </string-name>
          , M.T. 1994b
          <article-title>Visual access is initially phonological. 2: Evidence from associative priming by homophones, and pseudohomophones</article-title>
          .
          <source>Journal of Experimental Psychology: General</source>
          ,
          <volume>123</volume>
          ,
          <fpage>331</fpage>
          -
          <lpage>353</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          Marsolek,
          <string-name>
            <given-names>C. J.</given-names>
            ,
            <surname>Kosslyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            , &amp;
            <surname>Squire</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. R.</surname>
          </string-name>
          <year>1992</year>
          <article-title>Form-specific visual priming in the right cerebral hemisphere</article-title>
          .
          <source>Journal of Experimental Psychology: Learning, Memory, and Cognition</source>
          ,
          <volume>18</volume>
          ,
          <fpage>492</fpage>
          -
          <lpage>508</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [
          <string-name>
            <surname>Marsolek</surname>
            ,
            <given-names>C. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schacter</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Nicholas</surname>
            ,
            <given-names>C. D.</given-names>
          </string-name>
          <year>1996</year>
          ]
          <article-title>Form-specific visual priming for new associations in the right cerebral hemisphere</article-title>
          .
          <source>Memory and Cognition</source>
          ,
          <volume>24</volume>
          ,
          <fpage>539</fpage>
          -
          <lpage>556</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [
          <string-name>
            <surname>Peleg</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giora</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fein</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <year>2001</year>
          ]
          <article-title>Salience and context effects: Two are better than one</article-title>
          .
          <source>Metaphor and Symbol</source>
          ,
          <volume>16</volume>
          ,
          <fpage>173</fpage>
          -
          <lpage>192</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [
          <string-name>
            <surname>Peleg</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giora</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fein</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <year>2004</year>
          ]
          <article-title>Contextual strength: The whens and hows of context effects</article-title>
          .
          In I. Noveck &amp; D. Sperber (Eds.),
          <source>Experimental Pragmatics</source>
          (pp.
          <fpage>172</fpage>
          -
          <lpage>186</lpage>
          ). Basingstoke: Palgrave.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [
          <string-name>
            <surname>Seidenberg</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>McClelland</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          <year>1989</year>
          ]
          <article-title>A distributed developmental model of word recognition and naming</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>96</volume>
          ,
          <fpage>523</fpage>
          -
          <lpage>568</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [
          <string-name>
            <surname>Swinney</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Love</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <year>2002</year>
          ]
          <article-title>Context effects on lexical processing during auditory sentence comprehension: On the time course and neurological bases of a basic comprehension process</article-title>
          . In: Witruk, Friederici, Lachmann (Eds.),
          <source>Basic Functions of Language, Reading and Reading Disability</source>
          . Kluwer Academic (Section 2, ch. 1, pp.
          <fpage>25</fpage>
          -
          <lpage>40</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [
          <string-name>
            <surname>Titone</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          <year>1998</year>
          ]
          <article-title>Hemispheric differences in context sensitivity during lexical ambiguity resolution</article-title>
          .
          <source>Brain and Language</source>
          ,
          <volume>65</volume>
          ,
          <fpage>361</fpage>
          -
          <lpage>394</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [
          <string-name>
            <surname>Thivierge</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Titone</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Shultz</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          <year>2005</year>
          ]
          <article-title>Simulating frontotemporal pathways involved in lexical ambiguity resolution</article-title>
          .
          <source>Poster Proceedings of the Cognitive Science Society</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [
          <string-name>
            <surname>Van Orden</surname>
            ,
            <given-names>G. C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Kloos</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2005</year>
          ]
          <article-title>The question of phonology and reading</article-title>
          . In M. S. Snowling,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hulme</surname>
          </string-name>
          , &amp; M. Seidenberg (Eds.).
          <source>The science of reading: A handbook</source>
          . Blackwell Pub.
          <fpage>39</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [
          <string-name>
            <surname>Van Orden</surname>
            ,
            <given-names>G. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pennington</surname>
            ,
            <given-names>B. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stone</surname>
            ,
            <given-names>G. O.</given-names>
          </string-name>
          <year>1990</year>
          ]
          <article-title>Word identification in reading and the promise of subsymbolic psycholinguistics</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>97</volume>
          ,
          <fpage>488</fpage>
          -
          <lpage>522</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [
          <string-name>
            <surname>Zaidel</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>1982</year>
          ]
          <article-title>Reading in the disconnected right hemisphere: An aphasiological perspective</article-title>
          . In
          <source>Dyslexia: Neuronal, Cognitive and Linguistic Aspects</source>
          . Oxford: Pergamon Press, 35:
          <fpage>67</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [
          <string-name>
            <surname>Zaidel</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Peters</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          <year>1981</year>
          ]
          <article-title>Phonological encoding and ideographic reading by the disconnected right hemisphere: Two case studies</article-title>
          .
          <source>Brain &amp; Language</source>
          ,
          <volume>14</volume>
          ,
          <fpage>205</fpage>
          -
          <lpage>234</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>