<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Behavioral Insights on Influence of Manual Action on Object Size Perception</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Annalisa Bosco</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrizia Fattori</string-name>
        </contrib>
        <aff id="aff0">
          <institution>Department of Pharmacy and Biotechnology, University of Bologna</institution>
          , Bologna,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <fpage>21</fpage>
      <lpage>24</lpage>
      <abstract>
        <p>Visual perception is one of the most advanced functions of the human brain. The study of different aspects of human perception currently contributes to machine vision applications. Humans estimate the size of objects in order to grasp them through perceptual mechanisms. However, the motor system is also able to influence the perceptual system. Here, we found modifications of object size perception after a reaching and a grasping action under different contextual information. This mechanism can be described by a Bayesian model in which action provides the likelihood, which is then integrated with the expected size (prior) derived from stored object experience (Forward Dynamic Model). Beyond this action-modulation effect, knowledge of the subsequent action type modulates perceptual responses, shaping them according to the relevant information required to recognize and interact with objects. Cognitive architectures can be improved on the basis of these processes in order to amplify relevant features of objects and allow a robot/agent to interact easily with them.</p>
      </abstract>
      <kwd-group>
        <kwd>visual perception</kwd>
        <kwd>object recognition</kwd>
        <kwd>motor output</kwd>
        <kwd>human functions</kwd>
        <kwd>context information</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>The majority of machine vision and object recognition
systems today apply mechanistic or deterministic template
matching, edge detection or color scanning approaches to
identify different objects in space and to guide embodied
artificial intelligent systems in interacting with them.
However, small disturbances in the workspace of a robot can
lead to failures and thus slow down its performance in
identifying, recognizing, learning and adapting to noisy
environments, compared to the human brain. To go beyond these
limitations, robots with intelligent behavior must be provided
with a processing architecture that allows them to learn and
reason about responses to complex goals in a complex world.
The starting point for the development of such intelligent
systems is the study of human behavior. Humans frequently
estimate the size of objects in order to grasp them. In fact,
when performing an action, our perception is focused on the
visual properties of the object that enable us to execute the
action successfully. The motor system is also able to
influence perception, but only a few studies have reported
evidence for action-induced modifications of visual perception
related to hand movements [1–4]. For example, orientation
perception is enhanced during the preparation of a grasping
action compared with a pointing action, for which object
orientation is not important [5,6]. This “enhanced perception”
is triggered by the intention to grasp and is important for
examining objects with the maximum possible accuracy. If we
consider the effects of action execution on the visual
perception of object features, there is ample evidence for
visual perception changes in the oculomotor system, but little
is known about the perceptual changes induced by different
types of hand movements. In order to evaluate the influence of
different hand movements on visual perception, we tested a
feature-specific modulation of object size perception after a
reaching and a grasping action in different contexts.</p>
    </sec>
    <sec id="sec-2">
      <title>II. MATERIALS AND METHODS</title>
      <p>A total of 16 right-handed subjects (11 females and 5 males,
ages 21–40 years; with normal or corrected-to-normal vision)
took part in the experiment. The experiment was performed by
two groups of participants. One group of 8 subjects performed
the Prior knowledge of action type experiment (PK condition)
and the other group (8 participants) performed the No prior
knowledge of action type (NPK condition). All subjects were
naive to the experimental purpose of the study and gave
informed consent to participate in the experiment. Procedures
were approved by the Bioethical Committee of the University
of Bologna and were in accordance with the Declaration of
Helsinki.</p>
      <sec id="sec-2-1">
        <title>A. Apparatus and Setup</title>
        <p>Participants were seated in an environment with dim
background lighting and viewed a touchscreen monitor (ELO
IntelliTouch, 1939L), which displayed target stimuli within a
visible display of 37.5 X 30.0 cm. To stabilize head position,
the participants placed their heads on a chin rest located 43 cm
from the screen, which resulted in a visual field of 50 x 40
deg. The display had a resolution of 1152 X 864 pixels and a
frame rate of 60 Hz (15,500 touch points/cm²). For stimulus
presentation, we used MATLAB (The MathWorks) with the
Psychophysics toolbox extension [7]. The stimuli were white,
red and green dots with a radius of 1.5 mm and 10 differently
sized white, red and green bars, all 9 mm wide, whose
lengths were 30, 33.6, 37.2, 40.8, 44.4, 48, 51.6, 55.2, 58.8,
and 62.4 mm. Hand position was measured by a motion capture
system (VICON, 460; frequency of acquisition 100 Hz),
which follows the trajectory of the hand in three dimensions
by recording infrared light reflection on passive markers.
Participants performed 10 blocks of 10 trials each. Each trial
consisted of three successive phases: Pre-size perception,
Reaching or Grasping movement, Post-size perception (Fig.
1). In Pre-size perception and Post-size perception phases
(phases 1 and 3), a white or green central fixation target stayed
on the screen for 1 s; then, a white or green bar was presented,
for 1 s, 12 deg on the left or on the right side of the central
fixation target and, after an acoustic signal, it disappeared. The
participants were required to manually indicate the perceived
horizontal size of the bar. All participants indicated the bar
sizes by keeping the hand within the starting hand position
square and the distance between subject eyes. In the Reaching
or Grasping movement phase (phase 2), after 1 s, the white or
green central fixation point was followed by a bar identical for
position and size to that of phases 1 and 3. Participants were
required to perform a reaching (closed fist) or grasping action
(extension of thumb and index fingers to “grasp” the
extremities of the bar) towards the bar after the acoustic
signal. The type of action was instructed by the colors of the
stimuli (fixation point and bar). In fact, if the
color of the stimuli was white, participants were required to
perform a reaching movement whereas, if the color was green,
they were required to perform a grasping movement. In the PK
condition, the color of the fixation points and bars was white
or green in all three phases of the trial, so that participants
knew in advance (from phase 1) which action type was required
in the movement phase (phase 2). In the NPK condition, the
sequence of the three phases was structured identically to the
PK condition, but the colors of the fixation points and bars
were changed from white/green to red in phases 1 and 3. The
color of the stimuli during phase 2 remained white or green
according to the movement type, reaching or grasping
respectively. By this color manipulation, participants could
not know the upcoming action type in advance.</p>
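        <p>As an aside for replication, the viewing geometry reported above can be checked with a short script. This is an illustrative sketch only, not part of the original MATLAB pipeline, and the helper name is hypothetical.</p>

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle (deg) subtended by an extent viewed at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Values from the Methods: 37.5 x 30.0 cm display viewed from 43 cm.
width_deg = visual_angle_deg(37.5, 43)   # ~47 deg
height_deg = visual_angle_deg(30.0, 43)  # ~38.5 deg

# Shortest (30 mm) and longest (62.4 mm) bars.
small_bar_deg = visual_angle_deg(3.0, 43)   # ~4 deg
large_bar_deg = visual_angle_deg(6.24, 43)  # ~8.3 deg
```

        <p>With this exact formula the display spans roughly 47 × 38 deg at 43 cm, in the ballpark of the ~50 × 40 deg stated above (which presumably reflects rounding), and the bars span about 4–8 deg of visual angle.</p>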
      </sec>
      <sec id="sec-2-2">
        <title>B. Data analysis</title>
        <p>After data collection, finger position data were interpolated at
1000 Hz and then run through a fifth-order Butterworth
low-pass filter [8]. For data processing and analysis, we wrote
custom software in MATLAB to compute the distance
between the index and thumb markers during the pre- and
post-manual estimation phases. Grip aperture was calculated
considering trial intervals in which the velocities of the index
and thumb markers remained &lt;5 mm/s [8]. Grip aperture was
defined as the maximum distance within this interval. To
evaluate the effect of different hand movements on size
perception, we compared the manual perceptual responses
before the movements with those after the movements using
two-tailed t-tests with independent samples.</p>
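        <p>The grip-aperture computation described above can be sketched as follows. This is a minimal pure-Python illustration under assumed conventions (marker positions in mm sampled at the interpolated 1000 Hz, i.e. dt = 1 ms); the function names and data layout are hypothetical rather than the authors' MATLAB code, and the Butterworth filtering step is omitted.</p>

```python
def speeds(positions, dt):
    """Instantaneous speed (mm/s) from successive 3-D positions (mm)."""
    out = []
    for p, q in zip(positions, positions[1:]):
        d = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        out.append(d / dt)
    return out

def grip_aperture(index_pos, thumb_pos, dt=0.001, v_max=5.0):
    """Max index-thumb distance over samples where both markers stay below v_max (mm/s)."""
    v_index = speeds(index_pos, dt)
    v_thumb = speeds(thumb_pos, dt)
    best = None
    for i, (vi, vt) in enumerate(zip(v_index, v_thumb)):
        if vi < v_max and vt < v_max:
            p, q = index_pos[i], thumb_pos[i]
            d = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            best = d if best is None or d > best else best
    return best  # None if the hand never slows below v_max

# Example: two static markers 40 mm apart yield an aperture of 40 mm.
print(grip_aperture([(0.0, 0.0, 0.0)] * 5, [(40.0, 0.0, 0.0)] * 5))
```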
        <p>
          To evaluate the magnitude of the effect of the NPK and PK
conditions on perceptual responses before the movement, we
calculated the average difference between the two responses
and compared the responses between the two conditions with a
t-test. We extracted relevant features from the perceptual
responses before the movement and used them to predict the
NPK and PK conditions. For this purpose, we performed a
linear discriminant analysis (LDA-based classifier), as
implemented in the Statistics and Machine Learning Toolbox
(MATLAB). Pre-movement manual responses of the NPK and
PK conditions were vertically concatenated to build a feature
space of 958 trials. Fivefold cross-validation was performed
using 80% of the trials for training and 20% for testing, so as
to ensure that the classifier was trained and tested on different
data. Specifically, the classifier was trained on the training
subset and the resulting optimal decision criterion was applied
to the testing subset, for which the prediction results were
obtained. This procedure was repeated 5 times, so that all trials
were tested and classified based on models learned from the
other trials. The prediction results for all trials were taken
together to give an average prediction result with standard
deviation. We considered statistically significant those
accuracies whose standard deviations did not cross the
theoretical chance level of 50%. We used an LDA classifier as
a decoder of the two conditions. LDA finds a linear combination
of features that characterizes or separates two or more classes
of objects or events [
          <xref ref-type="bibr" rid="ref3 ref7 ref8">9,10</xref>
          ]. In fact, LDA explicitly attempts to
model the difference between the classes of data. For all
statistical analyses the significance criterion was set to P &lt; 0.05.
        </p>
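        <p>The fivefold cross-validation scheme described above can be illustrated with a minimal sketch. For a one-dimensional feature (the pre-movement size report) with two classes and equal variances, the LDA decision boundary reduces to the midpoint of the class means; the authors used MATLAB's Statistics and Machine Learning Toolbox, whereas the data and names below are invented for illustration.</p>

```python
import random
import statistics

def fivefold_accuracy(x0, x1, k=5, seed=0):
    """k-fold CV of a 1-D two-class LDA rule (threshold at the midpoint of class means)."""
    data = [(v, 0) for v in x0] + [(v, 1) for v in x1]
    rng = random.Random(seed)
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    fold_acc = []
    for i in range(k):
        test_set = folds[i]
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        m0 = statistics.mean(v for v, c in train if c == 0)
        m1 = statistics.mean(v for v, c in train if c == 1)
        thr = (m0 + m1) / 2  # equal-variance 1-D LDA boundary
        # Prediction is class 0 iff the sample falls on class 0's side of thr.
        correct = sum(((v < thr) == (m0 < m1)) == (c == 0) for v, c in test_set)
        fold_acc.append(correct / len(test_set))
    return statistics.mean(fold_acc), statistics.stdev(fold_acc)

# Simulated (invented) pre-movement size reports: PK reports smaller than NPK.
rng = random.Random(1)
pk = [rng.gauss(40, 4) for _ in range(100)]
npk = [rng.gauss(52, 4) for _ in range(100)]
mean_acc, sd_acc = fivefold_accuracy(pk, npk)
```

        <p>On these well-separated simulated data the mean accuracy minus its standard deviation stays above the 50% chance level, which is the significance criterion used above.</p>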
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. RESULTS</title>
      <p>We assessed the effects of action execution on perceptual
responses by comparing each subject's responses before the
movement with those after the movement and calculating the
difference between them. Fig. 2 shows these differences in
grey for the reaching movement on the horizontal axis plotted
against those for the grasping movement on the vertical axis.
Filled and empty circles refer to the PK and NPK conditions,
respectively. The majority of subjects fell below the diagonal,
suggesting that they corrected the perceptual estimation after
grasping movements with respect to reaching movements. In
particular, they perceived the bars as significantly smaller
after a grasping movement than after a reaching movement (P
&lt; 0.05). The averaged differences in the PK and NPK
conditions are reported in Fig. 2 as black and white dots,
respectively. Both dots lie below the diagonal, suggesting
that, globally, subjects perceived objects as smaller after a
grasping action than after a reaching action.</p>
      <p>To analyze the effect of the NPK and PK conditions on size
perception, we focused the analyses on manual size reports
before the movement execution (Pre size perception phase).
We computed the difference between the Pre size perception
reports in PK condition and the Pre size perception reports in
NPK condition. This difference allowed us to highlight the
amount of change in size perception in the two conditions
tested. As shown in Fig. 3A, we found that the amount of
change in reaching was -11.89 mm ±0.98 mm and in grasping
-11.36 mm ±1.08 mm; in both cases, these values deviated
significantly from baseline (t-test, P &lt; 0.05). Generally,
subjects tended to perceive the presented sizes as smaller in
the condition where they were aware of the subsequent action
(PK condition) compared with the condition
where they were uncertain about the successive movement
(NPK condition). To evaluate whether the strength of this
effect was due to a perceptual bias or to different neural
processes, we used an LDA decoder to classify the manual
responses according to the NPK and PK condition (see
Material and Methods). In other words, we checked whether
we were able to predict the PK and NPK conditions from
perceptual responses before the movement execution, as this
technique represents a powerful method to reconstruct
experimental conditions and functional movements from
neural responses using different types of classifiers [11,12].
Fig. 3A shows the decoding results as a confusion matrix and
the corresponding mean accuracy expressed as a percentage. We
found a good correlation between the real and the decoded
conditions, as illustrated in Fig. 3B. The decoding accuracies
were significantly higher than 50% (66.8% for PK and 60.54%
for NPK), as shown in Fig. 3C.
</p>
      <p>In the present study, we found direct evidence for a
perceptual modification of a relevant feature, object size,
before and after the execution of two types of hand movement.
These changes depended on two factors: the knowledge of the
subsequent action type and the type of action executed.
Changes in perception were sharpened after a grasping action
compared with a reaching action. Specifically, subjects
perceived objects as smaller after a grasping movement than
after a reaching movement. The study of the action effects
exerted by the skeletomotor system on perception has focused
on the evidence that relevant features of objects, such as size
or orientation, prime the perceptual system in order to execute
a more accurate subsequent grasping movement.</p>
      <p>Indeed, Gutteling et al. [5] demonstrated an increased
perceptual sensitivity to object orientation during a grasping
preparation phase. The effect of action-modulated perception
has also been shown to facilitate visual search for orientation.
Bekkering and Neggers [2] analysed the performance of
subjects that were required to grasp or point to an object of a
certain orientation and color among other objects. They
demonstrated that fewer saccadic eye movements were made
to wrong orientations when subjects had to grasp the object
than to point to it. Recently, Bayesian theory has been applied to
formalize processes of cue and sensorimotor integration
[13,14]. According to this view, the nervous system combines
prior knowledge about object properties gained through
former experience (prior) with current sensory cues
(likelihood), to generate appropriate object properties
estimations for action and perception. Hirsiger and
coworkers [15], applying a size-weight illusion paradigm,
found that prior and likelihood for size perception were
integrated in a Bayesian way. Their model consisted of a
Forward Dynamic Model (FDM) that represented the stored
object experience. The FDM output was the experience-based
expected size and was referred to as the prior. The prior was
then integrated with the likelihood, which represented the
afferent sensory information about object size. A feedback
loop with a specified gain provided the FDM with the final
estimate of size, which served as a learning signal for
adapting object experience. In the present study, we can apply
a similar model to size perception after an action execution.
In our case, the objects were visual rather than real, and no
haptic feedback was given after the execution of the movement.
So, the likelihood was represented by the matching of the
fingers with the outer border of the objects and/or the
proprioceptive signals coming from the hand posture, which
were integrated with the prior.</p>
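      <p>The prior-likelihood integration described above can be made concrete for the Gaussian case, where Bayes' rule reduces to a precision-weighted average. The numbers below are purely illustrative and are not estimates from the present experiment.</p>

```python
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    """Conjugate Gaussian update: posterior mean is the precision-weighted
    average of the prior and likelihood means."""
    w_p, w_l = 1.0 / var_prior, 1.0 / var_like
    var_post = 1.0 / (w_p + w_l)
    mu_post = var_post * (w_p * mu_prior + w_l * mu_like)
    return mu_post, var_post

# Illustrative numbers only: expected size (prior) 50 mm, sensory evidence 44 mm.
mu_post, var_post = gaussian_posterior(50.0, 16.0, 44.0, 8.0)
```

      <p>With a prior expectation of 50 mm and more precise sensory evidence of 44 mm, the posterior estimate (46 mm) is pulled toward the likelihood and is always more precise than either cue alone, qualitatively consistent with the action-modulated size reports described above.</p>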
      <p>
        We found that the knowledge of action type was a factor
modulating size perception. In fact, subjects perceived the
bars as smaller in the condition where they knew the
subsequent action (PK) than in the condition where they did
not know the subsequent action (NPK), for both reaching and
grasping. A further demonstration of this was the possibility
of predicting the two conditions with significant accuracy
(&gt;50%) from perceptual responses before the movement (see
Fig. 3B-C). This approach is typical for neural responses and
represents a novelty for this type of behavioral variable. The
significance of these results is in line with evidence from
behavioral research suggesting that motor planning processes
increase the weight of visual inputs.
Hand visual feedback has been found to have a greater impact
on movement accuracy when subjects prepare their
movements with the prior knowledge that vision will be
available during their reaches [16,17]. More interestingly,
motor preparation facilitates the processing of visual
information related to the target of movement. Similarly to
Gutteling et al. [5] for object orientation, Wykowska et al. [
        <xref ref-type="bibr" rid="ref1">18</xref>
        ]
reported that the detection of target size was facilitated during
the planning of grasping but not during the planning of
pointing. All these studies show the capacity of the brain to
modulate the weight of visual inputs and provide an
illustration of the importance of context in visual information
processing. In line with these studies, our findings suggest
that the knowledge (or not) of the subsequent movement type
defines a context that modulates the perceptual system. When
subjects knew the subsequent movement, the perceptual system
operated within a definite context and perceived objects as
smaller, scaling the measures according to hand motor
abilities. In the other case, subjects were in an uncertain
context about the successive action, and the perceptual system
used different rules to scale the size reports. In both cases, the
defined and undefined contexts can be predicted. All the
mechanisms described in the present study could inform models
of cognitive architectures for vision-based reaching and
grasping of objects located in the peripersonal space of a
robot/agent. Additionally, the evidence that the perceptual
system is dynamically modulated by contextual information
about the subsequent movement type can be used to improve
cognitive architectures. For example, one or multiple
focus-of-attention signals could be sent to the object
representation of a robot/agent in order to amplify relevant
features and, at the same time, inhibit distractors.
      </p>
    </sec>
    <sec id="sec-4">
      <title>ACKNOWLEDGMENT</title>
      <p>We thank F. Daniele for helping in the data collection and in the data analysis. This work was supported by Firb 2013 N. RBFR132BKP (MIUR) and by the Fondazione del Monte di Bologna e Ravenna.</p>
    </sec>
    <sec id="sec-6">
      <title>REFERENCES</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[18] Craighero L, Fadiga L, Rizzolatti G, Umiltà C. Action for Perception: A Motor-Visual Attentional Effect. J Exp Psychol Hum Percept Perform. 1999;25:1673.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>Bekkering H, Neggers SFW. Visual search is modulated by action intentions. Psychol Sci. 2002;13:370-374. doi:10.1111/j.0956-7976.2002.00466.x. Hannus A, Cornelissen FW, Lindemann O, Bekkering H. Selection-for-action in visual search. Acta Psychol (Amst). 2005;118:171-191. doi:10.1016/j.actpsy.2004.10.010. Fagioli S, Hommel B, Schubotz RI. Intentional control of attention: Action planning primes action-related stimulus dimensions. Psychol Res. 2007;71:22-29. doi:10.1007/s00426-005-0033-3. Gutteling TP, Kenemans JL, Neggers SFW. Grasping preparation enhances orientation change detection. PLoS One. 2011;6.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>doi:10.1371/journal.pone.0017675. Gutteling TP, Park SY, Kenemans JL, Neggers SFW. TMS of the anterior intraparietal area selectively modulates orientation change detection during action preparation. J Neurophysiol. 2013;110:33-41. doi:10.1152/jn.00622.2012. Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997;10:433-436. doi:10.1163/156856897X00357. Bosco A, Lappe M, Fattori P. Adaptation of Saccades and Perceived Size after Trans-Saccadic Changes of Object Size. J Neurosci.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>2015;35:14448-14456. doi:10.1523/JNEUROSCI.0129-15.2015. Fisher RA. The use of multiple measurements in taxonomic problems. Ann Eugen. 1936;7:179-188.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>McLachlan GJ. Discriminant Analysis and Statistical Pattern Recognition. Wiley-Interscience. 2004.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>Schaffelhofer S, Agudelo-Toro A, Scherberger H. Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices. J Neurosci. 2015;35:1068-81.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>doi:10.1523/JNEUROSCI.3594-14.2015. Townsend BR, Subasi E, Scherberger H. Grasp movement decoding from premotor and parietal cortex. J Neurosci. 2011;31:14386-98.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>doi:10.1523/JNEUROSCI.2451-11.2011. Körding KP, Wolpert DM. Bayesian decision theory in sensorimotor control. Trends Cogn Sci. 2006:319-326. doi:10.1016/j.tics.2006.05.003. van Beers RJ, Wolpert DM, Haggard P. When feeling is more important than seeing in sensorimotor adaptation. Curr Biol.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>2002;12:834-837. doi:10.1016/S0960-9822(02)00836-9. Hirsiger S, Pickett K, Konczak J. The integration of size and weight cues for perception and action: Evidence for a weight-size illusion.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>Exp Brain Res. 2012;223:137-147. doi:10.1007/s00221-012-3247-9. Zelaznik HZ, Hawkins B, Kisselburgh L. Rapid visual feedback processing in single-aiming movements. J Mot Behav. 1983;15:217-236. doi:10.1080/00222895.1983.10735298. Elliott D, Allard F. The utilization of visual feedback information during rapid pointing movements. Q J Exp Psychol A. 1985;37:407-425. doi:10.1080/14640748508400942. Wykowska A, Schubö A, Hommel B. How you move is what you see: action planning biases selection in visual search. J Exp Psychol Hum Percept Perform. 2009;35:1755-1769. doi:10.1037/a0016798.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>