<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>BIOMIMETIC SPACE-VARIANT SAMPLING IN A VISION PROSTHESIS IMPROVES THE USER'S SKILL IN A LOCALIZATION TASK</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>B. Durette</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>L. Gamond</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>S. Hanneton</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D. Alleysson</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>J. Hérault</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Images and Signals, Gipsa-Lab, CNRS UMR 5216, UJF, INPG</institution>
          ,
          <addr-line>Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Laboratory of Neurophysics and Physiology, CNRS UMR 8119, Université Paris V</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
<institution>Laboratory of Psychology and Neuro-cognition</institution>
          ,
          <addr-line>CNRS UMR 5105 , UPMF, Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2007</year>
      </pub-date>
      <abstract>
<p>In this experiment, we test the hypothesis that a 'retina-like' space-variant sampling pattern can improve the efficiency of a visual prosthesis. Subjects wearing a visuo-auditory substitution system were tested for their ability to point at visual targets. The test group (space-variant sampling) performed significantly better than the control group (uniform sampling). The pointing accuracy was enhanced, as was the speed at which the target was found. Surprisingly, the time needed to complete the training was also reduced, suggesting that this space-variant sampling scheme facilitates the mastering of sensorimotor contingencies.</p>
      </abstract>
      <kwd-group>
        <kwd>Visual prosthesis</kwd>
        <kwd>space-variant sampling</kwd>
        <kwd>design principles</kwd>
        <kwd>sensori-motricity</kwd>
        <kwd>learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Visual prostheses are devices that interface a video camera with the brain at different levels: either
directly implanted on the retina or on the cortical surface
        <xref ref-type="bibr" rid="ref12 ref20">(for a review, see e.g. Zrenner, 2002, Margalit et al., 2002)</xref>
        , or through an intact substitute sense, most of the time the tactile sense
        <xref ref-type="bibr" rid="ref1 ref11 ref17">(Sampaio et al., 2001, Bach-y-Rita et al., 2004, Kajimoto et al., 2006)</xref>
        , or the auditory sense
        <xref ref-type="bibr" rid="ref13 ref2">(Meijer, 1992, Auvray et al., 2005)</xref>
        . These latter devices are called “sensory substitution systems”.
      </p>
      <p>
        The main difference between visual prostheses and natural vision systems is probably the number of
stimulation points available. Compared with the 6 million cones of the human eye or the 80,000 pixels
of a video camera, the resolution of visual prostheses, all categories considered, ranges from 64
        <xref ref-type="bibr" rid="ref6">(cortical implant, Dobelle, 2000)</xref>
        to 896 synchronous stimulation points (VideoTact tactile array,
ForeThought Dev.). Wider arrays are under development; however, their spatial resolution will probably
soon be limited, not by the technology, but by the sensitive substrate itself
        <xref ref-type="bibr" rid="ref20">(Zrenner, 2002)</xref>
        . The
gap between natural vision systems and interfacing solutions makes the question of resolution
reduction a critical point for visual prostheses. Most of the time, the resolution reduction is done by
uniform subsampling, either directly on the picture
        <xref ref-type="bibr" rid="ref13 ref19 ref3">(Bach-y-Rita et al., 1969, Meijer, 1992, Thompson et al., 2003)</xref>
        , or after a preliminary signal-processing stage such as uniform averaging
        <xref ref-type="bibr" rid="ref17 ref9">(Sampaio et al., 2001, Harvey &amp; Sawan, 1996)</xref>
        or edge detection
        <xref ref-type="bibr" rid="ref11 ref6">(Dobelle, 2000, Kajimoto et al., 2006)</xref>
        . However,
when applied to large fields of vision, uniform subsampling leads to a low global resolution.
To address this problem, natural systems have adopted a space-variant sampling principle. The visual
system of primates, for instance, possesses a densely sampled “foveal” region at the center of the
visual field (about 3° wide). The sampling density then rapidly decreases with eccentricity
        <xref ref-type="bibr" rid="ref16">(Osterberg, 1935)</xref>
        . This feature is generally understood as a focus/context strategy: the visual system is able
to roughly detect an object of interest in its field of vision and then to direct its fovea toward it for
identification if necessary. Space-variant sampling has often been mentioned as a possible way to
enhance visual prostheses
        <xref ref-type="bibr" rid="ref14 ref7">(e.g. Eckmiller et al., 2005, Naghdy, 2006)</xref>
        . To our knowledge, it has been
implemented in only two devices: the PSVA
        <xref ref-type="bibr" rid="ref5">(Capelle et al., 1998)</xref>
        and the VAS
        <xref ref-type="bibr" rid="ref8">(Gonzalez-Mora, 2003)</xref>
        . However, the sampling distributions were determined empirically, and no comparison was
made with other possible distributions, such as a uniform one.
      </p>
      <p>
        In this article, we show how recent advances in the understanding of visual perception in terms of
sensorimotor contingencies
        <xref ref-type="bibr" rid="ref15">(O'Regan &amp; Noë, 2002)</xref>
        , as well as knowledge of signal processing in the
primate early visual system
        <xref ref-type="bibr" rid="ref10">(Hérault &amp; Durette, 2007)</xref>
        , provide new arguments and new tools to address
the question of space-variant sampling in visual prostheses. We then propose a particular sampling
distribution and test whether it can improve the efficiency of a visual prosthesis.
Blindfolded subjects wearing a visuo-auditory substitution system
        <xref ref-type="bibr" rid="ref2">(TheVIBE, Auvray et al., 2005)</xref>
        were
tested for their ability to point at visual targets. The test group (space-variant sampling) performed
significantly better than the control group (uniform sampling). The pointing accuracy was enhanced, as
was the speed at which the target was found. Surprisingly, the time needed to complete the training was also
reduced, suggesting that this space-variant sampling may facilitate the mastering of sensorimotor
contingencies.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical Aspects: Sampling, Sensorimotricity and Vision</title>
      <sec id="sec-2-1">
        <title>2.1. A biomimetic inspiration</title>
        <p>
          To address the question of space-variant sampling, we first needed to choose among all possible
sampling distribution laws. The law we chose is directly inspired by observations of the primate visual
system. As described by
          <xref ref-type="bibr" rid="ref18">Schwartz (1980)</xref>
          , the mapping of the visual world onto the primate's primary
visual cortex is highly space-variant. In particular, the inverse ratio between a distance in the visual
world and its correlate in the cortical projection, often referred to as the “cortical magnification factor”,
strongly decreases with eccentricity. From neurophysiological observations, Schwartz derived a
“global retinotopic mapping” of the visual information onto the visual cortex, described as the complex
logarithm of a linear function of eccentricity. With z being a point in the visual plane and w its
projection onto the cortical plane, the transformation between z and w can be written w = log(z + a), a being
a constant. With z = ρe<sup>iθ</sup> and w = ρ'e<sup>iθ'</sup>, this formula links the eccentricity ρ of a point in the visual world
to its “eccentricity” ρ' in the visual cortex:
        </p>
        <p>ρ' = log(ρ + a)</p>
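        <p>As a minimal numerical sketch of this mapping (the constant a and the sample eccentricities below are arbitrary illustration values, not parameters of any device described here):</p>
        <preformat>
# Sketch of the complex-logarithmic retino-cortical mapping w = log(z + a)
# described by Schwartz (1980).  The constant a is an arbitrary example value.
import cmath

a = 1.0  # mapping constant (illustrative value only)

def cortical_projection(z):
    """Map a point z = rho*exp(i*theta) of the visual plane onto the cortical plane."""
    return cmath.log(z + a)

# Points of increasing eccentricity along the horizontal meridian (theta = 0).
for rho in (0.0, 1.0, 10.0, 100.0):
    w = cortical_projection(complex(rho, 0.0))
    # The cortical eccentricity rho' grows like log(rho + a): for large rho,
    # multiplying the visual eccentricity by 10 adds a roughly constant step.
    print(rho, w.real)
        </preformat>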
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Sensori-motor arguments to a logarithmic mapping</title>
        <p>Biomimeticity and the focus/context strategy are not the only arguments in favor of a logarithmic mapping
of the visual eccentricity: it may also bring new regularities into the sensori-motor coupling. Let us compare
the properties of a logarithmic cortical mapping with those of a linear one. An object of size dX is
positioned at the eccentricity X in a visual plane at a distance Z from the observer (fig. 1). Its correlate
is an object of size dY at the eccentricity Y on the cortex. The central lens symbolizes the eye. For
simplification purposes, the distance between the central lens and the cortical plane is chosen as the unit
distance. With a linear mapping, we obtain:</p>
        <p>Y = X / Z (1), which, by differentiation, gives dY = (X / Z)·(dX/X − dZ/Z) (2).</p>
        <p>Assuming a logarithmic cortical mapping (the index c standing for logarithmic coordinates), eq. (1) becomes:</p>
        <p>Y<sub>c</sub> = log(X / Z) (3), which, by differentiation, gives dY<sub>c</sub> = dX/X − dZ/Z (4).</p>
        <p>Formula (4) brings new regularities into the link between the visual world and its cortical correlate with
respect to sensori-motor coupling, particularly in the case of a motion along the direction Z:</p>
        <p>1. An object O contained in the vertical plane (dZ = 0) has a constant size on the cortex regardless of
the viewing distance. Indeed, setting dZ = 0 in eq. (4) gives dY<sub>c</sub> = dX/X (5). As dX and X are constant, dY<sub>c</sub> is also constant.</p>
        <p>2. When approaching at a constant velocity v toward an object O contained in the vertical plane
(dX/dt = 0), the velocity of its projection on the cortical plane is inversely proportional to the time to
contact (T) between the object and the observer. Indeed, temporal differentiation of eq. (3) with dX/dt = 0 gives
dY<sub>c</sub>/dt = −(1/Z)·(dZ/dt) = 1/T (6).</p>
        <p>Thus, the logarithmic mapping of the visual world simplifies the relationship between the subject's motion
and the changes it implies in his sensations. This is why we claim that it brings new
sensorimotor regularities. Our hypothesis is that the use of such a mapping in a visual prosthesis should
enhance its efficiency.</p>
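        <p>Both regularities can be checked numerically. The sketch below uses arbitrary illustration values and simply evaluates the cortical size and cortical velocity predicted by eqs. (3) to (6):</p>
        <preformat>
# Numerical check of the two regularities implied by the logarithmic mapping
# Yc = log(X / Z) (eq. 3).  All values are arbitrary illustration values.
import math

X, dX = 2.0, 0.1         # eccentricity and size of a frontal object (dZ = 0)
v = 1.0                  # approach velocity along Z (dX/dt = 0)

def cortical(x, z):
    return math.log(x / z)

# Regularity 1: the cortical size of a frontal object does not depend on the
# viewing distance Z (eq. 5: dYc = dX / X).
for Z in (4.0, 2.0, 1.0, 0.5):
    size_on_cortex = cortical(X + dX, Z) - cortical(X, Z)
    print(Z, size_on_cortex)          # constant, equal to log(1 + dX/X)

# Regularity 2: when approaching at constant velocity, the cortical velocity
# equals 1/T, the inverse of the time to contact (eq. 6).
Z, dt = 3.0, 1e-4
velocity_on_cortex = (cortical(X, Z - v * dt) - cortical(X, Z)) / dt
print(velocity_on_cortex, v / Z)      # both approximately 1/T = v/Z
        </preformat>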
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Implementation of the logarithmic mapping</title>
        <p>The mapping function we use is a Michaelis-Menten law, which is linear for small eccentricities,
logarithmic for medium ones, and then saturates. One of its major advantages over a purely
logarithmic law is that it is bounded, thus fitting within the finite cortical space. With ρ being the
eccentricity of a point in the visual space and ρ' its correlate in the cortical space, this law can be
written:</p>
        <p>ρ' = ρ'<sub>lim</sub> · ρ / (ρ + ρ<sub>0</sub>) (7)</p>
        <p>ρ'<sub>lim</sub> and ρ<sub>0</sub> are determined so that the image size
is preserved and so that the magnification factor of the central region
(hereafter referred to as R<sub>0</sub>) is
adjustable. The central magnification factor R<sub>0</sub> is
defined as the ratio between the sampling density at the
eccentricity ρ = 0 and the global sampling density.</p>
        <p>Figure 2: Mapping of the cortical eccentricity ρ' as a function of the visual eccentricity ρ.</p>
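        <p>A minimal sketch of this mapping is given below. It relies on one plausible reading of the constraints above, not necessarily the exact procedure used: densities are compared along the radial dimension, the half-image size ρ<sub>max</sub> is preserved (ρ'(ρ<sub>max</sub>) = ρ<sub>max</sub>), and R<sub>0</sub> is the ratio of the slope at the center to the global density, which gives ρ<sub>0</sub> = ρ<sub>max</sub>/(R<sub>0</sub> − 1) and ρ'<sub>lim</sub> = R<sub>0</sub>·ρ<sub>max</sub>/(R<sub>0</sub> − 1):</p>
        <preformat>
# Sketch of the Michaelis-Menten eccentricity mapping of eq. (7):
#     rho' = rho_lim * rho / (rho + rho0)
# Deriving rho_lim and rho0 from R0 as below is one plausible reading of the
# constraints stated in the text (image size preserved, adjustable central
# magnification R0), not necessarily the procedure actually used.

def michaelis_menten_parameters(r0, rho_max):
    """Return (rho_lim, rho0) such that rho'(rho_max) = rho_max and the slope
    at the center is r0 times the global density rho_max / rho_max = 1."""
    rho0 = rho_max / (r0 - 1.0)
    rho_lim = r0 * rho_max / (r0 - 1.0)
    return rho_lim, rho0

def remap(rho, rho_lim, rho0):
    return rho_lim * rho / (rho + rho0)

# Example with R0 = 2 (the value used for the space-variant retinas) and an
# illustrative half-image size of 160 pixels.
rho_lim, rho0 = michaelis_menten_parameters(2.0, 160.0)
for rho in (0.0, 40.0, 80.0, 160.0):
    print(rho, remap(rho, rho_lim, rho0))
        </preformat>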
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Apparatus: Building Space-Variant Retinas for TheVIBE</title>
      <sec id="sec-3-1">
        <title>3.1. TheVIBE auditory substitution system</title>
        <p>
          TheVIBE device is an experimental system for the conversion of images into sound patterns
          <xref ref-type="bibr" rid="ref2">(Auvray et al., 2005)</xref>
          . The image is sampled by a set of “receptive fields”. Each receptive field is a cluster of
randomly localized pixels. These receptive fields compose TheVIBE's virtual retina (fig. 3). The auditory
output is a sum of sinusoidal sounds produced by virtual sound “sources”, each
corresponding to one of the retina's receptive fields. The frequency and the inter-aural disparity of
each source are determined by the coordinates of the receptive field in the image (fig. 3,
squares). The sound's amplitude is determined by the mean luminosity of the pixels of the
corresponding receptive field (fig. 3, crosses). The ability to freely define the receptive fields' positions
and configurations makes TheVIBE a particularly suitable tool for addressing the question of space-variant
mapping.
        </p>
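        <p>The conversion principle can be summarized by the short sketch below. It only illustrates the mapping described above (vertical position to frequency, horizontal position to inter-aural disparity, mean luminosity to amplitude); the parameter values and the use of simple amplitude panning in place of a true inter-aural disparity are assumptions made for illustration, not the actual TheVIBE implementation:</p>
        <preformat>
# Illustration of TheVIBE's image-to-sound principle: each receptive field
# drives one sinusoidal source whose frequency and lateralization come from
# its position and whose amplitude comes from the mean luminosity of its
# pixels.  Parameter values are illustrative; amplitude panning stands in
# here for the inter-aural disparity.
import numpy as np

SAMPLE_RATE = 44100
F_MIN, F_MAX = 300.0, 3000.0      # frequency range reported in section 3.2

def receptive_field_sound(y_norm, x_norm, mean_luminosity, duration=0.05):
    """One sinusoidal source for one receptive field.

    y_norm, x_norm: position of the field in the image, normalized to [0, 1].
    mean_luminosity: mean gray level of the field's pixels, in [0, 1].
    Returns an (n_samples, 2) stereo buffer.
    """
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    freq = F_MIN + y_norm * (F_MAX - F_MIN)        # vertical position sets pitch
    tone = mean_luminosity * np.sin(2.0 * np.pi * freq * t)
    left, right = (1.0 - x_norm) * tone, x_norm * tone   # horizontal position sets panning
    return np.stack([left, right], axis=1)

def frame_to_sound(image, retina):
    """Sum the sources of all receptive fields; retina is a list of
    (y_norm, x_norm, pixel_indices) triplets."""
    flat = image.astype(float).ravel() / 255.0
    return sum(receptive_field_sound(y, x, flat[idx].mean())
               for y, x, idx in retina)
        </preformat>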
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Uniform and space-variant retina design</title>
        <p>To design standard uniform retinas for TheVIBE, we cut
the 320 x 240 image into a set of 16 x 12 cells, each
of 20 x 20 pixels. A receptive field origin is chosen at a
random position in each cell. Each receptive field is
composed of 10 sampling pixels chosen randomly in a
20 x 20 box centered on the receptive field origin, so that
overlap with other receptive fields is possible. Our
retinas were thus composed of 192 receptive fields.</p>
        <p>Frequency and inter-aural disparity followed a linear
mapping of the vertical (resp. horizontal) position.</p>
        <p>Frequencies ranged from 300 to 3000 Hz. To design
space-variant retinas, a logarithmic mapping as defined
in section 2, with a central magnification factor R<sub>0</sub> = 2,
was applied to the uniform retinas. The result is illustrated in
fig. 3. The initial linear mapping in the auditory space was
kept.</p>
        <p>Fig. 3: A space-variant logarithmic (R<sub>0</sub> = 2) retina for TheVIBE. Note that the extent of
the receptive fields increases with eccentricity. In particular, our mapping preserves overlap.</p>
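        <p>The construction of the two retina types can be sketched as follows. This is a simplified reconstruction of the description above: treating the uniform retina as uniform in “cortical” coordinates and inverting eq. (7) to obtain the image positions is one reading of “applied to uniform retinas”, and the handling of image boundaries is a guess:</p>
        <preformat>
# Sketch of the retina construction described above: a uniform 16 x 12 grid of
# receptive fields (10 random pixels each, drawn in a 20 x 20 box around a
# random origin in each cell), optionally remapped with the eccentricity law
# of section 2.  Simplified reconstruction, not the original code.
import math
import random

WIDTH, HEIGHT = 320, 240
CELL, N_PIXELS = 20, 10

def uniform_retina(rng):
    retina = []
    for cy in range(HEIGHT // CELL):               # 12 rows of cells
        for cx in range(WIDTH // CELL):            # 16 columns of cells
            ox = cx * CELL + rng.randrange(CELL)   # random origin in the cell
            oy = cy * CELL + rng.randrange(CELL)
            pixels = [(ox + rng.randrange(CELL) - CELL // 2,
                       oy + rng.randrange(CELL) - CELL // 2)
                      for _ in range(N_PIXELS)]
            retina.append(((ox, oy), pixels))
    return retina                                  # 192 receptive fields

def space_variant_retina(rng, r0=2.0):
    """Remap a uniform retina with eq. (7): uniform positions are read as
    'cortical' coordinates and the inverse mapping gives the image positions."""
    rho_max = 0.5 * math.hypot(WIDTH, HEIGHT)
    rho0 = rho_max / (r0 - 1.0)
    rho_lim = r0 * rho_max / (r0 - 1.0)

    def remap(x, y):
        dx, dy = x - WIDTH / 2.0, y - HEIGHT / 2.0
        rho_c = math.hypot(dx, dy)
        if rho_c == 0.0:
            return x, y
        scale = (rho0 * rho_c / (rho_lim - rho_c)) / rho_c   # inverse of eq. (7)
        return WIDTH / 2.0 + scale * dx, HEIGHT / 2.0 + scale * dy

    return [(remap(*origin), [remap(*p) for p in pixels])
            for origin, pixels in uniform_retina(rng)]

retina = space_variant_retina(random.Random(0))
        </preformat>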
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Performance Assessment: A 'Contact' Task</title>
      <p>
        The test protocol was inspired by the work of
        <xref ref-type="bibr" rid="ref1">Auvray (2004)</xref>
        , which studied stages of immersion in a
sensory substitution system. Our task aims at testing the first stage of immersion, i.e. the 'contact'
stage, in which the user learns sensorimotor rules to stabilize the stimulus and maintain contact with it.
We extended it to the ability to direct the camera toward the target in a stable configuration, i.e. to
consistently place a visual target at the same location in the visual field. This location was not
necessarily the objective center of vision, which was never mentioned during the experiment. The task was
thus entirely non-supervised.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Experimental setup</title>
        <p>
          The subject was equipped with a wide-angle webcam (Logitech QuickCam Pro 5000) on top of his
head, connected to a Dell GX620 PC with two 3 GHz Pentium 4 processors. Auditory feedback was
delivered via Sennheiser HD 280 headphones. The subject was placed in front of a screen where a
white target, 8° in diameter, was presented on a black background at different locations (fig. 4). The
projected picture covered exactly the 78° x 58° field of view of the substitution device. Targets were
generated with PsychToolbox
          <xref ref-type="bibr" rid="ref4">(Brainard, 1997)</xref>
          .
        </p>
        <p>The experiment was divided into three stages separated by 5-minute breaks. The
first stage was “free exploration”. For 5 minutes, the subject, standing in
front of a single target, freely explored his visual field. He was allowed to
move his head and body as he wished. For the next 5 minutes, the subject was
asked to move to the right and to the left while keeping the target fixed with
respect to the vision device, first at ½ m from the screen and then at 1 m. The
second stage was “masking”. In this task, the subject was seated in front of
the screen. For the first thirty randomized target positions, he had to point
at the target with his head and then mask it with his hand, keeping his
arm extended (so as not to mask the whole camera aperture). For the
next thirty trials, after a 5-minute break, he had to mask the target with his hand
without pointing at it with the vision device. The duration of the whole second
stage was recorded.</p>
        <p>Fig. 4: Experimental room and setup.</p>
        <p>The last stage was the test. Forty targets were presented on the screen at 20 different positions in the
subject's visual field (each one was presented twice). The subject, seated in front of the screen, was
requested to “place the target at the center of his perceptive field, as accurately and quickly as
possible”. He had to validate when he felt this was the case. The position of the target in the visual space
and the time spent on each trial were recorded.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>Fourteen subjects, most of them students (mean age 26 years, σ = 4 years), took part in the experiment; none of them
had ever used this type of device. Seven were in the space-variant condition and seven in the uniform
condition. The groups were matched for age and gender. The results are presented in fig. 5.</p>
      <p>Fig. 5: Accuracy, response time and training duration for the two groups.</p>
      <p>Target localization: we computed the mean final position of the target over all trials, which we called the “subjective
center”. We then computed the distance between this subjective center and the actual position of the
target at the end of each trial; this is the measure we used to assess accuracy. Our hypothesis was
that accuracy should be enhanced when using space-variant sampling. A one-tailed Mann-Whitney
test was applied to address this question. Subjects in the space-variant condition (Log2)
performed significantly better (z = 4.99; p<sub>uni</sub> &lt; 0.001), with mean m<sub>log2</sub> = 35 pix. and standard deviation
σ<sub>log2</sub> = 29 pix., against m<sub>unif</sub> = 44 pix., σ<sub>unif</sub> = 28 pix. in the uniform condition.</p>
      <p>Time to focus on the target: the mean time required to bring a visual target into focus during the test stage was
measured to assess the efficiency of the device. Subjects in the space-variant condition performed
significantly better (one-tailed Mann-Whitney test, z = 6.96; p<sub>uni</sub> &lt; 0.001), with mean m<sub>log2</sub> = 10.58 s
and standard deviation σ<sub>log2</sub> = 6.68 s, against m<sub>unif</sub> = 19.46 s, σ<sub>unif</sub> = 17.47 s in the uniform condition.
Training duration: the duration of the second stage of training was also measured in order to assess the
difficulty of appropriating the device. As we had no prior expectation about the direction of this effect, we performed
a two-tailed Mann-Whitney test. Subjects in the space-variant condition performed significantly better (U =
47.5; p<sub>bil</sub> &lt; 0.01), with mean m<sub>log2</sub> = 23.4 min and standard deviation σ<sub>log2</sub> = 4.6 min, against m<sub>unif</sub> = 36.3
min, σ<sub>unif</sub> = 6.7 min in the uniform condition.</p>
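      <p>For illustration, the accuracy measure and the group comparisons described above can be reproduced along the following lines. This is a sketch with hypothetical variable names, relying on the standard SciPy implementation of the Mann-Whitney test; it is not the analysis code actually used:</p>
      <preformat>
# Sketch of the accuracy analysis: per-subject "subjective center", distance of
# each final target position to that center, then a one-tailed Mann-Whitney
# comparison between groups.  Variable names and data layout are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

def accuracy_errors(final_positions):
    """final_positions: (n_trials, 2) array of target positions (in pixels)
    in the camera image at the moment the subject validated each trial."""
    positions = np.asarray(final_positions, dtype=float)
    subjective_center = positions.mean(axis=0)          # mean over all trials
    return np.linalg.norm(positions - subjective_center, axis=1)

def compare_groups(errors_log2, errors_unif):
    """One-tailed test of the hypothesis that errors are smaller in the
    space-variant (Log2) condition than in the uniform condition."""
    u_stat, p_value = mannwhitneyu(errors_log2, errors_unif, alternative='less')
    return u_stat, p_value
      </preformat>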
      <p>Thus, with a shorter learning time and a shorter response time, subjects in the space-variant condition
also performed significantly better in terms of accuracy.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>Although quite promising, these results need to be considered carefully, for two main reasons.
When applied to a rectangular image, our mapping induces blank zones that are not sampled (at the
borders). The extent of this effect is related to the magnification factor. For R<sub>0</sub> = 2, the blank zones cover
approximately 10% of the picture. Although this aspect is not likely to explain our results, its effect cannot
be excluded. We are currently designing circular retinas that will not suffer from this effect. Another
possible caveat is the fact that the operator was aware of the subject's group when conducting the
experiment. Even though the operator could not have had a direct influence on the measurements, the results
need to be confirmed with a double-blind protocol.</p>
      <p>However, this study shows that new pathways exist which may enhance vision prostheses,
even though their resolution is bound to be limited. It provides an experimental framework to assess
their practical performance, and it shows that significant and even counter-intuitive effects may be
obtained. The fact that accuracy was enhanced is natural, since the sampling network was
denser at the foveal center. The shorter response time may also be understood as an effect of the
sampling variation, which provided additional information for locating a position in the visual field. On the
other hand, space-variant sampling could have complicated the mastering of the vision device: the opposite
appears to be the case. This result leads to the idea that our mapping may facilitate the mastering
of sensori-motor laws. This last aspect needs to be investigated further to determine whether our
mapping is optimal or whether other kinds of space-variant sampling, a linearly decreasing distribution
law for instance, may have the same effect.</p>
      <p>Lastly, one may take a practical view of this study. To date, vision devices are far from giving blind
people new eyes. However, contrary to surgical approaches (cataract operations, retinal transplants),
electronic vision devices may easily be adapted to specific tasks. By giving the subject a better ability
to locate a visual target, space-variant sampling may be of use in devices designed for orientation,
object finding and mobility assistance.</p>
      <p>Acknowledgments: the “Sensory Substitution System for Visual Handicap” project is funded by the
Rhône-Alpes region and the association “Les Gueules Cassées”. Many thanks to K. O'Regan, S.
Chokron, M. Auvray and C. Schoonover for fruitful discussions, and to C. Costello for linguistic
support.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Auvray</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Immersion et perception spatiale : L'exemple des dispositifs de substitution sensorielle</article-title>
          .
          <source>PhD thesis</source>
          , Ehess, Paris.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Auvray</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hanneton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lenay</surname>
          </string-name>
          , and
          <string-name>
            <surname>K. O'Regan</surname>
          </string-name>
          (
          <year>2005</year>
          )
          <article-title>There is something out there: distal attribution in sensory substitution, twenty years later</article-title>
          .
          <source>Journal of Integrative Neuroscience</source>
          , vol.
          <volume>4</volume>
          , pp.
          <fpage>505</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Bach-y-Rita</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>C.C.</given-names>
            <surname>Collins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.A.</given-names>
            <surname>Saunders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>White</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Scadden</surname>
          </string-name>
          (
          <year>1969</year>
          ).
          <article-title>Vision substitution by tactile image projection</article-title>
          .
          <source>Nature</source>
          , vol.
          <volume>221</volume>
          (
          <issue>5184</issue>
          ), pp.
          <fpage>963</fpage>
          -
          <lpage>964</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Brainard</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>1997</year>
          ).
          <article-title>The psychophysics toolbox</article-title>
          .
          <source>Spatial Vision</source>
          , vol.
          <volume>10</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>433</fpage>
          --
          <lpage>436</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Capelle</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Trullemans</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Arno</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Veraart</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>A real-time experimental prototype for enhancement of vision rehabilitation using auditory substitution</article-title>
          .
          <source>IEEE Trans. BME.</source>
          , vol.
          <volume>45</volume>
          (
          <issue>10</issue>
          ), pp.
          <fpage>1279</fpage>
          -
          <lpage>93</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Dobelle</surname>
            ,
            <given-names>W.H.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Artificial vision for the blind by connecting a television camera to the visual cortex</article-title>
          .
          <source>ASAIO J</source>
          , vol.
          <volume>46</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>3</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Eckmiller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Neumann</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Baruth</surname>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Tunable retina encoders for retina implants: why and how</article-title>
          .
          <source>Journal of Neural Engineering</source>
          , vol.
          <volume>2</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>S91</fpage>
          -
          <lpage>S104</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Gonzalez-Mora</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>VASIII: Development of an interactive device based on virtual acoustic reality oriented to blind rehabilitation</article-title>
          . Jornadas de Seguimiento de Proyectos en Tecnologías Informáticas. Spain.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Harvey</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sawan</surname>
          </string-name>
          (
          <year>1996</year>
          ).
          <article-title>Image acquisition and reduction dedicated to a visual implant</article-title>
          .
          <source>Proc. Eng. in Medicine and Biology Society.</source>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Hérault</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Durette</surname>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Modeling visual perception for image processing</article-title>
          . in F. Sandoval et al., ed.,
          <source>IWANN</source>
          <year>2007</year>
          , LNCS 4507, Springer-Verlag Berlin Heidelberg, pp.
          <fpage>662</fpage>
          -
          <lpage>675</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Kajimoto</surname>
          </string-name>
          ,
          <string-name>
            <surname>Kanno</surname>
          </string-name>
          &amp;
          <string-name>
            <surname>Tachi</surname>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>A Vision Substitution System using Forehead Electrical Stimulation</article-title>
          . In
          <source>Conf. on Computer Graphics and Interactive Techniques (SIGGRAPH 2006)</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Margalit</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weiland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Greenberg</surname>
          </string-name>
          , G. Fujii,
          <string-name>
            <given-names>G.</given-names>
            <surname>Torres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Piyathaisere</surname>
          </string-name>
          ,
          <string-name>
            <surname>T. O'Hearn</surname>
          </string-name>
          , W. Liu, G. Lazzi et al. (
          <year>2002</year>
          ).
          <article-title>Retinal prosthesis for the blind</article-title>
          .
          <source>Surv Ophthalmol</source>
          , vol.
          <volume>47</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>335</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Meijer</surname>
            ,
            <given-names>P.B.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>An experimental system for auditory image representations</article-title>
          .
          <source>IEEE Trans Biomed Eng.</source>
          , vol.
          <volume>39</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>112</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Naghdy</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Selecting the most effective visual information for retinal prosthesis</article-title>
          .
          <source>SPIE Newsroom.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>O'Regan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>A</given-names>
            .
            <surname>Noë</surname>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>A sensorimotor account of vision and visual consciousness</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          vol.
          <volume>24</volume>
          (
          <issue>05</issue>
          ), pp.
          <fpage>939</fpage>
          -
          <lpage>973</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Osterberg</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1935</year>
          ).
          <article-title>Topology of the layer of rods and cones in the human retina</article-title>
          .
          <source>Acta Ophthalmol Suppl</source>
          <volume>6</volume>
          ,
          <fpage>1</fpage>
          --
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Sampaio</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Maris</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Bach-y-Rita</surname>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Brain plasticity: 'visual' acuity of blind persons via the tongue</article-title>
          .
          <source>Brain Res</source>
          , vol.
          <volume>908</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>204</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Schwartz</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>1980</year>
          ).
          <article-title>Computational anatomy and functional architecture of striate cortex: a spatial mapping approach to perceptual coding</article-title>
          .
          <source>Vision Research</source>
          , vol.
          <volume>20</volume>
          (
          <issue>8</issue>
          ), pp.
          <fpage>645</fpage>
          -
          <lpage>669</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; Barnett,
          <string-name>
            <surname>G.</surname>
          </string-name>
          ; Humayun,
          <string-name>
            <given-names>M.</given-names>
            &amp;
            <surname>Dagnelie</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.</surname>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Facial Recognition Using Simulated Prosthetic Pixelized Vision</article-title>
          .
          <source>Investigative Ophthalmology &amp; Visual Science</source>
          , vol.
          <volume>44</volume>
          (
          <issue>11</issue>
          ), pp.
          <fpage>5035</fpage>
          --
          <lpage>5042</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Zrenner</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Will retinal implants restore vision?</article-title>
          .
          <source>Science</source>
          , vol.
          <volume>295</volume>
          (
          <issue>5557</issue>
          ), pp.
          <fpage>1022</fpage>
          -
          <lpage>1025</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>