<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Spike-Time Dependent Plasticity in a Spiking Neural Network for Robot Path Planning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohamed Nadjib Zennir</string-name>
          <email>zennir.med@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Benmohammed</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rima Boudjadja</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, University Mentouri of Constantine</institution>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Computer Science Department, University of Bejaia</institution>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents a path planning technique for autonomous mobile robots, based on the representation of the environment as a cognitive map through a spiking neural network (SNN) of O'Keefe place cells. The method is based on the concept of the travelling wave. For this purpose, we use a biologically plausible neural model (the Izhikevich model) which serves as the medium of a travelling wavefront stabilized by the Spike-Time Dependent Plasticity (STDP) process. The obstacles are represented by inhibited neurons and the robot by the unique externally excited place cell that initiates the wave. This method produces a gradient map that allows fast and reliable calculation of a feasible path.</p>
      </abstract>
      <kwd-group>
        <kwd>autonomous mobile robot</kwd>
        <kwd>path planning</kwd>
        <kwd>spiking neural network</kwd>
        <kwd>travelling wave</kwd>
        <kwd>Spike-Time Dependent Plasticity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>Path planning for autonomous mobile robots is a problem that has been
studied extensively for more than 30 years. The aim of this research is to develop
algorithms which allow robots to move in an environment, which may be static
or dynamic, fully or partially known, in a secure manner; i.e., along a path
guaranteed to be collision-free.</p>
      <p>
        The first attempts to solve this problem were based on global methods, which
can be considered as a search process for a path in a graph representing
the accessible routes among objects. The most widely known global methods are
Roadmaps[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], visibility graph method[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], cell-decomposition[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
trapezoidal decomposition[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and generalized Voronoi decomposition[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The main drawback of
these methods is their increasing complexity when the environment itself becomes
complex.
      </p>
      <p>
        In response to this problem, attempts based on local methods were proposed.
The idea was to consider only the goal and the immediate surroundings of the
robot, thereby reducing the complexity due to the global environment. The
Potential Fields[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which consider the environment, the robot and the goal as
electric charges, are a famous example. The drawback of these methods (and of any
local method) is the presence of local minima[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Although several suggestions have been
presented to circumvent these problems[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref11">11</xref>
        ][
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], no algorithm guaranteeing
an exact solution has been presented.
      </p>
      <p>
        To solve the problems related to global-method complexity and local
minima, algorithms based on an environment discretization have been designed.
Firstly, these algorithms consider the environment as an occupancy grid, thus
solving the problem of environment complexity. Secondly, the underlying idea
of these algorithms is to organize a wave propagation similar to the way waves
spread in water, for instance, around a dropped stone. This expanding wave
carries information about the distance from its birth point, allowing, later,
the calculation of a path between the wave's birth point and any other point of
the environment. Several approaches have been proposed, such as resistive
grids[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ][
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] or neural networks[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ][
        <xref ref-type="bibr" rid="ref19">19</xref>
        ][
        <xref ref-type="bibr" rid="ref20">20</xref>
        ][
        <xref ref-type="bibr" rid="ref21">21</xref>
        ][
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. The main challenge of these
methods is to guarantee a wave that grows stably without evolving into chaotic
patterns.
      </p>
      <p>
        The proposed neural networks are effective but biologically implausible.
Weidong et al.[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], Yang et al.[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ][
        <xref ref-type="bibr" rid="ref17">17</xref>
        ][
        <xref ref-type="bibr" rid="ref16">16</xref>
        ][
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], Glasius et al.[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and Lebedev et
al.[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] propose neural networks built on non-pulsating (Hopfield-type) neurons,
which are completely unrealistic; the wave stability is guaranteed by parameters satisfying
the Lyapunov conditions. Qu et al.[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] propose a spiking neural network;
however, to stabilize the expansion of the wave, it relies on internal changes in the
neurons (changes in threshold) which are not observed in nature. Ponulak and
Hopfield [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] propose a spiking neural network representing a biologically
plausible cognitive map, although the neurons are simulated with a very basic model
(integrate-and-fire).
      </p>
      <p>Our work aims to propose a path planner based on a biologically plausible
neural network (the Izhikevich model) in which the wave is stabilized by a
learning process actually implemented in the mammalian brain: Spike-Time
Dependent Plasticity (STDP). We will demonstrate the feasibility of this method
through experiments and comparisons with other methods in the literature.</p>
      <p>This paper is organized as follows: Section 2 outlines our approach to the
path-planning problem. Section 3 presents experiments and results that show
the efficiency of our proposal. In Section 4, we discuss the results and make
comparisons. Finally, we present our conclusions and perspectives in the last
section.</p>
      <p>2 The Navigation Model</p>
      <p>
        The proposed model is based on a spiking neural network (SNN) in which each
neuron is modeled according to the Izhikevich neuron model[
neuron is modeled according to the Izhikevich neuron model[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and represents
a place cell, an elementary portion of the discretized environment. Place cells are
particular neurons in the hippocampus that respond to mostly unique spatial
locations. Place cells were discovered by O’Keefe in 1976 [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. Our SNN is a
connected network of place cells that forms a cognitive map and serves as the medium for
a travelling wave passing through the network via the successive activation of
different neurons in a huge domino effect. The wave indirectly carries
information about distance by generating a potential gradient field (thanks to the
potential of each neuron) around the starting point. This gradient field allows
the calculation of a feasible path by ascending the gradient from any point of
the environment to the starting point.
      </p>
      <p>To calculate a consistent path, the wave must be stable; i.e., it must have a single
activation front moving from the starting point to the outer limits of the
environment. The necessary condition for this stability is that each neuron should
fire (activate) a single time at the passage of the wave and should not fire
again. This is impossible without a control process, because an activated parent
neuron will necessarily stimulate its neighbors, which, by emitting their own pulses,
would reactivate the parent neuron, causing wave instability.</p>
      <p>The control process we propose in this model is STDP (Spike-Time
Dependent Plasticity). It is a temporally asymmetric form of Hebbian learning
induced by tight temporal correlations between the spikes of pre- and
postsynaptic neurons. As with other forms of synaptic plasticity, it is widely believed
to underlie learning and information storage in the brain.</p>
      <p>We will demonstrate that this rule is sufficient to stabilize the travelling wave
in the SNN.</p>
      <p>2.1 Network Architecture</p>
      <p>The network architecture consists of a grid X of N × N elements. Each element
is an excitatory neuron which represents a portion of the environment (a place
cell). Each neuron xi is connected to a maximum of eight immediate neighbors
(neurons located at a Euclidean distance less than or equal to √2) (Fig. 1). These
8 neighbors form a neighborhood set si.</p>
      <p>The initial connections between neurons are synaptic weighted links which
are inversely proportional to the distance:
wij = 1/dij if dij ≤ √2, and wij = 0 otherwise   (1)
where dij is the distance between neuron i and neuron j on the grid. Each neuron
of the grid also has a synaptic link from the occupancy grid E. The occupancy
map is a vector of N² elements. It also represents a discretization of the environment. Each
cell ei of the map (represented as a vector) has 3 states: free, occupied by an
obstacle or occupied by the robot.</p>
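      <p>As a minimal illustrative sketch (not the authors' code), the weight initialization of Eq. 1 for an N × N grid with 8-connectivity could be written as follows; grid_weights is a hypothetical helper name:

```python
import numpy as np

def grid_weights(N):
    """Initial synaptic weights of Eq. 1: w_ij = 1/d_ij for the (at most 8)
    immediate neighbors, i.e. cells within Euclidean distance sqrt(2);
    all other entries stay 0."""
    W = np.zeros((N * N, N * N))
    for i in range(N * N):
        ri, ci = divmod(i, N)
        for j in range(N * N):
            rj, cj = divmod(j, N)
            d2 = (ri - rj) ** 2 + (ci - cj) ** 2
            # connect only distinct cells whose squared distance is at most 2
            if 0 != d2 and 2 >= d2:
                W[i, j] = 1.0 / np.sqrt(d2)
    return W
```

For a 3 × 3 grid, a corner neuron ends up with 3 nonzero outgoing weights and the center neuron with 8, horizontal/vertical links weighing 1 and diagonal links 1/√2.</p>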
      <p>ei = 1 if cell i is occupied by the robot, −1 if it is occupied by an obstacle, and 0 if it is free   (2)</p>
      <p>
        The synaptic connection between a cell of the occupancy grid and a neuron
of the grid exists only if the cell and the neuron represent the same portion of
space. Its weight is wext = 80, where k is the index of ek, a cell of E, and
xk the corresponding neuron of X. This connection may be inhibitory in the case of an obstacle
or excitatory in the case of the robot.
      </p>
      <p>
        For this proposition, the Izhikevich neural model[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] is used. This model
combines the biological plausibility of Hodgkin-Huxley-type [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] dynamics and
the computational efficiency of integrate-and-fire neurons. Bifurcation
methodologies [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] make it possible to reduce many biophysically accurate Hodgkin-Huxley-type
neuronal models to a two-dimensional (2-D) system of ordinary differential
equations of the form:
dv/dt = 0.04v² + 5v + 140 − u + Isyn   (3)
du/dt = a(bv − u)   (4)
with the auxiliary after-spike resetting: if v ≥ 30 mV, then v ← c and u ← u + d   (5)
where v(t) represents the membrane potential and u(t) represents the membrane
recovery. The parameter a describes the time scale of the recovery variable u;
smaller values result in slower recovery. The parameter b describes the
sensitivity of the recovery variable u to the subthreshold fluctuations of the membrane
potential v. The parameter c describes the after-spike reset value of the
membrane potential v. The parameter d describes the after-spike reset of the recovery
variable u. Synaptic currents or injected dc-currents are delivered via the
variable Isyn. In this model, the synaptic current Isyn is the sum of two currents (Eq.
6): an external current (Iext) and an internal current (Iint) relative to the neuron fields.
Iext is the synaptic current from a cell in the occupancy map; Iint is the sum of
synaptic currents from the neighborhood in the neuron field.
      </p>
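      <p>The Izhikevich dynamics above (Eqs. 3-5) can be sketched with a simple Euler integration over 1-ms steps; this is a hypothetical helper, using the 'Regular Spiking' parameter set reported later in the experiments section:

```python
import numpy as np

def izhikevich(I, T=200, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron driven by a constant current I:
    dv/dt = 0.04 v^2 + 5v + 140 - u + I   (Eq. 3)
    du/dt = a (b v - u)                   (Eq. 4)
    with the reset v -> c, u -> u + d whenever v reaches 30 mV (Eq. 5).
    Returns the list of spike times over T ms."""
    v, u = c, b * c          # start at the resting state
    spikes = []
    for t in np.arange(0.0, T, dt):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record time and reset (Eq. 5)
            spikes.append(t)
            v, u = c, u + d
    return spikes
```

With no input current the neuron settles near its resting potential and stays silent; with a sustained positive current it emits a regular spike train.</p>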
      <p>Isyn,i(t) = Iext,i(t) + Iint,i(t) = ei(t)·wext + Σ_{xj ∈ si} aj(t)·wji   (6)</p>
      <p>The use of the Izhikevich model was motivated by the low complexity required
(13 operations) to simulate a wide variety of neuronal behaviors (22 behaviors)
with very satisfactory accuracy compared to their natural counterparts.</p>
    </sec>
    <sec id="sec-2">
      <title>2.3 Spike-Time Dependent Plasticity</title>
      <p>STDP is a Hebbian learning rule for SNNs that is believed to be
responsible for learning and information storage in the brain. Schematically, let
a neuron xi be connected to a neuron xj by a synapse going from xi to xj with a
weight wij: if neuron xi fires before xj (tifire &lt; tjfire, i.e. Δtij &lt; 0), the synapse
is strengthened (wij increases); if neuron xi fires after xj (tifire &gt; tjfire,
i.e. Δtij &gt; 0), the synapse weakens (wij decreases).</p>
      <p>Let F(·) be the STDP function; the synapse's weight is changed according to
the delta rule wij = wij + λ·F(Δtij), λ being a weighting of the STDP function,
Δtij the difference between tifire and tjfire, and F(·) such that:</p>
      <p>
        F(Δt) = +A+ exp(Δt/τ+) if Δt &lt; 0, and −A− exp(−Δt/τ−) if Δt ≥ 0   (7)
      </p>
      <p>
        The parameters τ+ and τ− determine the ranges of pre-to-postsynaptic
interspike intervals over which synaptic strengthening and weakening occur. A+ and
A−, which are both positive, determine the maximum amounts of synaptic
modification, which occur when Δt is close to zero[
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
      </p>
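      <p>The STDP window F of Eq. 7 can be sketched directly; the default values below are the empirical ones reported in the experiments section (A+ = 4.28, A− = 4, τ = 10 ms), and the function name is ours:

```python
import numpy as np

def stdp(dt, A_plus=4.28, A_minus=4.0, tau_plus=10.0, tau_minus=10.0):
    """STDP window F of Eq. 7: potentiation when the presynaptic neuron
    fires first (dt = t_pre_fire - t_post_fire is negative), depression
    otherwise; the magnitude decays exponentially with the interval."""
    if 0.0 > dt:                                   # pre before post: strengthen
        return A_plus * np.exp(dt / tau_plus)
    return -A_minus * np.exp(-dt / tau_minus)      # post first: weaken
```

A causal pairing such as dt = −1 ms yields a positive weight change close to A+, while dt = +1 ms yields a negative change close to −A−, matching the asymmetry described above.</p>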
      <p>In our proposal, we use the STDP rule to strengthen the synapses from a
parent neuron to its child neurons and, especially, to significantly weaken the
synapses from child neurons back to the parent neuron. This prevents the
reactivation of the parent neuron by its children and avoids wave instability.</p>
    </sec>
    <sec id="sec-3">
      <title>2.4 Wave Expansion Algorithm</title>
      <p>Before introducing the algorithm, some matrices and vectors will be defined. Let
V be a vector representing the membrane potentials of all N² neurons of
the network and U a vector representing their membrane recovery. The SNN is
supplied with information by the occupancy grid, which is actually a vector E
whose values are determined by Eq. 2. The vector E allows the calculation of
the external input current vector Iext = 80 × E applied to the neurons. Let A
be an amplified activation vector such that:
Ai = α if Vi ≥ 30 mV, and Ai = 0 otherwise   (8)</p>
      <p>The (N² × N²) matrix T expresses the activation-time
differential between two neurons of the network: Tij = tifire − tjfire. W is the
(N² × N²) synaptic weight matrix and C the network connection matrix.</p>
      <p>Algorithm 1 illustrates the process giving birth to the wavefront and
governing its expansion. During the first 5 ms, a positive current of 20 mV is applied as
an external input to the neuron representing the portion of space containing the
robot. This fast excitation is sufficient to initiate the wave expansion. In order
to stabilize the wave's travel, an STDP control process is applied to the weights
of the synapses connecting the neurons activated at time t. This process strengthens
the synapses that have the same direction as the wave expansion
and greatly weakens the synapses in the opposite direction. This prevents
the reactivation of previously activated neurons and guarantees a single front of
expansion. In this algorithm, (×) expresses the matrix product and (∘) the Hadamard
(element-wise) product.</p>
      <p>Algorithm 1 Wave expansion algorithm
t ← 0 ms
Tth ← 10000 ms
while t &lt; Tth do
    if t &gt; 5 ms then
        Erobot ← 0
    end if
    Calculate T
    Calculate A (according to Eq. 8)
    if Atarget = α then
        Tth ← t + 10 ms
    end if
    Iext ← 80 · E
    W ← W + λ · F(T) (according to Eq. 7)
    Iint ← (W ∘ C) × A
    Isyn ← Iext + Iint
    Calculate V according to Eq. 3
    Calculate U according to Eq. 4
    t ← t + 1 ms
end while</p>
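      <p>A condensed, hypothetical NumPy sketch of one iteration of Algorithm 1 is given below. The names, the clipping of T, and the symbol lam for the STDP weighting (whose glyph is garbled in the source; its value 0.25 is reported in the experiments) are our assumptions; the constants 80, α = 100 and the Eq. 7 parameters come from the text. Note that, per Eq. 6, the current into neuron i uses the weights wji, hence the transpose:

```python
import numpy as np

def wave_step(V, U, W, C, E, t_fire, t, alpha=100.0, lam=0.25,
              a=0.02, b=0.2, c=-65.0, d=8.0):
    """One 1-ms update of the wave expansion (sketch of Algorithm 1)."""
    fired = V >= 30.0                      # spike detection threshold (Eq. 8)
    A = np.where(fired, alpha, 0.0)        # amplified activation vector
    t_fire[fired] = t                      # record firing times for T
    # STDP on existing synapses between neurons that have both fired:
    # T_ij = t_i_fire - t_j_fire, F is the window of Eq. 7
    seen = t_fire > -1e8
    pair = np.outer(seen, seen)
    dT = np.clip(t_fire[:, None] - t_fire[None, :], -1000.0, 1000.0)
    F = np.where(0.0 > dT, 4.28 * np.exp(dT / 10.0),
                 -4.0 * np.exp(-dT / 10.0))
    W += lam * F * C * pair                # W update, restricted to connections
    # reset the neurons that just fired (Eq. 5) ...
    V[fired] = c
    U[fired] += d
    # ... then integrate Eqs. 3-4 for 1 ms; Iint into i uses w_ji (Eq. 6)
    I_syn = 80.0 * E + (W * C).T @ A
    V += 0.04 * V * V + 5.0 * V + 140.0 - U + I_syn
    U += a * (b * V - U)
    return V, U, W
```

On a toy two-neuron chain with the robot cell driving neuron 0, neuron 0 fires first, its spike drives neuron 1 to fire shortly after, and STDP leaves the forward synapse strengthened and the backward synapse weakened, which is exactly the mechanism the text describes for preventing reactivation of the parent.</p>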
      <p>
        The parameter Tth is the time threshold. Actually, the target is known as a
reward cell, a theoretical neuron representing the reward value associated
with a place cell. Each place cell is bijectively connected to a reward cell. It is
a simplified version of the pre-frontal cortex column presented in Erdem and
Hasselmo [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. The algorithm does not stop until the target neuron is activated.
Once the target neuron is activated, we must add some extra time to allow
removal of the wave, because the path computation requires a target neuron that
is in a resting state. If the activation time of the target neuron is Ta and the
extra time is a constant parameter Tet, then Tth = Ta + Tet.
      </p>
      <p>2.5 Gradient map and path calculation</p>
      <p>
        To compute a feasible path, our method is based on the classic case of
gradient ascent[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ][
        <xref ref-type="bibr" rid="ref20">20</xref>
        ][
        <xref ref-type="bibr" rid="ref22">22</xref>
        ][
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. However, unlike the gradient ascent in the literature,
we propagate the wave from the robot and the ascent is made from the target to
the robot. To obtain a gradient map, we take advantage of the
asymptotic rise of the potential of activated neurons back toward the resting potential. This
asymptotic behavior makes it possible to temporally differentiate neurons according to their
activation time. A neuron xi that is activated earlier than a neuron xj has a
higher potential at a future time t after the wave expansion phase. This property
ensures that, in the neighborhood of a neuron xj, the neighboring neuron with the
highest potential is the one that activated first and is therefore the parent
of xj. This allows, going from children to parents, the reconstruction of a feasible path from
the target to the robot.
      </p>
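      <p>The gradient ascent described here can be sketched as follows (a hypothetical helper; it assumes the post-wave potential map V is given as an N × N array, with earlier-activated neurons having a higher potential):

```python
import numpy as np

def ascend_gradient(V, target, robot, max_steps=10000):
    """Follow the potential gradient from the target cell back to the
    robot cell: at each step, move to the 8-neighbor with the highest
    membrane potential, i.e. the parent of the current cell."""
    N = V.shape[0]
    path = [target]
    r, c = target
    for _ in range(max_steps):
        if (r, c) == robot:
            return path
        # pick the 8-neighbor with the highest potential
        best, best_v = None, -np.inf
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and N > nr >= 0 and N > nc >= 0:
                    if V[nr, nc] > best_v:
                        best, best_v = (nr, nc), V[nr, nc]
        r, c = best
        path.append(best)
    return path
```

On a toy map whose potential decreases with (8-connected) distance from the robot cell, the ascent from any cell traces the diagonal-then-straight chain of parents back to the robot.</p>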
      <p>3 Experiments</p>
      <p>The experiments were performed on a 30 × 30 grid of spiking neurons. The selected
parameters are the same for all experiments. The neuron is of the 'Regular Spiking'
type with parameters (a, b, c, d) = (0.02, 0.2, −65, 8). The membrane potential
is initialized at −65 mV and the membrane recovery at −13. The amplification
value of A is α = 100. The values of the parameters that regulate synaptic
plasticity are A− = 4, A+ = 4.28, τ = 10 ms and λ = 0.25, all obtained empirically.
The neurons of the grid are connected according to Eq. 1.</p>
      <p>Three types of environment were tested. Firstly, a simple environment with
two perpendicular crossed walls, with the aim of illustrating the wave
expansion as shown in Fig. 2 (a.-h.). The wavefront arises at the right of the grid (a.),
propagates through the medium, bypassing the walls (b.-g.), before fading out and
revealing a gradient map (h.).</p>
      <p>Secondly, an environment whose barriers are walls forming two U-shapes
(Fig. 2 i.). This environment is particularly problematic for local methods because
it has two local minima, one in the center of each U-shape. Our method is
insensitive to these local minima and allows the calculation of a feasible path.</p>
      <p>Thirdly, a maze (Fig. 2 j.). This experiment proves the algorithm's ability to
overcome the complexity of obstacles. The execution time depends entirely on the
length of the feasible path between the robot and the target, not on the complexity
of the environment.</p>
      <p>The simulations were performed on a computer with a 2 GHz Core i3
processor and 8 GB of RAM, and were implemented in the Python language. The execution
speed averages 1.6 seconds per millisecond of neuronal simulation. This
slowness is mainly due to the matrix exponentials and products involving very
large matrices (W and T have dimensions of 900 × 900). This slowness prevents
the implementation of this algorithm on Von Neumann architectures for use
under real-time conditions. However, the simulation has shown that a biological
and parallel neural architecture would allow the calculation of a path in less than 60
ms, hence about 16 updated paths per second. Theoretically, this speed allows
real-time use.</p>
      <p>Fig. 2: Different stages of the wave expansion (a.-h.). Path planning in the 'double U'
environment (i.). Path planning in the maze (j.).</p>
      <p>4 Discussion</p>
      <p>
        Our work primarily aims to verify the possibility of building a neural path
planner based exclusively on a biologically plausible neural model. The neural
path-planner models seen in the literature are based on reductive or very loose
neuron modeling. Yang et al.[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], Chen et al.[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], Glasius et al.[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] use a
nonpulsating potential variable neural model based on the Grossberg equation[
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]:
ε·dn(t)/dt = −n(t) + (b⁺ − n(t))p⁺ − (n(t) + b⁻)p⁻   (9)
where ε sets the passive decay rate, and b⁺ and b⁻ are the upper and
lower bounds of the neural activity. p⁺ is the positive input (0 otherwise) and p⁻ is
the negative input (0 otherwise). These non-pulsating models have the advantage
of providing stable and uniform gradient maps after a certain time. This comes
from the fact that these models fulfill the Lyapunov stability criteria; however, a
neuron remaining stable at a very high potential is biologically implausible.
      </p>
      <p>
        Qu et al.[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] proposed a PCNN model which is closer to the Hodgkin-Huxley
model (pulsating neurons). However, the inputs of the neurons are deliberately
standardized in order to obtain a constant wave velocity everywhere. Furthermore,
Qu et al. artificially change (contrary to biological evidence) the neuron activation
threshold after the passage of the wave, in order to avoid the reactivation of the
parent neuron and keep a coherent wavefront.
      </p>
      <p>In our proposal, the wave is stabilized by taking advantage of the natural
plasticity process of biological synapses, which regulates the synaptic weight
according to pre- and post-synaptic spiking times. We used the natural input to the
neuron without any modification. This produces a drawback: the neuron's output
(peak and frequency) is directly related to the input current intensity. Since not all
neurons receive the same amount of input current (the number of active
neighbors varies), the generated waves do not move at a uniform velocity
throughout the network. The difference is minimal but sufficient to generate, in
some cases, non-optimal paths.</p>
      <p>
        This scenario is illustrated in Fig. 3. The red path is the one obtained
by Qu et al., whose wave has a uniform velocity whatever the space configuration.
The blue path is the one obtained by our algorithm. After initialization, the wave
splits into two sub-waves. The first goes through the left side of the vertical wall,
the second through the right side. In our case, since the left side is narrower
than the right side, the number of neurons there is reduced. The average intensity of
the input current of the left-side neurons is slightly lower than that of the right-side
ones, leading to pulses of lower frequency. Accordingly, the right sub-wave will
be faster than the left one and will reach the goal first, generating a non-optimal
path. This phenomenon is also encountered by Yang et al. and Chen et al.[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
Qu et al. standardized the input current of the neurons and thus avoided this problem.
      </p>
      <p>
        In comparison with the Ponulak and Hopfield model [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], our neural model
(the Izhikevich model) is biologically more accurate than
the "integrate and fire" model used by those authors. The Ponulak model considers the
cognitive map as a network of excitatory neurons and does not take into
consideration obstacles and dangerous places. Obstacles are not part of the cognitive map,
so they are unreachable. The authors do not mention the case of a dynamic
environment where obstacles are in motion. In our model, obstacles and dangerous
places are place cells which are inhibited from the pre-frontal cortex. An
inhibited neuron in the SNN cannot be a medium for the wavefront; consequently,
any future path will avoid (bypass) the inhibited neurons (dangerous places).
To calculate the next movement of the robot, Ponulak and Hopfield propose
an elegant solution based on the synaptic vector field in order to obtain motor
commands. They also explore the biological relevance of travelling waves.
The authors note that recent electrophysiological results suggest the existence of
expanding waves of neural activity in the hippocampus during so-called sharp
wave ripple (SWR) episodes [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]. Sharp wave ripples are brief high-frequency
bursts of neural activity observed during sleep or at awake rest.
      </p>
      <p>5 Conclusion</p>
      <p>This paper presents a model for collision-free path planning organized on a
bidimensional grid of 8-connected pulsing neurons. These pulsing neurons are place
cells which form a cognitive map. The proposed method uses the concept of
travelling waves (or wave expansion) in the spiking neural network (cognitive
map) in order to find a feasible (not necessarily shortest) path. The
computational complexity is related only to the length of the path. The originality of this
method is that the considered neuron is very close to the biological neuron. In
addition, in order to stabilize the wave expansion, a natural phenomenon present
in the mammalian brain is used: Spike-Time Dependent Plasticity.
This approach distinguishes our method from those in the literature, which use
algorithmic corrections or non-plausible neurons to stabilize the wave. The main
weakness of the proposed method is that the generated waves do not move
at a uniform velocity through the network, which does not guarantee path
optimality. The second weakness is that the method is not suited to Von
Neumann architectures and must be implemented on massively parallel machines
or biological neural tissue to be usable under dynamic-environment conditions.
Thus, the challenge is to implement the method on a parallel machine and to improve
the quality of the resulting paths.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Latombe</surname>
          </string-name>
          , J.C.:
          <article-title>Robot motion planning</article-title>
          , Springer (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Lozano-Pérez</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wesley</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>An algorithm for planning collision-free paths among polyhedral obstacles</article-title>
          .
          <source>Communications of the ACM</source>
          , Vol.
          <volume>22</volume>
          , No.
          <issue>10</issue>
          , pp.
          <fpage>560</fpage>
          -
          <lpage>570</lpage>
          (
          <year>1979</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lozano-Pérez</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Spatial Planning: A configuration space approach</article-title>
          .
          <source>IEEE Transaction on Computers</source>
          , Vol.
          <volume>100</volume>
          , No.
          <issue>2</issue>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>120</lpage>
          (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Schwartz</surname>
            ,
            <given-names>J.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharir</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>On the piano movers problem: general techniques for computing topological properties of real algebraic manifolds</article-title>
          .
          <source>Advances in Applied Mathematics</source>
          , Vol.
          <volume>4</volume>
          , No.
          <issue>3</issue>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>351</lpage>
          (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bhattacharya</surname>
            ,
            <given-names>B.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zorbas</surname>
          </string-name>
          , J.:
          <article-title>Solving the two-dimensional find path problem using a line-triangle representation of the robot</article-title>
          .
          <source>Journal of Algorithms</source>
          , Vol.
          <volume>9</volume>
          , No.
          <issue>4</issue>
          , pp.
          <fpage>449</fpage>
          -
          <lpage>469</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Takahashi</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schilling</surname>
          </string-name>
          , R.J.:
          <article-title>Motion planning in a plane using generalized Voronoi diagrams</article-title>
          .
          <source>IEEE Robotics and Automation</source>
          , Vol.
          <volume>5</volume>
          , No.
          <issue>2</issue>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>150</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Barraquand</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Langlois</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Latombe</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          :
          <article-title>Numerical potential field techniques for robot path planning</article-title>
          .
          <source>Systems, Man and Cybernetics</source>
          , IEEE Transactions, Vol.
          <volume>22</volume>
          , No.
          <issue>2</issue>
          , pp.
          <fpage>224</fpage>
          -
          <lpage>241</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hwang</surname>
            ,
            <given-names>Y. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ahuja</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>A potential field approach to path planning</article-title>
          .
          <source>IEEE Robotics and Automation</source>
          , Vol.
          <volume>8</volume>
          , No.
          <issue>1</issue>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>32</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chirikjian</surname>
            ,
            <given-names>G. S.</given-names>
          </string-name>
          :
          <article-title>A new potential field method for robot path planning</article-title>
          .
          <source>Robotics and Automation</source>
          ,
          <source>2000. Proceedings. ICRA'00. IEEE International Conference</source>
          , Vol.
          <volume>2</volume>
          , pp.
          <fpage>977</fpage>
          -
          <lpage>982</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Koren</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borenstein</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Potential field methods and their inherent limitations for mobile robot navigation</article-title>
          .
          <source>Robotics and Automation</source>
          , IEEE Transactions, pp.
          <fpage>1398</fpage>
          -
          <lpage>1404</lpage>
          . (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Warren</surname>
            ,
            <given-names>C. W.</given-names>
          </string-name>
          :
          <article-title>Global path planning using artificial potential fields</article-title>
          .
          <source>In Proceedings of the IEEE International Conference on Robotics and Automation (Scottsdale, AZ)</source>
          , Los Angeles. Computer Society Press of the IEEE, pp.
          <fpage>316</fpage>
          -
          <lpage>321</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Glasius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Komoda</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gielen</surname>
            ,
            <given-names>S. C. A. M.</given-names>
          </string-name>
          :
          <article-title>Population coding in a neural net for trajectory formation</article-title>
          .
          <source>Network: Computation in Neural Systems</source>
          ,
          <volume>5</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>549</fpage>
          -
          <lpage>563</lpage>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Connolly</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burns</surname>
            ,
            <given-names>J. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiss</surname>
          </string-name>
          , R.:
          <article-title>Path planning using Laplace's equation</article-title>
          .
          <source>In Proceedings of IEEE International Conference on Robotics and Automation</source>
          , Cincinnati, OH: IEEE, pp.
          <fpage>2102</fpage>
          -
          <lpage>2106</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Bugmann</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>J. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Denham</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Route finding by neural nets</article-title>
          . In J. G. Taylor (Ed.),
          <source>Neural networks</source>
          , Alfred Waller Ltd., pp.
          <fpage>217</fpage>
          -
          <lpage>230</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S. X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>An efficient neural network approach to dynamic robot motion planning</article-title>
          .
          <source>Neural Networks</source>
          , Vol.
          <volume>13</volume>
          , No.
          <issue>2</issue>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>148</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S. X.</given-names>
          </string-name>
          :
          <article-title>Dynamic task assignment and path planning of multi-AUV system based on an improved self-organizing map and velocity synthesis method in three-dimensional underwater workspace</article-title>
          .
          <source>IEEE Transactions on Cybernetics</source>
          . Vol.
          <volume>43</volume>
          , No.
          <issue>2</issue>
          , pp.
          <fpage>504</fpage>
          -
          <lpage>514</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Ni</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S. X.</given-names>
          </string-name>
          :
          <article-title>Abrupt event monitoring for water environment system based on KPCA and SVM</article-title>
          .
          <source>IEEE Transactions on Instrumentation and Measurement</source>
          . Vol.
          <volume>61</volume>
          , No.
          <issue>4</issue>
          , pp.
          <fpage>980</fpage>
          -
          <lpage>989</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S. X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>M. Q.-H.</given-names>
          </string-name>
          :
          <article-title>A bioinspired neurodynamics based approach to tracking control of mobile robots</article-title>
          .
          <source>IEEE Transactions on Industrial Electronics</source>
          . Vol.
          <volume>59</volume>
          , No.
          <issue>8</issue>
          , pp.
          <fpage>3211</fpage>
          -
          <lpage>3220</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Qu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S. X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Willms</surname>
            ,
            <given-names>A. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Real-time robot path planning based on a modified pulse-coupled neural network model</article-title>
          .
          <source>Neural Networks</source>
          , Vol.
          <volume>20</volume>
          , No.
          <issue>11</issue>
          , pp.
          <fpage>1724</fpage>
          -
          <lpage>1739</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Glasius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Komoda</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gielen</surname>
            ,
            <given-names>S. C.</given-names>
          </string-name>
          :
          <article-title>A biologically inspired neural net for trajectory formation and obstacle avoidance</article-title>
          .
          <source>Biological Cybernetics</source>
          , Vol.
          <volume>74</volume>
          , No.
          <issue>6</issue>
          , pp.
          <fpage>511</fpage>
          -
          <lpage>520</lpage>
          . (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Lebedev</surname>
            ,
            <given-names>D. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steil</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ritter</surname>
            ,
            <given-names>H. J.</given-names>
          </string-name>
          :
          <article-title>The dynamic wave expansion neural network model for robot motion planning in time-varying environments</article-title>
          .
          <source>Neural Networks</source>
          , Vol.
          <volume>18</volume>
          , No.
          <issue>3</issue>
          , pp.
          <fpage>267</fpage>
          -
          <lpage>285</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Weidong</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Changhong</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yugeng</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>On-line safe path planning in unknown environments</article-title>
          .
          <source>In Robotics and Automation. Proceedings. ICRA'03. IEEE International Conference</source>
          , Vol.
          <volume>3</volume>
          , pp.
          <fpage>4191</fpage>
          -
          <lpage>4196</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Izhikevich</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          :
          <article-title>Simple model of spiking neurons</article-title>
          .
          <source>Neural Networks, IEEE Transactions</source>
          , Vol.
          <volume>14</volume>
          , No.
          <issue>6</issue>
          , pp.
          <fpage>1569</fpage>
          -
          <lpage>1572</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Hodgkin</surname>
            ,
            <given-names>A. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huxley</surname>
            ,
            <given-names>A. F.</given-names>
          </string-name>
          :
          <article-title>A quantitative description of membrane current and its application to conduction and excitation in nerve</article-title>
          .
          <source>The Journal of physiology</source>
          , Vol.
          <volume>117</volume>
          , No.
          <issue>4</issue>
          , pp.
          <fpage>500</fpage>
          -
          <lpage>544</lpage>
          (
          <year>1952</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Izhikevich</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moehlis</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Dynamical Systems in Neuroscience: The geometry of excitability and bursting</article-title>
          .
          <source>SIAM review</source>
          , Vol.
          <volume>50</volume>
          , No.
          <issue>2</issue>
          . (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>K. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abbott</surname>
            ,
            <given-names>L. F.</given-names>
          </string-name>
          :
          <article-title>Competitive Hebbian learning through spike-timing-dependent synaptic plasticity</article-title>
          .
          <source>Nature neuroscience</source>
          ,
          <volume>3</volume>
          (
          <issue>9</issue>
          ), pp.
          <fpage>919</fpage>
          -
          <lpage>926</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Grossberg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Nonlinear neural networks: Principles, mechanisms, and architectures</article-title>
          .
          <source>Neural networks</source>
          , Vol.
          <volume>1</volume>
          , No.
          <issue>1</issue>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>61</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Ponulak</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hopfield</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          :
          <article-title>Rapid, parallel path planning by propagating wavefronts of spiking neural activity</article-title>
          .
          <source>Frontiers in computational neuroscience</source>
          ,
          <volume>7</volume>
          . (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Erdem</surname>
            ,
            <given-names>U.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasselmo</surname>
            ,
            <given-names>M.E.</given-names>
          </string-name>
          :
          <article-title>A goal-directed spatial navigation model using forward trajectory planning based on grid cells</article-title>
          .
          <source>The European Journal of Neuroscience</source>
          ,
          <volume>35</volume>
          (
          <issue>6</issue>
          ), pp.
          <fpage>916</fpage>
          -
          <lpage>931</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>O'Keefe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Place units in the hippocampus of the freely moving rat</article-title>
          .
          <source>Experimental Neurology</source>
          ,
          <volume>51</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>78</fpage>
          -
          <lpage>109</lpage>
          (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Ellender</surname>
            ,
            <given-names>T. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nissen</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Colgin</surname>
            ,
            <given-names>L. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mann</surname>
            ,
            <given-names>E. O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Paulsen</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Priming of hippocampal population bursts by individual perisomatic-targeting interneurons</article-title>
          .
          <source>J. Neurosci.</source>
          , Vol.
          <volume>30</volume>
          , pp.
          <fpage>5979</fpage>
          -
          <lpage>5991</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>