<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Enhancing Neural Based Obstacle Avoidance with CPG Controlled Hexapod Walking Robot</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Petr Čížek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan Faigl</string-name>
          <email>faiglj@fel.cvut.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan Bayer</string-name>
          <email>bayerja1@fel.cvut.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Czech Technical University in Prague, Faculty of Electrical Engineering</institution>
          ,
          <addr-line>Technická 2, 166 27 Prague</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <volume>1885</volume>
      <fpage>65</fpage>
      <lpage>70</lpage>
      <abstract>
        <p>Avoiding collisions with obstacles and intercepting objects based on visual perception is a vital survival ability of any animal. In this work, we propose an extension of the biologically based collision avoidance approach to the detection of intercepting objects using the Lobula Giant Movement Detector (LGMD) connected directly to the locomotion control unit based on the Central Pattern Generator (CPG) of a hexapod walking robot. The proposed extension uses a Recurrent Neural Network (RNN) to map the output of the LGMD onto the input of the CPG to enhance the collision avoiding behavior of the robot in cluttered environments. The presented results of the experimental verification of the proposed system with a real mobile hexapod crawling robot support the feasibility of the presented approach in collision avoidance scenarios.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>Avoiding collisions with obstacles and intercepting objects
is a vital survival ability for any animal. For a mobile robot
moving from one place to another, the contact with a fixed
or moving object may have fatal consequences. Therefore,
it is desirable to study the problem of collision avoidance
and derive new and computationally efficient ways to
trigger collision avoiding behavior.</p>
      <p>
        In this work, we address the problem of biologically
inspired motion control and collision avoidance with a
legged walking robot equipped only with a forward-looking
camera. We propose to utilize a Central Pattern
Generator (CPG) approach [1] for robot locomotion control
and the vision-based collision avoidance approach using
the Lobula Giant Movement Detector (LGMD) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] which
are both combined in the proposed controller based on
Recurrent Neural Network (RNN).
      </p>
      <p>
        The proposed solution builds on our previous results
published in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] in which only a simple mapping function
is utilized for transforming the output of the LGMD
neural network directly to the locomotion control parameters
of the CPG controller [1]. Such a solution works well in
laboratory conditions, but, unfortunately, it is error-prone
in a cluttered environment. This is mainly due to the
way the LGMD neural network processes the visual
data and due to the simple mapping function. The LGMD
reacts to the lateral movement of vertical edges in the image
regardless of their depth in the scene. In a cluttered
environment, the output is thus heavily influenced by
many stimuli from distinctive edges far
from the robot. Moreover, the mapping function translates
the output of the LGMD directly to the locomotion
control parameters. Hence, the reaction of the robot is based
solely on the current observation of the environment, which
results in situations where the robot hits, from the side, an
obstacle it has successfully avoided earlier but that is
already out of the field of view. Therefore, we propose to
enhance the collision avoiding behavior of the robot by
incorporating a memory mechanism by means of the RNN.
      </p>
      <p>[Fig. 1: Overall structure of the proposed system – Camera, Left and Right LGMD, the proposed RNN-based controller (turn), the CPG-based locomotion control (p), actuators, and the environment.]</p>
      <p>
        The overall structure of the proposed system is depicted
in Fig. 1. In contrast to the previous approaches, here, we
would like to emphasize the practical verification of the
proposed method on a real walking robot, as the specific nature
of the legged locomotion makes the problem more difficult
in comparison to the wheeled [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] or flying [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] robots.
The main difference originates in the abrupt motions of the
camera induced by the locomotion of the robot, which
negatively influence the output of the collision avoiding
visual pathway.
      </p>
      <p>The remainder of the paper is organized as follows.
The most related approaches to neural-based
collision avoidance using vision are summarized in Section 2.
Section 3 describes the individual building blocks of the
proposed control architecture. Evaluation results and their
discussion are detailed in Section 4. Concluding remarks
and suggestions for future work are dedicated to
Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Related Work</title>
      <p>The problem of collision avoidance has been studied ever
since the first mobile robots appeared. Hence, there are many
different approaches using different sensors and different
processing techniques. In this work, we focus on
vision-based neural obstacle avoidance methods, and the
most related approaches are described in the rest of this
section.</p>
      <p>
        Direct mapping of the visual perception to the robot
control command using a feed-forward neural network has
already been utilized in several methods. The problem of
road following using neural networks, which dates back to
the 90s, can be considered a special case of the collision
avoidance problem [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. However, such approaches cannot
be considered biologically based because of the artificial
nature of the examined roads.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the Lobula Giant Movement Detector (LGMD)
neural network has been introduced in robotics to imitate
the way insects avoid collisions with an
intercepting object [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The approach has been widely adopted
for its simplicity and relatively good performance with
wheeled [
        <xref ref-type="bibr" rid="ref2 ref4 ref5">2, 4, 5</xref>
        ] and flying [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] robots. However, these
approaches experimentally verify the collision avoidance
with a real robot either in a closed arena where it is
necessary to avoid collisions with walls or in a scenario where
a static robot is supposed to detect an intercepting object.
Moreover, the walls of the arena or the obstacles were
homogeneously distributed or coated with a high-contrast
artificial pattern, which significantly improves the behavior
of the LGMD. In our approach, we focus on the
deployment of the LGMD in a heavily cluttered, unstructured
environment, and thus evaluate the approach in more realistic
scenarios.
      </p>
      <p>
        An experimental study on the prediction of evasive
steering maneuvers in urban traffic scenarios has been
recently published in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In this approach, the performance
of the LGMD is improved by introducing so-called
“danger zones”, which are the image areas that most likely
indicate an incoming threat.
      </p>
      <p>
        Another approach presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] compares the
performance of the LGMD and Directional Selective Neurons
(DSN) in the ability to avoid collisions. Both can
be found in the visual pathways of insects. The reported
results show that the LGMD can be trained using
evolutionary techniques to outperform the DSN in the collision
recognition ability.
      </p>
      <p>
        Regarding our target scenario, the most relevant
approach to the proposed solution has been presented in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
The authors use a biologically-inspired collision
avoidance approach based on the extraction of nearness
information from the image depth estimation to detect obstacles
and avoid collisions. The whole system allows a simulated
hexapod robot to navigate a cluttered environment while
actively avoiding obstacles. However, the approach uses a
direct feed-forward scheme for the motion control, and it
has not been deployed in a real-world scenario.
      </p>
      <p>
        The herein proposed control mechanism utilizes a
Recurrent Neural Network (RNN), which has already been
utilized in collision avoiding scenarios using odor sensors on
whiskers [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] or a set of infrared rangefinders [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. A
vision-based collision avoidance for a UAV based on the
RNN has been recently presented in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] which trains the
UAV to avoid collisions during autonomous indoor flight.
This work served as the inspiration for our neural-based
autonomous agent.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 Proposed Solution</title>
      <p>
        Three basic functional parts can be identified within the
proposed collision avoiding system. They are depicted in
three different colors in Fig. 1. The first part is the
locomotion control unit based on the chaotic oscillator [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
depicted in orange, whose purpose is to control the
walking pattern and to solve the kinematics. It allows
changing the motion gait based on the pre-set
parameter p and steering the robot motion according to the input
signal turn, which defines the turning radius. The second part is
the visual pathway, depicted in green, which utilizes
the LGMD neural network for avoiding approaching
objects and triggering escape behavior. The main idea of the
proposed approach is to use the LGMD outputs for setting
the hexapod control parameters, in particular, the turning
radius turn of the robot. In this work, we propose
to use the RNN-based approach for the translation of the
LGMD output to the turn parameter, which is handled by
the last part, depicted in yellow. Each part is
discussed in more detail in the following sections.
      </p>
      <sec id="sec-3-1">
        <title>3.1 CPG-Based Locomotion Control</title>
        <p>
          The locomotion control is based on our previous work
presented in [1]. It utilizes only one chaotic CPG [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]
consisting of two interconnected neurons with a control input
computed solely based on the input period p. The CPG
stabilizes a periodic orbit with period p of the chaotic
oscillation, so the output is a discrete periodic signal. The period
p ∈ {4, 6, 8, 12} directly determines the resulting walking
pattern (motion gait): tripod, ripple, tetrapod, and wave,
respectively [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          The output of the chaotic oscillator is then shaped
and post-processed in order to obtain a signal usable by the
trajectory generator and to determine the phase of the
individual legs, i.e., whether a leg is swinging or supporting the
body. In particular, the oscillator output is
thresholded and a triangle wave alternating between −1
and 1 is produced, where the upslope (swing phase) is
constant and the downslope (support phase) depends on
the period p. Based on the leg coordination rules [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ],
individual delays are applied to the triangular wave per each
leg to produce the rhythmic pattern for each leg.
        </p>
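        <p>For illustration, the following minimal Python sketch (not the authors' implementation) generates such per-leg waves: a triangle wave in [−1, 1] with a constant-length upslope (swing phase), a period-dependent downslope (support phase), and gait-specific per-leg delays. The swing length and the tripod offsets are assumed values.</p>
        <preformat>
import numpy as np

# Gait periods from the text: p in {4, 6, 8, 12}.
GAIT_PERIODS = {"tripod": 4, "ripple": 6, "tetrapod": 8, "wave": 12}

# Assumed per-leg phase offsets (fractions of the period) for the tripod
# gait; the actual delays follow the leg coordination rules of [17].
TRIPOD_OFFSETS = [0.0, 0.5, 0.5, 0.0, 0.0, 0.5]

def leg_wave(t, period, swing=1.0, offset=0.0):
    """Triangle wave in [-1, 1]: fixed-length upslope (swing phase),
    downslope (support phase) filling the rest of the period."""
    tau = (t + offset * period) % period
    if tau &lt; swing:  # swing: -1 -> 1 over `swing` time units
        return -1.0 + 2.0 * tau / swing
    return 1.0 - 2.0 * (tau - swing) / (period - swing)  # support: 1 -> -1

period = GAIT_PERIODS["tripod"]
for t in np.arange(0.0, period, 0.5):
    print([round(leg_wave(t, period, offset=o), 2) for o in TRIPOD_OFFSETS])
        </preformat>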
        <p>The result of the post-processing module is fed into
a trajectory generator, which determines the position of
foot-tips according to the input signal along with the
parameter turn, which is given by the RNN-based controller.
The turn parameter is equal to the distance (in
millimeters) from the robot center to the turning center on a line
perpendicular to the heading of the robot connecting the
default positions of the middle legs. Based on the turn
parameter and the triangular wave, the trajectory
generator uniquely determines the foot-tip positions of each leg
on the constructed arcs, which are limited by the angle α.
The value of α is computed from the distance of the
furthest leg from the pivotal point established by turn and the
maximum step size y_max. The idea of the trajectory
generator is visualized in Fig. 2. The output of the trajectory
generator is transformed into the joint space using the
inverse kinematics module and then performed by the robot
actuators. Notice that the speed of the robot forward motion is
determined by the period p, while the robot angular
velocity is controlled by the turn parameter, which is adjusted
by the RNN-based controller from the LGMD output.</p>
        <p>[Fig. 3: Structure of the LGMD neural network – photoreceptive layer, inhibitory (I)/excitatory (E) layer, summation (S) layer, the FF cell, and the LGMD cell.]</p>
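        <p>A minimal Python sketch of this geometry (an assumption, not the authors' code) is given below: the foot tips are rotated about the pivot established by turn, and α follows from the maximum step size y_max at the leg furthest from the pivot. The default foot-tip positions and the value of y_max are placeholders.</p>
        <preformat>
import numpy as np

def foot_tip_arcs(default_tips_mm, turn_mm, wave_value, y_max_mm=40.0):
    """default_tips_mm: (6, 2) default foot-tip positions (x forward,
    y left) in the body frame; turn_mm: signed distance to the turning
    center; wave_value in [-1, 1]: triangle-wave phase of the step."""
    tips = np.asarray(default_tips_mm, dtype=float)
    pivot = np.array([0.0, turn_mm])              # turning center
    radii = np.linalg.norm(tips - pivot, axis=1)  # per-leg arc radii
    alpha = y_max_mm / radii.max()                # arc angle, furthest leg
    theta = 0.5 * alpha * wave_value              # current angle on the arc
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return pivot + (tips - pivot) @ rot.T         # rotate tips about pivot
        </preformat>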
      </sec>
      <sec id="sec-3-2">
        <title>3.2 LGMD Neural Network</title>
        <p>
          The LGMD [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] is a neural network found in the visual
pathways of insects, such as locusts [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], which responds
selectively to objects approaching the animal on a collision
course. It is composed of four groups of cells:
Photoreceptive, Excitatory, Inhibitory, and Summation arranged
in three layers; and two individual cells: Feed-forward
inhibitory and Lobula Giant Movement Detector. The
structure of the network is visualized in Fig. 3.
        </p>
        <p>The Photoreceptive layer processes the sensory input
from the camera. Its output is the difference between two
successive grayscale camera frames and it is computed as</p>
        <p>P_f(x, y) = L_f(x, y) − L_{f−1}(x, y), (1)</p>
        <p>where L_f is the current frame, L_{f−1} is the previous frame,
and (x, y) are the pixel coordinates. In principle, the
Photoreceptive layer implements a contrast enhancement and
forms the input to the following two groups of neurons –
the Inhibition layer and the Excitatory layer.</p>
        <p>The response of the Inhibition layer is computed as</p>
        <p>I_f(x, y) = Σ_{i=−n}^{n} Σ_{j=−n}^{n} P_{f−1}(x + i, y + j) w_I(i, j), (i ≠ j, if i = 0), (2)</p>
        <p>where w_I are the inhibition weights.
The Inhibition layer essentially smooths the
Photoreceptive layer output values and filters out those caused by
noise or camera imperfections. The inhibition weights
w_I are selected experimentally with respect to the LGMD
description in [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] which uses a 3×3 matrix of inhibition
weights, but on an image with a much lower resolution.
        </p>
        <p>The Excitatory layer is used to time-delay the output of
the Photoreceptive layer and it is calculated as</p>
        <p>E_f(x, y) = P_f(x, y). (3)</p>
        <p>The response of the Summation layer is computed as</p>
        <p>S_f(x, y) = E_f(x, y) − I_f(x, y) W_I, (4)</p>
        <p>where W_I = 0.4 is the global inhibition weight. Let S′_f be
a matrix in which each value exceeding the threshold T_r
is passed and any lower value is set to 0:</p>
        <p>S′_f(x, y) = S_f(x, y) if S_f(x, y) ≥ T_r, and 0 otherwise. (5)</p>
        <p>Then, the excitation of the LGMD cell is computed as</p>
        <p>U_f = Σ_{x=1}^{k} Σ_{y=1}^{l} S′_f(x, y), (6)</p>
        <p>and finally, the LGMD cell output is</p>
        <p>u_f = (1 + e^{−U_f / n_cell})^{−1}, (7)</p>
        <p>where n_cell is the total number of cells (the number of
pixels). Note that the output u_f lies in the interval u_f ∈ [0.5, 1].</p>
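        <p>For illustration, the following numpy sketch implements one update step of (1)-(7). The 3×3 inhibition kernel and the threshold T_r are illustrative assumptions, as the inhibition weights are set experimentally and T_r is not stated here.</p>
        <preformat>
import numpy as np

W_I_KERNEL = np.array([[0.125, 0.25, 0.125],
                       [0.25,  0.00, 0.25],
                       [0.125, 0.25, 0.125]])  # assumed w_I(i, j)
W_I = 0.4    # global inhibition weight from the text
T_R = 15.0   # assumed excitation threshold

def lgmd_step(frame, prev_frame, prev_p):
    """frame, prev_frame: grayscale images; prev_p: P from the previous
    step. Returns (u_f, P_f); feed P_f back as prev_p on the next call."""
    p = frame.astype(float) - prev_frame.astype(float)   # (1) photoreceptive
    pad = np.pad(prev_p, 1, mode="edge")
    inh = sum(W_I_KERNEL[i + 1, j + 1] *                 # (2) inhibition:
              pad[1 + i:pad.shape[0] - 1 + i,            # P_{f-1} convolved
                  1 + j:pad.shape[1] - 1 + j]            # with the kernel
              for i in (-1, 0, 1) for j in (-1, 0, 1))
    s = p - inh * W_I                                    # (3), (4) summation
    s = np.where(s >= T_R, s, 0.0)                       # (5) thresholding
    u_sum = s.sum()                                      # (6) LGMD excitation
    u_f = 1.0 / (1.0 + np.exp(-u_sum / frame.size))      # (7) sigmoid output
    return u_f, p
        </preformat>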
        <p>Typically, the LGMD neural network also contains a
Feed-forward cell, which is not utilized in the proposed scheme
based on the results of the experimental evaluation. The
purpose of the Feed-forward cell is to suppress the
output of the LGMD cell in the case of fast camera movements.
However, due to the specific nature of the legged
locomotion, this feature is undesirable as it makes the LGMD
network less sensitive.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4 Experimental Evaluation</title>
      <p>The experimental verification of the proposed
neural-based controller is focused on the ability of the hexapod
walking robot to avoid collisions with the obstacles on its
path. We are emphasizing the practical verification with a
real walking robot to thoroughly test the proposed solution
and provide insights on the achieved performance.</p>
      <p>The experimental evaluation has been performed with
the hexapod walking robot visualized in Fig. 5a. The robot
has six legs attached to the trunk that hosts the sensors. In
particular, a Logitech C920 camera with a field of view of
78° has been utilized to provide the LGMD with the visual input.
The image data fed into the LGMD neural
network has been subsampled to the resolution of 176×144
pixels and divided into two parts overlapping in 10% of
the image area.</p>
      <p>[Fig. 4: Structure of the LSTM neural network – input layer (IN1, IN2, ...), hidden layer, and output layer (OUT).]</p>
      <p>
        In our setup, two LGMD neural networks are utilized in
parallel to distinguish the direction of the interception, and
thus be able to steer the robot in the opposite direction to
achieve the desired obstacle avoiding behavior. The input
image from a single camera is split into left and right parts
with an overlapping center part. Each of the LGMDs
provides an output, which we denote u_f^left and u_f^right for the left
and the right LGMD, respectively.
In our previous work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], we utilized a direct mapping
function between the LGMD output tuple and the turn
parameter of the CPG. The particular mapping function
was designed as</p>
      <p>Φ(e) = 100/(2e) for |e| ≥ 0.2, and 10000 · sgn(e) for |e| &lt; 0.2, (9)</p>
      <p>where the error e is calculated as the difference of the LGMD
outputs, e = u_f^left − u_f^right.
      </p>
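      <p>For illustration, a direct Python transcription of the image split and the mapping (9) follows; the handling of e = 0 in the sgn branch is an implementation choice.</p>
      <preformat>
import math

def split_frame(frame):
    """Split a grayscale frame into left/right parts overlapping in 10%
    of the image area (each part spans 55% of the width)."""
    w = frame.shape[1]
    half = int(round(0.55 * w))
    return frame[:, :half], frame[:, w - half:]

def phi(e):
    """Mapping (9): error e = u_left - u_right to the turn parameter (mm)."""
    if abs(e) >= 0.2:
        return 100.0 / (2.0 * e)            # sharp turn
    return 10000.0 * math.copysign(1.0, e)  # nearly straight motion
      </preformat>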
      <p>However, the direct mapping function failed at the
collision avoidance in the cluttered environment. Therefore, we
developed an RNN-based controller that takes the left and
right LGMD outputs on its input and provides an estimate
of the turn parameter on its output.</p>
      <p>
        In the proposed controller, we utilized the Recurrent
Neural Network (RNN) based on the Long Short Term
Memory (LSTM) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] with two inputs, one hidden layer,
and one output that estimates the error e, which is then used
with the mapping function given by (9). The
Backpropagation Through Time (BPTT) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] is utilized for the RNN
training, which unrolls the network over time, resulting
in a feed-forward neural network. As there are only two
real number inputs to the network, it is unnecessary to use
sliding window approaches for the learning, as it is possible
to feed the data to the network in full length. The
structure of the LSTM neural network is visualized in Fig. 4.
      </p>
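      <p>A minimal sketch of this architecture is shown below (the implementation framework is not stated in the paper; PyTorch is used here as an assumption): an LSTM with two inputs, 16 hidden states, and one sigmoid output estimating e + 0.5.</p>
      <preformat>
import torch
import torch.nn as nn

class RNNController(nn.Module):
    """LSTM with two inputs (u_left, u_right), one hidden layer of 16
    states, and one sigmoid output estimating e + 0.5."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, u):  # u: (batch, time, 2) sequence of LGMD outputs
        h, _ = self.lstm(u)                            # (batch, time, hidden)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # in [0, 1]
      </preformat>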
      <p>The main idea is to connect the RNN directly to the
outputs of the left and right LGMDs and let the neural
network estimate the parameter e which is then translated by
(9) to the turn parameter of the CPG-based locomotion
controller.</p>
      <p>[Fig. 5: (a) the hexapod walking robot, (b) the laboratory arena, and (c)-(g) typical images captured by the robot while traversing the arena.]</p>
      <p>
        The robot has operated in an arena surrounded by
obstacles, which are formed by tables, chairs and boxes (see
Fig. 5b). The robot movement has been tracked by a
visual localization system which tracks the AprilTag [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]
pattern attached to the robot, which allows capturing the
real trajectory the robot traverses. Typical images
captured by the robot while traversing the arena are
visualized in Fig. 5c-g. As the LGMD reacts strongly to
the lateral movement of vertical edges in the image, it is
much harder to avoid obstacles in the cluttered
environment, where the edges are distributed non-homogeneously,
in contrast to the experiments performed in [
        <xref ref-type="bibr" rid="ref2 ref4 ref6">2, 4, 6</xref>
        ].
The LSTM neural network [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] has been trained using the
BPTT technique [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The training process has been
performed as follows. First, 10 sample trajectories have been
collected by manually guiding the robot through the
environment while avoiding the obstacles. The outputs of both
the LGMDs have been recorded and the parameter turn
has been adjusted manually, from which the
corresponding error parameter e has been computed. The sampled
trajectories contain altogether 22530 sample points. Next,
the neural network has been trained with these 10
trajectories in 1000 iterations.
      </p>
      <p>The herein utilized RNN has 2 inputs, 16 hidden states,
and 1 output. The 16 hidden states have been selected as
a compromise between the complexity of the RNN and
the behavior observed during the experimental
verification. As one of the problems of the former solution is
the behavior of the robot when it successfully initiates the
obstacle avoidance but then hits the obstacle from the side, we
selected 16 hidden states as the memory buffer to provide
sufficient capacity for the robot to traverse 0.4 m given its
dimensions, speed, and camera frame rate.</p>
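      <p>For illustration only, with assumed values (the paper does not state them at this point): if one roughly equates the 16 hidden states with a memory of about 16 camera frames, a camera providing about 10 frames per second and a crawling speed of about 0.25 m/s yield a memory span of 1.6 s, i.e., approximately 16 · 0.25 / 10 = 0.4 m.</p>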
      <p>The sigmoid function has been used as the activation
function of the RNN:</p>
      <p>f(x) = 1 / (1 + e^{−x}). (10)</p>
      <p>As the LGMD outputs are in the range u_f ∈ [0.5, 1] and the
error function e ∈ [−0.5, 0.5], the RNN has been trained
to estimate the value of e + 0.5, which is feasible for the
sigmoid function with the range f(x) ∈ [0, 1].</p>
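      <p>A sketch of the described training procedure follows (assuming the model sketched above); the data layout, the MSE loss, and the Adam optimizer are assumptions, while the full-sequence BPTT, the 10 trajectories, the e + 0.5 target, and the 1000 iterations are from the text. At run time, the turn parameter is then obtained as turn = Φ(model(u) − 0.5).</p>
      <preformat>
import numpy as np
import torch

# Placeholder data standing in for the 10 recorded runs: LGMD outputs in
# [0.5, 1] and manually adjusted errors in [-0.5, 0.5].
trajectories = [(np.random.uniform(0.5, 1.0, (300, 2)),
                 np.random.uniform(-0.5, 0.5, 300)) for _ in range(10)]

model = RNNController()
opt = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.MSELoss()

for iteration in range(1000):
    for u_seq, e_seq in trajectories:
        u = torch.as_tensor(u_seq, dtype=torch.float32).unsqueeze(0)
        target = torch.as_tensor(e_seq, dtype=torch.float32).unsqueeze(0) + 0.5
        opt.zero_grad()
        loss = loss_fn(model(u), target)  # BPTT unrolls the full sequence
        loss.backward()
        opt.step()
      </preformat>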
      <sec id="sec-4-1">
        <title>4.2 Experimental Results</title>
        <p>
          Altogether, 20 trials have been performed in the laboratory
arena to verify the ability of the robot to avoid collisions.
The robot has been directed to intercept different obstacles
and its behavior has been observed. The algorithm failed
only in 3 trials while the previous approach based on the
direct control proposed in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] is unable to operate in such a
heavily cluttered environment at all. In the first failed trial,
the robot collided directly with a low-textured wooden
barrier (see Fig. 5d); hence, the LGMDs failed to detect
the approaching object. The second and third failures fall
into the category of sideway interception, when the robot
successfully starts to avoid the obstacle but hits
it later from the side.
        </p>
        <p>Fig. 6 shows three typical trajectories crawled by the
hexapod robot in the laboratory arena. The trajectories are
overlaid with perpendicular arrows that characterize
the direction and magnitude of the error e that is used
for the robot steering, which corresponds to the direction in
which the neural-based controller senses an obstacle.
Besides, the corresponding plot of the LGMD outputs and
the comparison of the control output provided by the
proposed neural-based controller ernn and the direct control
method edirect is visualized in Fig. 7.</p>
        <p>Further, we let the robot continuously crawl the area
and avoid obstacles. The robot has crawled a distance of
approximately 140 m while colliding only 8 times.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.3 Discussion</title>
        <p>
          The presented results indicate that the proposed
neural-based locomotion controller with the collision avoidance
feedback provided by the LGMD neural network and
the RNN-based controller is feasible. Moreover, the
utilization of the RNN considerably improves the collision
avoiding behavior in comparison to the direct control
mechanism presented in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The difference between the
control principles can be best observed in Fig. 7a. It can be
seen that the RNN filters out oscillations in the error e that
would prevent the robot from avoiding the collision in the
case of the direct control.
        </p>
        <p>On the other hand, it is not particularly clear what the
RNN-based controller is reacting to, as the dependency of
the output on the distance to the closest obstacle has not
been confirmed. This can be observed in Fig. 6c and the
corresponding plot of the error function in Fig. 7c where
the controller starts to oscillate after successfully avoiding
the first obstacle. Other experimental trials have shown
that these oscillations do not affect the collision avoiding
behavior; however, it is unclear how and why they are
produced by the neural controller.</p>
        <p>The results indicate that the RNN calculates a weighted
average of the LGMD outputs over a short period.
However, further analysis of the behavior of the controller is
necessary to reliably evaluate its properties.</p>
        <p>Last but not least, the proposed controller performs only
a collision avoiding behavior and does not guide the robot
to any particular goal. Thus, we consider an extension of
the proposed method to incorporate higher-level goal
following into the architecture of the neural-based controller as
future work.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5 Conclusion</title>
      <p>In this paper, we propose an extension of the biologically
based collision avoidance approach with a Recurrent
Neural Network to enhance the collision avoiding behavior of
a hexapod walking robot. The proposed extension allows
the robot to operate in heavily cluttered environments. The
herein presented experimental results indicate the feasibility
of the controller, which failed to avoid a collision in only 3
out of 20 performed trials. The experimental results raised
questions about the cause of the observed oscillations that
deserve future investigation. Besides, we aim to improve
the proposed biologically-based architecture to follow a
specific target location, and thus develop biologically
inspired autonomous navigation.</p>
      <p>Acknowledgments – This work was supported by the
Czech Science Foundation (GAČR) under research project
No. 15-09600Y. The support of the Grant Agency of the
CTU in Prague under grant No. SGS16/235/OHK3/3T/13
to Petr Čížek is also gratefully acknowledged.</p>
      <p>[Figure 6: Collision avoiding trajectories for the experiments t1, t4, and t5.]</p>
      <p>[Figure 7: Comparison of the control output provided by the proposed neural-based controller e_rnn and the direct method e_direct for the experiments t1, t4, and t5.]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Milička</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Čížek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Faigl</surname>
          </string-name>
          , “
          <article-title>On chaotic oscillator-based central pattern generator for motion control of hexapod walking robot</article-title>
          ,”
          <source>in Informačné Technológie - Aplikácie a Teória (ITAT), CEUR Workshop Proceedings</source>
          , vol.
          <volume>1649</volume>
          ,
          <year>2016</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>137</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Blanchard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rind</surname>
          </string-name>
          , and
          <string-name>
            <surname>P. F. M. J. Verschure</surname>
          </string-name>
          , “
          <article-title>Collision avoidance using a model of the locust LGMD neuron</article-title>
          ,
          <source>” Robotics and Autonomous Systems (RAS)</source>
          , vol.
          <volume>30</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>38</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Čížek</surname>
          </string-name>
          , P. Milička, and J. Faigl, “
          <article-title>Neural based obstacle avoidance with CPG controlled hexapod walking robot,”</article-title>
          <source>in IEEE International Joint Conference on Neural Networks (IJCNN)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>650</fpage>
          -
          <lpage>656</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yue</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. C.</given-names>
            <surname>Rind</surname>
          </string-name>
          , “
          <article-title>Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement</article-title>
          ,
          <source>” IEEE Transactions on Neural Networks</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>705</fpage>
          -
          <lpage>716</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yue</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Hu</surname>
          </string-name>
          , “
          <article-title>Bio-inspired collision detector with enhanced selectivity for ground robotic vision system</article-title>
          ,
          <source>” in British Machine Vision Conference</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>S. B. i Badia</string-name>
          ,
          <string-name>P. Pyk</string-name>
          , and
          <string-name>P. F. M. J. Verschure</string-name>
          , “
          <article-title>A biologically based flight control system for a blimp-based UAV,”</article-title>
          <source>in IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>3053</fpage>
          -
          <lpage>3059</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Pomerleau</surname>
          </string-name>
          ,
          <article-title>Neural network perception for mobile robot guidance</article-title>
          .
          <source>Springer Science and Business Media</source>
          ,
          <year>2012</year>
          , vol.
          <volume>239</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Wilson</surname>
          </string-name>
          , “
          <article-title>The central nervous control of flight in a locust</article-title>
          ,
          <source>” Journal of Experimental Biology</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>47</issue>
          , pp.
          <fpage>471</fpage>
          -
          <lpage>490</lpage>
          ,
          <year>1961</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hartbauer</surname>
          </string-name>
          , “
          <article-title>Simplified bionic solutions: a simple bioinspired vehicle collision detection system</article-title>
          ,
          <source>” Bioinspiration and Biomimetics</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>2</issue>
          , p.
          <fpage>026007</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yue</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. C.</given-names>
            <surname>Rind</surname>
          </string-name>
          , “
          <article-title>Redundant neural vision systems - competing for collision recognition roles</article-title>
          ,
          <source>” IEEE Transactions on Autonomous Mental Development</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>186</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H. G.</given-names>
            <surname>Meyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. J. N.</given-names>
            <surname>Bertrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Paskarbeit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Lindemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schneider</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Egelhaaf</surname>
          </string-name>
          ,
          <article-title>A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot</article-title>
          ,
          <year>2016</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kodjabachian</surname>
          </string-name>
          and J. A. Meyer, “
          <article-title>Evolution and development of neural controllers for locomotion, gradientfollowing, and obstacle-avoidance in artificial insects</article-title>
          ,
          <source>” IEEE Transactions on Neural Networks</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>796</fpage>
          -
          <lpage>812</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.-Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          , G.-Y. Cao, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Guo</surname>
          </string-name>
          , “
          <article-title>Reinforcement learning neural network to the problem of autonomous mobile robot obstacle avoidance</article-title>
          ,
          <source>” in International Conference on Machine Learning and Cybernetics</source>
          , vol.
          <volume>1</volume>
          ,
          <year>2005</year>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kelchtermans</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Tuytelaars</surname>
          </string-name>
          , “
          <article-title>How hard is it to cross the room? - Training (recurrent) neural networks to steer a UAV</article-title>
          ,”
          <source>CoRR</source>
          , vol.
          <volume>abs/1702.07600</volume>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Steingrube</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Timme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wörgötter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Manoonpong</surname>
          </string-name>
          , “
          <article-title>Self-organized adaptation of a simple neural circuit enables complex robot behaviour</article-title>
          ,”
          <source>Nature Physics</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>224</fpage>
          -
          <lpage>230</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N.</given-names>
            <surname>Porcino</surname>
          </string-name>
          , “
          <article-title>Hexapod gait control by a neural network,”</article-title>
          <source>in IEEE International Joint Conference on Neural Networks (IJCNN)</source>
          ,
          <year>1990</year>
          , pp.
          <fpage>189</fpage>
          -
          <lpage>194</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Wilson</surname>
          </string-name>
          , “
          <article-title>Insect walking</article-title>
          ,
          <source>” Annual Review of Entomology</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>122</lpage>
          ,
          <year>1966</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>“Long short-term memory</article-title>
          ,
          <source>” Neural Computation</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          , “
          <article-title>Training recurrent neural networks</article-title>
          ,
          <source>” Ph.D. dissertation</source>
          , University of Toronto,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Olson</surname>
          </string-name>
          , “
          <article-title>AprilTag: A Robust and Flexible Visual Fiducial System,”</article-title>
          <source>in IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>3400</fpage>
          -
          <lpage>3407</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>