=Paper=
{{Paper
|id=Vol-1885/65
|storemode=property
|title=Enhancing Neural Based Obstacle Avoidance with CPG Controlled Hexapod Walking Robot
|pdfUrl=https://ceur-ws.org/Vol-1885/65.pdf
|volume=Vol-1885
|authors=Petr Čížek,Jan Faigl,Jan Bayer
|dblpUrl=https://dblp.org/rec/conf/itat/CizekFB17
}}
==Enhancing Neural Based Obstacle Avoidance with CPG Controlled Hexapod Walking Robot==
Petr Čížek, Jan Faigl, and Jan Bayer
Czech Technical University in Prague, Faculty of Electrical Engineering, Technická 2, 166 27 Prague, Czech Republic
{cizekpe6|faiglj|bayerja1}@fel.cvut.cz
Abstract: Avoiding collisions with obstacles and intercepting objects based on visual perception is a vital survival ability of any animal. In this work, we propose an extension of a biologically based collision avoidance approach to the detection of intercepting objects using the Lobula Giant Movement Detector (LGMD) connected directly to a locomotion control unit based on the Central Pattern Generator (CPG) of a hexapod walking robot. The proposed extension uses a Recurrent Neural Network (RNN) to map the output of the LGMD to the input of the CPG to enhance the collision avoiding behavior of the robot in cluttered environments. The presented results of the experimental verification of the proposed system with a real mobile hexapod crawling robot support the feasibility of the presented approach in collision avoidance scenarios.

Figure 1: Overview of the proposed control system structure. Different colors discriminate the individual functional parts of the architecture.

1 Introduction

Avoiding collisions with obstacles and intercepting objects is a vital survival ability for any animal. For a mobile robot moving from one place to another, contact with a fixed or moving object may have fatal consequences. Therefore, it is desirable to study the problem of collision avoidance and derive new and computationally efficient ways to trigger collision avoiding behavior.

In this work, we are concerned with the problem of biologically inspired motion control and collision avoidance with a legged walking robot equipped with a forward-looking camera only. We propose to utilize a Central Pattern Generator (CPG) approach [1] for robot locomotion control and a vision-based collision avoidance approach using the Lobula Giant Movement Detector (LGMD) [2], which are both combined in the proposed controller based on a Recurrent Neural Network (RNN).

The proposed solution builds on our previous results published in [3], in which only a simple mapping function is utilized for transforming the output of the LGMD neural network directly to the locomotion control parameters of the CPG controller [1]. Such a solution works well in laboratory conditions, but, unfortunately, it is error-prone in cluttered environments. This is mainly because of the way the LGMD neural network processes the visual data and due to the simplicity of the mapping function. The LGMD reacts to the lateral movement of vertical edges in the image regardless of their depth in the scene. In a cluttered environment, the output is thus heavily influenced by many stimuli from distinctive edges far from the robot. Moreover, the mapping function translates the output of the LGMD directly to the locomotion control parameters. Hence, the reaction of the robot is based solely on the current observation of the environment, which results in situations when the robot hits an obstacle from the side that it has successfully avoided earlier but that is already out of the field of view. Therefore, we propose to enhance the collision avoiding behavior of the robot by incorporating a memory mechanism by means of the RNN.

The overall structure of the proposed system is depicted in Fig. 1. In contrast to the previous approaches, here, we would like to emphasize a practical verification of the proposed method on a real walking robot, as the specific nature of legged locomotion makes the problem more difficult in comparison to wheeled [4, 5] or flying [6] robots. The main difference originates in abrupt motions of the camera induced by the locomotion of the robot, which negatively influence the output of the collision avoiding visual pathway.

The remainder of the paper is organized as follows. The most related approaches to neural-based collision avoidance using vision are summarized in Section 2. Section 3 describes the individual building blocks of the proposed control architecture. Evaluation results and their discussion are detailed in Section 4. Concluding remarks and suggestions for future work are dedicated to Section 5.

2 Related Work

The problem of collision avoidance has been studied ever since mobile robots appeared. Hence, there are many different approaches using different sensors and different processing techniques. In this work, we focus on vision-based neural obstacle avoidance methods, and the most related approaches are described in the rest of this section.
Direct mapping of the visual perception to the robot control command using a feed-forward neural network has already been utilized in several methods. The problem of road following using neural networks, which dates back to the 90s, can be considered a special case of the collision avoidance problem [7]. However, such approaches cannot be considered biologically based because of the artificial nature of the examined roads.

In [2], the Lobula Giant Movement Detector (LGMD) neural network has been introduced in robotics to imitate the way insects avoid collisions with an intercepting object [8]. The approach has been widely adopted for its simplicity and relatively good performance with wheeled [2, 4, 5] and flying [6] robots. However, these approaches experimentally verify the collision avoidance with a real robot either in a closed arena where it is necessary to avoid collisions with walls or in a scenario where a static robot is supposed to detect an intercepting object. Moreover, the walls of the arena or the obstacles were homogeneously distributed or coated with a high-contrast artificial pattern, which significantly improves the behavior of the LGMD. In our approach, we focus on the deployment of the LGMD in a heavily cluttered unstructured environment, and thus evaluate the approach in more realistic scenarios.

An experimental study on the prediction of evasive steering maneuvers in urban traffic scenarios has been recently published in [9]. In this approach, the performance of the LGMD is improved by introducing so-called "danger zones", which are the image areas that will most likely indicate the incoming threat.

Another approach presented in [10] compares the performance of the LGMD and Directional Selective Neurons (DSN) in the ability to avoid collisions. Both of them are to be found in the visual pathways of insects. The reported results show that the LGMD can be trained using evolutionary techniques to outperform the DSN in the collision recognition ability.

Regarding our target scenario, the most relevant approach to the proposed solution has been presented in [11]. The authors use a biologically inspired collision avoidance approach based on the extraction of nearness information from image depth estimation to detect obstacles and avoid collisions. The whole system allows a simulated hexapod robot to navigate a cluttered environment while actively avoiding obstacles. However, the approach uses a direct feed-forward approach for the motion control and it has not been deployed in a real-world scenario.

The herein proposed control mechanism utilizes a Recurrent Neural Network (RNN) that has already been utilized in collision avoiding scenarios using odor sensors on whiskers [12] or a set of infrared rangefinders [13]. A vision-based collision avoidance for a UAV based on an RNN has been recently presented in [14], which trains the UAV to avoid collisions during autonomous indoor flight. This work served as the inspiration for our neural-based autonomous agent.

3 Proposed Solution

Three basic functional parts can be identified within the proposed collision avoiding system. They are depicted in three different colors in Fig. 1. The first part is the locomotion control unit based on the chaotic oscillator [15], depicted in orange, whose purpose is to control the walking pattern and to solve the kinematics. It allows changing the type of the motion gait based on the pre-set parameter p and steering the robot motion according to the input signal turn defining the turning radius. The second part is the visual pathway, depicted in green, which utilizes the LGMD neural network for detecting approaching objects and triggering escape behavior. The main idea of the proposed approach is to use the LGMD outputs for setting the hexapod control parameters, in particular, the turning radius turn of the robot. In this work, we propose to use an RNN-based approach for the translation of the LGMD output to the turn parameter, which is dedicated to the last part, depicted in yellow. Each part is discussed in more detail in the following sections.
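To make the interplay of the three parts concrete, the following minimal Python sketch shows one possible wiring of the control loop of Fig. 1. All names and interfaces are illustrative assumptions, not the actual implementation; phi denotes the mapping (9) introduced in Section 3.3.

    # Sketch of the control loop of Fig. 1; component internals are described
    # in the following subsections. Names and interfaces are illustrative.
    def control_loop(camera, cpg, lgmd_left, lgmd_right, rnn, robot, p=6):
        while True:
            frame = camera.grab_grayscale()       # visual input
            u_left = lgmd_left.update(frame)      # left LGMD output (Sec. 3.2)
            u_right = lgmd_right.update(frame)    # right LGMD output (Sec. 3.2)
            e = rnn.step(u_left, u_right)         # RNN-based controller (Sec. 3.3)
            turn = phi(e)                         # mapping (9) to the turning radius
            joints = cpg.step(p, turn)            # CPG-based locomotion (Sec. 3.1)
            robot.set_joints(joints)              # actuate the legs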
3.1 CPG-Based Locomotion Control

The locomotion control is based on our previous work presented in [1]. It utilizes only one chaotic CPG [15] consisting of two interconnected neurons with a control input computed solely based on the input period p. The CPG stabilizes a periodic orbit of period p from the chaotic oscillation, so the output is a discrete periodic signal. The period p ∈ {4, 6, 8, 12} directly determines the resulting walking pattern (motion gait): tripod, ripple, tetrapod, and wave, respectively [16].

Afterwards, the output of the chaotic oscillator is shaped and post-processed in order to obtain a signal usable for a trajectory generator and to determine the phase of the individual legs, i.e., whether the leg is swinging or supporting the body. The output of the chaotic oscillator is thresholded and a triangle wave alternating between -1 and 1 is produced, where the upslope (swing phase) has a constant duration and the downslope (support phase) depends on the period p. Based on the leg coordination rules [17], individual delays are applied to the triangular wave of each leg to produce the rhythmic pattern for each leg.
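As an illustration of this shaping step, a minimal sketch of the per-leg triangular wave might look as follows. The support-phase scaling with p and the per-leg delays are assumptions standing in for the exact coordination rules of [17].

    # Triangular wave in [-1, 1]: a constant-length upslope (swing phase) and
    # a downslope (support phase) whose length grows with the gait period p.
    def leg_signal(t, p, delay, swing=1.0):
        period = swing + p                 # assumed support-phase scaling with p
        tau = (t + delay) % period
        if tau < swing:                    # swing phase: leg in the air
            return -1.0 + 2.0 * tau / swing
        return 1.0 - 2.0 * (tau - swing) / (period - swing)  # support phase

    # Illustrative per-leg delays producing a rhythmic pattern for six legs.
    delays = [i * 0.5 for i in range(6)]
    wave = [leg_signal(t=0.7, p=4, delay=d) for d in delays]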
The result of the post-processing module is fed into a trajectory generator, which determines the positions of the foot-tips according to the input signal along with the parameter turn, which is given by the RNN-based controller. The turn parameter is equal to the distance (in millimeters) from the robot center to the turning center on a line perpendicular to the heading of the robot connecting the default positions of the middle legs.
Figure 2: Trajectory generation - the turning point denoted as the small red disk is given by the turn parameter. α is computed as the maximum angle given the turning radius and the maximum step size y_max.

Figure 3: LGMD neural network model (Photoreceptive, Inhibitory/Excitatory, and Summation layers, the Feed-forward cell, and the LGMD cell)
Based on the turn parameter and the triangular wave, the trajectory generator uniquely determines the foot-tip positions of each leg on the constructed arcs, which are limited by the angle α. The value of α is computed from the distance of the furthest leg from the pivotal point established by turn and the maximum step size y_max. The idea of the trajectory generator is visualized in Fig. 2. The output of the trajectory generator is transformed into the joint space using the inverse kinematics module and then performed by the robot actuators. Notice that the speed of the robot's forward motion is determined by the period p, while the robot's angular velocity is controlled by the turn parameter, which is adjusted by the RNN-based controller from the LGMD output.
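The geometry of Fig. 2 can be illustrated with a short sketch. The arc-length relation used to obtain α is our reading of the description above, and the default foot-tip layout and the value of y_max are illustrative assumptions.

    import numpy as np

    # Sketch of the turning geometry of Fig. 2. The pivot lies on the line
    # connecting the default positions of the middle legs, at distance `turn`
    # (in millimeters) from the robot center; x points along the heading.
    def max_arc_angle(turn, feet, y_max=40.0):
        """alpha such that the furthest foot-tip travels at most y_max [mm]."""
        pivot = np.array([0.0, turn])
        r_max = max(np.linalg.norm(np.asarray(f) - pivot) for f in feet)
        return y_max / r_max      # arc length = r * alpha (assumed relation)

    feet = [(150, 100), (0, 120), (-150, 100),       # illustrative default
            (150, -100), (0, -120), (-150, -100)]    # foot-tip positions [mm]
    alpha = max_arc_angle(turn=500.0, feet=feet)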
3.2 LGMD Neural Network

The LGMD [2] is a neural network found in the visual pathways of insects, such as locusts [8], which responds selectively to objects approaching the animal on a collision course. It is composed of four groups of cells: Photoreceptive, Excitatory, Inhibitory, and Summation, arranged in three layers; and two individual cells: the Feed-forward inhibitory cell and the Lobula Giant Movement Detector. The structure of the network is visualized in Fig. 3.

The Photoreceptive layer processes the sensory input from the camera. Its output is the difference between two successive grayscale camera frames and it is computed as

P_f(x, y) = L_f(x, y) - L_{f-1}(x, y),   (1)

where L_f is the current frame, L_{f-1} is the previous frame, and (x, y) are the pixel coordinates. In principle, the Photoreceptive layer implements a contrast enhancement and forms the input to the following two groups of neurons - the Inhibition layer and the Excitatory layer.

The response of the Inhibition layer is computed as

I_f(x, y) = \sum_{i=-n}^{n} \sum_{j=-n}^{n} P_{f-1}(x + i, y + j) w_I(i, j), \quad (i \ne j \text{ if } i = 0),   (2)

where w_I are the inhibition weights set as

w_I = \begin{pmatrix} 0.06 & 0.12 & 0.25 & 0.12 & 0.06 \\ 0.12 & 0.06 & 0.12 & 0.06 & 0.12 \\ 0.25 & 0.12 & 0 & 0.12 & 0.25 \\ 0.12 & 0.06 & 0.12 & 0.06 & 0.12 \\ 0.06 & 0.12 & 0.25 & 0.12 & 0.06 \end{pmatrix}.   (3)

The Inhibition layer essentially smooths the Photoreceptive layer output values, filtering those caused by noise or camera imperfections. The inhibition weights w_I are selected experimentally with respect to the LGMD description in [2], which uses a 3×3 matrix of inhibition weights, but on an image with a much lower resolution. The Excitatory layer is used to time-delay the output of the Photoreceptive layer and it is calculated as

E(x, y) = P_f(x, y).   (4)

The response of the Summation layer is computed as

S_f(x, y) = E(x, y) - I_f(x, y) W_I,   (5)

where W_I = 0.4 is the global inhibition weight. Let S'_f be a matrix in which each value exceeding the threshold T_r is passed and any lower value is set to 0:

S'_f(x, y) = \begin{cases} S_f(x, y) & \text{if } S_f(x, y) \ge T_r \\ 0 & \text{otherwise} \end{cases}.   (6)

Then, the excitation of the LGMD cell is computed as

U_f = \sum_{x=1}^{k} \sum_{y=1}^{l} S'_f(x, y)   (7)

and finally, the LGMD cell output is

u_f = (1 + e^{-U_f n_{cell}^{-1}})^{-1},   (8)

where n_{cell} is the total number of cells (the number of pixels). Note that the output u_f is in the interval u_f ∈ [0.5, 1].

Typically, the LGMD neural network contains the Feed-forward cell, which is not utilized in the proposed scheme based on the results of the experimental evaluation. The purpose of the Feed-forward cell is to suppress the output of the LGMD cell in the case of fast camera movements.
However, due to the specific nature of legged locomotion, this feature is undesirable as it makes the LGMD network less sensitive.

In our setup, two LGMD neural networks are utilized in parallel to distinguish the direction of the interception, and thus to be able to steer the robot in the opposite direction to achieve the desired obstacle avoiding behavior. The input image from a single camera is split into left and right parts with an overlapping center part. Each of the LGMDs provides an output, which we denote u_f^{left} and u_f^{right} for the left and the right LGMD, respectively.
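Given the definitions (1)-(8), one frame update of a single LGMD and the left/right frame split can be sketched in a few lines of numpy. The threshold value T_r and the helper names are assumptions.

    import numpy as np
    from scipy.signal import convolve2d

    W_I_KERNEL = np.array([[0.06, 0.12, 0.25, 0.12, 0.06],
                           [0.12, 0.06, 0.12, 0.06, 0.12],
                           [0.25, 0.12, 0.00, 0.12, 0.25],
                           [0.12, 0.06, 0.12, 0.06, 0.12],
                           [0.06, 0.12, 0.25, 0.12, 0.06]])  # inhibition weights (3)

    def lgmd_step(frame, prev_frame, prev_P, W_I=0.4, T_r=15.0):
        """One LGMD update following (1)-(8); the value of T_r is an assumption."""
        P = frame.astype(float) - prev_frame.astype(float)   # Photoreceptive (1)
        I = convolve2d(prev_P, W_I_KERNEL, mode="same")      # Inhibition (2)-(3)
        E = P                                                # Excitatory (4)
        S = E - I * W_I                                      # Summation (5)
        S = np.where(S >= T_r, S, 0.0)                       # threshold (6)
        U = S.sum()                                          # LGMD excitation (7)
        u = 1.0 / (1.0 + np.exp(-U / S.size))                # LGMD output (8)
        return u, P                                          # u is in [0.5, 1]

    def split_frame(frame, overlap=0.10):
        """Left/right parts of the input image, overlapping in 10% of the area."""
        w = frame.shape[1]
        cut = int(w * (0.5 + overlap / 2.0))
        return frame[:, :cut], frame[:, w - cut:]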
3.3 RNN-Based Controller

In our previous work [3], we utilized a direct mapping function between the tuple of the LGMD outputs and the turn parameter of the CPG. The particular mapping function was designed as

\Phi(e) = \begin{cases} \frac{100}{2e} & \text{for } |e| \ge 0.2 \\ 10000 \cdot \mathrm{sgn}(e) & \text{for } |e| < 0.2 \end{cases},   (9)

where the error e is calculated as the difference of the LGMD outputs, e = u_f^{left} - u_f^{right}.
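For completeness, a sketch of the mapping (9) in the units of the turn parameter (millimeters) follows; the sign convention at e = 0 is an assumption.

    import math

    def phi(e):
        """Direct mapping (9) from the LGMD error e to the turn parameter [mm]."""
        if abs(e) >= 0.2:
            return 100.0 / (2.0 * e)            # small radius: sharp avoidance turn
        return 10000.0 * math.copysign(1.0, e)  # large radius: nearly straight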
However, the direct mapping function failed in the collision avoidance in cluttered environments. Therefore, we developed an RNN-based controller that takes the left and right LGMD outputs on its input and provides an estimate of the turn parameter on its output.

In the proposed controller, we utilized a Recurrent Neural Network (RNN) based on the Long Short-Term Memory (LSTM) [18] with two inputs, one hidden layer, and one output that estimates the error e, which is then used with the mapping function given by (9). Backpropagation Through Time (BPTT) [19] is utilized for the RNN training, which unrolls the network over time, resulting in a feed-forward neural network. As there are only two real-number inputs to the network, it is unnecessary to use sliding-window approaches to the learning, as it is possible to feed the data to the network in full length. The structure of the LSTM neural network is visualized in Fig. 4. The main idea is to connect the RNN directly to the outputs of the left and right LGMDs and let the neural network estimate the parameter e, which is then translated by (9) to the turn parameter of the CPG-based locomotion controller.

Figure 4: LSTM recurrent neural network model (inputs IN1 and IN2, a hidden layer, and the output OUT)

4 Experimental Evaluation

The experimental verification of the proposed neural-based controller is focused on the ability of the hexapod walking robot to avoid collisions with the obstacles on its path. We are emphasizing the practical verification with a real walking robot to thoroughly test the proposed solution and provide insights on the achieved performance.

The experimental evaluation has been performed with the hexapod walking robot visualized in Fig. 5a. The robot has six legs attached to the trunk that hosts the sensors. In particular, the Logitech C920 camera with a field of view of 78° has been utilized to provide the LGMD with the visual input. The image data fed into the LGMD neural network have been subsampled to the resolution of 176×144 pixels and divided into two parts overlapping in 10% of the image area.

Figure 5: (a) The hexapod walking robot, (b) the laboratory test environment, and (c-g) typical images captured by the robot

The robot has operated in an arena surrounded by obstacles, which are formed by tables, chairs, and boxes (see Fig. 5b). The robot movement has been tracked by a visual localization system which tracks the AprilTag [20] pattern attached to the robot, which allows capturing the real trajectory the robot was traversing. Typical images captured by the robot while traversing the arena are visualized in Fig. 5c-g. As the LGMD reacts strongly to the lateral movement of vertical edges in the image, it is much harder to avoid obstacles in the cluttered environment, where the edges are distributed non-homogeneously, in contrast to the experiments performed in [2, 4, 6].

4.1 RNN Training Process

The LSTM neural network [18] has been trained using the BPTT technique [19]. The training process has been performed as follows. First, 10 sample trajectories have been collected by manually guiding the robot through the environment while avoiding the obstacles. The outputs of both the LGMDs have been recorded and the parameter turn has been adjusted manually, from which the corresponding error parameter e has been computed.
The sampled trajectories contain altogether 22530 sample points. Next, the neural network has been trained with these 10 trajectories in 1000 iterations.

The herein utilized RNN has 2 inputs, 16 hidden states, and 1 output. The 16 hidden states have been selected as a compromise between the complexity of the RNN and the behavior observed during the experimental verification. As one of the problems of the former solution is the behavior of the robot when it successfully initiates the obstacle avoidance but then hits the obstacle from the side, we selected 16 hidden states as the memory buffer to provide sufficient capacity for the robot to traverse 0.4 m given its dimensions, speed, and camera frame rate.

The sigmoid function has been used as the activation function of the RNN:

f(x) = \frac{1}{1 + e^{-x}}.   (10)

As the LGMD outputs are in the range u_f ∈ [0.5, 1] and the error function e ∈ [-0.5, 0.5], the RNN has been trained to estimate the value of e + 0.5, which is feasible for the sigmoid function with the range of f(x) ∈ [0, 1].
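A minimal sketch of the described 2-16-1 LSTM and its training, written here with PyTorch, might look as follows. Apart from the stated architecture, the 1000 iterations, and the e + 0.5 target, everything else (optimizer, learning rate, loss, and the dummy data standing in for the recorded trajectories) is an assumption.

    import torch
    import torch.nn as nn

    class LGMDController(nn.Module):
        """2 inputs (left/right LGMD), 16 hidden LSTM states, 1 output."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)
            self.out = nn.Linear(16, 1)

        def forward(self, u):                  # u: (batch, time, 2) LGMD outputs
            h, _ = self.lstm(u)
            return torch.sigmoid(self.out(h))  # estimate of e + 0.5 in [0, 1]

    model = LGMDController()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer
    loss_fn = nn.MSELoss()                                # assumed loss

    # Dummy stand-in for the 10 recorded (u_left, u_right) / e trajectories.
    trajectories = [(torch.rand(1, 500, 2) / 2 + 0.5, torch.rand(1, 500, 1) - 0.5)]

    for it in range(1000):                     # 1000 training iterations
        for u_seq, e_seq in trajectories:
            opt.zero_grad()
            pred = model(u_seq)                # full-length sequence, no window
            loss = loss_fn(pred, e_seq + 0.5)  # shifted target for the sigmoid
            loss.backward()                    # BPTT unrolls the LSTM over time
            opt.step()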
4.2 Experimental Results

Altogether, 20 trials have been performed in the laboratory arena to verify the ability of the robot to avoid collisions. The robot has been directed to intercept different obstacles and its behavior has been observed. The algorithm failed in only 3 trials, while the previous approach based on the direct control proposed in [3] is unable to operate in such a heavily cluttered environment at all. The first failed trial is specific by a direct collision with a low-textured wooden barrier (see Fig. 5d); hence, the LGMDs failed to detect an approaching object. The second and third failures fall into the category of sideways interception, when the robot successfully starts to avoid the obstacle but hits it later from the side.

Fig. 6 shows three typical trajectories crawled by the hexapod robot in the laboratory arena. The trajectories are overlaid with perpendicular arrows that characterize the direction and magnitude of the error e used for the robot steering, which corresponds to the direction in which the neural-based controller is sensing an obstacle. Besides, the corresponding plot of the LGMD outputs and the comparison of the control output provided by the proposed neural-based controller e_rnn and the direct control method e_direct are visualized in Fig. 7.

Further, we let the robot continuously crawl the area and avoid obstacles. The robot has crawled a distance of approx. 140 m while colliding only 8 times.
4.3 Discussion

The presented results indicate that the proposed neural-based locomotion controller with the collision avoidance feedback provided by the LGMD neural network and the RNN-based controller is feasible. Moreover, the utilization of the RNN considerably improves the collision avoiding behavior in comparison to the direct control mechanism presented in [3]. The difference between the control principles can be best observed in Fig. 7a. It can be seen that the RNN filters out oscillations in the error e which would prevent the robot from avoiding the collision in the case of the direct control.

On the other hand, it is not particularly clear what the RNN-based controller is reacting to, as the dependency of the output on the distance to the closest obstacle has not been confirmed. This can be observed in Fig. 6c and the corresponding plot of the error function in Fig. 7c, where the controller starts to oscillate after successfully avoiding the first obstacle. Other experimental trials have shown that these oscillations do not affect the collision avoiding behavior; however, it is unclear how and why they are produced by the neural controller.

The results indicate that the RNN calculates a weighted average of the LGMD outputs over a short period. However, further analysis of the behavior of the controller is necessary to reliably evaluate its properties.

Last but not least, the proposed controller performs only a collision avoiding behavior and does not guide the robot to any particular goal. Thus, we consider an extension of the proposed method to incorporate higher-level goal following into the architecture of the neural-based controller as future work.

5 Conclusion

In this paper, we propose an extension of the biologically based collision avoidance approach with a Recurrent Neural Network to enhance the collision avoiding behavior of a hexapod walking robot. The proposed extension allows the robot to operate in heavily cluttered environments. The herein presented experimental results indicate the feasibility of the controller, which failed to avoid a collision in only 3 out of 20 performed trials. The experimental results raised questions about the cause of the observed oscillations that deserve future investigation. Besides, we aim to improve the proposed biologically based architecture to follow a specific target location, and thus develop biologically inspired autonomous navigation.

Acknowledgments - This work was supported by the Czech Science Foundation (GAČR) under research project No. 15-09600Y. The support of the Grant Agency of the CTU in Prague under grant No. SGS16/235/OHK3/3T/13 to Petr Čížek is also gratefully acknowledged.
Figure 6: Collision avoiding trajectories for the experiments t1, t4, and t5

Figure 7: Comparison of the control output provided by the proposed neural-based controller e_rnn and the direct method e_direct for the experiments t1, t4, and t5 (plots of u_f^{left}, u_f^{right}, e_rnn, and e_direct over time)
References

[1] P. Milička, P. Čížek, and J. Faigl, "On chaotic oscillator-based central pattern generator for motion control of hexapod walking robot," in Informačné Technológie - Aplikácie a Teória (ITAT), CEUR Workshop Proceedings, vol. 1649, 2016, pp. 131-137.
[2] M. Blanchard, F. Rind, and P. F. M. J. Verschure, "Collision avoidance using a model of the locust LGMD neuron," Robotics and Autonomous Systems (RAS), vol. 30, no. 1-2, pp. 17-38, 2000.
[3] P. Čížek, P. Milička, and J. Faigl, "Neural based obstacle avoidance with CPG controlled hexapod walking robot," in IEEE International Joint Conference on Neural Networks (IJCNN), 2017, pp. 650-656.
[4] S. Yue and F. C. Rind, "Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 705-716, 2006.
[5] Q. Fu, S. Yue, and C. Hu, "Bio-inspired collision detector with enhanced selectivity for ground robotic vision system," in British Machine Vision Conference, 2016.
[6] S. B. i Badia, P. Pyk, and P. F. M. J. Verschure, "A biologically based flight control system for a blimp-based UAV," in IEEE International Conference on Robotics and Automation (ICRA), 2005, pp. 3053-3059.
[7] D. A. Pomerleau, Neural Network Perception for Mobile Robot Guidance. Springer Science and Business Media, 2012, vol. 239.
[8] D. M. Wilson, "The central nervous control of flight in a locust," Journal of Experimental Biology, vol. 38, no. 47, pp. 471-490, 1961.
[9] M. Hartbauer, "Simplified bionic solutions: a simple bio-inspired vehicle collision detection system," Bioinspiration and Biomimetics, vol. 12, no. 2, p. 026007, 2017.
[10] S. Yue and F. C. Rind, "Redundant neural vision systems - competing for collision recognition roles," IEEE Transactions on Autonomous Mental Development, vol. 5, no. 2, pp. 173-186, 2013.
[11] H. G. Meyer, O. J. N. Bertrand, J. Paskarbeit, J. P. Lindemann, A. Schneider, and M. Egelhaaf, A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot, 2016, pp. 167-178.
[12] J. Kodjabachian and J. A. Meyer, "Evolution and development of neural controllers for locomotion, gradient-following, and obstacle-avoidance in artificial insects," IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 796-812, 1998.
[13] B.-Q. Huang, G.-Y. Cao, and M. Guo, "Reinforcement learning neural network to the problem of autonomous mobile robot obstacle avoidance," in International Conference on Machine Learning and Cybernetics, vol. 1, 2005, pp. 85-89.
[14] K. Kelchtermans and T. Tuytelaars, "How hard is it to cross the room? - Training (recurrent) neural networks to steer a UAV," CoRR, vol. abs/1702.07600, 2017.
[15] S. Steingrube, M. Timme, F. Wörgötter, and P. Manoonpong, "Self-organized adaptation of a simple neural circuit enables complex robot behaviour," Nature Physics, vol. 6, no. 3, pp. 224-230, 2010.
[16] N. Porcino, "Hexapod gait control by a neural network," in IEEE International Joint Conference on Neural Networks (IJCNN), 1990, pp. 189-194.
[17] D. M. Wilson, "Insect walking," Annual Review of Entomology, vol. 11, no. 1, pp. 103-122, 1966.
[18] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[19] I. Sutskever, "Training recurrent neural networks," Ph.D. dissertation, University of Toronto, 2013.
[20] E. Olson, "AprilTag: A robust and flexible visual fiducial system," in IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 3400-3407.