<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Enhancing Neural Based Obstacle Avoidance with CPG Controlled Hexapod Walking Robot</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Petr</forename><surname>Čížek</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Electrical Engineering</orgName>
								<orgName type="institution">Czech Technical University in Prague</orgName>
								<address>
									<addrLine>Technická 2</addrLine>
									<postCode>166 27</postCode>
									<settlement>Prague</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jan</forename><surname>Faigl</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Electrical Engineering</orgName>
								<orgName type="institution">Czech Technical University in Prague</orgName>
								<address>
									<addrLine>Technická 2</addrLine>
									<postCode>166 27</postCode>
									<settlement>Prague</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jan</forename><surname>Bayer</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Electrical Engineering</orgName>
								<orgName type="institution">Czech Technical University in Prague</orgName>
								<address>
									<addrLine>Technická 2</addrLine>
									<postCode>166 27</postCode>
									<settlement>Prague</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Enhancing Neural Based Obstacle Avoidance with CPG Controlled Hexapod Walking Robot</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">787FC8E29BFBDE2DF27E4164A00E74CD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T10:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Avoiding collisions with obstacles and intercepting objects based on visual perception is a vital survival ability of any animal. In this work, we propose an extension of the biologically based collision avoidance approach to the detection of intercepting objects using the Lobula Giant Movement Detector (LGMD) connected directly to the locomotion control unit based on the Central Pattern Generator (CPG) of a hexapod walking robot. The proposed extension uses a Recurrent Neural Network (RNN) to map the output of the LGMD to the input of the CPG to enhance the collision avoiding behavior of the robot in cluttered environments. The presented results of the experimental verification of the proposed system with a real mobile hexapod crawling robot support the feasibility of the presented approach in collision avoidance scenarios.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Avoiding collisions with obstacles and intercepting objects is a vital survival ability for any animal. For a mobile robot moving from one place to another, contact with a fixed or moving object may have fatal consequences. Therefore, it is desirable to study the problem of collision avoidance and derive new and computationally efficient ways to trigger collision avoiding behavior.</p><p>In this work, we address the problem of biologically inspired motion control and collision avoidance with a legged walking robot equipped only with a forward-looking camera. We propose to utilize a Central Pattern Generator (CPG) approach <ref type="bibr" target="#b0">[1]</ref> for robot locomotion control and the vision-based collision avoidance approach using the Lobula Giant Movement Detector (LGMD) <ref type="bibr" target="#b1">[2]</ref>, which are both combined in the proposed controller based on a Recurrent Neural Network (RNN).</p><p>The proposed solution builds on our previous results published in <ref type="bibr" target="#b2">[3]</ref>, in which only a simple mapping function is utilized for transforming the output of the LGMD neural network directly to the locomotion control parameters of the CPG controller <ref type="bibr" target="#b0">[1]</ref>. Such a solution works well in laboratory conditions but is, unfortunately, error-prone in cluttered environments, mainly because of the way the LGMD neural network processes the visual data and because of the simplicity of the mapping function. The LGMD reacts to the lateral movement of vertical edges in the image regardless of their depth in the scene. In a cluttered environment, the output is therefore heavily influenced by stimuli from distinctive edges far from the robot. Moreover, the mapping function translates the output of the LGMD directly to the locomotion control parameters. 
Hence, the reaction of the robot is based solely on the current observation of the environment, which results in situations where the robot hits, from the side, an obstacle that it has successfully avoided earlier but that is already out of the field of view. Therefore, we propose to enhance the collision avoiding behavior of the robot by incorporating a memory mechanism by means of the RNN.</p><p>The overall structure of the proposed system is depicted in Fig. <ref type="figure" target="#fig_0">1</ref>. In contrast to the previous approaches, we emphasize a practical verification of the proposed method on a real walking robot, as the specific nature of legged locomotion makes the problem more difficult in comparison to wheeled <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> or flying <ref type="bibr" target="#b5">[6]</ref> robots. The main difference originates in the abrupt motions of the camera induced by the locomotion of the robot, which negatively influence the output of the collision avoiding visual pathway.</p><p>The remainder of the paper is organized as follows. The most related approaches to neural-based collision avoidance using vision are summarized in Section 2. Section 3 describes the individual building blocks of the proposed control architecture. Evaluation results and their discussion are detailed in Section 4. Concluding remarks and suggestions for future work are presented in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>The problem of collision avoidance has been studied ever since mobile robots appeared; hence, there are many approaches using different sensors and different processing techniques. In this work, we focus on vision-based neural obstacle avoidance methods, and the most related approaches are described in the rest of this section.</p><p>Direct mapping of visual perception to robot control commands using a feed-forward neural network has already been utilized in several methods. The problem of road following using neural networks, which dates back to the 1990s, can be considered a special case of the collision avoidance problem <ref type="bibr" target="#b6">[7]</ref>. However, such approaches cannot be considered biologically-based because of the artificial nature of the examined roads.</p><p>In <ref type="bibr" target="#b1">[2]</ref>, the Lobula Giant Movement Detector (LGMD) neural network has been introduced in robotics to imitate the way insects avoid collisions with an intercepting object <ref type="bibr" target="#b7">[8]</ref>. The approach has been widely adopted for its simplicity and relatively good performance with wheeled <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref> and flying <ref type="bibr" target="#b5">[6]</ref> robots. However, these approaches experimentally verify the collision avoidance with a real robot either in a closed arena where it is necessary to avoid collisions with walls or in a scenario where a static robot is supposed to detect an intercepting object. Moreover, the walls of the arena or the obstacles were homogeneously distributed or coated with a high-contrast artificial pattern, which significantly improves the behavior of the LGMD. 
In our approach, we focus on the deployment of the LGMD in a heavily cluttered unstructured environment, and thus evaluate the approach in more realistic scenarios.</p><p>An experimental study on the prediction of evasive steering maneuvers in urban traffic scenarios has recently been published in <ref type="bibr" target="#b8">[9]</ref>. In this approach, the performance of the LGMD is improved by introducing so-called "danger zones", which are the image areas most likely to indicate an incoming threat.</p><p>Another approach, presented in <ref type="bibr" target="#b9">[10]</ref>, compares the performance of the LGMD and Directional Selective Neurons (DSN) in the ability to avoid collisions. Both are found in the visual pathways of insects. The reported results show that the LGMD can be trained using evolutionary techniques to outperform the DSN in the collision recognition ability.</p><p>Regarding our target scenario, the approach most relevant to the proposed solution has been presented in <ref type="bibr" target="#b10">[11]</ref>. The authors use a biologically-inspired collision avoidance approach based on the extraction of nearness information from image depth estimation to detect obstacles and avoid collisions. The whole system allows a simulated hexapod robot to navigate a cluttered environment while actively avoiding obstacles. However, the approach uses a direct feed-forward approach for the motion control, and it has not been deployed in a real-world scenario.</p><p>The herein proposed control mechanism utilizes a Recurrent Neural Network (RNN), which has already been utilized in collision avoiding scenarios using odor sensors on whiskers <ref type="bibr" target="#b11">[12]</ref> or a set of infrared rangefinders <ref type="bibr" target="#b12">[13]</ref>. 
A vision-based collision avoidance for a UAV based on the RNN has recently been presented in <ref type="bibr" target="#b13">[14]</ref>, which trains the UAV to avoid collisions during autonomous indoor flight. This work served as the inspiration for our neural-based autonomous agent.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Proposed Solution</head><p>Three basic functional parts can be identified within the proposed collision avoiding system. They are depicted in three different colors in Fig. <ref type="figure" target="#fig_0">1</ref>. The first part is the locomotion control unit based on the chaotic oscillator <ref type="bibr" target="#b14">[15]</ref>, depicted in orange, whose purpose is to control the walking pattern and to solve the kinematics. It allows changing the type of the motion gait based on the pre-set parameter p and steering the robot motion according to the input signal turn defining the turning radius. The second part is the visual pathway, depicted in green, which utilizes the LGMD neural network for avoiding approaching objects and triggering escape behavior. The main idea of the proposed approach is to use the LGMD outputs for setting the hexapod control parameters, in particular, the turning radius turn of the robot. In this work, we propose to use the RNN-based approach for the translation of the LGMD output to the turn parameter, which is the last part, depicted in yellow. Each part is discussed in more detail in the following sections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">CPG-Based Locomotion Control</head><p>The locomotion control is based on our previous work presented in <ref type="bibr" target="#b0">[1]</ref>. It utilizes only one chaotic CPG <ref type="bibr" target="#b14">[15]</ref> consisting of two interconnected neurons with a control input computed solely based on the input period p. The CPG stabilizes a periodic orbit of period p from the chaotic oscillation, so the output is a discrete periodic signal. The period p ∈ {4, 6, 8, 12} directly determines the resulting walking pattern (motion gait): tripod, ripple, tetrapod, and wave, respectively <ref type="bibr" target="#b15">[16]</ref>.</p><p>Afterwards, the output of the chaotic oscillator is shaped and post-processed to obtain a signal usable by a trajectory generator and to determine the phase of the individual legs, i.e., whether a leg is swinging or supporting the body. Specifically, the output of the chaotic oscillator is thresholded and a triangle wave alternating between −1 and 1 is produced, where the upslope (swing phase) is constant and the downslope (support phase) depends on the period p. Based on the leg coordination rules <ref type="bibr" target="#b16">[17]</ref>, individual delays are applied to the triangular wave for each leg to produce the rhythmic pattern of each leg.</p><p>The result of the post-processing module is fed into a trajectory generator, which determines the position of the foot-tips according to the input signal along with the parameter turn, which is given by the RNN-based controller. The turn parameter is equal to the distance (in millimeters) from the robot center to the turning center on a line perpendicular to the heading of the robot connecting the default positions of the middle legs. Based on the turn parameter and the triangular wave, the trajectory generator uniquely determines the foot-tip positions of each leg on the constructed arcs, which are limited by the angle α. The value of α is computed from the distance of the furthest leg from the pivotal point established by turn and the maximum step size y max . The idea of the trajectory generator, with the turning point given by the turn parameter, is visualized in Fig. <ref type="figure">2</ref>. The output of the trajectory generator is transformed into the joint space using the inverse kinematics module and then performed by the robot actuators. Notice that the speed of the robot forward motion is determined by the period p, while the robot angular velocity is controlled by the turn parameter, which is adjusted by the RNN-based controller from the LGMD output.</p></div>
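The shaping step above can be sketched in a few lines of Python (our illustration only, not the authors' implementation; the unit-length swing phase and the tripod-like delay pattern are assumptions):

```python
def triangle_wave(t, period, swing=1.0):
    """Leg phase signal alternating between -1 and 1: a constant-length
    upslope (swing phase, assumed here to last one time unit) followed
    by a downslope (support phase) whose length grows with the CPG
    period p, as described in the text."""
    support = period - swing
    tau = t % period
    if tau < swing:                               # swing: rise -1 -> 1
        return -1.0 + 2.0 * tau / swing
    return 1.0 - 2.0 * (tau - swing) / support    # support: fall 1 -> -1

# Per-leg delays for a tripod-like pattern (p = 4): two groups of three
# legs half a period apart (illustrative values, not taken from [17]).
delays = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0]
t = 1.0
signals = [triangle_wave(t - d, period=4.0) for d in delays]
```

A leg swings while its signal rises and supports the body while it falls; the turn parameter then only reshapes the foot-tip arcs, not this timing.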
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">LGMD Neural Network</head><p>The LGMD <ref type="bibr" target="#b1">[2]</ref> is a neural network found in the visual pathways of insects, such as locusts <ref type="bibr" target="#b7">[8]</ref>, which responds selectively to objects approaching the animal on a collision course. It is composed of four groups of cells: Photoreceptive, Excitatory, Inhibitory, and Summation, arranged in three layers; and two individual cells: the Feed-forward inhibitory cell and the Lobula Giant Movement Detector cell. The structure of the network is visualized in Fig. <ref type="figure" target="#fig_1">3</ref>.</p><p>The Photoreceptive layer processes the sensory input from the camera. Its output is the difference between two successive grayscale camera frames and it is computed as</p><formula xml:id="formula_0">P f (x, y) = L f (x, y) − L f −1 (x, y),<label>(1)</label></formula><p>where L f is the current frame, L f −1 is the previous frame, and (x, y) are the pixel coordinates. In principle, the Photoreceptive layer implements a contrast enhancement and forms the input to the following two groups of neurons: the Inhibition layer and the Excitatory layer.</p><p>The response of the Inhibition layer is computed as</p><formula xml:id="formula_1">I f (x, y) = n ∑ i=−n n ∑ j=−n P f −1 (x + i, y + j)w I (i, j),<label>(2)</label></formula><formula xml:id="formula_2">(i ≠ j, if i = 0)</formula><p>with the matrix of local inhibition weights</p><formula xml:id="formula_3">w I =
0.06 0.12 0.25 0.12 0.06
0.12 0.06 0.12 0.06 0.12
0.25 0.12 0.00 0.12 0.25
0.12 0.06 0.12 0.06 0.12
0.06 0.12 0.25 0.12 0.06
.<label>(3)</label></formula><p>The Inhibition layer essentially smooths the Photoreceptive layer output values, filtering out those caused by noise or camera imperfections. 
The inhibition weights w I are selected experimentally with respect to the LGMD description in <ref type="bibr" target="#b1">[2]</ref>, which uses a 3×3 matrix of inhibition weights, but on an image with a much lower resolution.</p><p>The Excitatory layer is used to time delay the output of the Photoreceptive layer and it is calculated as</p><formula xml:id="formula_5">E(x, y) = P f (x, y).<label>(4)</label></formula><p>The response of the Summation layer is computed as</p><formula xml:id="formula_7">S f (x, y) = E(x, y) − I f (x, y) · W I ,<label>(5)</label></formula><p>where W I = 0.4 is the global inhibition weight. Let S f be a matrix in which each value exceeding the threshold T r is passed and any lower value is set to 0,</p><formula xml:id="formula_8">S f (x, y) = S f (x, y) if S f (x, y) ≥ T r ; 0 otherwise.<label>(6)</label></formula><p>Then, the excitation of the LGMD cell is computed as</p><formula xml:id="formula_9">U f = k ∑ x=1 l ∑ y=1 S f (x, y)<label>(7)</label></formula><p>and finally, the LGMD cell output is</p><formula xml:id="formula_10">u f = (1 + e −U f /n cell ) −1 ,<label>(8)</label></formula><p>where n cell is the total number of cells (the number of pixels). Note that the output u f lies in the interval [0.5, 1]. Typically, the LGMD neural network also contains the Feed-forward inhibitory cell, which is not utilized in the proposed scheme based on the results of the experimental evaluation. The purpose of the Feed-forward cell is to suppress the output of the LGMD cell in the case of fast camera movements. In our setup, two LGMD neural networks are utilized in parallel to distinguish the direction of the interception, and thus be able to steer the robot in the opposite direction to achieve the desired obstacle avoiding behavior. The input image from a single camera is split into left and right parts with an overlapping center part. 
Each of the LGMDs provides an output, which we denote u f left and u f right for the left and the right LGMD, respectively.</p></div>
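For clarity, Eqs. (1)-(8) can be condensed into a short NumPy sketch (our re-implementation for illustration; the threshold value T_r below is an assumed placeholder, as the paper does not state it):

```python
import numpy as np

# 5x5 local inhibition weights w_I from Eq. (3); the center cell is 0.
W_I = np.array([[0.06, 0.12, 0.25, 0.12, 0.06],
                [0.12, 0.06, 0.12, 0.06, 0.12],
                [0.25, 0.12, 0.00, 0.12, 0.25],
                [0.12, 0.06, 0.12, 0.06, 0.12],
                [0.06, 0.12, 0.25, 0.12, 0.06]])

def inhibit(P):
    """Eq. (2): convolve the delayed Photoreceptive output with w_I."""
    n = 2
    Pp = np.pad(P, n)
    I = np.zeros_like(P, dtype=float)
    H, W = P.shape
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            I += W_I[i + n, j + n] * Pp[n + i:n + i + H, n + j:n + j + W]
    return I

def lgmd_output(frames, W_inh=0.4, T_r=15.0):
    """Run the LGMD over a sequence of grayscale frames; T_r is an
    assumed threshold value."""
    u, P_prev = [], np.zeros_like(frames[0], dtype=float)
    for f in range(1, len(frames)):
        P = frames[f].astype(float) - frames[f - 1].astype(float)  # (1)
        I = inhibit(P_prev)                                        # (2)
        E = P                                                      # (4)
        S = E - I * W_inh                                          # (5)
        S = np.where(S >= T_r, S, 0.0)                             # (6)
        U = S.sum()                                                # (7)
        u.append(1.0 / (1.0 + np.exp(-U / S.size)))                # (8)
        P_prev = P
    return u
```

With no image change, the output stays at 0.5, the lower bound of u f; a suddenly appearing bright region pushes it towards 1.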
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">RNN-Based Controller</head><p>In our previous work <ref type="bibr" target="#b2">[3]</ref>, we utilized a direct mapping function between the LGMDs' output tuple and the turn parameter of the CPG. The particular mapping function was designed as</p><formula xml:id="formula_11">Φ(e) = 100/(2e) for |e| ≥ 0.2; 10000 · sgn(e) for |e| &lt; 0.2,<label>(9)</label></formula><p>where the error e is calculated as the difference of the LGMD outputs e = u f left − u f right . However, the direct mapping function failed in the collision avoidance in the cluttered environment. Therefore, we developed an RNN-based controller that takes the left and right LGMD outputs on its input and provides an estimate of the turn parameter on its output.</p><p>In the proposed controller, we utilized the Recurrent Neural Network (RNN) based on the Long Short Term Memory (LSTM) <ref type="bibr" target="#b17">[18]</ref> with two inputs, one hidden layer, and one output that estimates the error e, which is then used with the mapping function given by (<ref type="formula" target="#formula_11">9</ref>). The Backpropagation Through Time (BPTT) <ref type="bibr" target="#b18">[19]</ref> is utilized for the RNN training, which unrolls the network over time, resulting in a feed-forward neural network. As there are only two real-number inputs to the network, it is unnecessary to use sliding-window approaches to the learning, as it is possible to feed the data to the network in full length. The structure of the LSTM neural network is visualized in Fig. <ref type="figure" target="#fig_3">4</ref>.</p><p>The main idea is to connect the RNN directly to the outputs of the left and right LGMDs and let the neural network estimate the parameter e, which is then translated by <ref type="formula" target="#formula_11">(9)</ref> to the turn parameter of the CPG-based locomotion controller.</p></div>
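A direct transcription of the mapping (9) may help (a sketch under our reading of the extracted formula, i.e., 100/(2e) for the first branch; taking sgn(0) = +1 is our choice, as the paper does not specify it):

```python
import math

def phi(e):
    """Map the error e = u_left - u_right (in [-0.5, 0.5]) to the CPG
    turn parameter, i.e., the signed turning radius in millimeters.
    A strong stimulus (|e| >= 0.2) yields a tight turn; a weak one
    yields a 10 m radius, i.e., almost straight motion."""
    if abs(e) >= 0.2:
        return 100.0 / (2.0 * e)
    return 10000.0 * math.copysign(1.0, e)
```

For example, a fully one-sided stimulus e = 0.5 gives a 100 mm turning radius, while e = 0.1 keeps the robot nearly straight.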
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experimental Evaluation</head><p>The experimental verification of the proposed neural-based controller is focused on the ability of the hexapod walking robot to avoid collisions with the obstacles on its path. We emphasize the practical verification with a real walking robot to thoroughly test the proposed solution and provide insights on the achieved performance.</p><p>The experimental evaluation has been performed with the hexapod walking robot visualized in Fig. <ref type="figure" target="#fig_4">5a</ref>. The robot has six legs attached to the trunk that hosts the sensors. In particular, a Logitech C920 camera with a 78° field of view has been utilized to provide the LGMD with the visual input. The image data fed into the LGMD neural network has been subsampled to the resolution of 176×144 pixels and divided into two parts overlapping in 10% of the image area. The robot has operated in an arena surrounded by obstacles formed by tables, chairs, and boxes (see Fig. <ref type="figure" target="#fig_4">5b</ref>). The robot movement has been tracked by a visual localization system which tracks the AprilTag <ref type="bibr" target="#b19">[20]</ref> pattern attached to the robot, which allows capturing the real trajectory traversed by the robot. Typical images captured by the robot while traversing the arena are visualized in Fig. <ref type="figure" target="#fig_4">5c-g</ref>. As the LGMD reacts strongly to the lateral movement of vertical edges in the image, it is much harder to avoid obstacles in the cluttered environment where the edges are distributed non-homogeneously, in contrast to the experiments performed in <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b5">6]</ref>.</p></div>
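The splitting of the subsampled frame into two overlapping halves can be sketched as follows (our illustration; reading the 10% overlap as a fraction of the frame width is an assumption on our side):

```python
import numpy as np

def split_frame(img, overlap=0.10):
    """Split a (H, W) grayscale frame into left and right parts that
    share a center strip of about `overlap` * W columns."""
    h, w = img.shape
    ov = int(round(w * overlap))       # overlap width in pixels
    left = img[:, :(w + ov) // 2]      # left half plus half the overlap
    right = img[:, (w - ov) // 2:]     # right half plus half the overlap
    return left, right

frame = np.zeros((144, 176), dtype=np.uint8)   # subsampled resolution
left, right = split_frame(frame)
```

Each part is then fed to its own LGMD, yielding the pair (u f left, u f right) used by the controller.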
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">RNN Training Process</head><p>The LSTM neural network <ref type="bibr" target="#b17">[18]</ref> has been trained using the BPTT technique <ref type="bibr" target="#b18">[19]</ref>. The training process has been performed as follows. First, 10 sample trajectories have been collected by manually guiding the robot through the environment while avoiding the obstacles. The outputs of both LGMDs have been recorded and the parameter turn has been adjusted manually, from which the corresponding error parameter e has been computed. The sampled trajectories contain altogether 22530 sample points. Next, the neural network has been trained on these 10 trajectories in 1000 iterations.</p><p>The herein utilized RNN has 2 inputs, 16 hidden states, and 1 output. The 16 hidden states have been selected as a compromise between the complexity of the RNN and the behavior observed during the experimental verification. As one of the problems of the former solution is that the robot successfully initiates the obstacle avoidance but then hits the obstacle from the side, we selected 16 hidden states as a memory buffer providing sufficient capacity for the robot to traverse 0.4 m given its dimensions, speed, and camera frame rate.</p><p>The sigmoid function has been used as the activation function of the RNN</p><formula xml:id="formula_13">f (x) = 1/(1 + e −x ).<label>(10)</label></formula><p>As the LGMD outputs are in the range u f ∈ [0.5, 1] and the error function e ∈ [−0.5, 0.5], the RNN has been trained to estimate the value of e + 0.5, which is feasible for the sigmoid function with the range f (x) ∈ (0, 1).</p></div>
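The forward pass of such a controller can be sketched with a plain NumPy LSTM cell (2 inputs, 16 hidden states, 1 sigmoid output decoded back to e). The weights below are random placeholders, whereas in the paper they are learned with BPTT over the 10 recorded trajectories, so this shows the structure only, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMController:
    """RNN-based controller: (u_left, u_right) -> estimate of e."""

    def __init__(self, n_in=2, n_hidden=16):
        z = n_in + n_hidden
        # One weight matrix and bias per gate: input, forget, output,
        # and candidate (random placeholders, not trained weights).
        self.W = {g: rng.normal(0.0, 0.1, (n_hidden, z)) for g in "ifoc"}
        self.b = {g: np.zeros(n_hidden) for g in "ifoc"}
        self.w_out = rng.normal(0.0, 0.1, n_hidden)
        self.h = np.zeros(n_hidden)   # hidden state (the "memory")
        self.c = np.zeros(n_hidden)   # cell state

    def step(self, u_left, u_right):
        x = np.concatenate(([u_left, u_right], self.h))
        i = sigmoid(self.W["i"] @ x + self.b["i"])
        f = sigmoid(self.W["f"] @ x + self.b["f"])
        o = sigmoid(self.W["o"] @ x + self.b["o"])
        c_new = np.tanh(self.W["c"] @ x + self.b["c"])
        self.c = f * self.c + i * c_new
        self.h = o * np.tanh(self.c)
        y = sigmoid(self.w_out @ self.h)   # network output in (0, 1)
        return y - 0.5                     # decode back to e
```

Since the network is trained against e + 0.5, the sigmoid output is shifted back by 0.5 before it enters the mapping to the turn parameter.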
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Experimental Results</head><p>Altogether, 20 trials have been performed in the laboratory arena to verify the ability of the robot to avoid collisions. The robot has been directed to intercept different obstacles and its behavior has been observed. The algorithm failed in only 3 trials, while the previous approach based on the direct control proposed in <ref type="bibr" target="#b2">[3]</ref> is unable to operate in such a heavily cluttered environment at all. The first failed trial involved a direct collision with a low-textured wooden barrier (see Fig. <ref type="figure" target="#fig_4">5d</ref>), where the LGMDs failed to detect the approaching object. The second and third failures fall into the category of sideways interception, where the robot successfully starts to avoid the obstacle but later hits it from the side. Fig. <ref type="figure">6</ref> shows three typical trajectories crawled by the hexapod robot in the laboratory arena. Each trajectory is overlaid with perpendicular arrows that characterize the direction and magnitude of the error e used for the robot steering, which corresponds to the direction in which the neural-based controller senses an obstacle. Besides, the corresponding plot of the LGMD outputs and the comparison of the control output provided by the proposed neural-based controller e rnn and the direct control method e direct are visualized in Fig. <ref type="figure">7</ref>.</p><p>Further, we let the robot continuously crawl the area and avoid obstacles. The robot crawled a distance of approx. 140 m while colliding only 8 times.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Discussion</head><p>The presented results indicate that the proposed neural-based locomotion controller with the collision avoidance feedback provided by the LGMD neural network and the RNN-based controller is feasible. Moreover, the utilization of the RNN considerably improves the collision avoiding behavior in comparison to the direct control mechanism presented in <ref type="bibr" target="#b2">[3]</ref>. The difference between the control principles can be best observed in Fig. <ref type="figure">7a</ref>. It can be seen that the RNN filters out oscillations in the error e that would, in the case of the direct control, prevent the robot from avoiding the collision.</p><p>On the other hand, it is not particularly clear what the RNN-based controller is reacting to, as the dependency of the output on the distance to the closest obstacle has not been confirmed. This can be observed in Fig. <ref type="figure">6c</ref> and the corresponding plot of the error function in Fig. <ref type="figure">7c</ref>, where the controller starts to oscillate after successfully avoiding the first obstacle. Other experimental trials have shown that these oscillations do not affect the collision avoiding behavior; however, it is unclear how and why they are produced by the neural controller.</p><p>The results indicate that the RNN calculates a weighted average of the LGMD outputs over a short period. However, further analysis of the behavior of the controller is necessary to reliably evaluate its properties.</p><p>Last but not least, the proposed controller performs only collision avoiding behavior and does not guide the robot to any particular goal. Thus, we consider extending the proposed method to incorporate higher-level goal following into the architecture of the neural-based controller as future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this paper, we propose an extension of the biologically based collision avoidance approach with a Recurrent Neural Network to enhance the collision avoiding behavior of a hexapod walking robot. The proposed extension allows the robot to operate in heavily cluttered environments. The herein presented experimental results indicate the feasibility of the controller, which failed to avoid a collision in only 3 out of 20 performed trials. The experimental results raised questions about the cause of the observed oscillations that deserve future investigation. Besides, we aim to improve the proposed biologically-based architecture to follow a specific target location, and thus develop biologically inspired autonomous navigation. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Overview of the proposed control system structure. Different colors discriminate the individual functional parts of the architecture.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Structure of the LGMD neural network: Photoreceptive layer, Inhibitory/Excitatory layer, Summation layer, and LGMD cell</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: LSTM recurrent neural network model</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: (a) The hexapod walking robot, (b) the laboratory test environment, and (c-g) typical images captured by the robot</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :Figure 7 :</head><label>67</label><figDesc>Figure 6: Collision avoiding trajectories for the experiments t 1 , t 4 , and t 5</figDesc><graphic coords="6,65.45,101.26,100.78,75.59" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgments -This work was supported by the Czech Science Foundation (GA ČR) under research project No. 15-09600Y. The support of the Grant Agency of the CTU in Prague under grant No. SGS16/235/OHK3/3T/13 to Petr Čížek is also gratefully acknowledged.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On chaotic oscillator-based central pattern generator for motion control of hexapod walking robot</title>
		<author>
			<persName><forename type="first">P</forename><surname>Milička</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Čížek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Faigl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Informačné Technológie -Aplikácie a Teória (ITAT), CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1649</biblScope>
			<biblScope unit="page" from="131" to="137" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Collision avoidance using a model of the locust LGMD neuron</title>
		<author>
			<persName><forename type="first">M</forename><surname>Blanchard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Rind</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F M J</forename><surname>Verschure</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems (RAS)</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="17" to="38" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Neural based obstacle avoidance with CPG controlled hexapod walking robot</title>
		<author>
			<persName><forename type="first">P</forename><surname>Čížek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Milička</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Faigl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="650" to="656" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">C</forename><surname>Rind</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="705" to="716" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Bio-inspired collision detector with enhanced selectivity for ground robotic vision system</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">British Machine Vision Conference</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A biologically based flight control system for a blimp-based UAV</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Badia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F M J</forename><surname>Verschure</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="3053" to="3059" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Neural network perception for mobile robot guidance</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Pomerleau</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>Springer Science and Business Media</publisher>
			<biblScope unit="volume">239</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The central nervous control of flight in a locust</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Wilson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Experimental Biology</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">47</biblScope>
			<biblScope unit="page" from="471" to="490" />
			<date type="published" when="1961">1961</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Simplified bionic solutions: a simple bio-inspired vehicle collision detection system</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hartbauer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bioinspiration and Biomimetics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">26007</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Redundant neural vision systems - competing for collision recognition roles</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">C</forename><surname>Rind</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Autonomous Mental Development</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="173" to="186" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">G</forename><surname>Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">J N</forename><surname>Bertrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Paskarbeit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Lindemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Egelhaaf</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="167" to="178" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Evolution and development of neural controllers for locomotion, gradient-following, and obstacle-avoidance in artificial insects</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kodjabachian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Meyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="796" to="812" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Reinforcement learning neural network to the problem of autonomous mobile robot obstacle avoidance</title>
		<author>
			<persName><forename type="first">B.-Q</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G.-Y</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Guo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning and Cybernetics</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="85" to="89" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">How hard is it to cross the room? - Training (recurrent) neural networks to steer a UAV</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kelchtermans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tuytelaars</surname></persName>
		</author>
		<idno>. abs/1702.07600</idno>
	</analytic>
	<monogr>
		<title level="j">CoRR</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Self-organized adaptation of a simple neural circuit enables complex robot behaviour</title>
		<author>
			<persName><forename type="first">S</forename><surname>Steingrube</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Timme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wörgötter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Manoonpong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature Physics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="224" to="230" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Hexapod gait control by a neural network</title>
		<author>
			<persName><forename type="first">N</forename><surname>Porcino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="189" to="194" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Insect walking</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Wilson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annual Review of Entomology</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="103" to="122" />
			<date type="published" when="1966">1966</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Long short-term memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Training recurrent neural networks</title>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>University of Toronto</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. dissertation</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">AprilTag: A Robust and Flexible Visual Fiducial System</title>
		<author>
			<persName><forename type="first">E</forename><surname>Olson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="3400" to="3407" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
