<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Localization with a low-cost robot</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Stanislav</forename><surname>Slušný</surname></persName>
							<email>slusny@cs.cas.cz</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<orgName type="institution">Academy of Sciences</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roman</forename><surname>Neruda</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<orgName type="institution">Academy of Sciences</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Petra</forename><surname>Vidnerová</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<orgName type="institution">Academy of Sciences</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Localization with a low-cost robot</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6C45D72DABFD2736F7DD2F3F72A6A66E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T04:48+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The robot localization problem is a fundamental and well-studied problem in robotics research. Algorithms that estimate the pose on a map are usually based on Kalman or particle filters. These algorithms are able to cope with errors that arise from the inaccuracy of the robot's sensors and effectors, and their performance depends heavily on the quality of that equipment. This work shows the performance of a localization algorithm based on a particle filter running on the small, low-cost E-puck robot. Information from a VGA camera and eight infrared sensors is used to correct the estimate of the robot's pose.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The robot localization problem is a fundamental and well-studied problem in robotics research. Several algorithms are used to estimate the pose on a known map and to cope with errors that arise from the inaccuracy of the robot's sensors and effectors. Their performance depends heavily on the quality of the robot's equipment: the more precise (and usually more expensive) the sensors, the better the results of the localization procedure.</p><p>This work deals with a localization algorithm based on a particle filter running on the small, low-cost E-puck robot. Information from a cheap VGA camera and eight infrared sensors is used to correct the estimate of the robot's pose. To achieve better results, several landmarks are placed in the environment. We assume that the robot knows the map of the environment in advance (the distribution of obstacles and walls and the positions of the landmarks). We do not consider the more difficult simultaneous localization and mapping (SLAM) problem, in which the robot neither knows its own position in advance nor has a map of the environment available.</p><p>E-puck is a robot widely used for scientific and educational purposes: it is open-source and low-cost. Despite its cheapness and limited sensor system, localization can be successfully implemented, as will be shown in this article. We are not aware of any published results of localization algorithms with the E-puck robot. A survey of localization methods can be found in <ref type="bibr" target="#b0">[1]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Introducing the E-puck robot</head><p>E-puck ([2, 3], Figure <ref type="figure" target="#fig_0">1</ref>) is a mobile robot with a diameter of 70 mm and a weight of 50 g. The robot is supported by two lateral wheels that can rotate in both directions and by two rigid pivots in the front and in the back. The sensory system employs eight "active infrared light" sensors distributed around the body, six on one side and two on the other. In "passive mode", they measure the amount of infrared light in the environment, which is roughly proportional to the amount of visible light. In "active mode", the sensors emit a ray of infrared light and measure the amount of reflected light. The closer they are to a surface (the e-puck sensors can detect a white paper at a maximum distance of approximately 8 cm), the higher the amount of infrared light measured. Unfortunately, because of their imprecision and response characteristics (see Figure <ref type="figure">2</ref>), they can be used as bumpers only: they provide high resolution only within a few millimeters, and they are very sensitive to the obstacle surface as well. Besides the infrared sensors, the robot is equipped with a low-cost VGA camera; the camera and image processing are described in Section 4.</p><p>Two stepper motors drive the movement of the robot. A stepper motor is an electromechanical device that converts electrical pulses into discrete mechanical movements. It divides a full rotation into 1000 steps, and its maximum speed corresponds to about one rotation per second.</p></div>
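The step counts above can be turned into wheel travel directly. A minimal sketch based on the figures quoted in the text and Table 1: 1000 steps per full wheel rotation and a maximum speed of 12.8 cm/s reached at 1000 steps/s, which implies about 0.0128 cm of travel per step. The constant and function names are ours, not part of any e-puck API.

```python
# Converting e-puck encoder steps to wheel travel (illustrative sketch).
# 12.8 cm/s at 1000 steps/s implies ~0.0128 cm of travel per step.

CM_PER_STEP = 12.8 / 1000.0  # cm of wheel travel per encoder step

def steps_to_cm(steps):
    """Wheel displacement (cm) for a signed number of encoder steps."""
    return steps * CM_PER_STEP
```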
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Dead reckoning</head><p>Dead reckoning (<ref type="bibr" target="#b2">[4]</ref>, derived originally from "deduced reckoning") is the process of estimating the robot's current position from a previously determined position. For shorter trajectories, the position can be estimated using the shaft encoders of the precise stepper motors.</p><p>E-puck is equipped with a differential drive (Figure <ref type="figure">3</ref>), the simplest way to drive a robot. For a differential-drive robot, the position can be estimated from the difference in the encoder values ∆s_R and ∆s_L. By estimating the position of the robot we mean the computation of the tuple (x, y, θ) as a function of the previous position (x_OLD, y_OLD, θ_OLD) and the encoder values ∆s_R and ∆s_L, where L is the distance between the wheels:</p><formula xml:id="formula_1">(x, y, θ) = (x_OLD, y_OLD, θ_OLD) + (∆x, ∆y, ∆θ) (1)
∆θ = (∆s_R − ∆s_L) / L (2)
∆s = (∆s_R + ∆s_L) / 2 (3)</formula><formula xml:id="formula_2">∆x = ∆s · cos(θ + ∆θ/2) <label>(4)</label>
∆y = ∆s · sin(θ + ∆θ/2) <label>(5)</label></formula><p>The major drawback of this procedure is error accumulation. At each step (each time an encoder measurement is taken), the position update involves some error. This error accumulates over time and therefore renders accurate tracking over large distances impossible (see Figure <ref type="figure" target="#fig_2">4</ref>). Tiny differences in wheel diameter result in substantial errors after a few meters if they are not properly taken into account.</p></div>
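The dead-reckoning update above can be sketched as one function, a direct transcription of equations (1)-(5). The default wheelbase of 5.3 cm is the tire distance from Table 1; the function name is ours.

```python
# One differential-drive dead-reckoning step, equations (1)-(5).
import math

def odometry_update(x, y, theta, ds_r, ds_l, wheelbase=5.3):
    """Return the new pose (x, y, theta) after wheel displacements
    ds_r (right) and ds_l (left); lengths in cm, angles in rad."""
    dtheta = (ds_r - ds_l) / wheelbase        # (2)
    ds = (ds_r + ds_l) / 2.0                  # (3)
    dx = ds * math.cos(theta + dtheta / 2.0)  # (4)
    dy = ds * math.sin(theta + dtheta / 2.0)  # (5)
    return x + dx, y + dy, theta + dtheta     # (1)
```

Driving both wheels 10 cm forward from the origin yields (10, 0, 0), while opposite wheel displacements change only θ; the error accumulation of Figure 4 is this function applied repeatedly to noisy encoder values.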
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Image processing</head><p>The robot has a low-cost VGA camera with a resolution of 480x640 pixels. Unfortunately, the Bluetooth connection supports the transmission of only 2028 colored pixels. A resolution of 52x39 pixels therefore makes the best use of the Bluetooth connection while keeping a 4:3 ratio, and this is the resolution we used in our experiments (see Figure <ref type="figure" target="#fig_2">4</ref>). Another drawback of the camera is that it is very sensitive to light conditions.</p><p>Despite these limitations, the camera can be used to detect objects or landmarks. However, the information about the distance to a landmark extracted from the camera is not reliable (due to noise), and we do not use it in the following section.</p><p>Landmarks are objects of rectangular shape, 5x5 cm in size, in three different colors: red, green and blue. We implemented an image processing subsystem that detects the relative position of a landmark with respect to the robot. It involves the following steps: a Gaussian filter is used to reduce camera noise ([5]); color segmentation separates the red, blue and green colors ([6]); blob detection finds the position and size of the objects in the image ([7]); and object detection removes objects of non-rectangular shape from the image. The output of the image processing is the relative position and color of the detected landmarks (for example: "I see a red landmark at an angle of 15 degrees").</p></div>
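The segmentation-to-bearing part of the pipeline can be sketched in a few lines. This is a simplified illustration, not the authors' code: it assumes a 52-pixel-wide image row and a horizontal field of view of 0.7 rad (the β quoted for Fig. 5), skips the Gaussian smoothing and rectangle check, and uses a naive dominant-channel classifier; all names are ours.

```python
# Sketch: classify pixels by color, find blobs in one row, convert the
# blob centroid column to a bearing angle (0 = straight ahead).
FOV_RAD = 0.7    # assumed horizontal field of view
IMG_WIDTH = 52   # transmitted image width in pixels

def classify(pixel):
    """Label a pixel red/green/blue if one channel clearly dominates."""
    r, g, b = pixel
    if r > 2 * max(g, b):
        return "red"
    if g > 2 * max(r, b):
        return "green"
    if b > 2 * max(r, g):
        return "blue"
    return None

def landmark_bearings(row):
    """Scan one image row; return a (color, bearing in rad) per blob.
    Positive bearings are to the right of the image center."""
    blobs, current, start = [], None, 0
    labels = [classify(p) for p in row] + [None]  # sentinel flushes last blob
    for i, lab in enumerate(labels):
        if lab != current:
            if current is not None and i - start >= 2:  # ignore 1-px noise
                center = (start + i - 1) / 2.0
                bearing = (center / (IMG_WIDTH - 1) - 0.5) * FOV_RAD
                blobs.append((current, bearing))
            current, start = lab, i
    return blobs
```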
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Particle filter localization</head><p>As shown previously (Figure <ref type="figure" target="#fig_2">4</ref>), pose estimation based on dead reckoning is possible for short distances only. Longer trajectories require more clever methods, based either on the Kalman filter <ref type="bibr" target="#b5">[8]</ref> (or one of its variants) or on the particle filter (PF) <ref type="bibr" target="#b6">[9]</ref>.</p><p>The PF consists of three basic steps: state prediction, observation integration and resampling. It works with the quantity p(x_t), the probability that the robot is located at position x_t at time t. In the PF, the probability distribution is represented by a set of particles. Such a representation is approximate, but being nonparametric it can represent a much broader space of distributions than, for example, Gaussians.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Each particle x[m]_t is a hypothesis of where the robot may be at time t. We used M particles in our experiment. The input of the algorithm is the particle set X_t, the most recent control command u_t and the most recent sensor measurements z_t.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">State prediction based on odometry.</head><p>The first step is the computation of a temporary particle set X from X_t. It is created by applying the odometry model</p><formula xml:id="formula_3">p(x_t | u_t, x_{t−1})</formula><p>to each particle x[m]_t from X_t.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Correction step -Observation integration</head><p>The next step is the computation of the importance factor w[m]_t. It is the probability of the measurement z_t under the particle x[m]_t, given by</p><formula xml:id="formula_4">w[m]_t = p(z_t | x[m]_t).</formula><p>Two types of measurements were considered:</p><p>-Measurements coming from the distance sensors. The distance sensors (one averaged value each for the front, left, right and back directions) were used as bumpers only. In case of any contradiction between the real state and a hypothesis, the importance factor was decreased correspondingly. -Measurements obtained from image processing. The output of the image processing was compared with the expected positions of the landmarks (both the colors and the relative angles of the landmarks were checked). In case of any contradiction the importance factor was decreased: the bigger the mismatch, the smaller the importance factor assigned to the hypothesis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Re-sampling</head><p>The last step incorporates so-called importance sampling. The algorithm draws M particles with replacement from the temporary set X and creates the new particle set X_{t+1}. The probability of drawing each particle is given by its importance weight. In AI this principle is called survival of the fittest (<ref type="bibr" target="#b7">[10]</ref>).</p></div>
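The three steps above can be sketched as one loop. For brevity this illustrative example uses a 1D corridor with a single landmark at a known position; the paper's filter is the same loop over (x, y, θ) hypotheses with camera and IR measurements. The noise levels and world geometry are our assumptions, not taken from the paper.

```python
# Compact particle filter sketch: predict, weight, resample.
import math
import random

random.seed(0)

LANDMARK = 75.0  # known landmark position (cm) on a 100 cm corridor

def pf_step(particles, control, measured_dist):
    # 1. state prediction: apply the noisy odometry model to each particle
    predicted = [p + control + random.gauss(0.0, 1.0) for p in particles]
    # 2. observation integration: weight each hypothesis by how well its
    #    expected landmark distance matches the actual measurement
    weights = [math.exp(-0.5 * ((LANDMARK - p) - measured_dist) ** 2 / 4.0)
               for p in predicted]
    # 3. re-sampling: draw with replacement, probability ~ importance weight
    return random.choices(predicted, weights=weights, k=len(particles))

# Robot truly starts at 20 cm, moves 5 cm per step, and senses a noisy
# distance to the landmark; 2000 particles start uniformly distributed.
particles = [random.uniform(0.0, 100.0) for _ in range(2000)]
true_x = 20.0
for _ in range(10):
    true_x += 5.0
    z = (LANDMARK - true_x) + random.gauss(0.0, 1.0)
    particles = pf_step(particles, 5.0, z)

estimate = sum(particles) / len(particles)  # concentrates near true_x = 70
```

After a few iterations the initially uniform particle cloud collapses around the true position, mirroring the convergence behavior reported in the experiments.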
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Experiments</head><p>Experiments were carried out in an arena of size 1x0.75 meters. Three landmarks (red, blue and green, one of each color) were placed in the arena, as shown in Figure <ref type="figure" target="#fig_7">8</ref>. The robot was controlled by commands sent from a computer, and sensor values were sent back to the computer over Bluetooth. Execution of each command took 64 milliseconds. The experiment started by putting the robot into the arena and randomly distributing 2000 particles. After several steps, the PF algorithm relocated the particles to the real location of the robot: the robot was able to localize itself. The convergence of the algorithm depends on whether the robot is moving near a wall or in the middle of the arena; the impact of the infrared sensors was obvious. The algorithms were verified both in a simulator (<ref type="bibr" target="#b8">[11]</ref>) and in reality. A video demonstration can be found at (<ref type="bibr" target="#b9">[12]</ref>).</p><p>The localization algorithm was able to cope with bigger arenas as well, up to a size of three meters, although we had to add more landmarks to simplify the localization process. The localization algorithm showed satisfactory performance, relocating the hypotheses near the real robot pose.</p><p>Localization and pose estimation open the gate towards more sophisticated robotics experiments. As we have shown, the localization process can be carried out even with a low-cost robot. Experiments were executed both in simulation and in a real environment.</p><p>A lot of work remains to be done. The experiments in this work considered a static environment only; the addition of another robot would make the problem much more difficult.</p><p>As mentioned already, there are certain areas in the environment where convergence of the localization algorithm is very fast: in corners or near walls. Sensor fusion is the process of combining sensory data from disparate sources such that the resulting information is in some sense better than would be possible if these sources were used individually. Here we deal with the fusion of the infrared sensors and the camera input.</p><p>As future work, we would like to implement path planning that takes the performance of the localization algorithm into account. A suggested path (generated by the path planning algorithm) should be both safe (the chance of getting lost should be small) and short. Multi-criteria path planning will be based on dynamic programming (<ref type="bibr" target="#b10">[13]</ref>); the idea is to learn the areas with a high probability of getting lost from experience.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. The miniature e-puck robot has eight infrared sensors and two motors.</figDesc><graphic coords="1,308.31,189.84,113.30,112.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .Fig. 3 .</head><label>23</label><figDesc>Fig. 2. Multiple measurements of the front sensor. The e-puck was placed in front of a wall at a given distance, and the average IR sensor value from 10 measurements was drawn into the graph.</figDesc><graphic coords="2,62.16,90.35,232.69,155.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Illustration of error accumulation. The robot was ordered to drive 10 squares with 30 cm sides. The odometry errors are caused mostly by the rotational movement.</figDesc><graphic coords="2,350.83,198.08,141.70,131.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. The physical parameters of the real camera (picture taken from [2]). The camera settings used in the experiments correspond to the parameters a = 6 cm, b = 4.5 cm, c = 5.5 cm, α = 0.47 rad, β = 0.7 rad.</figDesc><graphic coords="3,94.91,90.36,164.52,141.49" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. First step of the PF algorithm: the odometry model based on the movement u_{t−1} is applied to each position hypothesis x_{t−1}, and a new hypothesis x_t is sampled from the distribution p(x_t | x_{t−1}, u_{t−1}).</figDesc><graphic coords="3,345.24,90.21,149.52,113.23" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. Second step of the PF algorithm: each particle is assigned an importance factor corresponding to the probability of the observation z_t. If the image processing detects two landmarks in the actual camera image, particles 0 and 1 will be assigned a small weight.</figDesc><graphic coords="3,306.98,281.85,226.20,113.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 8 .</head><label>8</label><figDesc>Fig. 8. Placement of the red, green and blue landmarks in a rectangular arena of size 1x0.75 m.</figDesc><graphic coords="4,78.75,90.50,199.48,112.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Velocity parameters of the E-puck mobile robot.</figDesc><table><row><cell>Parameters</cell><cell>Value</cell></row><row><cell cols="2">Maximum translational velocity 12.8 cm / sec</cell></row><row><cell>Maximum rotational velocity</cell><cell>4.86 rad / sec</cell></row><row><cell cols="2">Stepper motor maximum speed ±1000 steps / sec</cell></row><row><cell>Distance between tires</cell><cell>5.3 cm</cell></row></table></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work was supported by GA ČR grant 201/08/1744, Institutional Research Plan AV0Z10300504 and by the Ministry of Education of Czech Republic under the project Center of Applied Cybernetics No. 1M684077004 (1M0567).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Thrun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Burgard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fox</surname></persName>
		</author>
		<title level="m">Probabilistic Robotics</title>
				<meeting><address><addrLine>Cambridge, MA</addrLine></address></meeting>
		<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m">E-puck online documentation</title>
		<ptr target="http://www.e-puck.org" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Behavior-Based Robotics</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Arkin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>The MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Computer Vision</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">G</forename><surname>Shapiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">C</forename><surname>Stockman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<imprint>
			<publisher>Prentice Hall</publisher>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Fast and Inexpensive Color Image Segmentation for Interactive Robots</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bruce</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Balch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Veloso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IROS-2000</title>
				<meeting>IROS-2000</meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="2061" to="2066" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A new approach to linear filtering and prediction problems</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Kalman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Trans. ASME, Journal of Basic Engineering</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="page" from="35" to="45" />
			<date type="published" when="1960">1960</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Simulation and the Monte Carlo Method</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">Y</forename><surname>Rubinstein</surname></persName>
		</author>
		<imprint>
			<publisher>John Wiley and Sons, Inc</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Stochastic simulation algorithms for dynamic probabilistic networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kanazawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Koller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th Annual Conference on Uncertainty in AI</title>
				<meeting>the 11th Annual Conference on Uncertainty in AI<address><addrLine>Montreal, Canada</addrLine></address></meeting>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="http://www.cyberbotics.com" />
		<title level="m">Webots simulator</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="http://www.cs.cas.cz/slusny" />
		<title level="m">Video demonstration</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Reinforcement Learning: An Introduction</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barto</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>The MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
