Localization With a Low-cost Robot. CEUR-WS Vol-584, https://ceur-ws.org/Vol-584/paper11.pdf
Localization with a low-cost robot*

                              Stanislav Slušný, Roman Neruda, and Petra Vidnerová

                         Institute of Computer Science, Academy of Sciences, Czech Republic
                                                 slusny@cs.cas.cz

Abstract. The robot localization problem is a fundamental and well studied problem in robotics research. The algorithms used to estimate the pose on a map are usually based on Kalman or particle filters. These algorithms are able to cope with errors that arise due to the inaccuracy of the robot's sensors and effectors, and the performance of a localization algorithm depends heavily on the quality of that equipment.
This work evaluates the performance of a particle filter localization algorithm on the small, low-cost E-puck robot. Information from a VGA camera and eight infrared sensors is used to correct the estimate of the robot's pose.
Fig. 1. The miniature e-puck robot has eight infrared sensors and two motors.
1     Introduction

The robot localization problem is a fundamental and well studied problem in robotics research. Several algorithms are used to estimate the pose on a known map and to cope with errors that arise due to the inaccuracy of the robot's sensors and effectors. Their performance depends heavily on the quality of the robot's equipment: the more precise (and usually more expensive) the sensors, the better the results of the localization procedure.

This work deals with a localization algorithm based on a particle filter, running on the small, low-cost E-puck robot. Information from a cheap VGA camera and eight infrared sensors is used to correct the estimate of the robot's pose. To achieve better results, several landmarks are put into the environment. We assume that the robot knows the map of the environment in advance (the distribution of obstacles and walls in the environment and the positions of the landmarks). We do not consider the more difficult simultaneous localization and mapping (SLAM) problem in this work (the case when the robot does not know its own position in advance and does not have a map of the environment available).

E-puck is a robot widely used for scientific and educational purposes - it is open-source and low-cost. Despite its cheapness and limited sensor system, localization can be successfully implemented, as will be shown in this article. We are not aware of any previously published results on localization algorithms with the E-puck robot. A survey of localization methods can be found in [1].

2     Introducing the E-puck robot

E-puck ([2, 3], Figure 1) is a mobile robot with a diameter of 70 mm and a weight of 50 g. The robot is supported by two lateral wheels that can rotate in both directions and by two rigid pivots in the front and in the back. The sensory system employs eight "active infrared light" sensors distributed around the body, six on one side and two on the other side. In "passive mode" they measure the amount of infrared light in the environment, which is roughly proportional to the amount of visible light. In "active mode" these sensors emit a ray of infrared light and measure the amount of reflected light. The closer they are to a surface (the e-puck sensors can detect a sheet of white paper at a maximum distance of approximately 8 cm), the higher is the amount of infrared light measured. Unfortunately, because of their imprecision and characteristics (see Figure 2), they can be used as bumpers only: as can be seen, they provide high resolution only within a few millimeters, and they are very sensitive to the obstacle surface as well. Besides the infrared sensors, the robot is equipped with a low-cost VGA camera. The camera and image processing will be described in Section 4.

Two stepper motors drive the movement of the robot. A stepper motor is an electromechanical device which converts electrical pulses into discrete mechanical movements. It can divide a full rotation into 1000 steps, and the maximum speed corresponds to about one rotation every second.

* This work was supported by GA ČR grant 201/08/1744, Institutional Research Plan AV0Z10300504 and by the Ministry of Education of the Czech Republic under the project Center of Applied Cybernetics No. 1M684077004 (1M0567).
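Since the infrared sensors are usable as bumpers only, obstacle detection reduces to thresholding the raw readings. The following sketch illustrates the idea; the sensor-reading interface and the threshold value are hypothetical placeholders, not part of any E-puck API described here.

```python
# Illustrative sketch only: the list of raw readings and IR_THRESHOLD
# are hypothetical placeholders, not actual E-puck API values.
IR_THRESHOLD = 300  # assumed raw value indicating a nearby surface

def obstacle_flags(ir_values, threshold=IR_THRESHOLD):
    """Reduce the eight raw IR readings to per-sensor bumper flags.

    In "active mode" a higher reading means more reflected infrared
    light, i.e. a closer obstacle, so a simple threshold suffices.
    """
    return [v > threshold for v in ir_values]

# Example: the two front-facing sensors see a close obstacle.
flags = obstacle_flags([850, 120, 90, 60, 55, 80, 140, 900])
```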

  Parameter                       Value
  Maximum translational velocity  12.8 cm/s
  Maximum rotational velocity     4.86 rad/s
  Stepper motor maximum speed     ±1000 steps/s
  Distance between the tires      5.3 cm

Table 1. Velocity parameters of the E-puck mobile robot.
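The entries of Table 1 are mutually consistent under the standard differential drive relations v = (vR + vL)/2 and ω = (vR − vL)/L, as a quick check shows (a sketch using only the numbers from Table 1; the function name is our own):

```python
# Parameters taken from Table 1.
V_WHEEL_MAX = 12.8   # maximum wheel (and translational) speed, cm/s
WHEEL_BASE = 5.3     # distance between the tires, cm

def diff_drive_velocity(v_right, v_left, wheel_base=WHEEL_BASE):
    """Translational and rotational velocity of a differential drive."""
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / wheel_base
    return v, omega

# Spinning in place at full speed: wheels run in opposite directions.
v, omega = diff_drive_velocity(V_WHEEL_MAX, -V_WHEEL_MAX)
# omega ≈ 4.83 rad/s, close to the 4.86 rad/s quoted in Table 1.
```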




Fig. 2. Multiple measurements of the front sensor. The E-puck was placed in front of a wall at a given distance, and the average IR sensor value from 10 measurements was drawn into the graph.




Fig. 3. Differential drive robot schema.

Fig. 4. Illustration of error accumulation. The robot was ordered to drive 10 squares of size 30 cm. The odometry errors are caused mostly by the rotation movement.

3    Dead reckoning

Dead reckoning ([4], derived originally from "deduced reckoning") is the process of estimating the robot's current position based upon a previously determined position. For shorter trajectories, the position can be estimated using shaft encoders and precise stepper motors.

The E-puck is equipped with a differential drive (Figure 3) - the simplest method of controlling a robot. For a differential drive robot, the position can be estimated by looking at the difference in the encoder values ∆sR and ∆sL. By estimating the position of the robot, we mean the computation of the tuple (x, y, θ) as a function of the previous position (xOLD, yOLD, θOLD) and the encoder values ∆sR and ∆sL:

    (x, y, θ)ᵀ = (xOLD, yOLD, θOLD)ᵀ + (∆x, ∆y, ∆θ)ᵀ        (1)

    ∆θ = (∆sR − ∆sL) / L                                    (2)

    ∆s = (∆sR + ∆sL) / 2                                    (3)

    ∆x = ∆s · cos(θ + ∆θ/2)                                 (4)

    ∆y = ∆s · sin(θ + ∆θ/2)                                 (5)

Here L denotes the distance between the tires (Table 1). The major drawback of this procedure is error accumulation: at each step (each time an encoder measurement is taken), the position update involves some error. This error accumulates over time and therefore renders accurate tracking over large distances impossible (see Figure 4). Tiny differences in wheel diameter will result in important errors after a few meters, if they are not properly taken into account.

4    Image processing

The robot has a low-cost VGA camera with a resolution of 480x640 pixels. Unfortunately, the Bluetooth connection supports the transmission of only 2028 colored pixels. For this reason, a resolution of 52x39 pixels maximizes the use of the Bluetooth connection and keeps a 4:3 ratio. This is the resolution we have used in our experiments (see Figure 5). Another drawback of the camera is that it is very sensitive to the light conditions.

Despite these limitations, the camera can be used to detect objects or landmarks. However, the information about the distance to a landmark extracted from the camera is not reliable (due to the noise), and we do not use it in the following section.
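The dead-reckoning update of equations (1)-(5) in Section 3 can be sketched as follows (a minimal illustration; the function and variable names are our own):

```python
import math

def odometry_update(x, y, theta, ds_right, ds_left, wheel_base=5.3):
    """One dead-reckoning step, implementing equations (1)-(5).

    ds_right/ds_left are the distances traveled by the two wheels
    (from the shaft encoders); wheel_base is L, the distance between
    the tires, in the same units (cm for the E-puck).
    """
    d_theta = (ds_right - ds_left) / wheel_base          # eq. (2)
    d_s = (ds_right + ds_left) / 2.0                     # eq. (3)
    d_x = d_s * math.cos(theta + d_theta / 2.0)          # eq. (4)
    d_y = d_s * math.sin(theta + d_theta / 2.0)          # eq. (5)
    return x + d_x, y + d_y, theta + d_theta             # eq. (1)

# Driving straight ahead: both wheels advance 10 cm, heading unchanged.
x, y, theta = odometry_update(0.0, 0.0, 0.0, 10.0, 10.0)
# → (10.0, 0.0, 0.0)
```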




Fig. 5. The physical parameters of the real camera (picture taken from [2]). The camera settings used in the experiments correspond to the parameters a = 6 cm, b = 4.5 cm, c = 5.5 cm, α = 0.47 rad, β = 0.7 rad.

Fig. 6. First step of the PF algorithm - the odometry model based on the movement u_{t−1} is applied to each position hypothesis x_{t−1}, and a new hypothesis x_t is sampled from the distribution p(x_t | x_{t−1}, u_{t−1}).

Fig. 7. Second step of the PF algorithm - each particle is assigned an importance factor corresponding to the probability of the observation z_t. If the image processing detects two landmarks in the actual camera image, particles 0 and 1 will be assigned small weights.

The landmarks are objects of rectangular shape, of size 5x5 cm, in three different colors - red, green and blue. We implemented an image processing subsystem that detects the relative position of a landmark with respect to the robot. It includes the following steps:

 – A Gaussian filter is used to reduce the camera noise ([5]).
 – Color segmentation into the red, blue and green colors ([6]).
 – Blob detection is used to detect the positions and sizes of the objects in the image ([7]).
 – Object detection is used to remove objects with a non-rectangular shape from the image.

The output of the image processing is the relative position and color of each detected landmark (for example: "I see a red landmark at an angle of 15 degrees").

5    Particle filter localization

As shown previously (Figure 4), pose estimation based on dead reckoning is possible for short distances only. For longer trajectories, more clever methods are needed. These methods are based either on the Kalman filter [8] (or some of its variants) or on the particle filter (PF) [9].

The PF consists of three basic steps - state prediction, observation integration and resampling. It works with the quantity p(x_t) - the probability that the robot is located at the position x_t at time t. In the case of the PF, the probability distribution is represented by a set of particles. Such a representation is approximate, but since it is nonparametric, it can represent a much broader space of distributions than, for example, Gaussians.

Each particle x_t^[m] is a hypothesis of where the robot can be at time t. We have used M particles in our experiment. The input of the algorithm is the set of particles X_t, the most recent control command u_t and the most recent sensor measurements z_t.

1. State prediction based on odometry.
   The first step is the computation of a temporary particle set X̄_t from X_t. It is created by applying the odometry model p(x_t | u_t, x_{t−1}) to each particle x_{t−1}^[m] from X_t.
2. Correction step - observation integration.
   The next step is the computation of the importance factor w_t^[m]. It is the probability of the measurement z_t under the particle x_t^[m], given by w_t^[m] = p(z_t | x_t^[m]).
   Two types of measurements were considered:
    – Measurements coming from the distance sensors. The distance sensors (one averaged value each for the front, left, right and back directions) were used as bumpers only. In case of any contradiction between the real state and a hypothesis, the importance factor was decreased correspondingly.
    – Measurements obtained from image processing.

      The output of the image processing was compared with the expected positions of the landmarks. In case of any contradiction (the colors and the relative angles of the landmarks were checked), the importance factor was decreased: the bigger the mismatch, the smaller the importance factor assigned to the hypothesis.
3. Re-sampling.
   The last step incorporates so-called importance sampling. The algorithm draws with replacement M particles from the temporary set X̄_t and creates the new particle set X_{t+1}. The probability of drawing each particle is given by its importance weight. In AI, this principle is called "survival of the fittest" ([10]).

6    Experiments

Experiments were carried out in an arena of size 1x0.75 meters. Three landmarks (red, blue and green, one of each color) were placed into the arena, as shown in Figure 8. The robot was controlled by commands sent from a computer, and the values from the sensors were sent back to the computer over Bluetooth. The execution of each command took 64 milliseconds.

Fig. 8. Placement of the red, green and blue landmarks in a rectangular arena of size 1x0.75 m.

The experiment started by putting the robot into the arena and randomly distributing 2000 particles. After several steps, the PF algorithm relocated the particles to the real location of the robot: the robot was able to localize itself. The convergence of the algorithm depends on whether the robot is moving near a wall or in the middle of the arena; there, the impact of the infrared sensors was obvious. The algorithms were verified both in the simulator ([11]) and in reality. A video demonstration can be found at [12].

The localization algorithm was able to cope even with bigger arenas, up to a size of three meters. However, we had to add more landmarks to simplify the localization process. The localization algorithm showed satisfactory performance, relocating the hypotheses near the real robot pose.

7    Conclusions

Localization and pose estimation is an opening gate towards more sophisticated robotics experiments. As we have shown, the localization process can be carried out even with a low-cost robot. Experiments were executed both in simulation and in a real environment.

A lot of work remains to be done. The experiments in this work considered a static environment only; the addition of another robot would make the problem much more difficult.

As we have mentioned already, there are certain areas of the environment where the convergence of the localization algorithm is very fast - in corners or near walls. Sensor fusion is the process of combining sensory data from disparate sources such that the resulting information is in some sense better than would be possible if these sources were used individually. We are dealing with the sensor fusion of the infrared sensors and the input from the camera.

As future work, we would like to implement path planning that takes the performance of the localization algorithm into account. A suggested path (generated by the path planning algorithm) should be safe (the chance of getting lost should be small) and short. This multi-criterial path planning will be based on dynamic programming ([13]). The idea is to learn the areas with a high loss probability from experience.

References

 1. S. Thrun, W. Burgard, and D. Fox: Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.
 2. Cyberbotics Robot Curriculum. http://en.wikibooks.org/wiki/Cyberbotics_Robot_Curriculum/
 3. E-puck, online documentation. http://www.e-puck.org
 4. R.C. Arkin: Behavior-Based Robotics. The MIT Press, 1998.
 5. L.G. Shapiro, G.C. Stockman: Computer Vision. Prentice Hall, 2001, p. 137.
 6. J. Bruce, T. Balch, and M. Veloso: Fast and Inexpensive Color Image Segmentation for Interactive Robots. In Proceedings of IROS-2000, 2000, 2061-2066.
 7. BlobDetection library. http://www.v3ga.net/processing/BlobDetection/index-page-home.html
 8. R.E. Kalman: A new approach to linear filtering and prediction problems. Trans. ASME, Journal of Basic Engineering 82, 1960, 35-45.
 9. R.Y. Rubinstein: Simulation and the Monte Carlo Method. John Wiley and Sons, Inc.
10. K. Kanazawa, D. Koller, and S.J. Russell: Stochastic simulation algorithms for dynamic probabilistic networks. In Proceedings of the 11th Annual Conference on Uncertainty in AI, Montreal, Canada, 1995.
11. Webots simulator. http://www.cyberbotics.com
12. Video demonstration. http://www.cs.cas.cz/slusny
13. R.S. Sutton and A.G. Barto: Reinforcement Learning: An Introduction. The MIT Press, 1998.