<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Input combination for Monte Carlo Localization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>David Obdr</string-name>
          <email>david.obdrzalek@mff.cuni.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Charles University in Prague, Faculty of Mathematics and Physics Malostransk</institution>
        </aff>
      </contrib-group>
      <fpage>45</fpage>
      <lpage>52</lpage>
      <abstract>
        <p>One of the basic skills for an autonomous robot is the ability to determine its own position. There are numerous high-level systems which provide precise position information, but the same task may also be solved using less advanced and less accurate sensors. In our paper, we show how a good output may be acquired from not-so-good inputs if they are combined using a modified Monte Carlo Localization (MCL) system. The combination of more inputs also helps to acquire plausible results even in situations where adding a new sensor (or sensors) to an existing system raises doubts about the position calculation, because the newly added sensor may provide different position information (or data from which the position is calculated) than what is provided by the already used sensors. We will show that using such "incorrect" data may be beneficial for determining the position with a reasonable probability.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Motivation and characterization of the task</title>
      <sec id="sec-2-1">
        <title>Introduction</title>
        <p>Good localization is one of the keys to success for an autonomous robot. A robot which does not know its position, or which gets lost while performing its task, is not something that could be used well in real life. For some tasks, localization can be quite simple, but in general, the better the autonomous robot we want, the better (and usually more complicated) localization it needs. Some systems use a single input for the localization task and do not need any complex mechanisms to accomplish all needed goals. Other systems combine more inputs (and more input types) to acquire data for the localization process; it may be because of the different nature of the data which is available to be measured for localization, as well as because different sensing methods may help to overcome problems with erroneous data coming from individual sensors. However, combining more inputs may at the same time raise questions about the preciseness of the localization process: What went wrong if two or more inputs showed different positions? Is it because of a faulty sensor, inexact measurement, cumulative errors, or improper data handling?</p>
        <p>The goal of robot localization is to determine the position of the robot which moves through its working environment. It may use data about the environment and data about the robot, both measured data as well as data known in advance. To a great extent, the localization algorithm itself can be application dependent, and its usage for specific purposes can vary just by choosing different inputs.</p>
        <p>Inputs for the localization system may come from different sources: data may be acquired, for example, using different sensors mounted on the robot (measuring either internal or external properties), by receiving signals sent from external sources, or even created as virtual data which does not represent any real measurements (e.g. the expected position change based on commands issued by the control system). Because the different sensors provide data with different levels of accuracy and trustworthiness, the data should also be handled with different weights.</p>
        <p>The localization algorithm processes the selected inputs to calculate the robot position. If the algorithm itself does not depend on a specific type(s) of input data, then it is possible to create a generic solution which works with different (and configurable) sets of sensors. In this paper, we present one possible solution of the localization problem which has been tested on a real robot.</p>
        <p>This robot was used to play in the Eurobot autonomous robot contest in 2009 (see [<xref ref-type="bibr" rid="ref1">1</xref>]). Although the system was created for the 2009 edition of the contest, it was designed so that it could be used without reprogramming for future editions too. Moreover, it was designed to be independent of this particular task altogether, and it may be reused in other projects with different inputs as well.</p>
        <p>Recently, research in robot localization has started to bring very good results using probabilistic methods. In this paper, we discuss this problem in general, and we present one particular implementation which uses Monte Carlo Localization (MCL).</p>
        <p>The Eurobot contest rules change every year, but they always share the same core of basic characteristics (for more details, see the Eurobot Association web pages at [<xref ref-type="bibr" rid="ref1">1</xref>]):
– Known indoor environment with highlighted important landmarks
– Small working area (2.1 × 3 meters)
– Possibility to place localization beacons at predefined places around the working area
– Target objects placed semi-randomly on the working area
– Predefined starting position of the robot
– Limited height and circumference of the robot
– Two robots moving in the area at the same time, with the necessity to avoid collisions</p>
        <p>This list obviously affects the development of robot hardware and software. However, our aim was to create a localization algorithm which is not that much application dependent and can be used for other applications in different conditions too.</p>
        <p>The exact precision level needed is application dependent and varies a lot from one application to another. For specific conditions it is, however, important to maintain the precision in an acceptable range. Therefore, our aim was to create a localization algorithm which would be able to reach the required precision level even using less precise inputs. The precision should not be predefined nor implied by the algorithm.</p>
        <sec id="sec-2-1-1">
          <title>Localization algorithms</title>
          <p>The area of autonomous robot localization is well researched (see e.g. [<xref ref-type="bibr" rid="ref2">2</xref>]), and several ways can be used to solve the localization problem. Therefore we do not try to invent a new algorithm. Instead, we will outline some existing localization algorithms and discuss some of their implementation details, together with the technical problems we have met. For localization based on various input values one can choose from many algorithms; the best known are the following.</p>
          <p>The Kalman filter [<xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>] generalizes the floating mean. It can handle noisy data, so it is suitable for processing the data from less precise sensors. However, the model must be described with the expected value and variability, which is often too difficult a constraint. For our given task, it is not possible to use the standard Kalman filter, because its basic requirements are not met: in our case, the expected value and variance of the measured values are not known.</p>
          <p>Grid-based Markov localization [<xref ref-type="bibr" rid="ref5">5</xref>] resolves one of the problems of the Kalman filter, which needs to know the expected value and the variance of the input data. This algorithm splits the area into a grid of cells of proper size and tries to determine the one the robot is in. The position of the robot is specified as one cell in a grid covering the whole working area. Some data must be stored for each grid cell, and to reach a good precision level, the grid must be fine; this requires large operational memory to store the data and computing power to handle it. Our hardware platform provided sufficient storage and reasonable power for processing the navigation task and for controlling the hardware, but including Markov localization would cause overloading of the system, and the goal of creating a successful autonomous robot could not be reached: neither the memory volume nor the computational power of our hardware was strong enough to handle grid-based Markov localization alone, not talking about combining it with all the other needed tasks.</p>
          <p>Monte Carlo localization [<xref ref-type="bibr" rid="ref6">6</xref>] can represent a multimodal distribution and thus localize the robot globally. It can process inputs from many sensors with different accuracies. Moreover, it can be easily implemented. Therefore, we have decided to implement Monte Carlo localization and let it use the remaining resources in the system as long as it does not affect their functionality. This also directly implied the selection of the MCL parameters in the tuning phase – "eat as much as you like as long as supply lasts". At the same time, we gained the possibility to use more different sensors for the localization task. Monte Carlo localization is further discussed in Section 4 and our implementation in Sections 5 and 6.</p>
          <p>We now briefly outline Monte Carlo Localization (MCL), introduced by Dieter Fox in the late 1990s [<xref ref-type="bibr" rid="ref6">6</xref>]. This algorithm meets all the requirements mentioned in the problem statement section earlier in this paper. It is a well defined and researched algorithm, and it is also well established in many applications (see e.g. [7–10]).</p>
          <p>Monte Carlo Localization maintains a list of possible states of the state space (representing the positions of the robot). Each state is weighted by its probability of correspondence with the actual state of the robot. In the most common implementation, the state represents the coordinates in 2D Cartesian space and the heading direction of the robot. It may of course be easily extended to 3D space and/or contain more information depicting the robot state. All these possible states compose the so-called probability cloud.</p>
          <p>During the initialization of MCL, the probability cloud must be set. If the robot's position is known, then for its (known) state x the probability is p(x | M^0) = 1, and for all other states y ≠ x the probability is p(y | M^0) = 0. If the robot's position is not known, the probability of all positions is the same, and p(x | M^0) must be set for all x so that ∫ p(x | M^0) dx = 1.</p>
          <p>The Monte Carlo Localization algorithm consists of three phases: Prediction, Measurement, and Resampling.</p>
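          <p>As an illustration of this initialization, a minimal sketch follows; the sample layout (x, y, alpha, weight), the names, and the field constants are ours, not taken from the paper's implementation:</p>
          <preformat>
```python
import math
import random

# Field size follows the contest characteristics quoted above (2.1 x 3 m).
FIELD_W, FIELD_H = 3.0, 2.1

def init_cloud(n, known_pose=None):
    """Set the initial probability cloud.

    Known pose: all probability mass on that state (p = 1 there, 0 elsewhere).
    Unknown pose: samples spread uniformly, with weights summing to 1.
    """
    if known_pose is not None:
        x, y, alpha = known_pose
        return [(x, y, alpha, 1.0 / n) for _ in range(n)]
    return [(random.uniform(0.0, FIELD_W),
             random.uniform(0.0, FIELD_H),
             random.uniform(0.0, 2.0 * math.pi),
             1.0 / n) for _ in range(n)]
```
          </preformat>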
          <p>During the Prediction phase, a new value for each item of the cloud is computed, resulting in a new probability cloud. To simulate the various inaccuracies that appear in real hardware, random noise is added to each position in the prediction phase. This is very useful: if, for example, the wheels were slipping and no random noise was added, the probability cloud would travel faster than the real hardware.</p>
          <p>During the Measurement phase, data from real sensors are processed to adjust the probability of the positions in the cloud. The probability of samples with lesser likelihood (according to the sensors) is lowered, and vice versa. For example, when the sensors show the robot orientation is northwards, the weight of samples representing other orientations is lowered.</p>
          <p>The last phase, Resampling, manages the size and correctness of the cloud. Samples with probability lower than a given threshold are removed from the cloud. To keep the number of positions constant, new positions are added. These new positions are placed around existing positions with high probability.</p>
          <p>One of the most important features of this method is its ability to process data from more than one source. Every sensor participates in computing the probability for a given state. For example, if we have a compass sensor and it reads that the robot is heading north, we can lower the probability of differently oriented samples. If the robot's bumper signalizes a collision, there is a high probability for the robot to be near a wall or another obstacle. It is therefore possible to discard the part of the probability cloud which lies in open space.</p>
          <p>Monte Carlo Localization can be implemented easily, yet it provides very good results, especially in cases where the sensors do not provide exactly correct data (e.g. the distance measurement is subject to errors). Our implementation of the MCL algorithm, which shows its great usability, is described in more detail in the following section.</p>
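          <p>The three phases can be sketched as one iteration over a list of weighted samples; the function below is an illustrative reading of the algorithm, with our own names and constants rather than the paper's code:</p>
          <preformat>
```python
import random

def mcl_step(cloud, move, measure_prob, threshold=0.001, noise=0.01):
    """One MCL iteration; `cloud` is a list of (x, y, alpha, weight) samples,
    `move` the presumed (dx, dy, dalpha), `measure_prob(x, y, alpha)` the
    combined sensor likelihood."""
    dx, dy, da = move
    # Prediction: advance every sample and blur it with random noise, so the
    # cloud does not travel faster than the real hardware when wheels slip.
    cloud = [(x + dx + random.gauss(0.0, noise),
              y + dy + random.gauss(0.0, noise),
              a + da + random.gauss(0.0, noise), w)
             for (x, y, a, w) in cloud]
    # Measurement: lower the weight of samples with lesser likelihood.
    cloud = [(x, y, a, w * measure_prob(x, y, a)) for (x, y, a, w) in cloud]
    total = sum(w for (_, _, _, w) in cloud) or 1.0
    cloud = [(x, y, a, w / total) for (x, y, a, w) in cloud]
    # Resampling: drop samples below the threshold and respawn new ones
    # around the surviving high-probability samples to keep the count.
    survivors = [s for s in cloud if s[3] > threshold] or cloud
    out = list(survivors)
    while len(cloud) > len(out):
        x, y, a, w = random.choice(survivors)
        out.append((x + random.gauss(0.0, noise),
                    y + random.gauss(0.0, noise), a, w))
    return out
```
          </preformat>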
          <p>Formally, the goal is to determine the robot's state in step k, presuming the original state and all the measurements M^k = {m_i; i = 1..k} are known. The robot's state is given by a vector x = ⟨x, y, α⟩, where x and y give the robot position and α is its heading.</p>
          <p>During the prediction phase, the probability density p(x_k | M^{k-1}) for step k is enumerated. It is based only on the presumed movement of the robot, without any input from real sensors. Therefore, for any known command u_{k-1} given to the robot, we have
p(x_k | M^{k-1}) = ∫ p(x_k | x_{k-1}, u_{k-1}) p(x_{k-1} | M^{k-1}) dx_{k-1}</p>
          <p>In the measurement phase, we compute the final value of the probability density for the actual step k. To do so, data from the sensors is used; it enters through the likelihood p(m_k | x_k), where m_k is the actual measurement and x_k is the assumed position. The probability density in step k is then described by the following equation:
p(x_k | M^k) = p(m_k | x_k) p(x_k | M^{k-1}) / p(m_k | M^{k-1})</p>
          <p>We have decided to modify the original MCL algorithm and use sensor input also for the prediction phase. We allow selected reliable sensors to change the position of the MCL samples, in addition to changing the weights of samples based on the sensor readouts. This allows the probability cloud to be kept more in shape with the actual robot movement, yet we keep the core MCL idea of adding random noise to handle unexpected inputs or inputs which are not in accordance with the actual robot movement.</p>
          <p>In our implementation, we divide the inputs coming from the sensors into two categories, which will be further discussed in the following paragraphs:
– Advancing inputs
– Checking inputs</p>
          <p>Our system contains two interfaces for these two types of inputs. A device, or its abstraction in the Hardware Abstraction Layer, implements the corresponding interface based on its type, so the MCL core can use it as its input. The MCL core calls each device when it has new data, and the work with the samples is done by each device separately. This keeps the main code simpler, easier to read, and input independent. Also, the device itself knows best how to interpret the raw data it measures.</p>
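          <p>The two interfaces might be sketched as follows; the class and method names are hypothetical, as the paper does not publish its actual API:</p>
          <preformat>
```python
from abc import ABC, abstractmethod

class AdvancingInput(ABC):
    """A device that may move the samples (e.g. odometry)."""
    @abstractmethod
    def advance(self, samples):
        """Return samples shifted by the measured relative movement."""

class CheckingInput(ABC):
    """A device that may only adjust sample weights (e.g. compass, beacons)."""
    @abstractmethod
    def check(self, samples):
        """Return samples with re-weighted probabilities."""

class Odometry(AdvancingInput):
    # Each device interprets its own raw data; the MCL core only calls
    # the interface when the device reports new data.
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy
    def advance(self, samples):
        return [(x + self.dx, y + self.dy, a, w) for (x, y, a, w) in samples]
```
          </preformat>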
          <p>The level of reliability can be specified for each input device. Then, the samples are adjusted by the devices with respect to their configured 'credibilities'.</p>
          <p>For example: two sets of odometry encoders, one pair on driven wheels and one pair on dedicated wheels, have different accuracy, because the driven wheels may slip on the surface when too much power is used. The credibility of the driven-wheel encoders will then be set lower than the credibility of the sensors mounted on the undriven wheels. In addition, setting the reliability level helps to deal with different frequencies of data sampling.</p>
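          <p>One possible (hypothetical) way to apply such per-device credibilities is to blend each sensor's likelihood into the sample weight in proportion to its configured credibility; the scheme and constants below are illustrative only:</p>
          <preformat>
```python
def apply_weight_update(samples, likelihoods, credibility):
    """Blend one sensor's likelihoods into the sample weights.

    credibility = 0.0 ignores the sensor entirely, 1.0 trusts it fully.
    Hypothetical scheme illustrating the configured 'credibilities'.
    """
    return [(x, y, a, w * ((1.0 - credibility) + credibility * l))
            for (x, y, a, w), l in zip(samples, likelihoods)]

# Example: driven-wheel encoders slip, so they get a lower credibility
# than the encoders on the dedicated (undriven) sensor wheels.
DRIVEN_WHEELS_CREDIBILITY = 0.4   # illustrative values
SENSOR_WHEELS_CREDIBILITY = 0.9
```
          </preformat>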
          <p>Our robot uses these checking inputs:
– Compass – checks the direction of samples
– Beacons – check the distance from stationary beacons
– Bumpers – check collisions with playing field borders and other objects
– IR distance sensors – check distance to borders and obstacles</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Implementation aspects</title>
      <p>Our implementation of MCL is based on previous work on a robot which participated in the Eurobot 2008 contest (see [11]). That first "pilot" MCL implementation in 2008 was not complete; it was rather a proof-of-concept than a reliable software unit, and we also knew the computational power would be different in 2009, so performance was not considered at all. However, the results seemed very promising, so it was not dropped but was further developed and extended for real use in 2009. In the following paragraphs, we emphasize several implementation aspects we consider important for the successful result.</p>
      <sec id="sec-3-1">
        <title>Advancing inputs</title>
        <p>This input type is used for changing the samples which form the probability cloud. Such an input could be, for example, odometry, from which the relative movement since the last iteration is calculated. This difference is then used to change the sample properties, i.e. to move them.</p>
        <p>The information provided by this kind of input, applied to the samples, is 'blurred' by randomly generated noise, as described earlier in the general MCL description. After moving the samples, boundary conditions are checked (i.e. to exclude samples which fell out of the physical working area). As a result, the probability of samples representing impossible positions is decreased.</p>
        <p>These advancing inputs are added to the original MCL algorithm, which deals only with theoretical movement based on movement commands. It is not necessary to use them so, but it brings much better precision for little cost. Our robot currently uses only one advancing input: the odometry information from encoders mounted on dedicated sensor wheels.</p>
        <p>It is expected that the MCL outputs an estimation of the robot position. Because of its nature, the resulting position (the "most probable position") can be computed from the samples at any time. This estimation is very simple: just computing the weighted average of all samples. In addition, we can determine the overall reliability of this estimation. Therefore, we have made the interface to the MCL asynchronous to the inputs, and the MCL core can be called at any time whenever needed. This approach obviously dramatically saves computational power in comparison to incremental localization methods, which might need periodical updates or re-calculations.</p>
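        <p>The weighted-average estimation can be sketched as follows (illustrative code; averaging the heading via unit vectors is our own detail, added so that angles near 0 and 2π do not cancel out):</p>
        <preformat>
```python
import math

def estimate_position(samples):
    """Most probable position: weighted average of (x, y, alpha, weight)
    samples; the heading is averaged via unit vectors."""
    total = sum(w for (_, _, _, w) in samples) or 1.0
    ex = sum(w * x for (x, _, _, w) in samples) / total
    ey = sum(w * y for (_, y, _, w) in samples) / total
    s = sum(w * math.sin(a) for (_, _, a, w) in samples) / total
    c = sum(w * math.cos(a) for (_, _, a, w) in samples) / total
    return ex, ey, math.atan2(s, c)
```
        </preformat>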
        <sec id="sec-3-1-1">
          <title>Localization without initial knowledge</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Checking inputs</title>
        <p>Checking inputs do not affect the position of the samples. Instead, they just adjust their probability (also called sample weights). The reason for this is that inputs of this type provide absolute position information, not a relative difference from the last measurement. This also does not need to be one exact point, but can be an area or a position probability distribution, which fits perfectly to the Monte Carlo Localization algorithm.</p>
        <p>Monte Carlo Localization can also determine the robot position from scratch. At the beginning of localization (when the robot is lost), samples are spread uniformly all over the playing field, as described in Section 4. The sensors providing absolute positioning information lower the weight of misplaced samples, and new samples are placed in regions with higher probability (see Figure 2). This is repeated until a sufficiently reliable position estimation of the robot is reached.</p>
        <p>All checking inputs may be processed separately; we regulate them only by setting their reliability levels.</p>
        <p>At the same time, it is possible to reach a result even without absolute sensors: as the robot moves, the sensors which provide relative information (advancing inputs) will move the samples as usual, and the boundary checks will gradually cut off the impossible configurations until the required precision is reached (e.g. only one small cloud remains). [Figure 1: three beacons B1, B2, B3, the measured time differences dt1, dt2, dt3, and their intersection.]</p>
        <p>When new sensors are added to a system, the information they provide may affect the position the robot "thinks" it is in. It may be because the new sensor gives better data (in which case we certainly appreciate the change). On the contrary, the new sensor could provide data of lesser quality than the already existing sensors. This is not a big problem in MCL, because such low-quality data may change the sample properties (position) or weight, but because of the nature of how MCL works, such a change may be perceived as adding the random noise which is part of MCL anyway. So, it may even help the algorithm to work well. It could be said in general: the more different inputs, the better.</p>
        <p>The signal speed is constant in our setting, as we are using ultrasonic waves in a small environment with more or less constant conditions, and the robot speed is negligible. Since the speed of the sound waves is known, we can measure the time difference, from which the distance may be easily calculated. It is possible to use two beacons for a good position calculation (provided the measurements are precise and we know in which half-plane the robot is). If there are 3 or more beacons located around the working area, trilateration will theoretically give a single solution. In practice, the measurements may not be precise, and so the calculation may not lead to any intersection of the circles. In our system, this is not a problem, because the measurements are not used for trilateration but are handled separately to adapt the probability cloud used in MCL. To correctly measure the traveling time, we synchronize the transmitter-receiver system by using infrared light (see below). Many other systems based on the TOF principle have been developed; for examples see e.g. [12, 13].</p>
        <p>If we remove a sensor from the robot, or if it stops providing data and there are other sensors available, it does not imply the MCL results will be worse. It means there are fewer inputs which adjust the samples or their weights, but the remaining sensors will adapt the probability cloud in accordance with their inputs, and so a sufficient level of result preciseness can be maintained.</p>
        <p>Therefore, the system is less vulnerable than a system which relies on a single sensor or sensor set during localization.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Beacons</title>
        <p>In the following paragraphs, we present our design of a sensor set which provides relatively good information about the robot position and, in cooperation with other sensors, helps to create a robust localization system: the beacon system.</p>
        <p>The main idea of this beacon system is to mount several beacons around the working area of the robot and let the robot measure the distance to these beacons. Then, the robot will be able to estimate its position, because the beacons' positions are known. This sensor set provides absolute position information, but its correctness may be attacked by environment features: for example, the signal may get reflected by close obstacles such as walls, or the robot may not be able to measure the distance to one beacon because the signal gets lost (or is blocked by an obstacle).</p>
      </sec>
      <sec id="sec-3-4">
        <title>The principle</title>
        <p>In our system, we measure the time the signal travels from the transmitter at a beacon to the receiver mounted on the robot (TOF – Time of Flight). Of course, this works only if the signal speed is constant. As we want to measure the time difference between the signal being sent and received, we also need synchronized clocks.</p>
        <p>Hardware: the transmitting system consists of three stationary interconnected beacons located at specified places around the working area of the robot. When the system receives signals from the three beacons and calculates the three distances, it can theoretically determine its position by using trilateration. In practice, this simplest form does not fully work: the robot may move between receiving the signals from all the beacons, and some signals may not be received, or they may provide incorrect information because of reflections. Even so, a good estimation may be acquired, as was discussed in Section 6.</p>
        <p>The signal is sent one-way only; the receivers do not communicate with the transmitters. Therefore, there can be multiple independent identical receivers, which are able to determine their individual positions. These receivers may then be used for localizing more objects, and if the information is passed to a single center, it may be of substantial help (for example, to create an opponent avoidance system by placing one receiver on the opponent and reading it by wireless transfer).</p>
        <p>Transmitters at each beacon work in the following way:
1. Send timing information (infrared)
2. Wait for a defined period of time
3. Send distance measuring information (ultrasonic)
4. Wait for a defined period of time
(this is sequentially repeated for all beacons)</p>
        <p>The clocks are synchronized in step 1 by using infrared modulated signals. Besides that, the transmitted information contains additional information about the beacon which is going to transmit sound waves.</p>
        <p>Sound waves are transmitted as ultrasonic waves, and are always transmitted from only one beacon at a time. The transmitted signal also contains the identification of the source beacon.</p>
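        <p>The four-step round-robin cycle above can be sketched as a simple event schedule; the timing values here are placeholders, not the real periods of the contest hardware:</p>
        <preformat>
```python
# Illustrative sketch of the transmission cycle described above.
BEACONS = ("B1", "B2", "B3")
IR_TO_US_WAIT = 0.020    # s, step 2 (placeholder value)
US_TO_NEXT_WAIT = 0.080  # s, step 4 (placeholder value)

def transmission_schedule(cycles=1):
    """Yield (time, beacon, signal) events for the round-robin cycle."""
    t = 0.0
    for _ in range(cycles):
        for b in BEACONS:
            yield (round(t, 3), b, "infrared")    # step 1: timing info
            t += IR_TO_US_WAIT                    # step 2: defined wait
            yield (round(t, 3), b, "ultrasonic")  # step 3: distance info
            t += US_TO_NEXT_WAIT                  # step 4: defined wait
```
        </preformat>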
      </sec>
      <sec id="sec-3-5">
        <title>The receiver</title>
        <p>The receiver waits for the infrared timing information. When it is received, the receiver resynchronizes its internal timer and generates a message. These messages are transported to the localization unit. Every message contains a time stamp, the information that synchronization occurred, and information about the beacon which is going to transmit ultrasonic information in this time step. Upon reception of the timing information, the receiver switches its state to wait for the ultrasonic signal. When correct ultrasonic information arrives, the receiver generates a message similar to the one after IR reception, but containing the time stamp of the ultrasonic reception and the beacon identification transmitted in the ultrasonic data. The difference of these two timestamps is linearly dependent on the distance, with a constant offset (the two signals are not transmitted at exactly the same time). Since each beacon identifies itself in both infrared and ultrasonic transmissions, the probability of a mismatch is reduced.</p>
        <p>When the infrared information is not received, a message is generated saying the synchronization did not occur, and the timestamp is generated from the previously synchronized internal clock. When the ultrasonic information is not received, the localization unit is notified that nothing was received.</p>
        <p>The situation after three successfully received ultrasonic signals with a synchronized clock can be seen in Figure 1.</p>
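        <p>The linear delta-to-distance relation can be sketched as follows; the speed of sound and the fixed offset below are illustrative placeholders for the configured constants, not the actual tuned values:</p>
        <preformat>
```python
# Delta = ultrasonic timestamp minus infrared timestamp; the distance is a
# linear function of it with a constant offset, because the two signals are
# not transmitted at exactly the same time. Constants are placeholders.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
FIXED_OFFSET = 0.020    # s, the defined wait between IR and ultrasonic bursts

def distance_from_delta(delta_seconds):
    """Distance from the beacon, computed in the main computing unit."""
    return SPEED_OF_SOUND * (delta_seconds - FIXED_OFFSET)
```
        </preformat>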
        <sec id="sec-3-5-1">
          <title>Beacons and MCL</title>
          <p>The main computing unit controls all the other devices and is highly configurable. This means that all the parameters of the equation for distance calculation can be changed easily, without the need to change the beacon hardware or device firmware. It even allows us to calculate or adjust the parameters on the fly if distance information is provided based on an external measurement.</p>
          <p>The configuration of the main computing unit contains not only the important constants for the equation, but also the positions of the transmitting beacons. As we know the distance and the beacon id, we can increase the weights of the MCL samples in the circular belt formed by these two values and a range constant. MCL samples far from the belt are penalized (see Figure 2).</p>
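          <p>The belt re-weighting could be sketched like this (the belt width and the boost/penalty factors are illustrative, not the tuned values; a smoother weighting function would work as well):</p>
          <preformat>
```python
import math

def belt_weight(sample_xy, beacon_xy, measured_dist,
                belt_half_width=0.05, boost=2.0, penalty=0.5):
    """Re-weight factor for one beacon reading: samples whose distance to
    the beacon lies inside the circular belt are boosted, samples far from
    the belt are penalized. Constants are illustrative placeholders."""
    d = math.dist(sample_xy, beacon_xy)
    if belt_half_width >= abs(d - measured_dist):
        return boost
    return penalty
```
          </preformat>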
          <p>This approach is much better and more robust than just waiting for intersections and then computing the robot position using simple trilateration. These intersections may not happen very often, because of the time gap between individual beacon transmissions (especially when the robot is moving fast). At the same time, it is good to implement different weighting for samples on one belt, near an intersection of two belts, and near the intersection of all three belts.</p>
        </sec>
        <sec id="sec-3-5-2">
          <title>Camera</title>
          <p>The idea of using a camera for absolute robot positioning seemed very hard at first. Later, when we had the modular MCL implementation finished, we realized there was a great opportunity to use the information we get from the camera while looking for the playing elements positioned at predefined places of the playing area. Now, we can compare the playing element positions (acquired from the camera) with their fixed positions (defined by the Eurobot contest rules) and adjust the weight of the MCL samples to merge the two positions. For more details, see [14].</p>
          <p>As described earlier, our beacon system consists of three transmitting beacons and one receiving beacon. The information is passed from the beacon system to the main computing unit via messages containing the beacon id (i.e. transmitter identification) and the time difference between the infrared and ultrasonic transmissions.</p>
          <p>There are two reasons why each message contains the time difference (delta) instead of the calculated distance: the computational power of the microcontroller and the degree of robustness. The main computing unit is more powerful than the receiving beacon, so we let the beacon do less work, and we even benefit from this decision. We considered deltas to be the perfect raw data for our purpose: distance measurement. The computation is done in the main computing unit.</p>
          <p>In the early stages of robot design, we proposed to use a compass as one of the input sensors. However, using a compass in a small indoor competition is not a very good idea, because its precision can be degraded by the influence of many factors (e.g. a huge metallic cupboard, electromagnetism, steel-concrete walls, or the metal structure of the building). Using a gyroscope instead of a compass would be much more efficient for our purposes, because a gyroscope works completely independently and the influence of the environment is minimal. The only problem is the placement of the gyroscope itself, because it should be placed on the rotational axis of the robot.</p>
          <p>Apparently, processing a large number of MCL samples may have an impact on the performance of the whole localization system. In general, the more samples are taken into the computation, the more precise the localization is, but the slower the computation is.</p>
          <p>The system will be used during the contest and all connected events as long as the working conditions for the robot remain unchanged. After finishing the year, we want to evaluate the performance of this MCL implementation, to be able to judge its usage in other environments and with other hardware.</p>
          <p>In our project, we have achieved acceptable speed using 400 samples. The acquired precision lies within a margin of single millimeters, which is sufficient for the current task; should higher precision be needed, it could easily be reached by increasing the sample count and fine-tuning the weighting functions. Also, this number of samples and the resulting precision are independent of the working area size.</p>
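          <p>As a hypothetical illustration of such a weighting function, one common shape is a Gaussian likelihood of the beacon-distance error; the sigma value below is an assumed tuning knob, not a value taken from our robot:</p>
          <preformat>
```python
import math

def beacon_weight(expected_dist, measured_dist, sigma=15.0):
    """Gaussian likelihood of a beacon distance measurement (mm).

    sigma is the tuning parameter: a smaller value trusts the beacons
    more and sharpens the estimate, at the risk of starving the sample
    set when measurements are noisy.
    """
    err = measured_dist - expected_dist
    return math.exp(-(err * err) / (2.0 * sigma * sigma))
```
          </preformat>
          <p>A sample whose expected distance to a beacon matches the measured one keeps full weight; samples a few sigma away contribute almost nothing to the resampling.</p>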
          <p>In contrast, the Markov grid-based localization requires handling a grid covering the whole robot working area, with the number of cells proportional to the required precision. In the case of the Eurobot contest, to reach the same precision we would need to handle a grid of 2100 &#215; 3000 cells, which is orders of magnitude more than the 400 samples needed when using MCL.</p>
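          <p>For scale, the two approaches compare as follows (millimetre-sized cells over the 2100 mm by 3000 mm playing area stated above):</p>
          <preformat>
```python
# Markov grid: one cell per millimetre of the 2100 mm x 3000 mm table.
grid_cells = 2100 * 3000   # 6_300_000 cells to store and update
mcl_samples = 400
print(grid_cells // mcl_samples)   # -> 15750, roughly four orders of magnitude
```
          </preformat>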
          <p>The described modification of using sensors for the prediction phase, instead of using just the information about the expected robot movement, has increased the precision too, as it takes into account not only where the robot was supposed to move, but where it actually moved. To be able to use sensors for this modification, their good credibility must be assured. For our real robot, we have used odometry sensors mounted on dedicated wheels, which are not subject to the skids and slides of the powered wheels. This has proven sufficient to reach a good level of precision.</p>
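          <p>One way such dedicated-wheel odometry can drive the prediction step is classic differential-drive dead reckoning; the function name and the wheelbase value below are illustrative assumptions, not our exact implementation:</p>
          <preformat>
```python
import math

def odometry_update(pose, d_left, d_right, wheelbase=250.0):
    """Advance one sample by the measured wheel travel (mm).

    Because the distances come from the unpowered measuring wheels,
    skids of the drive wheels do not enter the prediction.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheelbase
    # First-order dead-reckoning step along the mid-step heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```
          </preformat>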
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Eurobot Association: Eurobot autonomous robot contest: http://www.eurobot.org,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. S. Thrun:
          <article-title>Robotic mapping: a survey</article-title>
          . In: Exploring artificial intelligence in the new millennium. Morgan Kaufmann Publishers Inc.,
          <year>2003</year>
          ,
          <fpage>1</fpage>
          &#8211;
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>R.</given-names>
            <surname>Negenborn</surname>
          </string-name>
          :
          <article-title>Robot localisation and Kalman filters: On finding your position in a noisy world</article-title>
          .
          <source>Master's thesis</source>
          , Utrecht University,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>G.</given-names>
            <surname>Welch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bishop</surname>
          </string-name>
          :
          <article-title>An introduction to the Kalman filter</article-title>
          .
          <source>Technical Report TR 95-041</source>
          , University of North Carolina at Chapel Hill,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>W.</given-names>
            <surname>Burgard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Derr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.B.</given-names>
            <surname>Cremers</surname>
          </string-name>
          :
          <article-title>Integrating global position estimation and position tracking for mobile robots: The dynamic Markov localization approach</article-title>
          .
          <source>In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>F.</given-names>
            <surname>Dellaert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Burgard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thrun</surname>
          </string-name>
          :
          <article-title>Monte Carlo localization for mobile robots</article-title>
          .
          <source>In: Proc. of the IEEE International Conference on Robotics &amp; Automation (ICRA99)</source>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zoccarato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pagello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishiguro</surname>
          </string-name>
          :
          <article-title>Image-based Monte-Carlo localisation with omnidirectional images</article-title>
          .
          <source>Robotics and Autonomous Systems</source>
          <volume>48</volume>
          ,
          <year>2004</year>
          ,
          <fpage>17</fpage>
          &#8211;
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>