                      Adapting Low-Cost Platforms for Robotics Research
                    Karimpanal T.G., Chamambaz M., Li W.Z., Jeruzalski T., Gupta A., Wilhelm E.


Abstract: Validation of robotics theory on real-world hardware platforms is important to prove the practical feasibility of algorithms. This paper discusses some of the lessons learned while adapting the EvoBot, a low-cost robotics platform that we designed and prototyped, for research in diverse areas of robotics. The EvoBot platform was designed to be a low-cost, open source, general purpose robotics platform intended to enable testing and validation of algorithms from a wide variety of sub-fields of robotics. Throughout the paper, we outline and discuss some common failures, practical limitations and inconsistencies between theory and practice that one may encounter while adapting such low-cost platforms for robotics research. We demonstrate these aspects through four representative common robotics tasks: localization, real-time control, swarm consensus and path planning, performed using the EvoBots. We also propose some potential solutions to the encountered problems and try to generalize them.

Index Terms: low cost, open-source, swarm robotics, lessons learned

I. INTRODUCTION

The Motion, Energy and Control (MEC) Lab at the Singapore University of Technology and Design (SUTD) is engaged in an ambitious research project to create a swarm of autonomous intelligence-gathering robots for indoor environments. After reviewing commercially available low-cost robotics platforms, some of which are shown in Table I, the decision was taken to build a custom ground-based robot platform for performing swarming research, dubbed the 'EvoBot'. The primary trade-off is between platform flexibility and cost: existing robots are either intended for specific research areas, and are hence equipped with limited and specific sensor and communication capabilities, or are more flexible with respect to sensor and firmware packages, but are also more expensive. In terms of software control development environments, well-established frameworks such as ROS [1] or the Robotics Toolbox [9] have extremely useful modular functions for performing baseline robotics tasks, but require substantial modification between applications and require specific operating systems and release versions.

With this in mind, the MEC Lab set out to develop the EvoBot platform with the following goals:

• Low cost: affordable for research groups requiring a large number of swarming robots (e.g. more than 50) with sufficient sensing and control features.
• Open source: The hardware, body/chassis design, application software and firmware for the EvoBots are fully open source in order to enable any group to replicate the platform with minimal effort. All the hardware and software files used for the design of the EvoBots are available here: https://github.com/SUTDMEC/EvoBot_Downloads.git
• Adaptable: The final platform is intended to be as general purpose as possible, with minimum changes needed to be made to the base firmware by users in order to scale to a wide variety of common research applications. Some representative applications are described in section IV.

In the process of designing the EvoBots platform, a great deal of failure-based learning was involved, and this paper compiles and synthesizes this process in order for it to be useful for future robotics researchers who intend to develop their own platforms, or to adapt commercially available platforms. Specific emphasis will be on challenges common to robotic platforms in general, and approaches used to avoid failure when attempting to solve them. The authors would like to point out that each robotics project presents a unique set of problems, so we have attempted to describe only problems we think may be generalized.

II. PRECEDENTS AND EVOBOT DESIGN

This section will discuss commonalities and differences between the EvoBot and other comparable low-cost robotics platforms, as well as provide insights into the lessons learned during the design process. In order to reduce the time between design cycles, the EvoBots were prototyped with a 3D printed body and developed across three major generations and several minor revisions. An exploded view of the EvoBot is shown in Figure 1.

Fig. 1. The 3-D printed case has two slots at the bottom for the optical flow sensors, a housing for the left and right tread encoders, and 5 IR depth sensors. The encoders on the forward wheels and the optional ultrasonic sensors are not shown.

Like the Khepera, Finch, Amigobot (Table I) and most of the other platforms, locomotion on the EvoBot is achieved using a differential drive system, with motors on either side of its chassis. Although this system introduces kinematic constraints by restricting sideways motion, the associated simplicity in manufacturing and assembly, and in the mathematical model for use in control applications, are significant advantages. The wheels are coupled to the motors through a gearbox with a gear reduction ratio of 1:100. After reduction, the final speed of the robot can be varied from -180 to 180 mm/s, so that both forward and backward motions are possible.
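As a quick illustration of what the differential drive implies for software written against the platform, the sketch below converts between wheel speeds and body motion (Python). The wheel base value is an assumed placeholder, and the +/-180 mm/s figure is the speed range quoted above; this is a minimal sketch, not the EvoBot firmware.

```python
WHEEL_BASE_MM = 80.0      # distance between the two treads (assumed, not from the paper)
MAX_SPEED_MM_S = 180.0    # attainable forward/backward speed after the 1:100 reduction


def body_velocity(v_right: float, v_left: float):
    """Wheel speeds (mm/s) -> (linear speed mm/s, angular rate rad/s)."""
    return 0.5 * (v_right + v_left), (v_right - v_left) / WHEEL_BASE_MM


def wheel_speeds(v: float, omega: float):
    """Desired body motion -> (right, left) wheel speeds, with the linear speed clamped."""
    v = max(-MAX_SPEED_MM_S, min(MAX_SPEED_MM_S, v))
    return v + 0.5 * WHEEL_BASE_MM * omega, v - 0.5 * WHEEL_BASE_MM * omega
```

The same relation reappears as the kinematic model (1) in section IV-B.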



TABLE I
PRECEDENTS FOR LOW-COST RESEARCH ROBOTICS PLATFORMS

Platform       Est. Cost (USD)   Commercial   Scholar Hits   Schematics   Code
Khepera [19]   2200              Yes          >1000          No           Yes
Kilobot [22]   100+              Yes          82             No           Yes
e-Puck [18]    340+              Yes          >1000          Yes          Yes
Jasmine [4]    150+              No           >1000          Yes          Yes
Formica [11]   50                No           10             Yes          Yes
Wolfbot [6]    550+              No           4              Yes          Yes
Colias [3]     50+               No           >1000          No           No
Finch [17]     100               Yes          73             No           Yes
Amigobot [2]   2500              Yes          170            No           Yes
EvoBot         300               Yes          0              Yes          Yes

The speed of each motor is controlled by a pulse width modulated voltage signal. In order to ensure predictable motion, an internal PI controller is implemented by taking feedback from encoders that track the wheel movement. The controller parameters may need to be hand-tuned to compensate for minor mechanical differences between the two sides of the robot, arising from imperfect fabrication and assembly.

For obstacle detection and mapping applications, 5 infra-red (IR) sensors were placed on the sides of a regular pentagon to ensure maximum coverage, with one IR sensor facing the forward direction. A similar arrangement of range sensors is found in platforms such as Colias and e-puck (Table I). Despite this arrangement, there exist blind spots between adjacent sensors, which could lead to obstacles not being detected in certain orientations of the robot relative to an obstacle. The use of ultrasonic sensors instead of IRs could reduce the probability of occurrence of blind spots due to their larger range and coverage, but they are also more expensive. It is thus advisable to choose sensors appropriately and to thoroughly test their performance and limitations before a final decision on the physical design and sensor placements is made.

Another feature present in the EvoBots and common to other platforms is the use of an inertial measurement unit (IMU). The 6DOF IMU gives information regarding the acceleration of the robot in the x, y and z directions, along with roll, pitch and yaw (heading) information. Although in theory the position of the robot can be inferred using the IMU data, in practice these sensors are very noisy, and the errors in the position estimates from these sensors are unacceptably high even after using methods such as the Kalman filter [15]. Although obtaining position estimates from the IMU is not recommended in general, good estimates can be obtained with units that are capable of a much higher resolution and data rate. Such units, however, come at an added expense and thus may not be the ideal choice for a low-cost platform.

A. Sensing Features

While the sensors mentioned earlier in this section focused on features shared with other platforms, there are some sensing, control and communication features specific to the EvoBots.

In almost all robotics applications, errors in the robot's position estimates may gradually accumulate (e.g. for ground robots, skidding or slippage of the wheels on the ground surface confound the wheel encoders). In the first two generations of the EvoBots, this error in the wheel encoders caused substantial challenges for localization. In order to tackle this problem, optical flow sensors were placed on the underside of the EvoBot. They detect a change in the position of the robot by sensing the relative motion of the robot body with respect to the ground surface. This ensures that the localization can be performed more reliably even in the case of slippage. As shown in Figure 1, there are two such sensors placed side by side at a specific distance apart. This arrangement allows inference of the heading of the robot along with information on the distance traveled. Although the optical flow sensors perform well in a given environment, they need to be calibrated extensively through empirical means. In addition, it was found that the calibration procedure had to be repeated each time a new surface is encountered, as the associated parameters are likely to change from surface to surface.

As described above, the wheel encoders, the optical flow sensors and the IMU can all be used to estimate the robot's position and heading. Each of these becomes relevant in certain specific situations. For example, when the robot has been lifted off the ground, the IMU provides the most reliable estimate of orientation; when the robot is on the ground and there is slippage, the optical flow estimates are the most reliable; and when there is no slippage, the position estimates from the encoders are reliable. Some details on the use of multiple sensors for localization are given in section IV-A.
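The situation-dependent choice just described can be captured in a few lines. The sketch below is illustrative only; the flags, threshold and return labels are assumptions rather than the EvoBot firmware interface.

```python
SLIP_THRESHOLD_MM_S = 20.0  # assumed disagreement level between the two odometry sources


def pick_odometry_source(lifted: bool, v_encoder_mm_s: float, v_flow_mm_s: float) -> str:
    """Choose which sensor to trust for the current pose update."""
    if lifted:
        return "imu"            # wheels and optical flow see nothing useful off the ground
    if abs(v_encoder_mm_s - v_flow_mm_s) > SLIP_THRESHOLD_MM_S:
        return "optical_flow"   # large disagreement suggests wheel slippage
    return "encoders"           # nominal case: encoder odometry is reliable
```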


In their current configuration, the EvoBots also include the AI-ball, a miniature wifi-enabled video camera with comprehensive driver support and a low price point, to capture video and image data [25]. The camera unit is independent of the rest of the hardware and thus can be removed without any inconvenience if it is not needed.

B. Control and Communication Features

There is a wide variety of microcontrollers presently available on the market. The chipset used on the EvoBot is the Cortex M0 processor, which has a number of favorable characteristics such as low power and high performance and, at the same time, is cost efficient, with a large number of available I/O (input/output) pins. One of the primary advantages of this processor is that it is compatible with the mbed platform (https://developer.mbed.org/), which is a convenient, online software development platform for ARM Cortex-M microcontrollers. The mbed development platform has built-in libraries for the drivers, for the motor controller, the Bluetooth module and the other sensors.

Like the e-puck, the EvoBots have Bluetooth communication capability. After experimenting with various peer-to-peer and mesh architectures, it was determined that in order to maintain flexibility with respect to a large variety of potential target applications, having the EvoBots communicate with a central server via standard Bluetooth in a star network topology using the low-cost HC-06 Bluetooth module was the best solution. Sensor data gathered during motion is transmitted to the central server every 70 ms. The camera module operates in parallel and uses Wi-Fi 802.11b to transfer the video feed to the central server. The Bluetooth star network is also used to issue control commands to the robots.

In addition, having a star network topology with a central server indirectly allows additional flexibility in terms of programming languages. Only a Bluetooth link is required, and all the computation can be done on the central server. For example, the EvoBot can be controlled using several programming languages, including (but not limited to) C, Python and MATLAB. The Bluetooth link also opens up the possibility of developing smartphone mobile applications for it. Table II summarizes all the sensors used in the EvoBot.

TABLE II
THE FINAL SENSING CAPABILITIES OF THE 3RD GENERATION EVOBOT PLATFORM

Signal            Sensor         Frequency   Full Range Accuracy
Acceleration      MPU6050        1000 Hz     +/- 0.1%
Temperature       MPU6050        max 40 Hz   +/- 1 °C from -40 to 85 °C
x/y pixels        ADNS5090       1000 Hz     +/- 5 mm/m
Encoders          GP2S60         400 Hz      +/- 5 mm/m
Battery current   ACS712         33 Hz       +/- 1.5 % @ 25 °C
Proximity         GP2Y0A02YK0F   25 Hz       +/- 1 mm
Camera            AI-ball        30 fps      VGA 640x480

III. GENERAL PROBLEMS

The EvoBot platform was designed to be useful for a wide range of research problems, and to enable researchers working in theoretical robotics to easily shift their algorithms from simulation to hardware. Moving from virtual to real platforms introduces a set of additional issues which must be considered. Some of these, which were encountered while developing applications for the EvoBots, are discussed in this section.

A. Sensing

While it may be considered acceptable to use ideal sensor models in simulation, in real-world robotics applications, and especially in the case of low-cost platforms, sensing information about the environment can be done only with a certain degree of range and accuracy. It is thus important to be judicious in choosing the location of the sensors. In the case of range sensors for obstacle detection, blind spots must be avoided as much as possible to prevent unexpected behaviours. Also, the upper and lower limits of the sensor range must be considered. As in the case of the EvoBots and other low-cost platforms, the sensor range may be quite limited and prone to noise, and blind spots may exist despite careful sensor placement. Where blind spots do occur, it may be worth encoding default behaviours into the robot, such as conditional wall following or automatic steering away from detected obstacles.

B. Calibration

The behaviour of sensors is typically considered to be the same across different environments during simulation. In practice, most sensors need some form of calibration. In addition, the calibration parameters for certain sensors may depend on the environment in which they operate. For example, the calibration parameters for the optical flow sensors in the EvoBots vary based on the surface on which they operate. In addition, the parameters are very sensitive to small variations in ground clearance, which can be caused by variations in the robot body due to imperfect 3D printing or assembly. It is often the case that parameters will have to be set and reset from time to time. This can add considerably to development time and effort. Automating the calibration process should be done whenever possible. If automation is not possible, one way to mitigate this issue is to have the software package include user commands that can change calibration parameters on the fly. This was the solution adopted for the EvoBots in order to calibrate the encoders and optical flow sensors.

C. Timing

In simulation, the robot's update loop and solvers can have either fixed or variable time steps, but the time step is most likely to be variable in practice. Thus, keeping track of time is critical, especially for real-time applications. For example, in the EvoBots, the performance of the localization algorithm (section IV-A) deteriorates significantly if the time step is considered fixed.

D. Communication

When a large amount of sensor data and instructions needs to be exchanged between platforms, or between a platform and a central machine, maintaining the integrity of the communicated data is critical. When the exchanged information is delayed or lost either partially or completely, it can lead to a series of unwanted situations such as improper formatting of data, lack of synchronization, incorrect data, etc. These aspects are seldom accounted for in simulation. In the EvoBots, the issue of lack of data integrity was tackled by performing cyclic redundancy checks (CRC) [21] as part of the communication routine. It is ideal to perform bit-wise checks as well, but this has not been done in the case of the EvoBots.
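As an illustration of such a check, the sketch below appends a CRC-32 to a struct-packed payload and verifies it on reception (Python). The payload layout and field names are assumptions for the example; they are not the EvoBots' actual frame format.

```python
import struct
import zlib


def make_packet(robot_id: int, ticks_left: int, ticks_right: int, heading_mrad: int) -> bytes:
    """Pack a small sensor payload and append its CRC-32."""
    payload = struct.pack("<Bhhh", robot_id, ticks_left, ticks_right, heading_mrad)
    return payload + struct.pack("<I", zlib.crc32(payload) & 0xFFFFFFFF)


def parse_packet(packet: bytes):
    """Return the decoded fields, or None if the CRC check fails."""
    payload, (crc,) = packet[:-4], struct.unpack("<I", packet[-4:])
    if zlib.crc32(payload) & 0xFFFFFFFF != crc:
        return None  # corrupted frame: drop it rather than feed bad data downstream
    return struct.unpack("<Bhhh", payload)
```

A frame that fails the check is simply discarded, which is the behaviour the communication routine described above relies on.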


The issues listed in this section apply to almost all robotics applications in general. The following section discusses some of the applications and the associated issues encountered during their implementation using the EvoBots.

IV. APPLICATIONS

This section discusses application-specific issues in diverse sub-fields of robotics such as real-time control, swarm and artificial intelligence applications. These applications were selected because they are commonly implemented tasks from a wide variety of sub-fields in robotics.

A. Localization

Ground robots typically need to have an estimate of position and heading. The EvoBot platform, as well as many other robots, is designed to be used in an indoor environment, and therefore it is not possible to use GPS without modifying the environment in question; also, positioning information provided by GPS has an accuracy on the order of 10 meters, which is not suitable for small ground robots. For this reason, the set of on-board sensors is used to estimate the robot's states of position and heading. The on-board sensors, however, are typically associated with some noise, and retrieving positioning information from them leads to erroneous results. For instance, integrating acceleration data to get the velocity and position does not provide accurate state estimation and leads to huge offsets from the true value due to the associated noise. To overcome this issue, state estimation is performed using a method based on the Extended Kalman Filter [15].

We used the information provided by the wheel encoders, the optical flow sensors and the heading provided by the IMU's gyroscope to estimate the position and heading. The sensor information is combined with the mathematical model of the robot to estimate its position and heading. Figure 2 presents the result of an experiment where two robots follow a trajectory (red line) and the data from the sensors are used in the extended Kalman filter to estimate the position and heading. In the next two paragraphs we explain two problems we faced while implementing the designed Kalman filter and describe the corresponding solutions used in our platform.

Fig. 2. The Kalman Filtering process improves the state estimate beyond what the model and the measurements are capable of on their own.

Timing is one of the most important factors for localization and control tasks. The delay in communication between the robot and the central computer where the Kalman filter is running causes some variation in the time intervals within which each sensor data package is received. For example, the time intervals can vary between 50 and 100 ms. Therefore, as mentioned in section III, the sampling time in the extended Kalman filter cannot be fixed a priori. The state estimation procedure highly depends on the corresponding sampling time, and using a fixed sampling time leads to a large error in the estimated state. One solution to this problem is to label each sensor data package with time, compute the sampling time on the central computer and use the computed real-time sampling time in the extended Kalman filter formulation. With these factors in mind, labeling the sensor data package with time seems to be an efficient solution to deal with variable sampling time.

A common problem associated with estimating the state using the velocity sensors is "slippage". In cases where a robot slips on the floor, the wheel encoders reflect an erroneous result. In the worst case, when the robot is stuck, the encoders keep providing ticks as if the robot is moving, which leads to a significant error in the estimated position. To solve this, we utilized the information from the optical flow sensors and designed an adaptive Kalman filter. The velocities reflected by the encoders and the optical flow sensors are compared and, in case there is a significant difference between them, the occurrence of slippage is inferred. Then, the noise covariance matrix in the extended Kalman filter is changed so that the filter relies more on the velocity provided by the optical flow sensor. Thus, we infer that although having multiple sensors sensing the same quantity may seem redundant, each of them, or a combination of them, may be useful in different contexts.
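The two fixes described in this subsection, using the measured time step in the prediction and inflating the encoder noise when slippage is suspected, can be sketched as follows. This is a minimal sketch under assumed matrices and thresholds, not the exact filter running on the EvoBots' central server.

```python
import numpy as np

SLIP_THRESHOLD = 0.02  # m/s; assumed disagreement level that signals slippage


def predict(state, P, v, omega, t_now, t_prev, Q):
    """EKF prediction for a unicycle model, using the measured (variable) time step."""
    dt = t_now - t_prev                        # from packet timestamps, not a fixed constant
    x, y, th = state
    state = np.array([x + v * dt * np.cos(th),
                      y + v * dt * np.sin(th),
                      th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return state, F @ P @ F.T + Q * dt


def velocity_measurement_noise(v_encoder, v_flow, r_enc=1e-4, r_flow=4e-4, inflate=100.0):
    """Covariance of the [encoder, optical flow] velocity measurement; the encoder term
    is inflated when the two sources disagree, i.e. when slippage is suspected."""
    slipping = abs(v_encoder - v_flow) > SLIP_THRESHOLD
    return np.diag([r_enc * (inflate if slipping else 1.0), r_flow])
```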


B. Application in Real-time Control

In a perfect scenario where there is no disturbance and no model mismatch, it is possible to use feed-forward control to drive the robot on a desired trajectory. However, in the real world, a number of issues such as model mismatch, disturbance and error in the internal state deviate the robot from its desired path. Therefore, feedback control is vital to achieve an accurate tracking behavior. In general, the navigation problem can be divided into three categories: tracking a reference trajectory, following a reference path, and point stabilization. The difference between trajectory tracking and path following is that in the former the trajectory is defined over time, while in the latter there is no timing law assigned to it. In this section, we focus on designing a trajectory tracking controller.

The kinematic model of our platform is given by:

  \dot{x}_c = \frac{v_1 + v_2}{2}\cos\theta_c
  \dot{y}_c = \frac{v_1 + v_2}{2}\sin\theta_c
  \dot{\theta}_c = \frac{1}{l}(v_1 - v_2)                                  (1)

where $v_1$ and $v_2$ are the velocities of the right and left wheels, $\theta_c$ is the heading (counter-clockwise) and $l$ is the distance between the two wheels. As stated earlier, one of the reasons for adopting a differential drive configuration for the EvoBot is the simplicity of the kinematic model (1). Hereafter, we use two postures, namely the "reference posture" $p_r = (x_r, y_r, \theta_r)^T$ and the "current posture" $p_c = (x_c, y_c, \theta_c)^T$. The error posture is defined as the difference between the reference posture and the current posture in a rotated coordinate frame where $(x_c, y_c)$ is the origin and the new X axis is the direction of $\theta_c$:

  p_e = \begin{pmatrix} x_e \\ y_e \\ \theta_e \end{pmatrix}
      = \begin{pmatrix} \cos\theta_c & \sin\theta_c & 0 \\ -\sin\theta_c & \cos\theta_c & 0 \\ 0 & 0 & 1 \end{pmatrix}(p_r - p_c).       (2)

The goal in tracking control is to reduce the error to zero as fast as possible, considering physical constraints such as the maximum velocity and acceleration of the physical system. The input to the system is the reference posture $p_r$ and the reference velocities $(v_r, w_r)^T$, while the output is the current posture $p_c$. A controller is designed using Lyapunov theory [16] to converge the error posture to zero. It is not difficult to verify that by choosing

  v_1 = v_r\cos\theta_e + k_x x_e + \frac{l}{2}\big(w_r + v_r(k_y y_e + k_\theta\sin\theta_e)\big)
  v_2 = v_r\cos\theta_e + k_x x_e - \frac{l}{2}\big(w_r + v_r(k_y y_e + k_\theta\sin\theta_e)\big)       (3)

the resulting closed-loop system is asymptotically stable for any combination of parameters $k_x > 0$, $k_y > 0$ and $k_\theta > 0$.¹ The tuning parameters highly affect the performance of the closed-loop system in terms of convergence time and the level of control input applied to the system. Hence, we chose the parameters $k_x$, $k_y$ and $k_\theta$ based on the physical constraints of our platform, e.g. maximum velocity and acceleration. Extensive simulation was performed to verify the controller (3). Figure 3 shows some experimental results where the robot follows a reference trajectory (blue curve). The reference trajectory is a circle of radius 1 m. The initial posture is selected to be $(0, 0, 0)^T$. The errors $x_r - x_c$, $y_r - y_c$ and $\theta_r - \theta_c$ are shown in Figure 4. We remark that the reference linear and angular velocities $v_r$ and $w_r$ are difficult to obtain for arbitrary trajectories. We solved this problem using numerical methods, where the point-wise linear and angular velocities are computed numerically.

¹ Choosing $V = \frac{1}{2}(x_e^2 + y_e^2) + (1 - \cos\theta_e)$ and taking its derivative confirms that $\dot{V} \leq 0$. Hence, $V$ is indeed a Lyapunov function for the system (1).

Fig. 3. The trajectory of the robot: reference trajectory and current position.

Fig. 4. The errors in x, y and θ over time.

One of the important factors in implementing the real-time controller is time synchronization. The controller needs to keep track of time in order to generate the correct reference point. This issue is critical especially when a number of robots need to be controlled. In this scenario, an independent counter can be assigned to each robot to keep track of its time and generate the correct reference point at each time step.
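To make the control law concrete, the sketch below maps a reference and current posture to the two wheel-speed commands using equations (2) and (3). The gains and the wheel base are placeholder values to be tuned to the physical platform, as discussed above.

```python
import math

L = 0.08                      # wheel base l in metres (placeholder value)
KX, KY, KT = 1.0, 5.0, 2.0    # k_x, k_y, k_theta > 0 (illustrative gains)


def wheel_speeds(p_ref, p_cur, v_r, w_r):
    """Return (v1, v2) = (right, left) wheel speeds for the given postures and reference velocities."""
    xr, yr, thr = p_ref
    xc, yc, thc = p_cur
    # Error posture in the robot frame, equation (2)
    dx, dy = xr - xc, yr - yc
    xe = math.cos(thc) * dx + math.sin(thc) * dy
    ye = -math.sin(thc) * dx + math.cos(thc) * dy
    the = thr - thc
    # Control law, equation (3)
    common = v_r * math.cos(the) + KX * xe
    turn = 0.5 * L * (w_r + v_r * (KY * ye + KT * math.sin(the)))
    return common + turn, common - turn
```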


C. Application in Swarm Robotics

The overarching application of the EvoBots platform is to enable low-cost swarm robotics research. The study of emergent collective behaviour arising from interactions between a large number of agents and their environment is an area which has been gaining importance recently. The robots used for these purposes are usually simple and large in number, with communication capabilities built into them. Although it is ideal to have peer-to-peer communication between the agents, the same effect can be simulated through a star-shaped network, where all the agents share information with a central machine.

A typical swarm robotics application is to have the swarm arrive at a consensus about some common quantity, and to use this as a basis to collectively make decisions about the next actions of each member [5]. Figure 5 shows the configuration of a set of 6 robots at different times. The heading/orientation of each robot depends on those of the other robots and is governed by the update equation:

  \theta_i \leftarrow \theta_i + K(\theta_m - \theta_i)                    (4)

where

  \theta_m = \frac{1}{N}\sum_i \theta_i                                    (5)

Here, $\theta_i$ is the heading of an individual agent, $K$ is a constant and $\theta_m$ is the mean heading of all $N$ agents. These updates are performed on each agent of the swarm until the heading converges to the same value.

Fig. 5. Overhead view of the robots at different times during the heading consensus. The robots are initially unaligned, but arrive at a consensus on heading at t = 10 s.

As seen from Figure 5, the robots eventually orient themselves in a direction that was determined using the consensus algorithm.

Swarm algorithms rely on the current information regarding other members, so delays in communication can cause swarming algorithms to diverge instead of converging. As mentioned in section III, performing routines to maintain the integrity of the exchanged data is critical. The communication rate should ideally be independent of the number of agents involved. Usually, in communication systems, it is common practice to make use of communication data buffers. For swarm applications, one should ensure the latest data is picked out, either by matching similarity in time-stamps or by refreshing the buffer before each agent is queried.

D. Application in Artificial Intelligence

One of the long-term goals of robotics is to develop systems that are capable of adapting to unknown and unstructured environments autonomously. The general set of tools designed to enable this objective falls under the fields of machine learning and artificial intelligence (AI). This includes approaches such as neural networks [14], support vector machines [10], evolutionary robotics [20], reinforcement learning [23] and probabilistic methods [24], which are used for a range of tasks such as object recognition, clustering, path planning, optimal control, localization and mapping. Although most algorithms can be tested out in a simulated environment, the uncertainties and nuances associated with a real environment limit the utility of simulated solutions, especially in fields such as evolutionary robotics [7], [8], [12]. A more thorough validation necessitates implementation of these algorithms on real-world physical systems.

Fig. 6. Planned path of the robot shown in blue.

To illustrate some of the primary issues and requirements of a platform for use in AI, a path planning task is demonstrated using the standard A-star algorithm [13]. The scenario involves planning a path from the starting position of a robot to a target position using a previously constructed infra-red sensor based map of the environment, as shown in Figure 6. Most of the arena shown in Figure 6 has been mapped; the mapped areas are shown as black patches along the boundaries of the arena. The planned path (shown in blue) is generated by the A-star algorithm, using cost and heuristic functions. The heuristic function is proportional to the Euclidean distance from the target position. Some of the important issues while implementing this algorithm for path planning, and for AI applications in general, are listed below.

Computing Resources: The A-star algorithm could also have been implemented locally on the robots. However, this would have implied storing a copy of the IR map and performing the A-star algorithm locally using this map. This may not have been feasible due to the limited on-board memory and the computational complexity of the algorithm. Also, for path planning, the map of the environment gets larger as the space explored gets larger. Accordingly, the A-star path planning could potentially become more computationally intensive. So setting a resolution parameter (or deciding to have one) is recommended before starting the mapping.

In general, sufficient computing resources will be needed to carry out AI algorithms. Having a thin client system that offloads the computational effort to a central server helps deal with the high computational costs imposed due to high dimensionality in certain problems.

Dealing with the real world: Readings from the robot's sensors involve an inherent component of noise, even after they have been calibrated. In addition, it may be common for one to consider an agent as a point moving through a space during the simulation stage. In reality, the agent cannot be considered to be a point in space. These aspects of reality could potentially hinder one from achieving the desired results. Taking the physical dimensions of the robot's body into consideration is one way to counter this.

In the above mentioned case of path planning, noise points were first removed from the raw map using median filters. Also, in order to introduce a safety margin (and to compensate for the loss of information introduced due to blind spots as well as due to the resolution parameter), the detected obstacles were inflated by exaggerating their size. The safety margin could also be expanded to account for the dimensions of the robot body, thus ensuring a smoother transition from simulation to reality without having to explicitly encode the details of the body geometry during simulation, thereby saving considerable development time. It should be noted that excessive expansion of obstacles could lead to failure of A-star to find a valid path to the goal; a compact sketch of the planning and inflation steps is given below.
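For concreteness, the sketch below implements grid-based A-star with a Euclidean heuristic together with a simple obstacle inflation step, in the spirit of the procedure described above. The occupancy-grid encoding (0 = free, 1 = obstacle), 4-connectivity and the square inflation window are assumptions made for illustration, not the exact implementation used with the EvoBots.

```python
import heapq
import math


def inflate(grid, radius):
    """Mark every cell within `radius` cells of an obstacle as occupied (safety margin)."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for nr in range(max(0, r - radius), min(rows, r + radius + 1)):
                    for nc in range(max(0, c - radius), min(cols, c + radius + 1)):
                        out[nr][nc] = 1
    return out


def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Euclidean-distance heuristic, proportional to the distance from the target
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    open_set = [(h(start), 0.0, start, None)]   # entries are (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                   # already expanded with an equal or better cost
            continue
        came_from[cell] = parent
        if cell == goal:                        # walk the parent links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1.0
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None
```

Calling astar(inflate(grid, radius), start, goal) then returns a cell path that already respects the safety margin.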


Noise and mismatches between virtual and real environments are inevitable. Noise removal using appropriate filters, and compensating for mismatches with the real world by erring on the side of caution, may be considered useful measures to ensure a smooth transition from virtual to real environments.

V. CONCLUSION

This paper discussed solution strategies to address a set of common problems which may be encountered while adapting a low-cost robotics platform such as the EvoBot for research. Some insights into the factors to be considered while choosing the sensing, control and communication capabilities of low-cost platforms were discussed. The eventual success of our approach was described in the context of four diverse yet common robotics tasks - state estimation, real-time control, a basic swarm consensus application and a path planning task - all of which were carried out using the EvoBots platform. Some of the potential issues faced by end users who are not familiar with real-world implementation of algorithms were also discussed, and potential work-arounds for a smooth transition from simulation to reality were suggested. The points discussed in this paper aim to serve as a guide to groups who intend to adapt similar low-cost platforms in the future, as well as to researchers working on theoretical aspects of robotics who intend to validate their algorithms through real-world implementation.

REFERENCES

[1] ROS: Robot operating system. http://www.ros.org/.
[2] Adept Mobile Robots. AmigoBot robot for education and research, 2014.
[3] F. Arvin, J. C. Murray, L. Shi, C. Zhang, and Sh. Yue. Development of an autonomous micro robot for swarm robotics. In Proceedings IEEE International Conference on Mechatronics and Automation (ICMA), pages 635–640, 2014.
[4] F. Arvin, Kh. Samsudin, A. R. Ramli, and M. Bekravi. Imitation of honeybee aggregation with collective behavior of swarm robots. International Journal of Computational Intelligence Systems, 4:739–748, 2011.
[5] P. Berman and J.A. Garay. Asymptotically optimal distributed consensus (extended abstract). In Proceedings of the 16th International Colloquium on Automata, Languages and Programming, ICALP '89, pages 80–94, London, 1989. Springer-Verlag.
[6] J. Betthauser, D. Benavides, J. Schornick, N. O'Hara, J. Patel, J. Cole, and E. Lobaton. Wolfbot: A distributed mobile sensing platform for research and education. In Proceedings Conference of the American Society for Engineering Education (ASEE Zone 1), pages 1–8. IEEE, 2014.
[7] H.J. Chiel and R.D. Beer. The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences, 20:553–557, 1997.
[8] D. Cliff, P. Husbands, and I. Harvey. Explorations in evolutionary robotics. Adaptive Behavior, 2:73–110, 1993.
[9] Peter I. Corke. Robotics, Vision & Control: Fundamental Algorithms in MATLAB. Springer, 2011.
[10] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines: And Other Kernel-based Learning Methods. Cambridge University Press, New York, 2000.
[11] S. English, J. Gough, A. Johnson, R. Spanton, J. Sun, R. Crowder, and K.P. Zauner. Strategies for maintaining large robot communities. Artificial Life XI, pages 763–763, 2008.
[12] D. Floreano and L. Keller. Evolution of adaptive behaviour in robots by means of Darwinian selection. PLoS Biology, 8:1–8, 2010.
[13] P.E. Hart, N.J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4:100–107, 1968.
[14] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2nd edition, 1998.
[15] S.J. Julier, J.K. Uhlmann, and H.F. Durrant-Whyte. A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference, volume 3, pages 1628–1632, 1995.
[16] Y. Kanayama, Y. Kimura, F. Miyazaki, and T. Noguchi. A stable tracking control method for an autonomous mobile robot. In Proceedings of the 1990 IEEE International Conference on Robotics and Automation, pages 384–389, 1990.
[17] T. Lauwers and I. Nourbakhsh. Designing the Finch: Creating a robot aligned to computer science concepts. In AAAI Symposium on Educational Advances in Artificial Intelligence, pages 1902–1907, 2010.
[18] F. Mondada, M. Bonani, X. Raemy, J. Pugh, Ch. Cianci, A. Klaptocz, S. Magnenat, J.Ch. Zufferey, D. Floreano, and A. Martinoli. The e-puck, a robot designed for education in engineering. In Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions, volume 1, pages 59–65, 2009.
[19] F. Mondada, E. Franzi, and A. Guignard. The development of Khepera. In Experiments with the Mini-Robot Khepera, Proceedings of the First International Khepera Workshop, pages 7–14, 1999.
[20] S. Nolfi and D. Floreano. Evolutionary Robotics: The Biology, Intelligence, and Technology. MIT Press, Cambridge, MA, USA, 2000.
[21] W.W. Peterson and D.T. Brown. Cyclic codes for error detection. Proceedings of the IRE, 49(1):228–235, Jan 1961.
[22] M. Rubenstein, Ch. Ahler, and R. Nagpal. Kilobot: A low cost scalable robot system for collective behaviors. In Proceedings IEEE International Conference on Robotics and Automation (ICRA), pages 3293–3298, 2012.
[23] R.S. Sutton and A.G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.
[24] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005.
[25] Trek SA. Ai-ball specs, 2014.


