<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Software Agents for Autonomous Robots: the Eurobot 2006 Experience</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincenzo Nicosia</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Concetto Spampinato</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Corrado Santoro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>for the Eurobot DIIT Team</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Facoltà di Informatica - Dipartimento di Matematica e Informatica Viale A. Doria</institution>
          ,
          <addr-line>6 - 95125, Catania</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Facoltà di Ingegneria - Dipartimento di Ingegneria Informatica e delle Telecomunicazioni</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The Eurobot DIIT Team also includes Roberto Di Salvo</institution>
          ,
          <addr-line>Andrea Nicotra, Luca Nicotra, Massimiliano Nicotra, Stefano Palmeri, Francesco</addr-line>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Università di Catania</institution>
        </aff>
      </contrib-group>
      <fpage>90</fpage>
      <lpage>95</lpage>
      <abstract>
        <p>Agent-based software architectures have been used and exploited in many application fields. In this paper, we report our experience of using intelligent agents for an unusual task: controlling an autonomous robot playing a kind of “golf” game in an international robotics competition. Driving a real robot is a practical application field for software agents, because different subsystems need to be controlled and synchronised in order to realize a global game strategy: cooperating agents are a natural fit for this task. Since this application requires a soft real-time platform, to guarantee fast and reliable actions, as well as an effective communication system, to gather feedback from sensors and to issue commands to actuators, we chose Erlang as the programming language. A two-layer multi-agent system was thus designed and realized, composed of a lower layer, hosting agents that take care of the interface with sensors and actuators, and a higher layer, where agents are in charge of the “intelligent” activities related to the game strategy.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Software agents are autonomous entities that, living in a
virtual world, are in charge of accomplishing the goals they
are programmed for. In doing so, agents interact with the
environment in which they live, by sensing its state and acting
upon it, in order to achieve their goals. For these reasons, they
are often called “software robots”.</p>
      <p>
        In spite of this similarity between (software) agents and
(real) robots, agents, and above all multi-agent systems, are
mainly exploited in building complex software systems and
applications requiring intelligence, flexibility, interoperability,
etc., while robotics is mostly a matter of research on
real-time and control systems. However, when an (autonomous)
robot needs some intelligence to perform its activities in a
more efficient and effective manner, the use of agent
technology seems a natural choice [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>The issue is that, in these cases, agents have to face the
problems related to interfacing with physical sensors and
actuators, which connect the computer system to a physical
environment that also changes over time. Therefore, an
agent-enabled robot not only has to tackle the problems related
to the direct use of input/output ports, acquisition and driving
boards, serial ports, etc., but it also has to take into account
the fact that the scenario is time-constrained. In fact, as is
known, a piece of information acquired from sensors (e.g. the
position of the robot or of its arm) has a deadline after which
the data become stale and no longer useful, unless a fresh value
is obtained. These problems are well known in the area of
real-time systems, where they are solved by means of platforms
and/or operating systems that regulate program execution, in
terms of process/task scheduling, race-condition and delay
control, in order to guarantee that deadlines are met.</p>
      <p>
        Since such real-time support is also needed when an
agent-based system is used to control robot activities, the
traditional and well-known agent platforms, which are mainly
based on Java, cannot be employed: as is known, the main
problem of Java is its garbage collector, which introduces
unpredictable latencies that prevent any attempt to build a
time-constrained system. Indeed, the RTSJ specification [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] provides a set of classes and some programming rules
that allow the realization of real-time Java systems, but the
specification introduces hard constraints on object allocation
and referencing that require an existing Java program (and thus
an agent platform) to be rewritten in order to make it
RTSJ-compliant [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        In the context of agents and real-time systems, a language
that features some interesting characteristics is Erlang [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It is a functional and symbolic programming language
that has been proved to be suitable for the implementation of
multi-agent and intelligent systems [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ],
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]; moreover, since the Erlang runtime system
provides soft real-time capabilities [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] (a system is called soft real-time if it takes deadlines
into account, but a missed deadline has no serious
consequences [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]), it also seems well suited to the realization of an
autonomous robot controlled by autonomous agents. In this
context, this paper describes the authors’ experience in
designing and implementing an autonomous robot for the
Eurobot 2006 competition (http://www.eurobot.org;
http://pciso.diit.unict.it/~eurobot) by means of a multi-agent
system written in the Erlang programming language. A layered
multi-agent system has been designed, composed of two layers:
a back-end (lower layer), comprising agents performing the
interface with the robot’s physical sensors and actuators and
handling low-level control activities; and a front-end (upper
layer), hosting agents dealing with the game strategy. Thanks
to this layered architecture, hardware-level interactions and
intelligent activities are clearly decoupled, making the design
and implementation of the software system easier, and also
allowing the programmer to easily reuse some parts and/or to
improve or change the functionalities of the system.
      </p>
      <p>The paper is structured as follows. Section II describes the
game that robots have to play at Eurobot 2006. Section III
illustrates the basic hardware and mechanical structure of
the robot developed. Section IV deals with the software
architecture of the control system of the robot, describing the
agents composing the system, their role and their activities.
Section V discusses some implementation issues. Section VI
reports our conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>II. THE GAME AT EUROBOT 2006</title>
      <p>Eurobot is an international robotics competition which
involves students and amateurs in challenging and entertaining
robot games. The main aim of the event is to encourage the
sharing of technical knowledge and creativity among students
and young people from Europe and, in the last two editions,
from all around the world.</p>
      <p>Every year a different robotic game is chosen, so that all
teams start from the same initial status and new teams are
stimulated to participate. Here we report an overview of the
rules for the 2006 edition of Eurobot, when the selected game
was “Funny Golf”, a simplified version of golf in which
robots have to search for balls in the play-field and put them
into holes of a predefined colour.</p>
      <sec id="sec-2-1">
        <title>A. Field and Game Concepts</title>
        <p>As Figure 1 shows, the play-field is a green rectangle of
210x300 cm, surrounded by a wooden border. The borders on
the short sides of the field have a red (resp. blue) central stripe
which delimits the starting area for each robot. The field has
28 holes, 14 of them encircled by red rings and the others by
blue rings. A total of 31 white balls and 10 black
balls are available during the game. Fifteen white balls and
two black balls are placed in the playing area at predefined
positions, while four more black balls are randomly placed
into holes, two for each colour. The remaining balls (sixteen
white and four black) can be released by automatic ejection
mechanisms positioned at each corner of the field. Finally,
four yellow “totems” are positioned in the field and act both
as obstacles for the robots and as switches for the ball-ejection
mechanisms.</p>
        <p>Robots must be completely autonomous: any kind of
communication with the robot, whether wired or wireless, is
forbidden during matches. Robots have spatial limits, in terms of
height, perimeter and so on, and have to pass a homologation
test before being accepted for the competition. Each robot
can also use any kind of positioning and obstacle-avoidance
system, and supports are provided at the borders of the playing
area to place (homologated) beacons, if needed.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>B. Match Rules</title>
      <p>This edition of Eurobot took place in Catania, Italy.
Before starting, each robot is assigned a colour, either red
or blue. Robots start from the border opposite to their playing
area, i.e. in the opponent’s field, and at least one side of
the robot must touch the starting area (short border of the
play-field). After robots are placed into the field and all setup
procedures by team members are over, the referees choose the
positions of totems and black balls, by means of a random
selection. When all the components in the play-field are set
up, one of the referees gives the start signal and robots can
play. Each robot has to put as many white balls as possible into
its holes in a time of 90 seconds. Robots can also put black
balls into opponent’s holes, suck them out of their holes, or
even suck white balls out of opponent’s holes. There is no
restriction about strategies or techniques adopted in order to
search, catch, release and suck out balls. It is not allowed to
hit the other robot or to obstruct or damage it in any way.
Nor is it permitted to damage the playing area or the playing
objects (such as balls, holes, totems or ejection mechanisms).
The ejection mechanisms can be triggered by touching a totem
for a given amount of time: this closes a simple electric circuit
and allows the balls in the ejector to be released. At the end of
the match, each white ball in the correct hole counts as one
point, and the robot with the highest score wins.</p>
    </sec>
    <sec id="sec-4">
      <title>III. THE DIIT TEAM ROBOT</title>
      <p>Building an autonomous robot to play “Funny Golf” is not
a trivial task, since different subsystems are needed to perform
ball searching, catching and putting, and many physical
constraints are imposed by the game rules themselves. The
following subsections describe the robot realized by the DIIT
Team (“DIIT” stands for Dipartimento di Ingegneria Informatica
e delle Telecomunicazioni), which participated (for the first
time) in the 2006 edition of Eurobot.</p>
      <sec id="sec-4-1">
        <title>A. The Core</title>
        <p>An embedded VIA 900 MHz CPU is the core of the robot.
We used a motherboard produced by AXIOM Inc. which
incorporates Ethernet, a parallel port, 4 serial ports, USB, an
IDE controller and other amenities (such as a PC/104 bus, not
used in our configuration). The operating system is
Debian GNU/Linux (Etch), with kernel 2.6.12 and glibc
2.3.5. GNU/Linux was selected because of its stability and
robustness, which are important features when driving a robot.</p>
      </sec>
      <sec id="sec-4-2">
        <title>B. Locomotion System</title>
        <p>In order to guarantee fast movements, we decided to use a
locomotion system based on two independent double wheels,
driven by DC motors. The wheel diameter is small enough to
allow fast rotation and large enough to avoid falling into holes.
The DC motors are directly connected to a motor controller,
driven by an RS232 serial line. The controller allows a different
speed to be set for each wheel, in both the forward and
backward directions. Each wheel is connected to an optical
encoder, driven by a serial-mouse circuitry, which feeds back
to the software system information about the real rotation
speed and position of the wheel. This information is then used
by the Motion Control agent to adjust the speed and the
trajectory.</p>
      </sec>
      <sec id="sec-4-3">
        <title>C. Vision</title>
        <p>Searching for balls in the playing area requires some kind
of vision system. We chose a simple USB webcam capturing
video frames at a rate of about 4 frames/sec, still
enough to guarantee an accurate and fast analysis of the objects
in the field. The webcam is able to “view” the field from 30
to 160 centimetres in front of the robot, with a visual angle
of about 100 degrees in total. Frames grabbed by the webcam
are passed to the “Object Detector” agent, which filters them
to find balls (both black and white) and holes (both red and
blue).</p>
      </sec>
      <sec id="sec-4-4">
        <title>D. Catching and putting balls</title>
        <p>Once balls are detected, it is necessary to put them,
somehow, into the right hole. We decided to suck balls using a fan,
and to choose where to put them using a simple selector, driven
by a servo-motor. Balls are stored in a small buffer if they are
white and the buffer has enough space, or ejected if they
are black or if the buffer is full. The fan is powerful enough to
suck balls at a distance of about 12 centimetres from the front
side of the robot, and it is also able to suck balls out of holes
when a special small bulkhead on the front side is closed. A
simple release mechanism, which uses a servo-motor, allows
balls to be dropped down to the final piece of the buffer and
to fall into a hole.</p>
      </sec>
      <sec id="sec-4-5">
        <title>E. Sensors and Positioning</title>
        <p>Many sensors are used on the robot. First of all,
a colour sensor for balls is installed in the ball selector,
to recognise whether a sucked ball is white or black. A complex
system of proximity sensors is installed on the bottom side
of the robot, to recognise holes when the robot passes over
them and to allow smart, fine positioning during the
ball-putting phase. A presence sensor (made of a simple
LED/photo-resistor pair) is placed in the final part of the buffer,
to detect the presence of a ball ready to be dropped into a
hole. The same sensor is used to detect when the ball has
been successfully put into a hole.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>IV. THE ROBOT’S SOFTWARE ARCHITECTURE</title>
      <p>Given the robot structure illustrated in the previous Section,
it is clear that the implementation of the control system
has to face a problem that is not present in traditional
(software-only) multi-agent systems: the interface with
physical sensors and actuators. For this reason, the basic software
architecture of the robot, which is sketched in Figure 3, is
composed of two layers: (i) a lower one, called the back-end,
including reactive-only agents responsible for direct
interaction with the hardware, and (ii) a higher layer, called the
front-end, hosting the “robot’s intelligence” by means of a set
of agents implementing the artificial vision system, the game
strategy, the motion control, etc., and interacting with the
back-end’s agents in order to sense and act upon the environment.
All of these agents comply with an ad-hoc model which,
together with the details of the functionality of the overall
system, is described in the following subsections.</p>
      <sec id="sec-5-1">
        <title>A. Agent Model</title>
        <p>
          As reported in Section I, due to real-time requirements
and other peculiarities of a robotic application, well-known
Java-based agent platforms cannot be employed; therefore,
according to the authors’ past research work [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ],
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], we decided to use the Erlang language [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ],
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] for the development of the robot’s software system. In
addition to its soft real-time features, Erlang has a concurrent
and distributed programming model that perfectly fits the
model of multi-agent systems: an Erlang application is in fact
composed of a set of independent processes, each having its
own state, sharing nothing with other processes and
communicating only by means of message passing. Such processes
can all be local (i.e. on the same PC) or spread over a computer
network; this is transparent to the application because the
language constructs for sending and receiving messages do not
change whether the interacting processes are local or remote.
        </p>
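        <p>As an illustration of this communication style (a minimal sketch with names of our own, not code from the robot), the following Erlang module implements an agent as a process whose state lives in its recursion and which interacts only by message passing; the very same send/receive code works whether the peer process is local or on a remote node:</p>

```erlang
-module(ping_agent).
-export([start/0, loop/0]).

%% Spawn an agent process; the returned Pid is used the same way
%% whether the process is local or on a remote node.
start() -> spawn(?MODULE, loop, []).

%% The agent shares nothing with other processes: any state it had
%% would be carried through the arguments of the recursive call.
loop() ->
    receive
        {ping, From} ->
            From ! pong,      % reply by message passing only
            loop();
        stop ->
            ok
    end.
```

        <p>A client simply writes <monospace>Pid = ping_agent:start(), Pid ! {ping, self()}</monospace> and waits for <monospace>pong</monospace>; with a Pid obtained from a remote node, the same two lines apply unchanged.</p>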
        <p>Given these features and the requirements of the robot
control application, a suitable agent model has been developed,
based on two abstractions called BasicFSM and
PeriodicFSM. The former, BasicFSM, is essentially a
finite-state machine model, in which transitions are triggered by
either the arrival of a message or the elapsing of a given
timeout, and a specified per-state activity is executed
(one-shot) when a new state is reached. The latter, PeriodicFSM, is
instead a finite-state machine in which transitions are activated
only by the arrival of a message, while the per-state activity
is executed, once a state is reached, periodically, according
to a fixed time period and within a deadline, which is equal
to the period itself.</p>
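        <p>The two abstractions can be sketched in a few lines of Erlang (this is our illustrative code, not the actual library: the handler signature is an assumption). A BasicFSM waits for a message or a state timeout and runs the per-state activity once on entry; a PeriodicFSM makes transitions only on messages and otherwise re-runs the per-state activity every <monospace>Period</monospace> milliseconds, the period itself acting as the deadline:</p>

```erlang
-module(fsm_sketch).
-export([basic/3, periodic/3]).

%% Handler(State, Event) -> {NewState, Activity} where Activity is a
%% zero-arity fun implementing the per-state activity.

%% BasicFSM: the activity runs one-shot when the new state is entered.
basic(State, Timeout, Handler) ->
    Event = receive Msg -> Msg after Timeout -> timeout end,
    {NewState, Activity} = Handler(State, Event),
    Activity(),                              % one-shot per-state activity
    basic(NewState, Timeout, Handler).

%% PeriodicFSM: transitions fire only on messages; while no message
%% arrives, the activity of the current state runs once per period.
periodic(State, Period, Handler) ->
    receive
        Msg ->
            {NewState, _} = Handler(State, Msg),
            periodic(NewState, Period, Handler)
    after Period ->
            {_, Activity} = Handler(State, tick),
            Activity(),                      % periodic activity, deadline = Period
            periodic(State, Period, Handler)
    end.
```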
        <p>As will be illustrated in the following, the BasicFSM model
is used for front-end agents, while the PeriodicFSM model
is essentially exploited by the agents interacting with sensors
and actuators, which thus run in the back-end.</p>
      </sec>
      <sec id="sec-5-2">
        <title>B. The Back-End</title>
        <p>As Figure 3 illustrates, the back-end layer is composed
of the following agents: Motion Driver, RS485 Management,
Start/Stop Control, Ball Control and Hole Detector. All of
these agents use the PeriodicFSM model, but only the first
two are directly connected with hardware resources.</p>
      </sec>
      <sec id="sec-5-3">
        <title>The Back-End Agents</title>
        <p>The Motion Driver agent is in charge of driving the wheel
motors and gathering feedback from the optical encoders. It
basically handles messages (sent by front-end agents) specifying
the speeds to set for the left and right wheels, forwarding them
(after measurement-unit conversion) to the motor controller
connected through the RS232 line. On the other hand, its
periodic activity entails receiving the feedback from the optical
encoders (i.e. the tick count), acquired through another RS232
line, and then computing the tick frequency, thus evaluating the
real speed of the wheels: the obtained value is used to adjust
the value(s) sent to the motor controller in order to make each
wheel reach the desired speed (this is obtained by means of a
proportional-integral-derivative software controller).</p>
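        <p>The speed-adjustment step amounts to a textbook proportional-integral-derivative update. As a sketch (module and function names are ours, and the gains shown in the usage are purely illustrative; the real gains were obviously tuned on the robot):</p>

```erlang
-module(pid_sketch).
-export([step/4]).

%% One PID step: given the target and the measured wheel speed, the
%% accumulated {Integral, PreviousError} state and the {Kp, Ki, Kd}
%% gains, return the corrected command and the new controller state.
step(Target, Measured, {Integral, PrevErr}, {Kp, Ki, Kd}) ->
    Err = Target - Measured,
    I = Integral + Err,            % accumulate the error (integral term)
    D = Err - PrevErr,             % error variation (derivative term)
    Cmd = Kp * Err + Ki * I + Kd * D,
    {Cmd, {I, Err}}.
```

        <p>The Motion Driver would call <monospace>step/4</monospace> once per period for each wheel, feeding back the measured speed derived from the encoder tick frequency.</p>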
        <p>The RS485 Management agent is responsible for driving two
external boards connected to the PC through the same RS485
serial bus: a controller for servo-motors and a board offering
a number of digital I/O lines. Since each servo-motor
and each I/O line is used by different agents, the RS485
Management agent acts as a de-/multiplexer for actions and
sensed data. Its periodic activity is the sampling of the digital
inputs, by means of request/reply transactions carried in serial
messages exchanged with the I/O board; the polled data are
stored in the agent’s state in order to make them available for
requests coming from other agents. In addition, the RS485
Management agent can receive messages containing commands
to be sent to a servo-motor, through the servo-controller; in
particular, each command specifies the servo-motor to drive and
the rotation angle to be set.</p>
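        <p>The de-/multiplexing role can be sketched as a process that keeps the last polled inputs in its state and serves queries from the other agents (illustrative code under our naming; the serial-bus side of the real agent is omitted):</p>

```erlang
-module(rs485_sketch).
-export([loop/1]).

%% The state is a map of the latest sampled I/O lines; agents query
%% by line id. Servo commands would be forwarded to the RS485 serial
%% bus, which is not modelled here.
loop(Inputs) ->
    receive
        {poll, NewInputs} ->                 % result of the periodic sampling
            loop(NewInputs);
        {get, Line, From} ->                 % demultiplex a sensed line
            From ! {line, Line, maps:get(Line, Inputs, undefined)},
            loop(Inputs)
    end.
```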
        <p>The Start/Stop Control agent is a reactive one that
periodically queries the RS485 Management agent in order to
check whether the “start” or “stop” buttons have been pushed.
On this basis, it sends appropriate start/stop messages to the
Strategy agent (see below) in order to activate (resp. block) its
behaviour when a match begins (resp. ends). Since the duration
of a match is fixed (90 seconds), this agent also embeds a timer
that, armed after a start, automatically sends a stop message
when the 90 seconds have elapsed.</p>
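        <p>The embedded match timer maps directly onto <monospace>erlang:send_after/3</monospace>; a sketch (the module name and message atoms are ours, not taken from the robot’s code):</p>

```erlang
-module(match_timer).
-export([start/1]).

%% On the start button, notify the Strategy agent immediately and arm
%% a timer that delivers the stop message when the 90 s match is over.
start(StrategyPid) ->
    StrategyPid ! start,
    erlang:send_after(90 * 1000, StrategyPid, stop).
```

        <p>The returned timer reference can be cancelled with <monospace>erlang:cancel_timer/1</monospace>, should the stop button be pressed before the 90 seconds elapse.</p>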
        <p>The Ball Control agent is responsible for managing the
ball-sucking system, the buffer and the ball-release system.
During its periodic activity, it queries the RS485 Management
agent in order to check the input lines signalling that a new
ball has been sucked: if this event occurs, on the basis of
the colour of the ball (detected through a sensor connected to
another digital input line), it drives the sucking system’s arm
servo-motor in order to put the ball in the buffer, if the
ball is white and the buffer is not full, or to throw the ball
away, if the ball is black or the buffer is full. This agent
also keeps track of the number of balls in the buffer, information
that, when queried by the Strategy agent, is used by the latter to
control robot behaviour. As for ball release, the Ball Control
agent, upon a proper command message, interacts with
the RS485 Management agent and thus drives the servo-motor
controlling the release of a ball. Finally, by checking the status
of another digital input line, the Ball Control agent can tell
whether a released ball has been successfully put into a
hole.</p>
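        <p>The routing decision above reduces to a small pure function (a sketch under our own naming): white balls are buffered while there is room, everything else is ejected:</p>

```erlang
-module(ball_sketch).
-export([route/3]).

%% Decide where a freshly sucked ball goes, given its colour, the
%% current number of buffered balls and the buffer capacity.
route(white, Count, Capacity) when Count < Capacity -> to_buffer;
route(_Colour, _Count, _Capacity) -> eject.
```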
        <p>The last agent of the back-end, the Hole Detector, reads,
through a proper interaction with the RS485 Management
agent, the data coming from the proximity sensors placed under
the robot for hole detection and positioning. It determines the
position of the robot with respect to the target hole, and
forwards this information to the Strategy agent, which, in turn,
drives the wheels to centre the hole and put the ball into it.</p>
      </sec>
      <sec id="sec-5-4">
        <title>C. The Front-End</title>
        <p>The front-end layer implements the high-level activities that
drive the robot to reach its goal, i.e. placing as many balls
as possible into its holes. This layer is composed of three
agents: Object Detector, Motion Control and Strategy.</p>
      </sec>
      <sec id="sec-5-5">
        <title>The Front-End Agents</title>
        <p>The Object Detector has the task of observing the playing
area, by means of the USB camera, detecting the objects
needed for the game, i.e. balls and holes, and computing their
coordinates with respect to the robot position. Since it uses
a computation-intensive image manipulation algorithm, this is
the sole agent written in C rather than in Erlang: it uses the
OpenCV library [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which provides a set of fast and optimised image
manipulation functions, and proper Erlang-to-C library functions
allow it to interact with the Erlang processes. The algorithm,
whose execution is triggered by a suitable message sent by
the Strategy agent, exploits artificial vision techniques and
performs a series of transformations (i.e. filtering, thresholding,
binarisation) on the RGB planes of each acquired frame in order
to isolate and recognise the required objects. Figure 4 reports
some screen-shots of the functioning of the Object Detector.
In particular, Figures 4a and 4c show two acquired frames,
while Figures 4b and 4d illustrate the filtered images with the
objects (respectively a white ball and two blue holes) detected
by the agent.</p>
        <p>
          The Motion Control agent, which is the only front-end agent
of the PeriodicFSM type, has the task of controlling the robot’s
path: it receives, from the Strategy agent, messages containing
commands for robot positioning, such as go to X,Y or rotate T,
computes the speeds of the wheels needed to reach the target,
and sends these speeds to the Motion Driver agent. Moreover,
in order to ensure that the target is reached, the Motion Control
agent periodically requests from the Motion Driver the tick
counts of the optical encoders and calculates the absolute
position and orientation of the robot [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. These values are then compared with the target,
making subsequent speed adjustments, if necessary (also in
this case, a proportional-integral-derivative software controller
is employed). Another task of the Motion Control agent is
obstacle detection. Since the robot has no sensors to detect
whether an obstacle (e.g. the opponent’s robot, a totem, etc.)
is in front of it, the Motion Control agent checks whether
there is no wheel movement within a certain time window
(given that the wheels’ speeds are greater than zero); if this is
the case, an obstacle-exiting algorithm is started, which entails
moving the robot backwards and then rotating it.
        </p>
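        <p>The stall-based obstacle test amounts to comparing the commanded speeds with the encoder ticks counted over the time window; a hypothetical sketch (our names, not the robot’s code):</p>

```erlang
-module(stall_sketch).
-export([obstacle/2]).

%% The commanded speeds are non-zero but neither encoder has ticked
%% during the last window: assume the robot is pushing an obstacle.
obstacle({SpeedL, SpeedR}, {TicksL, TicksR}) ->
    (SpeedL > 0 orelse SpeedR > 0)
        andalso TicksL =:= 0 andalso TicksR =:= 0.
```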
        <p>[Fig. 5. Strategy Agent Behaviour: a state diagram whose
states are “move the robot beyond the black line”, “search and
gather white balls”, “search for my hole” (entered when at least
one ball is in the buffer), “release the ball and try to reach the
hole” (entered when a hole is detected), and “search for the
opponent’s holes and remove the balls” (entered 60 seconds
after the start); the match ends 90 seconds after the start.]</p>
        <p>The last agent, Strategy, is the “brain” of the robot. Being
a BasicFSM agent, it is responsible for collecting and putting
together information about the environment and the robot
subsystems in order to obtain a valuable and effective playing
strategy. Even though the field is mostly immutable (except for
the position of the totems, which are set before each match)
and many of the balls involved are in fixed positions, we chose
to implement an intelligent and adaptive strategy instead of a
simple “fixed-path” one. For this reason, the Strategy agent has
to adaptively choose the right action to perform at each
moment, elaborating the data coming from the other agents. As
Figure 5 illustrates, the very first step of the implemented
strategy is “move beyond the first black line”, since this
guarantees the collection of at least one point (a robot that does
not pass the first black line obtains no points at the end of the
match). This is performed through suitable commands sent to
the Motion Control agent. When the black line has been passed,
the main strategy loop begins. First the robot looks for white
balls and sucks them into the buffer: if a white ball is seen by
the Object Detector, then the Motion Control agent is issued
the commands needed to reach the ball; on the other hand, if
no ball has been detected, the Strategy agent searches
elsewhere, rotating by a random angle in order to look at other
zones of the field. When a ball has been sucked and the Ball
Control agent reports the presence of at least one white ball in
the buffer, the Strategy agent starts to search for a right hole to
drop it into (i.e. a hole of the colour assigned to the team,
either red or blue), looking at the messages from the Object
Detector and moving toward a hole as soon as it has been
found. When the selected hole is no longer visible (i.e. outside
the camera scope), a ball is released and, by means of messages
coming from the Hole Detector, a sequence of commands for
fine positioning is sent to Motion Control. If the hole is centred
and the ball goes into it, the Ball Control agent sends a “Ball
Successfully Dropped” message, and the Strategy agent decides
either to search for another hole, if more white balls are present
in the buffer, or to look for more white balls. If the ball is not
dropped within a given amount of time (for example because of
errors in fine positioning), the Strategy agent searches for
another hole and tries to drop the ball into it. Finally, in the
last 30 seconds of the game, the Strategy agent tries to find the
opponent’s holes to suck white balls out of them.</p>
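        <p>The transitions of Figure 5 can be written down as the pure transition function of a BasicFSM. The following sketch uses state and event names of our own choosing, based on the figure:</p>

```erlang
-module(strategy_sketch).
-export([next/2]).

%% State transitions of the Strategy agent, after Fig. 5.
next(start,        line_passed)                -> search_balls;
next(search_balls, {buffered, N}) when N >= 1  -> search_hole;
next(search_hole,  hole_detected)              -> release_ball;
next(release_ball, {dropped, 0})               -> search_balls;   % buffer empty
next(release_ball, {dropped, N}) when N >= 1   -> search_hole;    % more balls left
next(_State,       sixty_seconds_elapsed)      -> raid_opponent_holes;
next(State,        _Event)                     -> State.          % otherwise stay
```

        <p>The per-state activities (issuing commands to Motion Control, querying Ball Control, and so on) would be attached to each state by the BasicFSM library, with the transition logic kept declarative as above.</p>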
      </sec>
    </sec>
    <sec id="sec-6">
      <title>V. IMPLEMENTATION ISSUES</title>
      <p>
        As previously said, with the exception of the Object
Detector, the system has been implemented using the Erlang
language. However, even though our research group has
realized a FIPA-compliant Erlang agent platform
(called eXAT [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]), we did not
use it in order to avoid overhead introduced by platform’s
components for inference, behaviour handling, standard FIPA
messaging, etc. This is required in order to have a fast and
effective support for agents, rather than the possibility of
interacting with other external agents (according to Eurobot
rules, the robot must be autonomous and not connected to
any network). To this aim, each agent of the robot has been
encapsulated in an Erlang process and a suitable library has
been developed to support the BasicFSM and PeriodicFSM
deadline-aware abstractions. Message passing has been
realized by means of the native Erlang constructs to perform
interprocess communication (which are designed to be very fast):
this resulted in an optimised code able to meet to real-time
requirements of the target application.
      </p>
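      <p>A minimal sketch of this process-per-agent scheme, using only
native Erlang constructs (the module and message shapes are
hypothetical; this is not the actual BasicFSM/PeriodicFSM
library):</p>

```erlang
%% Each agent is an Erlang process whose mailbox carries inter-agent
%% messages via the native '!' send and 'receive' constructs; the
%% 'after' clause gives the loop a periodic, deadline-aware flavour
%% in the spirit of PeriodicFSM. All names are hypothetical, not
%% the actual team library.
-module(agent_sketch).
-export([start/2, loop/2]).

%% Spawn an agent process driven by a state-transition function.
start(StateFun, State0) ->
    spawn(?MODULE, loop, [StateFun, State0]).

loop(StateFun, State) ->
    receive
        {msg, From, Content} ->
            loop(StateFun, StateFun({From, Content}, State));
        stop ->
            ok
    after 100 ->
        %% Period elapsed with no message: fire a timeout event.
        loop(StateFun, StateFun(timeout, State))
    end.
```

      <p>An agent is then started as
<monospace>Pid = agent_sketch:start(Fun, State0)</monospace> and other
agents communicate with it via
<monospace>Pid ! {msg, self(), Content}</monospace>.</p>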
    </sec>
    <sec id="sec-7">
      <title>VI. CONCLUSIONS</title>
      <p>This paper has described the architecture of an autonomous
mobile robot, developed by the DIIT Team of the University
of Catania to participate in the Eurobot competition. A
multi-agent system has been employed for this purpose, composed
of several agents in charge both of interacting with physical
sensors and actuators and of supporting the game strategy of the
robot. A layered architecture has been designed to clearly
separate these two aspects—physical-world interface and
intelligence—and to favour design, modularity and reuse. Due to
real-time constraints, the system has been implemented in the
Erlang language, by means of a purpose-built library supporting
the abstractions needed to use agents in a robotic environment.
This allowed us to develop fast code able to effectively support
the robot’s activities.</p>
    </sec>
    <sec id="sec-8">
      <title>VII. ACKNOWLEDGEMENTS</title>
      <p>The authors warmly thank all the other members
of the Eurobot DIIT Team, who gave a terrific and fundamental
contribution to the realization of the robot described in this
paper and made this experience not only very useful but also
great fun.</p>
      <p>Pellegrino, Matteo Pietro Russo, Carmelo Sciuto, Danilo
Treffiletti and Carmelo Zampaglione.</p>
      <p>Moreover, the authors also wish to thank the official
sponsors of the Eurobot DIIT Team, Siatel Srl (from
Catania, Italy) and Erlang Training &amp; Consulting Ltd11
(from London, UK), whose support contributed to making our
dream come true.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] Erlang Language Home Page, http://www.erlang.org,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] OpenCV Library, http://opencvlibrary.sourceforge.net/,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Armstrong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Virding</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Williams</surname>
          </string-name>
          , “
          <article-title>Implementing a Functional Language for Highly Parallel Real Time Applications</article-title>
          ,”
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Armstrong</surname>
          </string-name>
          , “
          <article-title>The development of Erlang</article-title>
          ,” in
          <source>Proceedings of the ACM SIGPLAN International Conference on Functional Programming</source>
          , ACM Press,
          <year>1997</year>
          , pp.
          <fpage>196</fpage>
          -
          <lpage>203</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Armstrong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wikstrom</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Virding</surname>
          </string-name>
          ,
          <source>Concurrent Programming in Erlang, 2nd Edition</source>
          . Prentice-Hall,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Bollella</surname>
          </string-name>
          , Gosling, Brosgol, Dibble, Furr, Hardin, and
          <string-name>
            <surname>Turnbull</surname>
          </string-name>
          ,
          <source>The Real-Time Specification for Java</source>
          . Addison-Wesley,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Borenstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. R.</given-names>
            <surname>Everett</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <article-title>Where am I? - Systems and Methods for Mobile Robot Positioning</article-title>
          . University of Michigan, USA, http://www-personal.engin.umich.edu/johannb/position.htm,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Corsaro</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>Design Patterns for RTSJ Application Development</article-title>
          ,”
          <source>in Proceedings of 2 nd JTRES 2004 Workshop, OTM'04 Federated Conferences. LNCS 3292</source>
          , Springer, Oct. 25-29,
          <year>2004</year>
          , pp.
          <fpage>394</fpage>
          -
          <lpage>405</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Di Stefano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “ERESYE: Artificial Intelligence in Erlang Programs,” in
          <source>Erlang Workshop at the 2005 Intl. ACM Conference on Functional Programming (ICFP 2005)</source>
          , Tallinn, Estonia, 25 Sept.
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Di Stefano</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>eXAT: an Experimental Tool for Programming Multi-Agent Systems in Erlang</article-title>
          ,” in
          <source>AI*IA/TABOO Joint Workshop on Objects and Agents (WOA 2003)</source>
          , Villasimius, CA, Italy, 10-11 Sept.
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] --, “eXAT: A Platform to Develop Erlang Agents,” in
          <source>Agent Exhibition Workshop at Net.ObjectDays 2004</source>
          , Erfurt, Germany, 27-30 Sept.
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] --, “
          <article-title>Designing Collaborative Agents with eXAT</article-title>
          ,” in
          <source>ACEC 2004 Workshop at WETICE 2004</source>
          , Modena, Italy, 14-16 June
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] --, “
          <article-title>On the use of Erlang as a Promising Language to Develop Agent Systems</article-title>
          ,” in
          <source>AI*IA/TABOO Joint Workshop on Objects and Agents (WOA 2004)</source>
          , Torino, Italy, 29-30 Nov.
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] --, “
          <article-title>Supporting Agent Development in Erlang through the eXAT Platform</article-title>
          ,” in
          <source>Software Agent-Based Applications, Platforms and Development Kits</source>
          . Whitestein Technologies
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] --, “
          <article-title>Using the Erlang Language for Multi-Agent Systems Implementation</article-title>
          ,” in
          <source>2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'05)</source>
          , Compiègne, France, 19-22 Sept.
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Dibble</surname>
          </string-name>
          ,
          <source>Real-Time Java Platform Programming</source>
          . Prentice Hall PTR
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>I.</given-names>
            <surname>Infantino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cossentino</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Chella</surname>
          </string-name>
          , “
          <article-title>An agent based multilevel architecture for robotics vision systems</article-title>
          ,” in
          <source>Proceedings of the International Conference on Artificial Intelligence (IC-AI '02)</source>
          , June 24-27, 2002, Las Vegas, Nevada, USA, Volume
          <volume>1</volume>
          ,
          <year>2002</year>
          , pp.
          <fpage>386</fpage>
          -
          <lpage>390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>E.</given-names>
            <surname>Johansson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pettersson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Sagonas</surname>
          </string-name>
          , “
          <article-title>A High Performance Erlang System</article-title>
          ,” in
          <source>2nd International Conference on Principles and Practice of Declarative Programming (PPDP 2000)</source>
          , Sept. 20-22,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Liu</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Layland</surname>
          </string-name>
          , “
          <article-title>Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment</article-title>
          ,”
          <source>JACM</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>46</fpage>
          -
          <lpage>61</lpage>
          , Jan.
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J. W. S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <source>Real-Time Systems</source>
          . Prentice Hall,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C.</given-names>
            <surname>Varela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Abalde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Castro</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Gulias</surname>
          </string-name>
          , “
          <article-title>On Modelling Agent Systems with Erlang</article-title>
          ,” in
          <source>3rd ACM SIGPLAN Erlang Workshop</source>
          , Snowbird, Utah, USA, 22 Sept.
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wellings</surname>
          </string-name>
          ,
          <source>Concurrent and Real-Time Programming in Java</source>
          . Wiley,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>