=Paper= {{Paper |id=Vol-204/paper-15 |storemode=property |title=Software Agents for Autonomous Robots: the Eurobot 2006 Experience |pdfUrl=https://ceur-ws.org/Vol-204/P17.pdf |volume=Vol-204 |dblpUrl=https://dblp.org/rec/conf/woa/NicosiaSS06 }} ==Software Agents for Autonomous Robots: the Eurobot 2006 Experience== https://ceur-ws.org/Vol-204/P17.pdf
             Software Agents for Autonomous Robots:
                   the Eurobot 2006 Experience
Vincenzo Nicosia1, Concetto Spampinato1 and Corrado Santoro2 for the Eurobot DIIT Team
Università di Catania
1 Facoltà di Ingegneria - Dipartimento di Ingegneria Informatica e delle Telecomunicazioni
2 Facoltà di Informatica - Dipartimento di Matematica e Informatica
Viale A. Doria, 6 - 95125, Catania, Italy


   Abstract— Agent-based software architectures have been used and exploited in many application fields. In this paper, we report our experience about using intelligent agents for an unusual task: controlling an autonomous robot playing a kind of “golf” game in an international robotic competition. Driving a real robot is a practical application field for software agents, because different subsystems need to be controlled and synchronised in order to realize a global game strategy: cooperating agents can easily fit the target. Since this application requires a soft real-time platform to guarantee fast and reliable actions, and also a valuable communication system to gain feedback from sensors and to issue commands to actuators, we chose Erlang as programming language. A two-layer multi-agent system was thus designed and realized, composed of a lower layer, hosting agents taking care of the interface with sensors and actuators, and a higher layer, where agents are in charge of “intelligent” activities related to game strategy.
   Keywords— Mobile and Autonomous Robots, Computer Vision, Autonomous Agents, Real-Time Systems, Erlang.

                       I. INTRODUCTION

   Software agents are autonomous entities that, living in a virtual world, are in charge of accomplishing the goal they are programmed for. In doing so, agents interact with the environment where they live, by sensing its state and acting on it, in order to achieve their goal. For these reasons, they are often called “software robots”.
   In spite of this similarity between (software) agents and (real) robots, agents, and above all multi-agent systems, are mainly exploited in realizing complex software systems and applications requiring intelligence, flexibility, interoperability, etc., while the area of robotics is often a matter of research on real-time and control systems. However, when an (autonomous) robot needs some intelligence to perform its activities in a more efficient and effective manner, the use of agent technology seems a natural choice [17].
   The issue is that, in these cases, agents have to face the problems related to the interface to physical sensors and actuators, which connect the computer system with a physical environment that also changes over time. Therefore, an agent-enabled robot has not only to tackle the problems related to the direct use of input/output ports, acquisition and driving boards, serial ports, etc., but it should also take into account the fact that the scenario is time-constrained. In fact, as is known, a piece of information acquired from sensors (e.g. the position of the robot or of its arm) has a deadline after which the data become stale and no longer useful, unless a fresh value is obtained. These problems are well known in the area of real-time systems, and their solution is achieved by means of platforms and/or operating systems that regulate program execution (in terms of process/task scheduling, race conditions and delay control) in order to guarantee that deadlines are met.
   Since such real-time support is also needed when an agent-based system is used to control robot activities, the traditional and well-known agent platforms, which are mainly based on Java, cannot be employed at all: as is known, the main problem of Java is the garbage collector, which introduces unpredictable latencies that prevent any attempt to build a time-constrained system. Indeed, the RTSJ specification [6] provides a set of classes and some programming rules that allow the realization of real-time Java systems, but the specification introduces hard constraints on object allocation and references that require an existing Java program (and thus an agent platform) to be rewritten in order to make it RTSJ-compliant [22], [16], [8].
   In the context of agents and real-time systems, a language that features some interesting characteristics is Erlang [5], [4], [1]. It is a functional and symbolic programming language that has been proved to be suitable for the implementation of multi-agent and intelligent systems [21], [10], [12], [11], [13], [15], [14], [9]; moreover, since the Erlang runtime system is able to provide soft real-time1 capabilities [18], [3], it seems also quite useful for the realization of an autonomous robot controlled by autonomous agents. In this context, this paper describes the authors’ experience in designing and implementing an autonomous robot, for the Eurobot 2006 competition2,3, by means of a multi-agent system written using the Erlang programming language. A layered multi-agent system has been designed, composed of two layers: a back-end (lower layer), comprising agents performing the interface with the robot’s physical sensors and actuators, and handling low-level control activities; and a front-end (upper layer), hosting agents dealing with the game strategy. Thanks to this layered architecture, hardware-level interactions and

   1 A system is called soft real-time if it is able to take into account deadlines, but if a deadline is not met, there are no particular consequences [19], [20].
   2 http://www.eurobot.org
   3 http://pciso.diit.unict.it/~eurobot




intelligent activities are clearly decoupled, making the design and implementation of the software system easier, and also allowing the programmer to easily reuse some parts and/or to improve or change the functionalities of the system.
   The paper is structured as follows. Section II describes the
game that robots have to play at Eurobot 2006. Section III
illustrates the basic hardware and mechanical structure of
the robot developed. Section IV deals with the software
architecture of the control system of the robot, describing the
agents composing the system, their role and their activities.
Section V discusses some implementation issues. Section VI
reports our conclusions.
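To make the notion of stale sensor data concrete, the sketch below shows the Erlang idiom that naturally expresses such deadlines: a receive expression with an after timeout. This is our own illustrative fragment, not code from the robot described in this paper, and the process and message names are hypothetical:

```erlang
%% Illustrative sketch (hypothetical names): ask a sensor process for a
%% reading, but treat the value as stale if it misses its deadline.
read_sensor(SensorPid, DeadlineMs) ->
    SensorPid ! {read, self()},
    receive
        {sensor_value, Value} -> {ok, Value}
    after DeadlineMs ->
        stale    %% deadline missed: the caller must ask for a fresh value
    end.
```

Since the timeout only bounds the wait, a missed deadline degrades the result (a stale reading) without stopping the system, which matches the soft real-time definition given above.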
                                                                                          Fig. 1.   The playing area
                II. THE GAME AT EUROBOT 2006

   Eurobot is an international robotics competition which involves students and amateurs in challenging and amazing robot games. The main target of the event is to encourage the sharing of technical knowledge and creativity among students and young people from Europe and, in the last two editions, from all around the world.
   Every year a different robotic game is chosen, so that all teams start from the same initial status and new teams are stimulated to participate. Here we report an overview of the rules for the 2006 edition of Eurobot4, when the selected game was “Funny Golf”, a simplified version of a golf game where robots had to search balls in the play-field and to put them into holes of a predefined colour.

   4 This edition took place in Catania, Italy.

A. Field and Game Concepts

   As Figure 1 shows, the play-field is a green rectangle of 210x300 cm, surrounded by a wooden border. The borders on the short sides of the field have a red (resp. blue) central stripe which delimits the starting area for each robot. The field has 28 holes, 14 of them encircled by red rings and the others by blue rings. A total of 31 white balls and 10 black balls are available during the game. Fifteen white balls and two black balls are placed into the playing area at predefined positions, while four more black balls are randomly positioned into holes, two for each colour. The remaining balls (sixteen white and four black) can be released by automatic ejection mechanisms positioned at each corner of the field. Finally, four yellow “totems” are positioned in the field and are both obstacles for robots and switches for the ball-ejection mechanisms.
   Robots must be absolutely autonomous: any kind of communication with the robot, either wired or wireless, is not allowed during matches. Robots have spatial limits, in terms of height, perimeter and so on, and have to pass a homologation test before being accepted for the competition. Each robot can also use any kind of positioning and obstacle-avoidance system, and supports are provided at the borders of the playing area to place (homologated) beacons, if needed.

B. Playing Funny Golf

   Before starting, each robot is assigned a colour, either red or blue. Robots start from the border opposite to their playing area, i.e. in the opponent’s field, and at least one side of the robot must touch the starting area (short border of the play-field). After the robots are placed into the field and all setup procedures by team members are over, the referees choose the positions of the totems and black balls by means of a random selection. When all the components in the play-field are set up, one of the referees gives the start signal and the robots can play. Each robot has to put as many white balls as possible into its holes in a time of 90 seconds. Robots can also put black balls into the opponent’s holes, suck them out of their own holes, or even suck white balls out of the opponent’s holes. There is no restriction on the strategies or techniques adopted in order to search, catch, release and suck out balls. It is not allowed to hurt the other robot or to obstruct or damage it in any way. Nor is it permitted to damage the playing area or the playing objects (such as balls, holes, totems or ejecting mechanisms). The ejecting mechanisms can be triggered by touching a totem for a given amount of time: this closes a simple electric circuit and allows the balls in the ejector to be released. At the end of the match, each white ball in the right hole counts as a point, and the robot with the highest score is the winner.

                  III. THE DIIT TEAM ROBOT

   Building an autonomous robot to play “Funny Golf” is not a trivial task, since different subsystems are needed to perform ball searching, catching and putting, and many physical constraints are imposed by the game rules themselves. The following subsections describe the robot realized by the DIIT Team5, which participated (for the first time) in the 2006 Eurobot edition.

   5 “DIIT” means Dipartimento di Ingegneria Informatica e delle Telecomunicazioni.

A. The Core

   An embedded VIA 900 MHz CPU is the core of the robot. We used a motherboard produced by AXIOM Inc. which incorporates Ethernet, parallel port, 4 serial ports, USB, IDE



controller and other amenities (such as a PC/104 bus, not used in our configuration). The operating system used is Debian GNU/Linux (Etch), with kernel 2.6.12 and glibc 2.3.5. GNU/Linux was selected because of its stability and robustness, which are important features when driving a robot.

B. Locomotion System

   In order to guarantee fast movements, we decided to use a locomotion system based on two independent double-wheels, driven by DC motors. The wheel diameter is small enough to allow fast rotation and large enough to avoid the holes. The DC motors are directly connected to a motor controller, driven by an RS232 serial line. The controller allows setting a different speed for each wheel, in both the forward and backward directions. Each wheel is connected to an optical encoder, driven by a serial mouse circuitry, which feeds back to the software system information about the real rotation speed and position of the wheel. This information is then used by the Motion Control agent to adjust the speed and the trajectory.

Fig. 2. The Robot in the playing area
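As an illustration of this feedback loop, the sketch below (our own simplification: the constants and names are invented, and only the proportional term of the speed controller mentioned later in the paper is shown) derives the measured wheel speed from encoder ticks and corrects the motor command accordingly:

```erlang
%% Hypothetical sketch of the encoder feedback computation.
%% Ticks counted over a sampling interval give the measured speed;
%% a proportional correction nudges the motor command toward the target.
-define(TICKS_PER_REV, 64).        %% assumed encoder resolution
-define(WHEEL_CIRC_CM, 15.7).      %% assumed wheel circumference
-define(KP, 0.5).                  %% assumed proportional gain

measured_speed(Ticks, IntervalMs) ->
    Revs = Ticks / ?TICKS_PER_REV,
    %% speed in cm/s over the sampling interval
    Revs * ?WHEEL_CIRC_CM * 1000 / IntervalMs.

correct_command(Command, TargetSpeed, Ticks, IntervalMs) ->
    Error = TargetSpeed - measured_speed(Ticks, IntervalMs),
    Command + ?KP * Error.
```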




C. Vision

   Searching balls in the playing area requires a kind of vision system to find them. We chose to use a simple USB webcam to capture video frames at a rate of about 4 frames/sec, still enough to guarantee an accurate and fast analysis of the objects in the field. The webcam is able to “view” the field from 30 to 160 centimetres in front of the robot, with a visual angle of about 100 degrees in total. Frames grabbed by the webcam are passed to the “Object Detector” agent, which filters them to find balls (both black and white) and holes (both red and blue).
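As discussed later, the image-processing component is written in C; a common way to couple native code with Erlang agents is an Erlang port. The fragment below is purely illustrative: the executable name and the message format are our assumptions, with the C side presumed to frame its replies with 4-byte length prefixes and Erlang's external term format:

```erlang
%% Hypothetical sketch: talk to an external C vision program via a port.
start_detector() ->
    open_port({spawn, "./object_detector"}, [binary, {packet, 4}]).

detect(Port) ->
    port_command(Port, term_to_binary(grab_frame)),
    receive
        {Port, {data, Bin}} ->
            %% assumes the C side encodes results in Erlang's external
            %% term format, e.g. a list of {Type, X, Y} objects
            binary_to_term(Bin)
    after 1000 ->
        timeout
    end.
```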




Fig. 3. Hardware/Software Architecture

D. Catching and putting balls

   Once balls are detected, it is necessary to put them, somehow, into the right hole. We decided to suck balls using a fan, and to choose where to put them using a simple selector, driven by a servo-motor. Balls are saved into a small buffer if they are white and the buffer has enough space, or ejected if they are black or if the buffer is full. The fan is powerful enough to suck balls at a distance of about 12 centimetres from the front side of the robot, and it is also able to suck balls out of holes when a special small bulkhead on the front side is closed. A simple release mechanism, which uses a servo-motor, allows balls to be dropped down to the final part of the buffer and to fall into a hole.

E. Sensors and Positioning

   Many sensors have been installed on the robot. First of all, a colour sensor for balls is installed in the ball selector, to recognise whether a sucked ball is white or black. A complex system of proximity sensors is installed on the bottom side of the robot to recognise holes when the robot drives over them, and to allow smart and fine positioning during the ball putting phase. A presence sensor (made by a simple LED-photo-resistor couple) is placed in the final part of the buffer, to detect the presence of a ball ready to be dropped into a hole. The same sensor is used to detect when the ball has been successfully put into a hole.

           IV. THE ROBOT'S SOFTWARE ARCHITECTURE

   Given the robot structure illustrated in the previous Section, it is clear that the implementation of the system to control it has to face a problem that is not present in traditional (software-only) multi-agent systems: the interface with physical sensors and actuators. For this reason, the basic software architecture of the robot, which is sketched in Figure 3, is composed of two layers: (i) a lower one, called the back-end, including reactive-only agents, responsible for direct interaction with the hardware, and (ii) a higher layer, called the front-end, hosting the “robot’s intelligence” by means of a set of agents implementing the artificial vision system, the game strategy, the motion control, etc., and interacting with the back-end’s agents in order to sense and act on the environment. All of these agents comply with an ad-hoc model which, together with the details on the functionality of the overall system, is described in the following Subsections.
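The interaction between the two layers boils down to plain Erlang message passing: a back-end agent keeps the most recently sensed values in its state and answers queries from front-end agents. A minimal sketch of this pattern, with hypothetical process and message names, is:

```erlang
%% Hypothetical sketch of a back-end agent serving polled sensor data.
%% Sensed is a key/value list such as [{start_button, 0}, {ball_present, 1}].
backend_loop(Sensed) ->
    receive
        {poll, NewSensed} ->                 %% periodic sampling result
            backend_loop(NewSensed);
        {get, Key, From} ->                  %% query from a front-end agent
            Value = case lists:keysearch(Key, 1, Sensed) of
                        {value, {Key, V}} -> V;
                        false -> undefined
                    end,
            From ! {value, Key, Value},
            backend_loop(Sensed)
    end.

%% A front-end agent queries it synchronously, with a bounded wait:
query(Backend, Key) ->
    Backend ! {get, Key, self()},
    receive {value, Key, V} -> V after 100 -> stale end.
```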


A. Agent Model

   As reported in Section I, due to the real-time requirements and other peculiarities of a robotic application, well-known Java-based agent platforms cannot be employed; therefore, in line with the authors’ past research work [21], [10], [12], [11], [13], [15], [14], [9], we decided to use the Erlang language [5], [4], [1] for the development of the robot’s software system. In addition to its soft real-time features, Erlang has a concurrent and distributed programming model that perfectly fits the model of multi-agent systems: an Erlang application is in fact composed of a set of independent processes, each having a state, sharing nothing with other processes and communicating only by means of message passing. Such processes can be all local (i.e. on the same PC) or spread over a computer network; this is transparent to the application because the language constructs for sending and receiving messages do not change should the interacting processes be local or remote.
   Given these features and the requirements of the robot control application, a suitable agent model has been developed, based on two abstractions called BasicFSM and PeriodicFSM. The former, BasicFSM, is essentially a finite-state machine model in which transitions are triggered by either the arrival of a message or the elapsing of a given timeout, and a specified per-state activity is executed (one-shot) when a new state is reached. The latter, PeriodicFSM, is instead a finite-state machine in which transitions are activated only by the arrival of a message, while the per-state activity is executed periodically when a state is reached, according to a fixed time period and within a deadline, which is equal to the period itself.
   As will be illustrated in the following, the BasicFSM model is used for front-end agents, while the PeriodicFSM model is essentially exploited for those agents interacting with sensors and actuators and thus running in the back-end.

B. The Back-End

   As Figure 3 illustrates, the back-end layer is composed of the following agents: Motion Driver, RS485 Management, Start/Stop Control, Ball Control and Hole Detector. All of these agents use the PeriodicFSM model, but only the first two are directly connected with hardware resources.
   The Motion Driver agent is in charge of driving the wheel motors and gathering feedback from the optical encoders. It basically handles messages (sent by front-end agents) specifying the speed to set for the left and right wheel, forwarding it (after measurement unit conversion) to the motor controller connected through the RS232 line. On the other hand, its periodic activity entails receiving the feedback from the optical encoders (i.e. tick counts), acquired through another RS232 line, and then computing the tick frequency, thus evaluating the real speed of the wheels: the obtained value is used to adjust the value(s) sent to the motor controller in order to make each wheel reach the desired speed6.
   The RS485 Management agent is responsible for driving two external boards connected to the PC through the same RS485 serial bus: a controller for servo-motors and a board offering a certain number of digital I/O lines. Since each servo-motor and each I/O line is used by different agents, the RS485 Management acts as a de-/multiplexer for actions and sensed data. Its periodic activity is the sampling of digital inputs, by means of a request/reply transaction through serial messages exchanged with the I/O board; polled data are thus stored in the agent’s state in order to make them available for requests coming from other agents. In addition, the RS485 Management is able to receive messages containing commands to be sent to a servo-motor, through the servo-controller; in particular, each command specifies the servo-motor to drive and the rotation angle to be set.
   The Start/Stop Control agent is a reactive one that periodically queries the RS485 Management in order to check whether the “start” or “stop” buttons have been pushed. On this basis, it sends appropriate start/stop messages to the Strategy agent (see below) in order to activate (resp. block) its behaviour when a match begins (resp. ends). Since the duration of a match is fixed (90 seconds), this agent also embeds a timer that, armed after a start, automatically sends a stop message when the 90 seconds have elapsed.
   The Ball Control agent is responsible for managing the ball sucking system, the buffer and the ball release system. During its periodic activity, it queries the RS485 Management agent in order to check the input lines signalling that a new ball has been sucked: if this event occurs, on the basis of the colour of the ball7, it drives the sucking system’s arm servo-motor in order to put the ball in the buffer (if the ball is white and the buffer is not full) or to throw the ball away (if the ball is black or the buffer is full). This agent also holds the number of balls in the buffer, information that, queried by the Strategy agent, is used by the latter to control the robot’s behaviour. As for ball release, the Ball Control agent, following a proper command message, is able to interact with the RS485 Management agent and thus drive the servo-motor controlling the release of a ball. Finally, by checking the status of another digital input line, the Ball Control agent is able to understand whether a released ball has been successfully put into a hole.
   The last agent of the back-end, the Hole Detector, reads, through a proper interaction with the RS485 Management agent, the data coming from the proximity sensors placed under the robot for hole detection and positioning. It is able to understand the position of the robot with respect to the hole to reach, and can thus forward this information to the Strategy agent, which, in turn, will drive the wheels to centre the hole and put the ball into it.

   6 This is obtained by means of a proportional-integral-derivative software controller.
   7 The colour is detected through a sensor connected to another digital input line.

C. The Front-End

   The front-end layer implements the high-level activities that drive the robot to reach its goal, i.e. placing the most quantity



Fig. 4. Recognition by the Object Detector agent: panels (a)-(d)



of balls into its holes. This layer is composed of three agents: Object Detector, Motion Control and Strategy.
   The Object Detector has the task of observing the playing area, by means of a USB camera, detecting the objects needed for the game, i.e. balls and holes, and computing their coordinates with respect to the robot’s position. Since it uses a computation-intensive image manipulation algorithm, this is the sole agent written in C and not in Erlang8. This algorithm, whose execution is triggered by a suitable message sent by the Strategy agent, exploits artificial vision techniques and performs a series of transformations (i.e. filtering, thresholding, binarisation) on the RGB planes of each acquired frame in order to isolate and recognise the required objects. Figure 4 reports some screen-shots of the functioning of the Object Detector. In particular, Figures 4a and 4c show two acquired frames, while Figures 4b and 4d illustrate the filtered images with the objects (respectively a white ball and two blue holes) detected by the agent.

Fig. 5. Strategy Agent Behaviour (states: move the robot beyond the black line; search and gather white balls; search for my hole; release the ball and try to reach the hole; search for opponent's holes and remove the balls. Transitions: at least one ball in the buffer; hole detected; ball in hole and more balls in buffer; ball in hole and no more balls in buffer; 60 seconds after start; 90 seconds after start)

together information about the environment and robot subsystems to obtain a valuable and effective playing strategy. Even if the field is mostly immutable (except for the position of the totems, which are set before each match) and many of the balls involved are in fixed positions, we chose to implement an intelligent and adaptive strategy instead of a simple “fixed–
   The Motion Control agent, which is the only PeriodicFSM                          path” one. For this reason the Strategy agent has to adaptively
type, has the task of controlling the robot’s path: it receives,                    choose the right action to perform at each time, elaborating
from the Strategy agent, messages containing commands for                           data coming from other agents. As Figure 5 illustrates, the
robot positioning, such as go to X,Y or rotate T, computes                          very first step of the implemented strategy is “move beyond
the speed of the wheels needed to reach the target, and sends                       the first black line”, since this guarantees the collection of
such speeds to the Motion Driver agents. Moreover, in order                         at least one point10 . This is performed by suitable commands
to ensure that the target is reached, the Motion Control agent                      sent to Motion Control agent. When the black line has been
periodically requests to Motion Driver the tick count of optical                    passed, the main strategy loop begins. First the robot looks
encoders and calculates the absolute position and orientation of                    for white balls and suck them into the buffer: if any white
the robot [7]. These values are thus compared with the target,                      ball is seen by the Object Detector, then the Motion Control
making subsequent speed adjustment, if necessary9 . Another                         agent is issued the commands needed to reach the ball; on the
task of the Motion Control agent is obstacle detection. Since                       other hand, if no ball has been detected, the Strategy agent
the robot has no sensors to detect if an obstacle (e.g. the                         tries to search elsewhere, by rotating of a random angle in
opponent’s robot, a totem, etc.) is in front of it, the Motion                      order to look at other zones of the field. When a ball has been
Control agent checks if there is no wheel movement within                           sucked and the Ball Control agent reports the presence of at
a certain time window (given that wheel’s speeds are greater                        least one white ball into the buffer, the Strategy agent starts
than zero); if this is the case, an obstacle exiting algorithm is                   to search a right hole to drop it into (i.e. a hole of the colour
started, which entails to move the robot backwards and then                         assigned to the team, either red or blue), looking at messages
rotate it.                                                                          from Detector and moving toward a hole as soon as it has been
   The last agent, Strategy, is the “brain” of the robot. Being                     found. When the selected hole is no more visible (i.e. outside
a BasicFSM agent, it is responsible of collecting and putting                       the camera scope) a ball is released and, by means of messages
                                                                                    coming from Hole Detector, a sequence of commands for fine
   8 It uses the OpenCV library [2], which provides a set of fast and optimised
                                                                                    positioning are sent to Motion Control. If the hole is centred
image manipulation functions. Proper Erlang-to-C library functions allows this
agent to interact with Erlang processes.
   9 Also in this case, a proportional-integrative-derivative software controller      10 If the robot does not pass the first black line, then it obtains no points at
is employed.                                                                        the end of the match.
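The position-tracking step described above (computing the robot's absolute pose from the optical encoder tick counts, following the dead-reckoning methods surveyed in [7]) can be sketched as a differential-drive odometry update. This is an illustrative Python sketch, not the team's Erlang/C code; the wheel radius, tick resolution and wheel-base values are assumptions, not the real robot's parameters.

```python
import math

# Illustrative robot parameters (assumed, not from the paper), SI units.
WHEEL_RADIUS = 0.03    # metres
TICKS_PER_REV = 1000   # encoder ticks per wheel revolution
WHEEL_BASE = 0.25      # distance between the two wheels, metres

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckoning pose update from incremental encoder tick counts.

    Returns the new (x, y, theta) after the left and right wheels have
    advanced by the given number of ticks since the previous update.
    """
    metres_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * metres_per_tick
    d_right = right_ticks * metres_per_tick
    # Displacement of the robot centre and change of heading.
    d_centre = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / WHEEL_BASE
    # First-order integration, using the midpoint heading.
    x += d_centre * math.cos(theta + d_theta / 2)
    y += d_centre * math.sin(theta + d_theta / 2)
    theta += d_theta
    return x, y, theta
```

With equal tick counts on both wheels the robot advances in a straight line; with opposite counts it rotates in place, which is how the comparison against the target pose can detect that no net movement is occurring.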



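The strategy loop of Figure 5 can be approximated by a small state-transition table. The state and event names below are illustrative paraphrases of the figure's labels, not the actual messages exchanged by the agents, and the single "endgame_timer" event stands in for the 60/90-second timers of the figure.

```python
# Transitions loosely transcribed from Figure 5 (illustrative names).
TRANSITIONS = {
    ("move_beyond_black_line", "line_passed"): "gather_white_balls",
    ("gather_white_balls", "ball_in_buffer"): "search_my_hole",
    ("search_my_hole", "hole_detected"): "release_ball",
    ("release_ball", "dropped_more_balls_left"): "search_my_hole",
    ("release_ball", "dropped_buffer_empty"): "gather_white_balls",
    ("gather_white_balls", "endgame_timer"): "search_opponent_holes",
    ("search_my_hole", "endgame_timer"): "search_opponent_holes",
}

def step(state, event):
    """Return the next state; stay put if the event triggers nothing."""
    return TRANSITIONS.get((state, event), state)
```

For example, starting from "move_beyond_black_line", the event sequence line_passed, ball_in_buffer, hole_detected, dropped_buffer_empty brings the machine back to "gather_white_balls", mirroring the main loop of the figure.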
If the hole is centred and the ball goes into it, the Ball Control agent sends a "Ball Successfully Dropped" message, and the Strategy agent decides either to search for another hole, if more white balls are present in the buffer, or to look for more white balls. If the ball is not dropped within a given amount of time (for example, because of errors in fine positioning), the Strategy agent searches for another hole and tries to drop the ball into it. Finally, in the last 30 seconds of the game, the Strategy agent tries to find the opponent's holes and suck white balls out of them.

                    V. IMPLEMENTATION ISSUES

   As previously said in the paper, with the exception of the Object Detector, the system has been implemented using the Erlang language. However, even though our research group has realised a FIPA-compliant Erlang agent platform (called eXAT [10], [12], [11], [13], [15], [14]), we did not use it, in order to avoid the overhead introduced by the platform's components for inference, behaviour handling, standard FIPA messaging, etc. What is required here is fast and effective support for agents, rather than the possibility of interacting with other external agents (according to Eurobot rules, the robot must be autonomous and not connected to any network). To this aim, each agent of the robot has been encapsulated in an Erlang process, and a suitable library has been developed to support the BasicFSM and PeriodicFSM deadline-aware abstractions. Message passing has been realised by means of the native Erlang inter-process communication constructs (which are designed to be very fast): this resulted in optimised code able to meet the real-time requirements of the target application.

                       VI. CONCLUSIONS

   This paper described the architecture of an autonomous mobile robot, developed by the DIIT Team of the University of Catania to participate in the Eurobot competition. A multi-agent system has been employed for this purpose, composed of several agents in charge of both interacting with physical sensors and actuators and supporting the game strategy of the robot. A layered architecture has been designed to clearly separate these two aspects (physical-world interface and intelligence) and to favour design, modularity and reuse. Due to real-time constraints, the system has been implemented using the Erlang language, by means of a proper library supporting the abstractions needed to use agents in a robotic environment. This allowed us to develop fast code able to effectively support the robot's activities.

                    VII. ACKNOWLEDGEMENTS

   The authors wish to warmly thank all the other members of the Eurobot DIIT Team, who gave a terrific and fundamental contribution to the realisation of the robot described in this paper and made this experience not only very useful but also great fun. These people are Roberto Di Salvo, Andrea Nicotra, Luca Nicotra, Massimiliano Nicotra, Stefano Palmeri, Francesco Pellegrino, Matteo Pietro Russo, Carmelo Sciuto, Danilo Treffiletti and Carmelo Zampaglione.
   Moreover, the authors also wish to thank the official sponsors of the Eurobot DIIT Team, Siatel Srl (Catania, Italy) and Erlang Training & Consulting Ltd11 (London, UK), who, with their support, contributed to making our dream real.

   11 http://www.erlang-consulting.com

                        REFERENCES

 [1] Erlang Language Home Page, http://www.erlang.org, 2004.
 [2] OpenCV Library, http://opencvlibrary.sourceforge.net/, 2006.
 [3] J. Armstrong, B. Dacker, R. Virding, and M. Williams, "Implementing a Functional Language for Highly Parallel Real Time Applications," 1992.
 [4] J. L. Armstrong, "The development of Erlang," in Proceedings of the ACM SIGPLAN International Conference on Functional Programming, ACM Press, 1997, pp. 196-203.
 [5] J. L. Armstrong, M. C. Williams, C. Wikstrom, and S. C. Virding, Concurrent Programming in Erlang, 2nd Edition. Prentice-Hall, 1995.
 [6] Bollella, Gosling, Brosgol, Dibble, Furr, Hardin, and Turnbull, The Real-Time Specification for Java. Addison-Wesley, 2000.
 [7] J. Borenstein, H. R. Everett, and L. Feng, Where am I? Systems and Methods for Mobile Robot Positioning. University of Michigan, USA, http://www-personal.engin.umich.edu/~johannb/position.htm, 1996.
 [8] A. Corsaro and C. Santoro, "Design Patterns for RTSJ Application Development," in Proceedings of the 2nd JTRES 2004 Workshop, OTM'04 Federated Conferences, LNCS 3292, Springer, Oct. 25-29, 2004, pp. 394-405.
 [9] A. Di Stefano, F. Gangemi, and C. Santoro, "ERESYE: Artificial Intelligence in Erlang Programs," in Erlang Workshop at the 2005 Intl. ACM Conference on Functional Programming (ICFP 2005), Tallinn, Estonia, 25 Sept. 2005.
[10] A. Di Stefano and C. Santoro, "eXAT: an Experimental Tool for Programming Multi-Agent Systems in Erlang," in AI*IA/TABOO Joint Workshop on Objects and Agents (WOA 2003), Villasimius, CA, Italy, 10-11 Sept. 2003.
[11] ——, "eXAT: A Platform to Develop Erlang Agents," in Agent Exhibition Workshop at Net.ObjectDays 2004, Erfurt, Germany, 27-30 Sept. 2004.
[12] ——, "Designing Collaborative Agents with eXAT," in ACEC 2004 Workshop at WETICE 2004, Modena, Italy, 14-16 June 2004.
[13] ——, "On the use of Erlang as a Promising Language to Develop Agent Systems," in AI*IA/TABOO Joint Workshop on Objects and Agents (WOA 2004), Torino, Italy, 29-30 Nov. 2004.
[14] ——, "Supporting Agent Development in Erlang through the eXAT Platform," in Software Agent-Based Applications, Platforms and Development Kits. Whitestein Technologies, 2005.
[15] ——, "Using the Erlang Language for Multi-Agent Systems Implementation," in 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'05), Compiègne, France, 19-22 Sept. 2005.
[16] P. Dibble, Real-Time Java Platform Programming. Prentice Hall PTR, 2002.
[17] I. Infantino, M. Cossentino, and A. Chella, "An agent based multilevel architecture for robotics vision systems," in Proceedings of the International Conference on Artificial Intelligence (IC-AI '02), Las Vegas, Nevada, USA, June 24-27, 2002, vol. 1, pp. 386-390.
[18] E. Johansson, M. Pettersson, and K. Sagonas, "A High Performance Erlang System," in 2nd International Conference on Principles and Practice of Declarative Programming (PPDP 2000), Sept. 20-22, 2000.
[19] C. L. Liu and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment," JACM, vol. 20, no. 1, pp. 46-61, Jan. 1973.
[20] J. W. S. Liu, Real-Time Systems. Prentice Hall, 2000.
[21] C. Varela, C. Abalde, L. Castro, and J. Gulias, "On Modelling Agent Systems with Erlang," in 3rd ACM SIGPLAN Erlang Workshop, Snowbird, Utah, USA, 22 Sept. 2004.
[22] A. Wellings, Concurrent and Real-Time Programming in Java. Wiley, 2004.