<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Designing Autonomous Robots using GOLEM</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Fabrizio Messina, Giuseppe Pappalardo, Corrado Santoro University of Catania - Dept. of Mathematics and Computer Science Viale Andrea Doria</institution>
          ,
          <addr-line>6 - 95125 - Catania</addr-line>
          ,
          <country country="IT">ITALY</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we present a goal model for the design of autonomous robots. We propose a framework that allows a developer to design the behaviour of a robot in accordance with a rational model, conceived to let the autonomous system mimic human behaviour. The framework, called GOLEM, is based on the abstractions of goals and sub-goals, which are combined through specific relationships to specify the robot's activities; such activities, assembled together, form the rational behaviour of the autonomous system. The central motivation for the design of our framework is the issue of integrating full awareness and autonomy. Indeed, in the proposed solution the execution of goals does not obey a pre-fixed sequence; rather, the next goal to achieve is selected, each time, on the basis of an evaluation of its opportunity.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Programming the behaviour of autonomous robots requires
powerful abstractions, languages and suitable frameworks by
which the designer can implement behavioural elements
featuring autonomy and situatedness. By means of suitable
constructs expressing the actions to be undertaken by the robot,
reaching the goal for which the robot itself has been designed
becomes easier.</p>
      <p>Autonomy is a special, fundamental aspect to be taken into
account, as it is the ability of a machine to select the right
action(s) to be performed to reach a certain goal without relying
on any additional (human) control. Selecting the appropriate
actions is an activity to be performed taking into account
the reference environment which, being in many cases the
physical and real world, is dynamic and often unpredictable.
For the reasons above (dynamism and unpredictability of the
environment) an action may fail: since the environment
is dynamic, its state, as perceived before starting
the action, may have changed, making the action no longer feasible.
In this case, as human beings do, a truly autonomous robot should
be able to identify such a situation and adopt countermeasures,
if any.</p>
      <p>
        In order to deal with the issues discussed so far, the software
implementing the robot’s behaviour must be properly designed.
In general, the behaviour of a robot is
modelled as a continuous loop involving three phases,
sense/evaluate/act: the action to be performed is selected
on the basis of the state of the environment (sense phase) and
the state of the robot itself (evaluate phase); in rational robots,
this state can also include a form of “knowledge”. At first sight
such a model could be easily implemented with a loop calling
functions for environment sensing and using a series of “if”s
to let the program select the right action(s). Another
programming model often used in the field of autonomous
systems is based on finite-state machines [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
or state-charts [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], which help the modelling of a
robot’s behaviour by exploiting a graphical approach which can
then be implemented either directly or using proper software
libraries [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Other proposals exploit the concept of
a reactive rule [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], thus providing languages or
platforms to express robot behaviour by means of a series of
rules in the form event/condition/action.
      </p>
      <p>Many of the solutions described above are derived from
computational models, which aim at mapping
computer science abstractions to a framework which lets a robot
exhibit an autonomous behaviour. We followed
a different approach, starting instead from a rational
model, i.e. understanding how a human would behave in order
to autonomously reach a goal, so as to derive a proper
computational model capable of—more or less precisely—
representing it. As a consequence, in this work we introduce
an abstract framework, called GOLEM, in which a robot’s
behaviour is decomposed into a set of tasks and sub-tasks
(which are indeed referred to as goals), properly interconnected
to one another on the basis of certain dependency relationships.
The order of execution of tasks is not fixed a priori: the task
to be executed is chosen by means of a dynamic
scheduling policy which takes into account the concepts of
feasibility, priority and opportunity. The idea is to endow the
robot with a high degree of deliberation ability, by letting it
autonomously decide which (sub-)task or (sub-)goal to adopt at
each time instant.</p>
      <p>The paper is structured as follows. Section II provides
a real-life scenario which is used as a motivating example
to introduce the model. Section III formalises the GOLEM
framework, illustrating the entities, the relationships and the
execution semantics. Section IV describes the GOLEM version
of the scenario introduced in Section II. Section V deals
with related work. Section VI presents our
conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>II. A DAILY LIFE EXAMPLE</title>
      <p>Suppose we have to program a robot to perform a typical
daily task carried out by humans: we wish the robot to behave in
the same way as a human would, so as to make it
exhibit a human-like (and in some sense “rational”) behaviour.
Starting from such an example is very useful to derive some
interesting properties which eventually form the basis of the
framework described in the subsequent Sections of the paper.</p>
      <sec id="sec-2-1">
        <title>A. The “Food Buyer” Robot</title>
        <p>Let us suppose we want to program a humanoid robot to go
to the supermarket and buy some food for us, given a specific
food list. In detail, we need some bread, milk, beer and pasta;
moreover, for the beer, we would like to specify an order of
preference for the brand, i.e. first Brand1, if available, then
Brand2 as a second choice, etc.</p>
        <p>We start by specifying the overall goal, say GR, as to buy
the food in the list. Moreover, it is natural to express GR by
splitting it into a set of sub-goals to be achieved by the robot,
as in the following sequence:</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <list list-type="order">
        <list-item><p>(G1) First, go to the supermarket.</p></list-item>
        <list-item><p>(G2) Pick each food type specified in the list, and put it into the cart.</p></list-item>
        <list-item><p>(G3) When each item of the list has been picked, go to the checkout and wait in the line.</p></list-item>
        <list-item><p>(G4) When there are no more people before you, pay for the food which is in the cart.</p></list-item>
        <list-item><p>(G5) Finally, go back home and bring me the food.</p></list-item>
      </list>
      <p>We can say that fulfilling sub-goals {G1 . . . G5} implies the
automatic achievement of the overall goal GR, which is a way
of reasoning very close to that of humans.</p>
      <p>G2 can be decomposed into a set of sub-goals: G21 for
bread, G22 for milk, G23 for beer (with brand preference),
and G24 for pasta. As a consequence, the robot has to find
and reach the proper stand for each item specified in the list
above in order to pick the item. Just as a human might reason,
the order in which sub-goals {G21 . . . G24} are fulfilled does
not matter: it is up to the robot (as for a human) to make such
a choice; it only matters that all items, sooner or later, are
picked. In more detail, a person would make a rational choice
on the basis, for example, of the distance between the various
stands, thus trying to minimise the length of the path, i.e. they
could apply a sort of policy, or strategy.</p>
      <p>Moreover, irrespective of the specific policy used to
make the choice, it is important to remark the difference
between the sets of sub-goals {G1 . . . G5} and {G21 . . . G24}:
in the former case, the order of execution matters (strict
sequence, no autonomous choice is possible); in the latter
case, the robot can autonomously decide the order following
any—more or less rational—personal criteria. In making such
a choice, different aspects could be considered, some of them
subjective, and others, undoubtedly, objective; one of the latter
is obviously the feasibility of the sub-goal: e.g., if, at a certain
time instant, the beer stand is not reachable because it is too
crowded, sub-goal G23 is not feasible and it is thus
useless to choose it. Such an infeasibility is however temporary:
we could try to achieve G23 after some time, as soon as,
e.g., the crowd has gone.</p>
      <p>Order of execution is not the sole difference between
sets {G1 . . . G5} and {G21 . . . G24}; while, as stated above,
achievement of GR requires the mandatory achievement of all
of its sub-goals {G1 . . . G5}, the same cannot be said for G2
and {G21 . . . G24}: indeed, if a certain food is unavailable
(e.g., milk), the robot should nevertheless proceed with the
shopping; that is, to consider G2 achieved, the successful
completion of only some of its sub-goals suffices.</p>
      <p>Also goal G23 deserves a special remark; here the objective
is to pick some beer based on brand preferences. To this aim,
we can model G23 as further composed of a set of sub-goals,
one for each specific brand, e.g. GB1 for Brand 1, GB2 for
Brand 2, GB3 for Brand 3. (We are supposing that the robot
has a knowledge of the physical location of goods in the
supermarket.)</p>
      <p>The sense of this (de)composition, in terms of selection
and achievement, is different from that of both GR and G2: if
GB1 succeeds (Brand 1 is available), then G23 is achieved;
otherwise GB2 is tried, and so on.</p>
      <sec id="sec-3-1">
        <title>B. Discussion</title>
        <p>The above example shows some interesting properties
related to the “rational behaviour” of an autonomous entity
(being either a human or a robot).</p>
        <p>First of all, the behaviour can be represented as a set of
goals, some of which, in turn, can be further decomposed into
sub-goals. Making such a decomposition allows the
identification of the various tasks and actions to be accomplished by
the robot, together with their relationships.</p>
        <p>Relationships play a fundamental role in establishing
the order of execution and the achievement policy. Indeed,
the order of execution is strictly sequential in some cases,
but, in many other cases, which goal to try and achieve is
(or can be) the robot’s autonomous choice, which can be
made on the basis of parameters such as:</p>
        <list list-type="bullet">
          <list-item><p>the state of the environment or of the robot itself (“let’s go and get pasta first, because it is nearer to my position”);</p></list-item>
          <list-item><p>past experience of the robot (“let’s get pasta because I know where it is located”);</p></list-item>
          <list-item><p>rational opportunities (“let’s put the beer last in the cart, to lower the probability of breaking the bottles”);</p></list-item>
          <list-item><p>other reasons.</p></list-item>
        </list>
        <p>A goal (or sub-goal) can succeed or fail. Failure can be due
to various reasons, whether temporary, e.g. crowd in a certain
area of the supermarket, or permanent, e.g. there are no items
of a certain food type. A goal failed for a temporary reason
can be retried later, when the robot “thinks” there is again an
opportunity to execute it.</p>
        <p>All of these aspects contribute to conferring on the robot a
complete awareness of its objectives, as well as the ability to
autonomously deliberate about the actions to be performed at
each time instant. They form the basis of the
GOLEM framework, which is described in the next Section.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>III. THE GOLEM FRAMEWORK</title>
      <p>GOLEM, the framework presented in this paper,
consists of a set of precise rules specifying the syntax to which
a robot program P must conform to be correctly executed, and an
abstract machine (GAM, the GOLEM Abstract Machine) which,
in turn, has the responsibility of executing P according to a set
of semantic rules.</p>
      <p>[Figure 1 reports the basic GOLEM syntax, built from the symbols P (program), g (goal), sg (simple goal), f (feasibility function), p (opportunity evaluator), act (goal action) and status.]</p>
      <p>As shown in Figure 1, which reports the basic GOLEM
syntax, a robot program P is represented by its main goal, g.
A goal g, in turn, represents the tasks that the robot must do
to achieve a certain state. Moreover, according to the nature
of such tasks, a goal can be simple or composite.</p>
      <p>A simple goal, sg , represents a goal which requires no
specific further decision on the actions to be undertaken. A
simple goal includes a computation which is executed “as is”
to achieve the goal itself; if the computation ends with success,
the goal is achieved, otherwise it fails. A simple goal is
represented as a tuple composed of the following parts:</p>
      <list list-type="bullet">
        <list-item><p>the feasibility function, f ;</p></list-item>
        <list-item><p>the opportunity evaluator, p;</p></list-item>
        <list-item><p>the goal action, act .</p></list-item>
      </list>
      <p>The feasibility function contains the necessary logic to evaluate
whether the goal is feasible or not. The internal structure of f
is not specified: it is supposed to be program code which
senses the state of the environment and/or of the robot, performs
suitable evaluations and returns a boolean value indicating
the feasibility of the goal. The idea behind the concept of
feasibility is to verify that there are no conditions that would
impede the achievement of a goal; indeed, stating that a goal
is feasible does not imply that the goal will surely succeed:
other situations might arise, during the execution of the tasks
of the goal, which cause the failure of a specific action. For
example, the feasibility of goal G4 in the supermarket example
(checking out) depends on the checkout line being empty.</p>
      <p>The opportunity evaluator is a function by which the designer
can specify the level of priority or importance of the goal
with respect to the others. Such importance can be represented
by a number, e.g. an integer; this value
is used to select, among different feasible goals, the one to
execute. Such a measure is not intended to be a static value:
it should be dynamically evaluated before deciding which
goal to execute. Evaluating the opportunity could thus imply
sensing the state of the environment (or of the robot itself)
and, on this basis, deciding whether the goal is, at this time
instant, more important than another one (or the most important one).
As an example, in order to minimise the path in our
supermarket example, the opportunity function of goals G21,
G22, G23 and G24 could be a factor inversely related to the
distance of the (respective) stand to be reached. To perform
a correct evaluation, a proper metric has to be derived to map
the (notion of) importance of a goal to a positive integer: the
higher the value, the higher the importance of the goal.</p>
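      <p>To make such a metric concrete, the following is a minimal Python sketch of a distance-based opportunity evaluator (the function name and the scale factor are our own illustrative choices, not part of GOLEM); it rewards nearer stands, so that picking the goal with the highest value tends to shorten the path:</p>

```python
# Hypothetical distance-based opportunity evaluator: nearer stands map to
# higher positive integers, so that selecting the goal with the maximum
# opportunity value tends to minimise the path length.
def distance_opportunity(distance_m, scale=100):
    # +1 avoids division by zero; max(..., 1) keeps the result positive
    return max(1, int(scale / (1.0 + distance_m)))

print(distance_opportunity(2.0))   # 33
print(distance_opportunity(20.0))  # 4
```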
      <p>Goal action is an entity representing the actions to be
performed in order to try to achieve the goal itself. It is
a piece of code including the required statements to make
the robot carry out the intended actions. From a conceptual
point of view, this code should not present rational choices
or deliberative actions, which instead should be modelled as
sub-goals of a composite goal (see below). Again, goal action
is modelled as a function returning one of three possible constants:</p>
      <list list-type="bullet">
        <list-item><p>ACHIEVED: execution of the action caused the successful achievement of the goal.</p></list-item>
        <list-item><p>T FAIL: execution failed, but the failure is considered temporary, i.e. retrying the execution later could lead to success.</p></list-item>
        <list-item><p>P FAIL: execution failed and the failure is considered permanent.</p></list-item>
      </list>
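      <p>As an illustration, a simple goal could be rendered as follows; this is a minimal Python sketch under assumed names (GOLEM is an abstract framework and does not prescribe a host language):</p>

```python
# A minimal Python sketch (not the authors' code) of a simple goal sg as the
# tuple (f, p, act), with act returning one of the three status constants.
ACHIEVED, T_FAIL, P_FAIL = "ACHIEVED", "T_FAIL", "P_FAIL"

class SimpleGoal:
    def __init__(self, name, f, p, act):
        self.name = name
        self.f = f      # feasibility function: () -> bool
        self.p = p      # opportunity evaluator: () -> int (higher = more important)
        self.act = act  # goal action: () -> one of the status constants

# Hypothetical rendering of goal G4 (Pay): feasible only with an empty line.
people_in_line = 0
pay = SimpleGoal("Pay",
                 f=lambda: people_in_line == 0,
                 p=lambda: 1,
                 act=lambda: ACHIEVED)
print(pay.f(), pay.act())  # True ACHIEVED
```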
      <p>A composite goal, cg , is instead represented as a pair
(rel , g set ) comprising a set of sub-goals g set and a
relationship condition rel , which can be one of the following:</p>
      <list list-type="bullet">
        <list-item><p>ALL: the goal succeeds when all of its sub-goals are achieved; the order in which the sub-goals are achieved does not matter (i.e. the set g set is not ordered).</p></list-item>
        <list-item><p>ALL SEQ: the goal succeeds when all of its sub-goals are achieved, but the order in which they are achieved must follow a strict sequence (i.e. the set g set is ordered).</p></list-item>
        <list-item><p>AT LEAST(k): the goal succeeds when at least k of its sub-goals (with k specified) are achieved; the order of achievement does not matter.</p></list-item>
        <list-item><p>SEQ UNTIL: the goal succeeds as soon as any sub-goal is achieved; the set g set is ordered and sub-goal achievement is tried in strict sequence.</p></list-item>
      </list>
      <p>Each sub-goal may be, in turn, either simple or composite.
The result is thus a hierarchy of goals, with proper
relationships, which represents the model of the overall autonomous
behaviour of the robot. Such a hierarchy is handled by the
GOLEM Abstract Machine in order to execute the goals
respecting the semantics of relationships, as it is described
in the next Subsection.</p>
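      <p>Similarly, a composite goal can be sketched as the pair (rel, g set); the following minimal Python rendering (class and relationship names are our own, not the authors' API) shows how nesting produces the goal hierarchy:</p>

```python
# A minimal Python sketch (not the authors' code) of a composite goal as the
# pair (rel, g_set); nesting composite goals yields the goal hierarchy.
class CompositeGoal:
    def __init__(self, name, rel, g_set):
        self.name = name
        self.rel = rel            # "ALL", "ALL_SEQ", ("AT_LEAST", k), "SEQ_UNTIL"
        self.g_set = list(g_set)  # sub-goals: simple (plain names here) or composite

beer = CompositeGoal("Beer", "SEQ_UNTIL", ["Brand 1", "Brand 2", "Brand 3"])
pick = CompositeGoal("Pick items", ("AT_LEAST", 1),
                     ["Bread", "Milk", beer, "Pasta"])

def depth(g):
    # height of the goal hierarchy rooted at g
    if not isinstance(g, CompositeGoal):
        return 1
    return 1 + max(depth(h) for h in g.g_set)

print(depth(pick))  # 3
```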
      <sec id="sec-4-1">
        <title>B. GOLEM Abstract Machine</title>
        <p>A GOLEM robot program P is executed by the GOLEM
Abstract Machine (GAM) whose behaviour is essentially based
on a continuous loop iterating on the following steps:</p>
        <list list-type="order">
          <list-item><p>evaluation of feasible goals;</p></list-item>
          <list-item><p>evaluation of their opportunity values;</p></list-item>
          <list-item><p>selection of the feasible goal with the highest opportunity value;</p></list-item>
          <list-item><p>execution of the selected goal.</p></list-item>
        </list>
        <p>The GAM stops executing the program when P either has been
successfully achieved or has permanently failed.</p>
        <p>For each iteration, the program P is scanned by looking
into its composite goals, also descending into the hierarchy
if needed, with the objective of finding a simple (sub-)goal
which can be executed; this is performed using proper policies
which depend on the relationship type of composite goals. At
each iteration, when execution of the goal is completed, it is
checked whether goal execution caused the achievement of P
or its permanent failure. In the former case the program ends
with success; in the latter an error is reported.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>C. GAM state information and main functions</title>
      <p>The GAM has an internal state represented by the tuple
(P, AG , FG , TFG ) where P is the robot program, AG is the
set of achieved goals, FG is the set of permanently failed
goals, and TFG is the set of temporarily failed goals.</p>
      <p>Sets AG , FG and TFG are initially empty and then
properly modified during program execution; they may contain
only simple goals, while evaluation of achievement or failure
of a composite goal is performed by analysing its composing
sub-goals. To this aim, the GAM exploits two functions:
IS ACHIEVED(g) → {true, false} and IS P FAILED(g) →
{true, false}. The former evaluates whether a goal g has
been successfully achieved or not; Table I reports the specific
behaviour of this function which depends on the goal type
(simple or composite) and the relationship type (for composite
goals). The latter evaluates whether a goal g has to be
considered permanently failed; Table II reports the behaviour of such
a function. In all tables, for convenience, we assume that, when
g is composite, g.g set represents its goal set. Likewise, if g
is simple, g.f , g.p and g.act represent g’s feasibility function,
opportunity evaluator and goal action, respectively.</p>
      <table-wrap id="tab1">
        <label>TABLE I</label>
        <caption><p>Behaviour of IS ACHIEVED(g) → {true, false}</p></caption>
        <table>
          <thead>
            <tr><th>Goal Type/Rel.</th><th>Function Behaviour</th></tr>
          </thead>
          <tbody>
            <tr><td>simple</td><td>g ∈ AG</td></tr>
            <tr><td>ALL</td><td>∧ h∈g.g set IS ACHIEVED(h)</td></tr>
            <tr><td>ALL SEQ</td><td>∧ h∈g.g set IS ACHIEVED(h)</td></tr>
            <tr><td>AT LEAST(k)</td><td>|{h ∈ g.g set : IS ACHIEVED(h)}| ≥ k</td></tr>
            <tr><td>SEQ UNTIL</td><td>∨ h∈g.g set IS ACHIEVED(h)</td></tr>
          </tbody>
        </table>
      </table-wrap>
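      <p>The behaviour summarised in Table I can be sketched in Python as a recursive function (the goal classes below are illustrative, not the authors' API):</p>

```python
# A Python sketch of IS ACHIEVED following Table I; AG is the set of
# achieved simple goals (illustrative classes, not the authors' API).
class Simple:
    pass  # a simple goal is achieved iff it belongs to AG

class Composite:
    def __init__(self, rel, g_set, k=None):
        self.rel, self.g_set, self.k = rel, list(g_set), k

def is_achieved(g, AG):
    if isinstance(g, Simple):
        return g in AG                           # simple: g ∈ AG
    results = [is_achieved(h, AG) for h in g.g_set]
    if g.rel in ("ALL", "ALL_SEQ"):              # conjunction over sub-goals
        return all(results)
    if g.rel == "AT_LEAST":                      # |achieved sub-goals| ≥ k
        return sum(results) >= g.k
    if g.rel == "SEQ_UNTIL":                     # disjunction over sub-goals
        return any(results)
    raise ValueError(g.rel)

a, b, c = Simple(), Simple(), Simple()
AG = {a, c}
print(is_achieved(Composite("AT_LEAST", [a, b, c], k=2), AG))  # True
print(is_achieved(Composite("ALL", [a, b, c]), AG))            # False
```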
      <p>Steps 1 and 2 of the GAM execution loop perform
evaluation of goals in order to select a feasible one. We assume
a function EVAL(g) that evaluates both the feasibility and
opportunity of a goal g as follows. If g is not feasible, or
belongs to either AG or FG , EVAL(g) returns −∞;
otherwise it returns the value of the opportunity function g.p()
if g is simple, or a proper evaluation of its sub-goals if g is
composite. Its specific behaviour is reported in Table III.</p>
      <table-wrap id="tab3">
        <label>TABLE III</label>
        <caption><p>Behaviour of EVAL(g)</p></caption>
        <table>
          <thead>
            <tr><th>Goal Type/Rel.</th><th>Function Behaviour</th></tr>
          </thead>
          <tbody>
            <tr><td>any</td><td>−∞ if g ∈ AG ∪ FG</td></tr>
            <tr><td>simple</td><td>if g.f() then g.p() else −∞</td></tr>
            <tr><td>ALL</td><td>max {EVAL(h) | h ∈ g.g set}</td></tr>
            <tr><td>ALL SEQ</td><td>EVAL(first of ({h | h ∈ g.g set − (AG ∪ FG)}))</td></tr>
            <tr><td>AT LEAST(k)</td><td>max {EVAL(h) | h ∈ g.g set}</td></tr>
            <tr><td>SEQ UNTIL</td><td>EVAL(first of ({h | h ∈ g.g set − (AG ∪ FG)}))</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The functions introduced above are exploited by the GAM
to execute a program P . Algorithm 1 reports the pseudo code
of the main GAM behaviour: here a loop is executed until the
program P is considered achieved or permanently failed; the
body of the loop evaluates the feasibility of P and, if this is
the case, it executes P by calling EXEC FEASIBLE.
Algorithm 1 GAM Main Algorithm
1: procedure GAM(P : goal)
2:    AG ← ∅; FG ← ∅; TFG ← ∅
3:    while ¬IS ACHIEVED(P) ∧ ¬IS P FAILED(P) do
4:        if EVAL(P) ≠ −∞ then
5:            EXEC FEASIBLE(P)
6:        end if
7:    end while
8: end procedure</p>
      <sec id="sec-6-2">
        <p>Procedure EXEC FEASIBLE, whose pseudo-code is
illustrated in Algorithm 2, checks the goal type, i.e. whether it is
simple or composite: in the former case, the act () function is
called and its outcome is tested in order to update sets AG ,
FG and TFG ; in the latter case, a proper procedure is called
to perform execution according to the relationship type, as
detailed in the following:</p>
        <p>1) ALL relationship: The ALL relationship requires that
all subgoals must be executed but the order of execution does
not matter. The EXEC ALL procedure, reported in Algorithm 3,
determines the set of feasible sub-goals and, from it, selects the
one with the highest opportunity value; if this condition holds
for more than one goal, a non-deterministic (random) choice
is performed (this is done by the ONE OF function in line
11). Once the sub-goal has been selected, the EXEC FEASIBLE
procedure is called again to concretely execute the goal.</p>
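        <p>The selection step just described can be sketched in Python as follows (a hypothetical helper, with EVAL abstracted as a plain function passed in; not the authors' code):</p>

```python
import random

NEG_INF = float("-inf")

# Sketch of the selection performed by EXEC ALL: keep the feasible sub-goals,
# take those with the highest opportunity value, and break ties at random
# (the ONE OF step). `evaluate` stands in for the EVAL function.
def select_subgoal(g_set, evaluate):
    selectables = [g for g in g_set if evaluate(g) != NEG_INF]
    if not selectables:
        return None                      # nothing can be executed right now
    best = max(evaluate(g) for g in selectables)
    candidates = [g for g in selectables if evaluate(g) == best]
    return random.choice(candidates)     # non-deterministic ONE OF

scores = {"bread": 5, "milk": 5, "beer": NEG_INF}
print(select_subgoal(list(scores), scores.get))  # "bread" or "milk", never "beer"
```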
        <p>2) ALL SEQ relationship: This kind of relationship is
stricter than the previous one: not only must all sub-goals
be executed, but the order of execution must also be
respected. As illustrated in Algorithm 4, the procedure
EXEC ALL SEQ scans the set of sub-goals in sequence in order
to find the first non-achieved sub-goal: if it is feasible, the
sub-goal is executed directly.</p>
        <p>3) AT LEAST(k) relationship: This relationship is
equivalent to ALL, with the sole exception that the composite goal
is considered achieved when at least k sub-goals succeed.
This condition is stated in the IS ACHIEVED function, while,
with respect to sub-goal selection, the policy here is to find
the feasible sub-goal with the highest opportunity value and
execute it. Such a behaviour is thus the same as that of the
EXEC ALL procedure, which is therefore the one called also in
this case, as shown in Algorithm 2.</p>
        <p>Algorithm 2 Execute a feasible goal
1: procedure EXEC FEASIBLE(g : goal)
2:    ⊲ This procedure supposes that g is feasible
3:    if IS SIMPLE(g) then
4:        r ← g.act()
5:        if r = ACHIEVED then
6:            AG ← AG ∪ {g}; TFG ← TFG \ {g}
7:        else if r = P FAIL then
8:            FG ← FG ∪ {g}; TFG ← TFG \ {g}
9:        else ⊲ T FAIL
10:           TFG ← TFG ∪ {g}
11:       end if
12:   else
13:       (rel , g set ) ← g
14:       if rel = ALL then
15:           EXEC ALL(g set )
16:       else if rel = ALL SEQ then
17:           EXEC ALL SEQ(g set )
18:       else if rel = AT LEAST(k) then
19:           EXEC ALL(g set )
20:       else if rel = SEQ UNTIL then
21:           EXEC SEQ UNTIL(g set )
22:       end if
23:   end if
24: end procedure</p>
        <p>Algorithm 3
1: procedure EXEC ALL(g set : set of goal)
2:    failed ← {g ∈ g set : IS P FAILED(g)}
3:    if failed ≠ ∅ then
4:        return
5:    end if
6:    selectables ← {g ∈ g set : EVAL(g) ≠ −∞}
7:    if selectables = ∅ then
8:        return
9:    end if
10:   candidates ← {g ∈ selectables : EVAL(g) is maximal}
11:   selected ← ONE OF(candidates)
12:   EXEC FEASIBLE(selected )
13: end procedure</p>
      </sec>
      <sec id="sec-6-3">
        <p>Algorithm 4
1: procedure EXEC ALL SEQ(g set : set of goal)
2:    for g ∈ g set do
3:        if g ∈ FG then
4:            return
5:        end if
6:        if g ∉ AG then
7:            if EVAL(g) ≠ −∞ then
8:                EXEC FEASIBLE(g)
9:            end if
10:           return
11:       end if
12:   end for
13: end procedure</p>
        <p>4) SEQ UNTIL relationship: Also in this case the set of
sub-goals is ordered; the policy adopted for execution is thus
to scan the set in order to find the first feasible goal, also
ensuring that all the previous ones have already failed. Such
a behaviour is reported in the EXEC SEQ UNTIL procedure
illustrated in Algorithm 5.</p>
        <p>Algorithm 5
1: procedure EXEC SEQ UNTIL(g set : set of goal)
2:    for g ∈ g set do
3:        if g ∈ AG then
4:            return
5:        end if
6:        if g ∉ FG then
7:            if EVAL(g) ≠ −∞ then
8:                EXEC FEASIBLE(g)
9:            end if
10:           return
11:       end if
12:   end for
13: end procedure</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>IV. CASE-STUDY</title>
      <p>In order to show a practical use of the GOLEM framework,
as well as the related benefits, let us provide a case-study based
on the example introduced in Section II. We report the complete
set of goals in the tree shown in Figure 2, in which some
nodes represent the (sub-)goals, while the others represent
the relationships by which the (sub-)goals must be executed. In
order to discuss the details of the case study, in the following
we will represent each goal by means of a symbolic frame
reporting the name of the goal and an informal description of
the behaviour of its feasibility, opportunity and action functions.</p>
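      <p>For reference, the goal tree of Figure 2 can be written down compactly; the following Python sketch (the data layout is our own illustrative choice, not part of GOLEM) mirrors it:</p>

```python
# A sketch (not the authors' code) of the Figure 2 goal tree as nested
# (name, relationship, children) tuples; leaf goals are plain strings.
GOAL_TREE = ("Let's buy food (Main)", "ALL_SEQ", [
    "Go to Supermarket",
    ("Pick items", ("AT_LEAST", 1), [
        "Bread",
        "Milk",
        ("Beer", "SEQ_UNTIL", ["Beer brand 1", "Beer brand 2", "Beer brand 3"]),
        "Pasta",
    ]),
    "Pay",
    "Go Back Home",
])

def leaves(node):
    # collect the simple goals in left-to-right order
    if isinstance(node, str):
        return [node]
    _, _, children = node
    return [leaf for child in children for leaf in leaves(child)]

print(len(leaves(GOAL_TREE)))  # 9
```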
      <p>The main goal of the robot is to go to the supermarket
and buy some food. This can be expressed as a mandatory
sequence of sub-goals:</p>
      <sec id="sec-7-1">
        <title>Name: Main</title>
        <p>Relationship: ALL SEQ
Goal set: Go to supermarket, Pick items, Pay,</p>
        <p>Go back home</p>
        <p>The first sub-goal, Go to supermarket, has a feasibility
condition which checks the current time in order to ensure
that the supermarket is open. The opportunity value is irrelevant
in this case since, according to the ALL SEQ semantics, this is
the sole sub-goal which can be selected. Its representation is
therefore:</p>
      </sec>
      <sec id="sec-7-2">
        <title>Name: Go to supermarket</title>
        <p>Feasibility: Is the supermarket open? (check time)
Opportunity: any</p>
        <p>Description: Reach the supermarket</p>
        <p>Pick items is instead another composite goal, containing
a sub-goal for each food type to be bought. We consider that
at least one type of food has to be put in the cart to consider
the goal achieved, thus:</p>
        <p>[Figure 2 shows the resulting goal tree: Let’s buy food
(Main) [ALL SEQ] over Go to Supermarket, Pick items, Pay and
Go Back Home; Pick items [AT LEAST(1)] over Bread, Milk,
Beer and Pasta.]</p>
      </sec>
      <sec id="sec-7-3">
        <title>Name: Pick items</title>
        <p>Relationship: AT LEAST(1)
Goal set: Bread, Milk, Beer, Pasta</p>
      </sec>
      <sec id="sec-7-4">
        <title>Name: Pay</title>
        <p>Feasibility: No people in the line?
Opportunity: any</p>
        <p>Description: Pay for the food</p>
        <p>Goal Pay is a simple one: its feasibility implies the absence
of people in the line, and its action requires paying for the food
put in the cart, as represented in the frame above.</p>
        <p>Last goal of the set, Go back home, is quite simple and
can be easily described as:</p>
      </sec>
      <sec id="sec-7-5">
        <title>Name: Go back home</title>
        <p>Feasibility: True
Opportunity: any</p>
        <p>Description: Get the food and go back home</p>
        <p>As for the goals related to food picking, Bread, Milk and
Pasta are simple and their model is, in practice, the same:</p>
      </sec>
      <sec id="sec-7-6">
        <title>Name: Bread/Milk/Pasta</title>
        <p>Feasibility: True
Opportunity: Evaluate distance from the stand
Description: Reach the stand and pick some
bread/milk/pasta</p>
        <p>Things are different for Beer since we expressed a priority
preference on brands. For this reason, we model this goal as
a composite one using the SEQ UNTIL relationship:</p>
      </sec>
      <sec id="sec-7-7">
        <title>Name: Beer</title>
        <p>Relationship: SEQ UNTIL</p>
        <p>Goal set: Beer brand 1, Beer brand 2, Beer brand 3</p>
        <p>Each Beer brand n sub-goal can be viewed as a simple goal
and represented like those related to the other food types
(i.e. Bread, Milk and Pasta); for this reason, we omit
their description here.</p>
      </sec>
      <sec id="sec-7-8">
        <title>V. RELATED WORK</title>
        <p>
          As stated in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], where the field of Robotic
Development Environments (RDEs) is analyzed by selecting and
comparing several different open source RDEs, architectures and
programming models for mobile robots have become very
important in recent years.
        </p>
        <p>
          Some previous works, like [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], address the problem
of expressing coordination of various concurrent activities in
autonomous mobile robots. In [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], an extension of C++, called
TDL, is designed for the purpose of syntactically supporting
task decomposition, synchronization, execution monitoring,
and exception handling. In [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], the authors address the
problem of limited physical and computational resources of
autonomous robots, which causes various constraints. They
present a solution called Task Control Architecture (TCA) that
supports the programmer in splitting various types of behaviour
onto layered components. Deliberative components handle
normal situations, while reactive behaviours handle exceptional
ones.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], the authors discuss the difficulties related to the
practical application of evolutionary approaches to make a
mobile robot “learn” a complex behaviour. They propose a
modularization of the control architecture to make the robot
learn different behaviours by using a task decomposition
technique, such that the definition of fitness functions becomes
straightforward.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], the problem of automated task planning for service
robots is addressed by combining semantic knowledge
representation and previous approaches related to AI. A flexible
framework is presented, which assists service robots in task
planning by means of a semantic ontology constructed for
representing environmental description and robot primitive
actions. The authors also describe a simple scenario and
demonstrate the effectiveness of their approach in a simulated environment.
        </p>
        <p>
          In the field of knowledge processing systems for
autonomous personal robots, we mention [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] and [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The authors
of [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] present a knowledge processing system that exploits a
first-order knowledge representation based on description logic
to provide an action-centred representation, a way to acquire
grounded concepts and experience through observation, and
fast inference, which helps autonomous robots take the right
decision in time. In [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] information available in the World
Wide Web is used as a knowledge base to help autonomous
robots to perform tasks. Web pages are categorized for use
in knowledge processing by autonomous robots, and several
processing methods are presented for this purpose.
        </p>
        <p>
          In the well-known BDI (Belief-Desire-Intention)
model [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], agent behaviour is defined in terms of
beliefs (the informative component of the system state),
desires (the motivational state of the system), and intentions
(encapsulating the deliberative component of the system).
Our framework, on the other hand, focuses on supporting
the programmer in developing the deliberative component of
the robot’s behaviour through the aforementioned opportunity
and feasibility functions.
        </p>
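        <p>As an illustrative sketch only (the names and signatures below are hypothetical, not the actual GOLEM API), a goal carrying a feasibility predicate and an opportunity function, with the next goal selected at run time rather than by a pre-fixed sequence, might look as follows in C:</p>
        <preformat>
```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of a GOLEM-style goal: feasibility is a hard
 * precondition ("can it be achieved now?"), opportunity a measure of
 * how convenient achieving it now would be. */
typedef struct {
    const char *name;
    bool   (*feasible)(void);     /* hard precondition                */
    double (*opportunity)(void);  /* convenience of achieving it now  */
    void   (*achieve)(void);      /* the actions that pursue the goal */
} goal_t;

/* Select the feasible goal with the highest opportunity value;
 * returns NULL when no goal is currently feasible. */
static goal_t *select_goal(goal_t *goals[], int n) {
    goal_t *best = NULL;
    double best_score = -1.0;
    for (int i = 0; i < n; i++) {
        if (!goals[i]->feasible())
            continue;
        double score = goals[i]->opportunity();
        if (score > best_score) {
            best_score = score;
            best = goals[i];
        }
    }
    return best;
}

/* Stub goals for the example */
static bool yes(void) { return true; }
static bool no(void)  { return false; }
static double opp_high(void) { return 0.9; }
static double opp_low(void)  { return 0.4; }
static void noop(void) {}

static goal_t grab_object = { "grab_object", yes, opp_high, noop };
static goal_t go_home     = { "go_home",     yes, opp_low,  noop };
static goal_t recharge    = { "recharge",    no,  opp_high, noop };
```
        </preformat>
        <p>Each selection step re-evaluates every goal, so the chosen goal can change as the robot's situation changes, which is the intended contrast with a pre-fixed execution sequence.</p>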
      </sec>
    </sec>
    <sec id="sec-8">
      <title>CONCLUSIONS AND FUTURE WORK</title>
      <p>This paper has presented an abstract framework, called
GOLEM, for designing and modelling the behaviour of
autonomous robotic systems. To give the robot deliberation
abilities, GOLEM adopts a rational approach, organising the
robot’s actions into goals and sub-goals, which must be
individually achieved.</p>
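      <p>As a minimal sketch only (the names below are hypothetical, not the actual GOLEM API), the individual achievement of sub-goals that together realise a composite goal can be pictured in C as:</p>
      <preformat>
```c
#include <stdbool.h>

/* Hypothetical sketch: a composite goal is achieved by achieving each
 * of its sub-goals in turn; it fails as soon as one sub-goal cannot
 * be achieved. */
typedef bool (*subgoal_fn)(void);

static bool achieve_all(const subgoal_fn subgoals[], int n) {
    for (int i = 0; i < n; i++)
        if (!subgoals[i]())
            return false;  /* a sub-goal failed: the composite goal fails */
    return true;           /* every sub-goal achieved */
}

/* Stub sub-goals for a hypothetical "fetch object" goal */
static bool reach_object(void)   { return true; }
static bool grasp_object(void)   { return true; }
static bool return_to_base(void) { return false; }  /* fails in this demo */
```
      </preformat>
      <p>Treating each sub-goal as a single entity in this way is what allows the divide-and-conquer organisation of behaviour described below.</p>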
      <p>
        The GOLEM framework has been implemented by the
authors in two different programming languages. A first
implementation, in the C language, is specifically optimised for
execution on microcontroller-based platforms3. It has been
used to program the autonomous behaviour of the robots
built by the UNICT-TEAM4 for the Eurobot student robotic
competition5; it is currently not published on the web but is
available on request. The second implementation (called
ENIGMA6) was developed in the Erlang language as part of
the authors’ research on the application of parallel functional
languages to robot programming [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>Both implementations have been tested on realistic case
scenarios, in order to evaluate the effectiveness of the GOLEM
approach in real-life situations. Experience showed that
organising activities into goals and sub-goals greatly helps the
development process: it allows the programmer to clearly
identify the “pieces of the robot’s behaviour” and treat them
as single entities, thus applying the well-known and successful
“divide et impera” principle to behaviour programming as well.</p>
      <p>Detailed description of such implementations, as well as
experiences on the use of GOLEM, will be the object of future
work.</p>
      <p>VII. ACKNOWLEDGEMENTS</p>
      <p>This work has been supported by the project PRISMA
PON04a2 A/F, funded by the Italian Ministry of University.</p>
      <p>3 This implementation runs on bare metal, without an operating system.
4 http://eurobot.dmi.unict.it
5 http://www.eurobot.org
6 https://github.com/ivaniacono/enigma</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Brooks</surname>
          </string-name>
          , “
          <article-title>A robust layered control system for a mobile robot</article-title>
          ,”
          <source>IEEE Journal of Robotics and Automation</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Chow</surname>
          </string-name>
          , “
          <article-title>Testing software design modeled by finite-state machines</article-title>
          ,”
          <source>IEEE Trans. Software Eng.</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>178</fpage>
          -
          <lpage>187</lpage>
          ,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Côté</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Létourneau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Michaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Valin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Brosseau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Raievsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lemay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Tran</surname>
          </string-name>
          , “
          <article-title>Code reusability tools for programming mobile robots</article-title>
          ,” in
          <source>Intelligent Robots and Systems, 2004 (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on</source>
          , vol.
          <volume>2</volume>
          . IEEE,
          <year>2004</year>
          , pp.
          <fpage>1820</fpage>
          -
          <lpage>1825</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fichera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marletta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Nicosia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>Flexible robot strategy design using belief-desire-intention model</article-title>
          ,” in
          <source>Eurobot Conference</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>57</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fortino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Russo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>Translating statecharts-based into BDI agents: The DSC/PROFETA case</article-title>
          ,” in
          <source>Multiagent System Technologies</source>
          . Springer,
          <year>2013</year>
          , pp.
          <fpage>264</fpage>
          -
          <lpage>277</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          , “
          <article-title>Statecharts: A visual formalism for complex systems</article-title>
          ,”
          <source>Science of Computer Programming</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>231</fpage>
          -
          <lpage>274</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ji</surname>
          </string-name>
          et al., “
          <article-title>Towards automated task planning for service robots using semantic knowledge representation</article-title>
          ,” in
          <source>INDIN</source>
          . IEEE,
          <year>2012</year>
          , pp.
          <fpage>1194</fpage>
          -
          <lpage>1201</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kramer</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheutz</surname>
          </string-name>
          , “
          <article-title>Development environments for autonomous mobile robots: A survey</article-title>
          ,”
          <source>Autonomous Robots</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>132</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.-P.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hallam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Lund</surname>
          </string-name>
          , “
          <article-title>Learning complex robot behaviours by evolutionary computing with task decomposition</article-title>
          ,” in
          <source>Learning Robots</source>
          . Springer,
          <year>1998</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Lees</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Leifer</surname>
          </string-name>
          , “
          <article-title>A graphical programming language for robots operating in lightly structured environments</article-title>
          ,” in
          <source>Robotics and Automation, 1993. Proceedings., 1993 IEEE International Conference on</source>
          . IEEE
          ,
          <year>1993</year>
          , pp.
          <fpage>648</fpage>
          -
          <lpage>653</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Messina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pappalardo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>Integrating cloud services in behaviour programming for autonomous robots</article-title>
          ,” in
          <source>Algorithms and Architectures for Parallel Processing</source>
          . Springer,
          <year>2013</year>
          , pp.
          <fpage>295</fpage>
          -
          <lpage>302</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>O.</given-names>
            <surname>Obst</surname>
          </string-name>
          , “
          <article-title>Specifying rational agents with statecharts and utility functions</article-title>
          ,” in
          <source>RoboCup 2001: Robot Soccer World Cup V</source>
          , ser. Lecture Notes in Computer Science, A. Birk,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coradeschi</surname>
          </string-name>
          , and S. Tadokoro, Eds. Springer Berlin Heidelberg,
          <year>2002</year>
          , vol.
          <volume>2377</volume>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rao</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Georgeff</surname>
          </string-name>
          , “
          <article-title>BDI agents: From theory to practice</article-title>
          ,” in
          <source>Proceedings of ICMAS</source>
          . San Francisco, CA,
          <year>1995</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>An erlang framework for autonomous mobile robots</article-title>
          ,” in
          <source>ERLANG '07: Proceedings of the 2007 ACM SIGPLAN workshop on Erlang</source>
          . ACM Press
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Simmons</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Apfelbaum</surname>
          </string-name>
          , “
          <article-title>A task description language for robot control</article-title>
          ,” in IEEE/RSJ, vol.
          <volume>3</volume>
          . IEEE,
          <year>1998</year>
          , pp.
          <fpage>1931</fpage>
          -
          <lpage>1937</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Simmons</surname>
          </string-name>
          , “
          <article-title>Structured control for autonomous robots</article-title>
          ,”
          <source>IEEE Transactions on Robotics and Automation</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>43</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Di Stefano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          , “
          <article-title>ERESYE: Artificial Intelligence in Erlang Programs</article-title>
          ,” in
          <source>ERLANG '05: Proceedings of the 2005 ACM SIGPLAN workshop on Erlang</source>
          . New York, NY, USA: ACM Press,
          <year>2005</year>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          , “
          <article-title>KnowRob: knowledge processing for autonomous personal robots</article-title>
          ,” in
          <source>Intelligent Robots and Systems, 2009 (IROS 2009), IEEE/RSJ International Conference on</source>
          . IEEE
          <year>2009</year>
          , pp.
          <fpage>4261</fpage>
          -
          <lpage>4266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Klank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pangercic</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          , “
          <article-title>Web-enabled robots</article-title>
          ,”
          <source>IEEE Robotics &amp; Automation Magazine</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>58</fpage>
          -
          <lpage>68</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>