Towards Flexible Assistive Robots Using Artificial Intelligence

Amedeo Cesta, Gabriella Cortellessa, Andrea Orlandini and Alessandro Umbrico

                   National Research Council of Italy,
     Institute of Cognitive Sciences and Technologies (ISTC-CNR),
        Via S. Martino della Battaglia, 44, 00185, Rome (Italy),
                     {name.surname}@istc.cnr.it



Abstract. New generations of robotic systems capable of taking care of human-level tasks are becoming more and more desirable, especially considering the lack of human support in health-care assistance for the elderly. Assistive Robotics is a growing research field whose results are applied to support both older adults and caregivers in a variety of situations and contexts. It leverages and integrates results from different research areas such as Artificial Intelligence (AI), Cognitive Systems, Psychology and, of course, Robotics. Concerning AI, many of the technological skills that assistive robots could benefit from to achieve their objectives represent important challenges for that field. Some of the most relevant are the capability of monitoring and understanding information coming from the environment, of interacting with humans in a flexible and human-compliant way, of proactively performing supporting tasks inside the environment, and of personalizing both interactions and services according to the specific needs of the assisted person. Thus, many AI techniques must be "integrated in a loop" to realize such a set of advanced capabilities. This paper presents a research initiative that aims at realizing an enhanced (cognitive) control architecture to endow autonomous and socially interacting robots with a number of such advanced functionalities. Specifically, the paper presents an initial version of the envisaged control architecture, called KOaLa (Knowledge-based cOntinuous Loop), which integrates sensor data representation, knowledge reasoning and decision making functionalities.


1    Introduction

Nowadays, there are many widely diffused commercial robotic solutions, such as robot vacuums or industrial lightweight robots, while a new generation of Intelligent Robots is entering our working and living environments, taking care of human-level tasks. Such robotic systems are becoming more and more important also in elderly healthcare assistance. Indeed, recent advancements in Artificial Intelligence (AI) and Robotics are fostering the diffusion of robotic agents with the capabilities needed to support both older adults and their caregivers in a variety of situations (e.g., in their homes, in hospitals, etc.). Such robotic agents must be capable of monitoring and understanding information coming from the environment, interacting with humans in a flexible and human-compliant way, autonomously performing tasks inside the environment and personalizing interactions and services according to the specific needs of the assisted person.
The ability to represent and reason on diverse kinds of knowledge constitutes a key feature for allowing intelligent robotic assistants to understand the actual (and possibly time-changing) needs of older persons as well as the status of the environment in which they are acting, and to infer new knowledge to adapt their behaviors and better assist humans. The need to support long-term monitoring and to deploy personalized services for different users calls for the exploitation of sensor networks to gather information about the status of the assisted persons and their living environments, in order to determine the actual situation and how effective assistance can be provided. New social robots are entering the market (e.g., Pepper by SoftBank Robotics) but they still lack the advanced reasoning capabilities needed to have an effective impact in healthcare assistance, contributing to prolonging the independence of the elderly as well as increasing their quality of life.
AI techniques constitute a key enabling technology for realizing adaptive assistive services that implement continuous monitoring and support the daily-home living of seniors. This paper identifies the main requirements that an intelligent assistive robot must satisfy to realize effective services aimed at taking care of older adults inside their home living environment. According to these requirements, we make a hypothesis concerning the main AI techniques that can contribute to achieving the desired objectives. We propose an advanced cognitive architecture integrating these AI techniques into a unified control loop and we discuss the related responsibilities and contributions with respect to the desired intelligent assistive behaviors. Then, we show a (partial) implementation of the envisaged cognitive architecture called KOaLa (Knowledge-based cOntinuous Loop), which has been designed by leveraging the results and the experience gained with GiraffPlus [8], a research project funded by the European Commission representing a successful example of the use of AI in domestic care contexts.


2    Requirements for Daily-Home Assistance
The development of reliable AI and robotic technologies aimed at supporting the daily-home living of persons directly at home is a challenging research objective. There are many heterogeneous situations such systems must properly deal with to effectively support a person and, therefore, many features and capabilities must be taken into account. Taking inspiration from the experience in GiraffPlus [8], it is possible to identify a set of key requirements characterizing the capabilities of intelligent assistive robotic systems. These requirements can be characterized according to four correlated perspectives: (i) environment perspective; (ii) autonomy perspective; (iii) interaction perspective; (iv) adaptation perspective.
 – Environment perspective. Pursuing the idea of GiraffPlus, different types of sensor can be used to gather information about the environment and the health status of the assisted person. The number and type of the sensors deployed in the environment depend on the specific purposes and objectives that must be achieved. Broadly speaking, there are two categories of sensing devices that are relevant in domestic assistance scenarios. Environmental sensors produce data about the state of a particular area of the house, e.g., the kitchen or the living-room. Physiological sensors produce data about physiological parameters of a person, e.g., blood pressure or heart rate. Thus, IoT and sensing devices represent a precious source of information characterizing different features of a working context. The envisaged assistive robotic system must be capable of dealing with a continuous flow of heterogeneous data coming from sensors to monitor the state of the environment and autonomously recognize particular situations that require support. Namely, the system must be capable of recognizing the activities the assisted person is performing inside the house as well as events related to the state of the house or the health of the assisted person.
 – Autonomy perspective. By analyzing the knowledge gathered from the environment, the envisaged system can recognize particular situations that may require it to proactively execute supporting tasks. This means that "causal knowledge" is needed to characterize the basic rules that enable a safe and correct interaction of the system with the environment. According to this knowledge, the system knows its internal capabilities and how they interact with the environment, and therefore can autonomously decide the sequence of activities needed to achieve a desired objective, e.g., a supporting task. A decision making process is needed to achieve the level of autonomy required to automatically synthesize and carry out supportive actions.
 – Interaction perspective. The assistive robot must be capable of interacting with humans at different levels and with different modalities, e.g., gestures and/or voice. In general, the interactions between humans and robots must be as safe and "natural" as possible. Older adults should interact with an assistive robot in a "natural way" using "natural language", and they should not perceive the robot as an obstacle in the house and/or as a danger to their safety. To achieve this, an assistive robot must correctly understand commands and instructions coming from humans and must show behaviors that are safe but also socially acceptable. Namely, the behaviors of an assistive robot must comply with so-called social norms, which are necessary to effectively take part in "social life". The work [3] represents an interesting contribution in this context, considering the task of a robot serving coffee to a patient. Even for a "simple task" like this, a robot must comply with "social norms" in order to carry out the task in a human-compliant way. Indeed, both cups and watering cans are capable of containing fluids, and therefore coffee, but it would be strange (i.e., not human-compliant) to serve coffee using a watering can.
 – Adaptation perspective. Different persons have different habits and different needs that may also change over time. Assistive robots must tightly interact with persons during their daily-home living, and therefore a general, "static behavior" would not be effective. An assistive robot must be capable of adapting its behaviors and interactions according to the specific needs of the particular assisted person. Namely, an assistive robot should be able to build user profiles according to its experience and personalize its behaviors to different persons accordingly.


3 Conceptual AI3 Architecture for Assistive Robots

The envisaged assistive robotic system must be capable of integrating a heterogeneous and complex set of (intelligent) capabilities. There are several AI techniques that address some of the challenges raised by the requirements described above. Machine learning, knowledge representation and reasoning, and automated planning and execution represent three well-established fields of AI that can play a key role in this context. A proper integration of these techniques can endow an assistive robot with the capabilities needed to achieve the desired objectives. Thus, the long-term research objective we are pursuing aims at realizing an enhanced cognitive architecture for assistive robots integrating these three core AI techniques.
Research in cognitive architectures aims at endowing an artificial agent with a hybrid set of cognitive capabilities that range from learning and perception to problem solving and acting. As stated in [13], research in cognitive architectures is important because it enables the creation and understanding of (synthetic) agents that support the same capabilities as humans, by integrating results from the cognitive sciences and AI. A key point in the design of cognitive architectures is the management of different sources of knowledge and of the basic capabilities needed to access and process such knowledge. For example, knowledge about the environment comes through perception, while knowledge about the opportunities offered by a particular state of the environment comes through planning, reasoning and prediction. Many works in the literature have analyzed cognitive architectures [13, 15] and have introduced and applied systems based on cognitive architectures with interesting results. ACT-R [2, 1], SOAR [10, 11] and ICARUS [12] are just some of the most relevant cognitive systems realized in this field. Although not so recent, we take the work [13] as a reference for the design of our cognitive system. In our view, that work provides a good and complete discussion of the principal capabilities a cognitive system must be endowed with, which is well suited for our purpose.
Figure 1 depicts the elicited three-core architecture (AI3), showing the main building blocks and their relationships within the control flow. The architecture is composed of three different layers encapsulating the three AI techniques mentioned above.
[Figure 1: a SENSE-ACT loop links the Environment / Robot / Sensor Network with three cores. The Knowledge Representation and Reasoning Core comprises Knowledge Acquisition and Data Interpretation, Multimodal Human-Robot Interaction, Ontology and Context-based Processing, and the Assistive Robot Knowledge. The Machine Learning Core comprises Behavior Learning, Behavior Prediction, User Profiling and the WHO ICF Taxonomy. The Planning and Acting Core comprises Online Task Reasoning, Daily Task Planning, and Task Execution and Interaction.]

Fig. 1. The three-core-based conceptual architecture



The Knowledge Representation and Reasoning Core is the part of the architecture responsible for processing information coming from the environment. There are two types of information the system must deal with: sensor data about the environment, which the system must properly acquire, interpret and contextualize; and human instructions and commands, which the system can receive by interacting with persons through different modalities (voice and gestures). The Knowledge Acquisition and Data Interpretation and the Multimodal Human-Robot Interaction elements are responsible for dealing with these two sources of information, respectively. The information received from these two "channels" must be processed in a uniform way, according to a well-defined semantics, in order to extract useful knowledge. Specifically, the Ontology and Context-based Processing element relies on an ontological approach to define a semantics guiding the interpretation of sensor data and the related processing mechanisms. Indeed, it defines a set of context-based rules used to process external data and extract knowledge about the events, activities and tasks that characterize the state of the environment and the assisted person. It enables a context-based knowledge processing mechanism which allows an assistive robot to continuously refine its internal knowledge, i.e., the Assistive Robot Knowledge element shown in Figure 1.
The knowledge generated by means of this processing mechanism is central to the (enhanced) control loop and synchronizes the three AI cores composing the architecture. The Machine Learning Core is in charge of further enriching this knowledge by keeping track of interactions and events to learn the particular needs of a specific person/patient and build a model of his/her possible behaviors. Broadly speaking, it is possible to distinguish two basic elements. The Behavior Learning element is in charge of recognizing patterns or repetitive behaviors by analyzing stored information about the interactions between the assistive robot and a patient as well as the activities of a patient inside the house. The Behavior Prediction element is in charge of analyzing learned behaviors to build a suitable model of a patient and "predict" his/her possible activities and interactions accordingly. Such a model can be used to enrich the knowledge of the assistive robot and build a profile of a patient. The profile of a patient can also include a description of his/her health-related needs by leveraging a proper representation of the ICF Taxonomy made by WHO1. Leveraging all this information, an assistive robot can build a rich model of a patient characterizing his/her health status as well as his/her behaviors inside the house.
The Planning and Acting Core is the part of the architecture responsible for actually interacting with the patient and the environment. It leverages the built knowledge of the assistive robot to characterize the operations that can be performed in the environment as well as the set of events and/or activities that can require support. Specifically, the Online Task Reasoning element is responsible for proactively identifying the tasks that must be executed according to the events and activities detected by the knowledge processing mechanisms of the architecture, as well as the commands/instructions received from a patient. These tasks are integrated by the Daily Task Planning element, which maintains the (temporal) plan of the supportive tasks planned within the day. The daily plan is synthesized by taking into account a (temporal) model of the assisted person characterizing his/her specific needs and behaviors. The tight integration of these two elements allows an assistive robot to dynamically adapt its behaviors and the executed supportive tasks to the specific profile of the assisted person. Then, these tasks are executed by actually acting in the environment through a closed-loop control cycle which executes actions and receives feedback about their execution from the environment.


        4    The KOaLa Cognitive Architecture

The AI3 architecture elicited in Figure 1 represents a sort of roadmap for the challenging research objective we are pursuing within the recently started KOaLa research initiative. This initiative has been inspired by the successful results obtained with the GiraffPlus research project [8]. This project developed an integrated system composed of a sensor network and a telepresence robot aimed at supporting and monitoring the daily-home living of a senior person directly in his/her house. Several pilot studies were carried out, during which a telepresence robot (the Giraff) was actually deployed in people's houses for several months [5]. Among these, particularly relevant is the case of "Nonna Lea", who represented an ideal and inspiring user for the project2.
GiraffPlus envisages an application context consisting of a mobile telepresence robot capable of autonomously interacting with an older person through audio/text messages and gestures [9] as well as navigating her living environment. The robot is also endowed with videoconferencing
1
    http://www.who.int/classifications/icf/en/
2
    https://youtu.be/9pTPrA9nH6E
functionalities that allow the user to communicate with a caregiver in the "external world" (e.g., a relative or a doctor). The robot is supposed to move inside a sensorized environment that can produce data about the status of the house as well as the status and activities of the user. Thus, the control architecture of a GiraffPlus-like assistive robot must continuously process sensor data to understand the status of a person (according to his/her specific health-related needs) and of the environment (the operative context) and, then, dynamically synthesize the actions needed to better support the user.
Following the AI3 architecture, the improvement introduced by KOaLa consists of the integration of a knowledge processing module, called the KOaLa Semantic Module, and a planning and execution module, called the KOaLa Acting Module. The integration of these two modules realizes a cognitive high-level control loop enhancing the capabilities of GiraffPlus. Fig. 2 shows a conceptual representation of the envisaged cognitive architecture and highlights the different phases of the control flow, which starts with the gathering of data from the sensor network and ends with the execution of actions in the environment involving, e.g., the robot or sensor configurations.



[Figure 2: sensed data from the Environment / Robot / Sensor Network feeds the Data Processing element of the KOaLa Semantic Module, which relies on the KOaLa Ontology to build the KB; the Goal Recognition element posts new goals to the Problem Formulation element of the KOaLa Acting Module, whose Planning & Execution element maintains a timeline-based plan and sends actions for execution back to the environment.]

Fig. 2. Semantic and Acting modules of the KOaLa sense-reason-act loop in detail




 4.1    The Semantic Module

The KOaLa Semantic Module is responsible for the interpretation of sensor network data and the management of the resulting knowledge of the robot. This module relies on the KOaLa Ontology to provide sensor data with semantics and incrementally build an abstract representation of the application context, i.e., the Knowledge Base (KB). A data processing mechanism uses standard semantic technologies based on the Web Ontology Language (OWL) [4] to continuously refine the KB and infer additional knowledge (e.g., about user activities). Then, a goal recognition process analyzes the KB in order to identify specific situations that require a proactive "intervention" of the robot, and dynamically generates related goals for the acting module.


The KOaLa Ontology The KOaLa ontology has been defined by leveraging SSN [7] and DUL3, two stable and publicly available ontologies. The KOaLa ontology has been structured according to a context-based approach which characterizes the knowledge by taking into account different levels of abstraction and perspectives. Specifically, three levels (i.e., contexts) have been identified: (i) the sensor context; (ii) the environment context; (iii) the observation context. The sensor context characterizes the knowledge about the sensing devices that compose a particular environment, their deployment and the properties they may observe. This context strictly relies on SSN, providing a more detailed representation of the different types of sensor that can compose an environment as well as the different types of property that can be observed. Leveraging this general knowledge, it is possible to dynamically recognize the actual monitoring capabilities, as well as the set of operations that can be performed, according to the types of sensor available and their deployment. The environment context characterizes the knowledge about the structure and physical elements that compose a home environment, and the deployment of sensors. This context models the different physical objects that may compose a home environment, their properties and the particular deployment of the sensors. Thus, this context provides a complete characterization of a domestic environment and the related configuration of the sensor network. Finally, the observation context characterizes the knowledge about the features that can actually produce information in a given configuration, as well as the events and the activities that can be observed through them. This context identifies the observable features of a domestic environment as the physical elements that are actually capable of producing information through the deployed sensors. Similarly, it identifies the observable properties as the properties of the observable features that can actually be observed through the deployed sensors. In this way, the KB is capable of representing observations and processing/interpreting received data by taking into account the associated environmental information, e.g., the area of the house the data comes from or the type of object the data refers to.
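As an illustration, the following minimal sketch uses the Apache Jena API (the library adopted for knowledge processing, see below) to assert a few facts spanning the three contexts: a motion sensor (sensor context) deployed in the kitchen (environment context), which makes the kitchen an observable feature with respect to motion (observation context). The koala namespace and all class/property names are hypothetical placeholders, not the actual KOaLa ontology vocabulary.

import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;

public class ContextExample {
    public static void main(String[] args) {
        Model kb = ModelFactory.createDefaultModel();
        String ns = "http://example.org/koala#"; // hypothetical namespace

        // Sensor context: a motion sensor and the property it may observe.
        Resource pir1 = kb.createResource(ns + "pir1")
                .addProperty(RDF.type, kb.createResource(ns + "MotionSensor"))
                .addProperty(kb.createProperty(ns + "observes"),
                             kb.createResource(ns + "Motion"));

        // Environment context: a room and the deployment of the sensor.
        Resource kitchen = kb.createResource(ns + "kitchen")
                .addProperty(RDF.type, kb.createResource(ns + "Kitchen"));
        pir1.addProperty(kb.createProperty(ns + "deployedIn"), kitchen);

        // Observation context (typically inferred from the two above): the
        // kitchen is an observable feature with respect to Motion.
        kitchen.addProperty(kb.createProperty(ns + "observableProperty"),
                            kb.createResource(ns + "Motion"));

        kb.write(System.out, "TURTLE"); // inspect the resulting KB
    }
}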


Knowledge Processing Given the semantics defined by the KOaLa ontology, a knowledge processing mechanism elaborates sensor data to incrementally build a KB. The pipeline depicted in Fig. 3 shows the main steps of this knowledge processing mechanism. The pipeline is composed of a sequence of reasoning modules, each of which elaborates data and the KB at a different level of abstraction (i.e., ontological context) by means of a dedicated set of inference rules. Such rules define a semantics linking different ontological contexts in order to incrementally abstract data and infer additional knowledge, which is integrated into the KB.
3
    http://www.loa-cnr.it/ontologies/DUL.owl
[Figure 3: a five-stage pipeline: Data Filtering and Normalization → Configuration Detection and Data Interpretation → Feature Extraction → Event and Activity Detection → Goal Recognition.]

Fig. 3. Data processing pipeline for knowledge inference and maintenance


The KB is initialized from a configuration specification which describes the structure of the domestic environment, the set of available sensors and their deployment. The Configuration Detection and Data Interpretation module generates an initial KB by analyzing the configuration specification. This initial KB is then refined by interpreting the (filtered and normalized) sensor data coming from the environment. The Feature Extraction module identifies the observable features of the environment and their related properties; it processes sensor data in order to infer observations and refine the KB accordingly. Finally, the Event and Activity Detection module analyzes the inferred observations by taking into account the knowledge about the environment. Different inference rules detect different types of events and activities according to the particular set of features and properties involved in the observations4.
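A minimal sketch of how such an inference step can be wired with Jena's general-purpose rule engine is shown below. The rule abstracts a raw motion observation from a sensor deployed in a room into an event on that room; the rule itself, the koala vocabulary and the event name are illustrative assumptions, not the rules actually deployed in KOaLa.

import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class EventDetectionExample {
    public static void main(String[] args) {
        // Forward rule: a motion observation produced by a sensor deployed
        // in a room is abstracted into a PersonInKitchen event on that room.
        String rules =
            "@prefix koala: <http://example.org/koala#>. " +
            "[personInKitchen: " +
            "  (?obs koala:observedBy ?s) (?obs koala:detects koala:Motion) " +
            "  (?s koala:deployedIn ?room) " +
            "  -> (?room koala:hasEvent koala:PersonInKitchen)]";
        Reasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));

        Model base = ModelFactory.createDefaultModel();
        String ns = "http://example.org/koala#";
        base.createResource(ns + "obs42")
            .addProperty(base.createProperty(ns + "observedBy"),
                         base.createResource(ns + "pir1"))
            .addProperty(base.createProperty(ns + "detects"),
                         base.createResource(ns + "Motion"));
        base.createResource(ns + "pir1")
            .addProperty(base.createProperty(ns + "deployedIn"),
                         base.createResource(ns + "kitchen"));

        // The inference model exposes both asserted and derived statements.
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        inf.listStatements(null, inf.getProperty(ns + "hasEvent"), (RDFNode) null)
           .forEachRemaining(System.out::println);
    }
}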


        4.2     The Acting Module
The KOaLa Acting Module is responsible for planning and executing operations according to the events or activities inferred by the semantic module. These events are inferred by the Goal Recognition (GR) module of the knowledge processing pipeline shown in Fig. 3. GR is a key element of the cognitive architecture because it provides the link between the semantic and the acting modules. Specifically, it leverages the inferred KB to connect knowledge representation with planning. It can be seen as a background process that monitors the updated KB in order to generate operations the GiraffPlus robot must perform; GR is thus the key feature of KOaLa for achieving proactivity. The operations that GR generates are modeled as planning goals, which the problem formulation process encodes into a planning problem specification. Such a problem specification is then given to a timeline-based planner which synthesizes a plan describing the sequences of operations needed to support the user. Thus, a planning and execution process leverages the timeline-based approach [6] and the PLATINUm framework [16] to continuously execute and refine the plan according to the input goals and the status of the execution.
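The following sketch illustrates, under the same hypothetical koala vocabulary used above, how such a background process could scan the inferred KB and map detected events to planning goals. The Goal record and the event-to-goal mapping are illustrative assumptions, not the actual KOaLa goal recognition logic.

import org.apache.jena.rdf.model.*;
import java.util.*;

public class GoalRecognitionExample {
    // A planning goal to be handed to the problem formulation process.
    record Goal(String task, String location) {}

    // Scan the (inference) model for detected events and map them to goals.
    static List<Goal> recognizeGoals(Model kb) {
        String ns = "http://example.org/koala#";
        List<Goal> goals = new ArrayList<>();
        StmtIterator it =
            kb.listStatements(null, kb.getProperty(ns + "hasEvent"), (RDFNode) null);
        while (it.hasNext()) {
            Statement st = it.next();
            // Illustrative mapping: a person detected in the kitchen triggers
            // a reminder task targeting that room.
            if (st.getObject().toString().endsWith("PersonInKitchen")) {
                goals.add(new Goal("RemindDietaryRestrictions",
                                   st.getSubject().getLocalName()));
            }
        }
        return goals;
    }
}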


        Planning and Execution with PLATINUm The planning and
        execution capabilities of the acting module rely on a novel timeline-
        based framework, called PLATINUm [16]. PLATINUm complies with
        the formal characterization of the timeline-based approach proposed in
4
    The knowledge processing mechanism has been developed by means of the Apache
    Jena software library (https://jena.apache.org/)
[6], which takes into account temporal uncertainty, and has recently been successfully applied in real-world manufacturing scenarios [14]. Broadly speaking, a timeline-based model is composed of a set of state variables describing the possible temporal behaviors of the domain features that are relevant from the control perspective. Each state variable specifies a set of values that represent the states or actions the related feature may assume or perform over time. Each value is associated with a flexible duration and a controllability tag which specifies whether the value is controllable or not. A state transition function specifies the valid temporal behaviors of a state variable by modeling the allowed sequences of values (i.e., the transitions between the values of a state variable). State variables model the "local" constraints a planner must satisfy to generate valid temporal behaviors of single features of the domain, i.e., valid timelines. It can be necessary to further constrain the behaviors of state variables in order to coordinate different domain features and realize complex functionalities or achieve complex goals (e.g., perform assistive functionalities). A dedicated set of rules, called synchronization rules, models the "global" constraints that a planner must satisfy to build a valid plan. Such rules can also be used to specify planning goals.
Given such a model, a PLATINUm planner synthesizes a set of timelines, each of which represents an envelope of valid temporal behaviors of a particular state variable. These timelines allow the GiraffPlus robot to perform the desired assistive tasks. Then, a PLATINUm executive carries out the timelines by temporally instantiating the associated sequences of values, called tokens. Namely, the executive decides the exact start time of the tokens composing the timelines of the plan. In general, the actual execution of these tokens cannot be controlled by the executive, which must dynamically adapt the plan according to the feedback received during execution. For example, the actual time the GiraffPlus robot needs to navigate the environment and reach a particular location cannot be decided by the planner or the executive. Indeed, navigation can be slowed down by obstacles, and therefore the end time, and thus the actual duration, of a navigation operation is known only when the executive receives the associated execution feedback.
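To make the notions above concrete, the following simplified sketch models a navigation state variable with flexible durations, controllability tags and a transition function. It is an illustrative data structure under our own naming, not the PLATINUm API.

import java.util.*;

enum Tag { CONTROLLABLE, UNCONTROLLABLE }

// A state variable value with a flexible duration and a controllability tag.
record Value(String name, long minDur, long maxDur, Tag tag) {}

final class StateVariable {
    private final Map<String, Value> values = new HashMap<>();
    private final Map<String, Set<String>> transitions = new HashMap<>();

    void addValue(Value v) {
        values.put(v.name(), v);
        transitions.put(v.name(), new HashSet<>());
    }

    void allowTransition(String from, String to) {
        transitions.get(from).add(to);
    }

    // A timeline is valid only if every step follows an allowed transition.
    boolean isValidTimeline(List<String> timeline) {
        for (int i = 1; i < timeline.size(); i++)
            if (!transitions.get(timeline.get(i - 1)).contains(timeline.get(i)))
                return false;
        return true;
    }
}

public class NavigationModel {
    public static void main(String[] args) {
        StateVariable robot = new StateVariable();
        // Navigation has a flexible, uncontrollable duration: its end time
        // is known only from the execution feedback.
        robot.addValue(new Value("At(kitchen)", 1, Long.MAX_VALUE, Tag.CONTROLLABLE));
        robot.addValue(new Value("GoingTo(bedroom)", 30, 120, Tag.UNCONTROLLABLE));
        robot.addValue(new Value("At(bedroom)", 1, Long.MAX_VALUE, Tag.CONTROLLABLE));
        robot.allowTransition("At(kitchen)", "GoingTo(bedroom)");
        robot.allowTransition("GoingTo(bedroom)", "At(bedroom)");

        System.out.println(robot.isValidTimeline(
            List.of("At(kitchen)", "GoingTo(bedroom)", "At(bedroom)"))); // true
    }
}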


5    KOaLa in Action
Let us consider a typical assistive scenario consisting of an older adult living alone in a single-floor apartment composed of a living room, a kitchen, a bathroom, a bedroom and a central corridor connecting all the rooms with the entrance. Many sensors can be installed to track activities and events inside the house. Each window and the entrance door have been endowed with a sensor to check whether they are open or closed. There is (at least) one sensor in each room to track temperature and luminosity and to detect motion. Finally, there are additional sensors to track the usage of electronic devices such as the TV, the oven or the microwave. In addition to these environmental sensors, there are other sensing devices that track physiological parameters of the assisted person, such as blood pressure, heart rate, glucose level, etc. All these sensing devices provide a rich and heterogeneous set of data that the assistive robot can continuously analyze through KOaLa to recognize the activities the person is performing, or events/situations affecting the status of the house or the health of the person. Below are some examples of the typical in-house situations we are focusing on during the daily-home living of a person in need of assistance. These examples show the objective of the enhanced assistive services:

 – The sensor network detects some activity in the kitchen. The information gathered from the sensors says that someone is moving inside the kitchen, the TV is on, the luminosity is high and the temperature close to the flame is a bit higher than usual. Given this information, and given the time, the assistive robot understands that "Nonna Lea" is cooking and therefore plans to move towards the kitchen to remind her of the dietary restrictions she must follow. Then, the assistive robot plans to remind the patient to take her pills in forty-five minutes, which is the time she usually takes to complete the meal. In addition, the robot plans to send a message to her sons asking them to call their mother in an hour and a half to check whether she has actually taken the pills.
 – The sensor network detects some activity in the living room. The information gathered from the sensors says that someone is sitting on the sofa, the TV is on and the luminosity of the room is high. According to this information, the robot understands that the person is watching TV and therefore plans to move into the living room and inform the person about the programming of the day. In addition, the robot notices that the person has neither made any calls to nor received calls from her sons today, and therefore plans to suggest that she call her sons before going to sleep.
 – The sensor network detects some activity in the bedroom. The information gathered from the sensors says that someone is moving inside the bedroom and that the light in the room is on. The robot checks the time and recognizes that it is the time at which the person usually goes to bed. However, it detects that the window in the kitchen is open and that the light in the bathroom is on. Thus, the robot plans to move towards the bedroom to alert the person that the window must be closed and the bathroom light turned off before going to bed. After a while, the information gathered from the sensors in the bedroom says that someone is lying on the bed and that the luminosity of the room is low. The robot understands that the person is sleeping and decides to notify her sons. In addition, the robot notices that the temperature in the bedroom is a bit higher than the ideal temperature for a good sleep; thus, it decides to lower the temperature of the room a bit by controlling either the air-conditioner or the heater, according to the season. The day after, the robot detects that the person is still sleeping at the time she usually wakes up. Thus, the robot plans to send a message to her sons about this unusual behavior within thirty minutes, if she does not wake up before.
Such scenarios show ordinary life situations of a senior person and some of the roles that an assistive robot, with the help of KOaLa, can play to support her living at home. In particular, these scenarios show that KOaLa, through the combination of simple inference rules (e.g., someone is moving inside the kitchen and the temperature close to the flame is higher than usual), can endow a telepresence robot with the capability of autonomously reasoning on the state of the environment, inferring complex situations and dynamically triggering goals accordingly. A first set of inference rules has been developed to realize a "stratified" reasoning mechanism capable of abstracting sensor data and inferring events and situations concerning the status of the environment. Currently, an extended set of rules is under development to realize the goal triggering mechanism needed to proactively link knowledge reasoning to planning and acting.
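To give a flavor of such stratified rules in Jena's rule syntax, the sketch below chains two layers: the first abstracts raw kitchen observations into a Cooking activity, the second turns the detected activity into a reminder goal for the acting module. Both rules and the koala vocabulary are illustrative assumptions, not the rules deployed in KOaLa.

@prefix koala: <http://example.org/koala#>.

// Layer 1: abstract raw observations into an activity.
[cookingDetection:
  (?k rdf:type koala:Kitchen)
  (?k koala:motionDetected 'true')
  (?k koala:tempAboveUsual 'true')
  -> (?k koala:hasActivity koala:Cooking)]

// Layer 2: turn the detected activity into a goal for the acting module.
[dietaryReminder:
  (?k koala:hasActivity koala:Cooking)
  -> (?k koala:triggersGoal koala:RemindDietaryRestrictions)]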


6    Conclusions and Future Work
This paper presented an AI-based cognitive architecture which integrates sensing, knowledge representation and automated planning techniques into a high-level control loop that enhances the proactivity features of an assistive robot designed to support an older person living at home in her daily routine. A semantic module leverages a dedicated ontology to build a KB by properly processing data collected by means of a sensor network installed in the environment. An acting module takes advantage of the timeline-based planning approach to control robot behaviors. A goal triggering process acts as a bridge between the two modules and provides the key enabling feature to endow the robot with suitable proactivity levels. At this stage, some tests have been performed to show the feasibility of the approach. Further work is ongoing to enable more extensive integrated laboratory tests to better assess the performance and capabilities of the overall system. Future work will also investigate the opportunity to integrate machine learning techniques to better adapt the behavior of the assistive robot to the specific daily behaviors of different targeted people.


References
 1. Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C.,
    Qin, Y.: An integrated theory of the mind. Psychological review
    111(4), 1036 (2004)
 2. Anderson, J.R., Matessa, M., Lebiere, C.: ACT-R: A theory of higher level cognition and its relation to visual attention. Human-Computer Interaction 12(4), 439–462 (1997)
 3. Awaad, I., Kraetzschmar, G.K., Hertzberg, J.: The role of func-
    tional affordances in socializing robots. International Journal of So-
    cial Robotics 7(4), 421–438 (2015)
 4. Bechhofer, S.: OWL: Web Ontology Language, pp. 2008–2009.
    Springer US, Boston, MA (2009)
 5. Cesta, A., Cortellessa, G., Fracasso, F., Orlandini, A., Turno, M.:
    User Needs and Preferences on AAL Systems that Support Older
    Adults and Their Carers. Journal of Ambient Intelligence and Smart
    Environments 10(1), 49–70 (2018)
 6. Cialdea Mayer, M., Orlandini, A., Umbrico, A.: Planning and ex-
    ecution with flexible timelines: a formal account. Acta Informatica
    53(6-8), 649–680 (2016)
 7. Compton, M., Barnaghi, P., Bermudez, L., García-Castro, R., Corcho, O., Cox, S., Graybeal, J., Hauswirth, M., Henson, C., Herzog, A., Huang, V., Janowicz, K., Kelsey, W.D., Phuoc, D.L., Lefort, L., Leggieri, M., Neuhaus, H., Nikolov, A., Page, K., Passant, A., Sheth, A., Taylor, K.: The SSN ontology of the W3C semantic sensor network incubator group. Web Semantics: Science, Services and Agents on the World Wide Web 17, 25–32 (2012), http://www.sciencedirect.com/science/article/pii/S1570826812000571
 8. Coradeschi, S., Cesta, A., Cortellessa, G., Coraci, L., Gonzalez, J., Karlsson, L., Furfari, F., Loutfi, A., Orlandini, A., Palumbo, F., Pecora, F., von Rump, S., Štimec, A., Ullberg, J., Östlund, B.: GiraffPlus: Combining social interaction and long term monitoring for promoting independent living. In: The 6th International Conference on Human System Interactions (HSI). pp. 578–585 (2013)
 9. Cortellessa, G., Fracasso, F., Sorrentino, A., Orlandini, A., Bernardi,
    G., Coraci, L., De Benedictis, R., Cesta, A.: Enhancing the interac-
    tive services of a telepresence robot for aal: Developments and a
    psycho-physiological assessment. In: Cavallo, F., Marletta, V., Mon-
    teriù, A., Siciliano, P. (eds.) Ambient Assisted Living. pp. 337–357.
    Springer International Publishing, Cham (2017)
10. Laird, J.E.: Extending the Soar cognitive architecture. Frontiers in Artificial Intelligence and Applications 171, 224 (2008)
11. Laird, J.E., Newell, A., Rosenbloom, P.S.: Soar: An architecture for general intelligence. Artificial Intelligence 33(1), 1–64 (1987). https://doi.org/10.1016/0004-3702(87)90050-6, http://www.sciencedirect.com/science/article/pii/0004370287900506
12. Langley, P., Cummings, K., Shapiro, D.: Hierarchical skills and cog-
    nitive architectures. In: Proceedings of the Annual Meeting of the
    Cognitive Science Society. vol. 26 (2004)
13. Langley, P., Laird, J.E., Rogers, S.: Cognitive architectures: Research issues and challenges. Cognitive Systems Research 10(2), 141–160 (2009). https://doi.org/10.1016/j.cogsys.2006.07.004, http://www.sciencedirect.com/science/article/pii/S1389041708000557
14. Pellegrinelli, S., Orlandini, A., Pedrocchi, N., Umbrico, A., Tolio,
    T.: Motion planning and scheduling for human and industrial-robot
    collaboration. CIRP Annals - Manufacturing Technology 66, 1–4
    (2017)
15. Samsonovich, A.V.: Toward a unified catalog of implemented cogni-
    tive architectures. BICA 221(2010), 195–244 (2010)
16. Umbrico, A., Cesta, A., Cialdea Mayer, M., Orlandini, A.: PLAT-
    INUm: A new Framework for Planning and Acting. In: AI*IA
    2016 Advances in Artificial Intelligence: XVth International Con-
    ference of the Italian Association for Artificial Intelligence, Genova,
    Italy, November 29 – December 1, 2016, Proceedings. pp. 508–522.
    Springer International Publishing (2017)