<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Human behaviours simulation in ubiquitous computing environments</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Teresa Garcia-Valverde, Francisco Campuzano, Emilio Serrano and Juan A. Botia University of Murcia</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Ambient Assisted Living (AAL) systems' main goal is to improve the quality of life of elderly people by using ICT-based systems. In this paper, we are concerned with the artificial reproduction of a physical environment (i.e. a house) and an elder (i.e. the attended) living in such an environment. An agent-based social simulation system is used for this purpose. The simulator allows the integration of ubiquitous computing appliances, services and applications in this environment. A realistic reproduction of human behaviour in the simulator helps, in this context, in the validation of silent monitoring, diagnosis and action-based applications. Evidence is given in the paper which demonstrates the level of realism reached, by comparing the artificial behaviours with real ones. Index Terms: Ubiquitous computing, Ambient Assisted Living, behaviour simulation, user modelling.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        The main thesis presented in this paper is the following:
agent based social simulation (ABSS) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] may help in the
engineering of Ambient Intelligence systems. ABSS is a
simulation paradigm in which the focus is put on defining
the separate components of the simulation in an isolated
manner. In simulation runs, the emergence of behaviours
is the main subject under study, and the agent metaphor is
used to specify single components and the interactions
among them. An ambient intelligence [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] system is a set of
appliances, services and applications which silently surround
and interact with the user in an intelligent manner. In this
kind of system, the user is the central entity of the model;
services and applications are built starting from the user.
      </p>
      <p>
        A main difficulty one may find in the development process
of an AmI system is that of testing and validation. Testing is
the process of executing a program with the intent of finding
errors in the code [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Such errors must be debugged [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Some
of the errors may be found by using a Unit approach with the
system under test (SUT). Common errors found
at this stage are related to common programming mistakes
(e.g. values of variables out of range, sloppy checking of
return values from methods, and so on). Thus, robustness of
the program is a must here. A more elaborate test set may
be defined in order to assess the functionality of the system
(i.e. that it behaves as expected). But, since the main issue in AmI
systems is a smooth interaction with the user, a Unit-based
approach is no longer sufficient. It is clear that the user, or
at least a model of the user, should be incorporated in the
development process in order to measure to what extent the
SUT behaves as expected when interacting with him. In
this paper, an approach to test and validate AmI systems in a
stage prior to deployment is presented. The main idea behind
this is that the user is modelled with a computational model
and integrated into an ABSS model which incorporates, as
simulated artifacts, the environment and the hardware (i.e. mainly
sensors and interfaces with the user), and integrates the real
software (i.e. services and applications). The real software is
precisely the SUT here.
      </p>
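      <p>The integration idea just described can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Necesity code: the event format, the anomaly rule and all parameter values are invented.</p>

```python
import random

# Hypothetical stand-in for the real monitoring service (the SUT).
# In the approach above this would be the actual, unmodified
# application code, while the events come from the simulation.
def monitoring_service(event_log, tmax=120):
    """Flag an anomaly if no sensor fires for more than tmax minutes."""
    alarms = []
    for prev, curr in zip(event_log, event_log[1:]):
        if curr[0] - prev[0] > tmax:
            alarms.append(curr[0])
    return alarms

def simulate_sensor_events(minutes=600, seed=1):
    """Toy simulated user: a presence sensor fires at random intervals."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < minutes:
        t += rng.expovariate(1 / 20)  # roughly 20 min between movements
        events.append((t, "presence"))
    return events

# The simulated trace is fed to the (stand-in) real software:
print(len(monitoring_service(simulate_sensor_events())))
```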
      <p>
        The proposal is articulated by means of a methodological
work. Such a methodology is a set of procedures which
guides the developer in the definition, creation, testing and
validation of the AmI system. It is based on a methodology
previously described by Gilbert et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It comprises the
creation of the necessary ABSS models and how they should
be employed to find errors in AmI services. The application
of the methodology is exemplified in a real domain. The
application domain is AAL (Ambient Assisted Living). An
AAL system is an ICT-based solution devoted to
improving the quality of life of elderly people. In this case, the
interest is focused on a system called Necesity [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. It is
based on a sensor network deployed through the house and
a central processing unit. Sensors include presence, pressure
and open-door. The system is designed to work on single
person environments (e.g. an elderly who lives alone and
independently in his house). It is in charge of monitoring the
activity regime of the elderly 24x7, so that when
his activity pattern is anomalous, an emergency response is
started. The rest of the paper will demonstrate that an artificial
reproduction of a house, sensors and the attended, together
integrated with the real software, helps in the fine tuning of
the activity pattern management software.
      </p>
      <p>The rest of the paper is structured as follows. Section II
introduces the methodology used for the engineering of AmI
services. Section III introduces the computational models
employed to artificially reproduce the behaviour of the attended.
Such behaviour is based on a probabilistic, hierarchical
automaton which governs the activity and location of the
modelled elderly at each instant of time. In section IV, the
validation approach is presented. It is based on the statistical
contrast of artificial data traces, obtained by simulating the
automaton just mentioned, with similar traces coming from real
users in a similar context.</p>
      <p>II. AVA, AN AGENT BASED METHODOLOGY FOR THE VALIDATION OF AMI SYSTEMS</p>
      <p>
        This section explains an agent based methodology for the
validation of AmI systems called AVA. This methodology
proposes the development of ABSS in order to validate AmI
applications. AVA is an extension of the methodology
promulgated by Gilbert et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for the development and use of
general ABSS. The two main innovations with regard to the
classical methodology by Gilbert et al. are: (1) the existence
of a step to generate simpler simulations and (2) the
consideration of including real elements in the simulation to get
more realistic results. This section will show the advantages
of these innovations for the specific purpose of validating
AmI applications. The AVA methodology is expressed as a
flowchart in figure 1.
      </p>
      <p>
        Gilbert et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] define the target of a social simulation as
some “real world” phenomenon the researcher is
interested in. The AVA methodology is proposed to validate an AmI
application, including its interactions with the environment
and users. Thus, AVA starts by considering an AmI target (step 1)
which includes an environment, users and an AmI application
which may be finished or at an advanced stage of development.
Typically, the use of ABSS to generate knowledge involves a
necessary familiarization with the domain in the first step. This
is required to generate models of the target in the following
steps of the methodology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The main elements to be studied
in an AmI system are: the environment (step 2), users (step
3) and the AmI application (step 4). Note that while the
environment and the users are inputs, the application is
a process. In principle, the environment and users are external
and do not support changes. On the other hand, the AmI
application can be modified, and will be refined
along the iterations of AVA to obtain a realistic validation. This
paper discusses the performance of these steps for an AAL
system for elderly people. (The flowchart uses standard elements of classic flowcharts [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]: flow
of control (represented as arrows), processes (represented as rectangles),
decisions (rhombuses), input/output (parallelograms), start and end symbols
(ovals) and predefined processes (rectangles with vertical lines at the sides).)
      </p>
      <p>
        The development of an AmI system model is performed
in step 5. The model design associates the real system with
a representation of this system (the model) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Here, the
AmI application must be modelled but also the users and
the environment. These models are necessary to validate the
AmI application because they interact with it. Moreover, a
realistic validation of the application needs realistic models
for users and the environment. Therefore, models must first describe
reality, and only then be as simple as possible [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Section III
explains the construction of a user model for a specific AmI
application.
      </p>
      <p>
        Step 6 deals with the implementation of the AmI system in
a simulation language. The implementation from the concepts
of the model is not a trivial task [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A general programming
language or a specific one of the available frameworks for
the development of ABSS can be used for the construction of
the simulation. The second option is much more convenient
because several of the typical tasks in the construction of
ABSS have been included in this kind of software packages
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Examples of these tasks are scheduling agents’ actions
or building basic environments. The website of the Open Agent
Based Modeling Consortium (http://www.openabm.org/) nowadays lists 22 of these
frameworks. Section III shows the use of a specific software
package for the implementation of a realistic environment
model: SweetHome3D.
      </p>
      <p>After building the simulation, it must be executed (step
7). Quick, cheap and numerous experiments can be performed
thanks to ABSS. These executions produce large amounts
of data regarding the behaviour of the user, environment and
application models. Forensic analysis, step 8, is an offline
analysis conducted on the data stored in the
previous step. The analysis should consider whether the AmI
application functionality is correct. Furthermore, this step must
validate that the behaviour of the user and environment models
is consistent with the observed reality (steps 2 and 3). Without
this validation, the theories generated from simulations have
no relation to reality (as they are based on non-descriptive
models). Therefore, the functionality of the AmI application
model is linked to the user and environment models. Section
IV deals with the validation of the user model for an AAL
system.</p>
      <p>One of the innovative points in the AVA methodology is the
use of simple simulations as a means to validate descriptive
simulations. Step 9 checks if any elements of the simulation
are too complex. In this case, complexity means that it is
difficult to assess whether the behaviour of an element is the
expected one. For example, some behaviours of users can be
so complex that they need to be evaluated in isolation. An
example of this type of behaviour would be the resolution of
collisions in the motion of a large number of agents. This
behaviour is a problem to be studied in itself, and its validation
would be much more complicated with additional elements in
the simulation (more user behaviours, a realistic environment
model, a realistic model of an AmI application, etc.). In these
cases, the methodology proposes to treat each such complex
element as an object of study in itself (step 10) and repeat the
AVA methodology for it (step 11). The reuse of models
and code in this new iteration will be direct because a more
descriptive simulation is available as a result of the previous
steps. Once the complex element is validated in a simpler
simulation, which is the final result of the methodology, the
next step for the overall simulation (step 12) can be performed.</p>
      <p>Step 12 checks whether the developer has found errors in
functionality. If that is the case, the AmI application of step 4 must
be modified in order to correct these errors and the process
repeated. Besides the primary objective (validating the AmI
application), it is typical to find bugs from previous steps at this
point, in the form of implementation failures or unrealistic
models.</p>
      <p>The final decision, step 13, checks whether real elements of
the AmI system can be connected to the simulator. The AVA
methodology proposes to inject or connect real elements into
the simulation progressively in order to make more realistic
validations. This process is called “reality injection” and the
basic idea is that real elements can coexist with simulated
elements. After connecting real elements, the methodology
must be repeated from step 4 to improve the application and
the models. The result is an exhaustive validation which is
as realistic as possible. The obvious question is why models
and simulations are necessary if real elements (such as real users)
can be injected. The answer is that a model, by definition,
is easier to study than the modelled reality. The
purpose of including real elements in simulations is to improve
the realism of the models. Then, in subsequent iterations, the
real elements need not be included, because pure simulations
allow faster and cheaper tests.</p>
      <p>Finally, if models are descriptive enough, the bugs found in
the functionality of the AmI application model will correspond
to failures of functionality in the real AmI application. These
failures should have been corrected in each iteration of the
AVA methodology. Therefore, the result of the methodology
(step 14) is that the AmI target is exhaustively validated.</p>
    </sec>
    <sec id="sec-2">
      <title>III. REALISTIC BEHAVIOUR MODELLING</title>
      <p>In this section, the particular models used in the application
of the AVA methodology to the AAL domain are introduced.
Section III.A presents how the physical environment
(i.e. the house and furniture) and sensors were defined. Section
III.B covers the production of realistic computational models
of elders living in such an environment and making sensors
react to their presence. With such models (i.e. the house,
sensors and persons) within a simulation, and their integration
with the ubiquitous computing software, that software can be
tested.</p>
      <p>Notice that this does not necessarily involve a Participatory Multiagent
Simulation: the real elements do not have to be humans playing the role of simulation
components. These elements can be software applications, hardware or even
parts of the environment.</p>
      <sec id="sec-2-1">
        <title>A. Environment Modelling</title>
        <p>For Multi-Agent Based Simulation (MABS), UbikSim
(http://ubiksim.sourceforge.net) is available, a simulator developed by the University of Murcia
that works on top of MASON (http://cs.gmu.edu/~eclab/projects/mason/). It integrates an environment
modelling tool based on SweetHome3D (http://www.sweethome3d.eu/es/index.jsp), an application
conveniently adapted for modelling attended people and their
environment. Houses can be created on a 2D plane, and
a navigable 3D view is also offered (figure 2). This view is used
for real-time visualization of the simulation.</p>
        <p>In the physical environment generated using the UbikSim
editor, a simple house (see figure 2) with a kitchen, a
bathroom, a bedroom and a living room is modelled. Presence
sensors are included in every room of the house, and an
open-door sensor (necessary for knowing when the elder
leaves the home) is placed at the entrance. When a
simulation is run, the person moves around the house and triggers
sensors when detected by them. Such sensors, through
the ubiquitous computing software, generate events, and these
events generate log entries. The simulated log entries are used
afterwards to check whether the virtual elder behaves in a realistic
manner (see section IV).</p>
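        <p>This arrangement can be sketched as follows. Only the room list and sensor types come from the text above; the class, method names and log format are invented for illustration and are not UbikSim's actual API.</p>

```python
# Minimal sketch of the simulated environment described above: rooms
# with presence sensors plus an open-door sensor at the entrance.
ROOMS = ["kitchen", "bathroom", "bedroom", "living_room"]

class House:
    def __init__(self):
        self.log = []  # (time in minutes, event string)

    def move_person(self, t, room):
        # A presence sensor in each room fires when the person enters.
        assert room in ROOMS
        self.log.append((t, f"presence:{room}"))

    def open_door(self, t):
        # The open-door sensor tells the system the elder left home.
        self.log.append((t, "door:open"))

house = House()
house.move_person(0, "bedroom")
house.move_person(30, "kitchen")
house.open_door(95)
print(house.log[-1])  # (95, 'door:open')
```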
        <p>
          Notice that log entries (both in the simulator and the
real setting) are generated by the same monitoring service
which continuously checks if the elder may be suffering some
problem, by using a pattern recognition approach (more details
on this may be found in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]) on the events coming from
sensors. In the first case the person is virtual; in the second,
real. But the monitoring service is the same.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Behaviour Modelling</title>
        <p>The target of the modelling activity is a typical aged person,
who lives independently and alone in his own house. As
he lives alone, the following situation may occur: he may
suffer some health problem and remain immobilised on the
floor for too long before anybody comes and notices
that something is wrong.</p>
        <p>
          But it is possible to develop a
ubiquitous computing system which detects it and generates
some emergency response process [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. By following the AVA
methodology, we may use a simulated elder within a simulated
environment to test such a system before it is deployed in a real
environment for pilot testing. The elder must necessarily be
simulated over the 24 hours of the day, repeatedly,
for a given number of weeks. For this, it is assumed
that the day is divided into time slots (i.e. morning, noon,
afternoon and night). It is also assumed that, in each time slot,
the simulated person behaves in a way specific to that slot.
        </p>
        <p>The behaviour of simulated people is modelled
probabilistically. In this approach, behaviours are defined as
situations the agent should play at each moment. Transitions
between behaviours are probabilistic. The underlying model
is a hierarchical automaton (i.e. at a higher level there is a
number of complex behaviours the agent may play and,
once it is in a concrete state, within that state there is another
automaton with simpler behaviours). Thus, the modelling
of each behaviour is treated separately and the modeller is
abstracted from unnecessary details. At the lowest level (basic
actions), each state is atomic. An agent never conducts two
behaviours of the same level simultaneously.</p>
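        <p>The hierarchy can be sketched as a recursive state structure. The State class below is an invented illustration; only the state names follow the automata of figure 3.</p>

```python
# Sketch of the hierarchy described above: each high-level state may
# contain a child automaton of simpler behaviours, and only the
# lowest-level (atomic) states correspond to basic actions.
class State:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # lower-level automaton states

    def leaves(self):
        # Atomic states have no children; they are the basic actions.
        if not self.children:
            return [self.name]
        return [leaf for c in self.children for leaf in c.leaves()]

meal = State("MealTime", [State("GoingToFridge"), State("Cooking"),
                          State("Eating")])
print(meal.leaves())  # ['GoingToFridge', 'Cooking', 'Eating']
```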
        <p>The behaviours used for modelling elders are of three types:
• Monotonous behaviours: the kind of behaviour the elder
manifest always approximately in the same time slot, and
on a daily basis (e.g. sleeping, having meal, medication
and so on).
• Non monotonous behaviours: the kind of behaviour the
elder usually manifest, not bounded to a concrete time
slot, and repeated within a non constant period (e.g. going
to the toilet, having a shower, cleaning the house and so
on).
• Any time behaviours: such behaviours will sometimes
interrupt others the elder is already doing, and will be
generated regardless they were already generated in a
temporal proximity (e.g. in his spare time).</p>
        <p>
          A probabilistic automaton [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] is defined as the quintuple
(Q, V, P (0), F, M ) where Q is a finite set of states, V is a
finite set of input symbols, P (0) is an initial state vector,
F is a set of final states and M is a matrix that represents
probabilities of transition for every state. In this definition,
transition’s probabilities depend on time. According to
different daily time slots, the agent’s behaviour acts on a different
pattern. There are time slots for eating, sleeping and taking
medication (Monotonous behaviours).
        </p>
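        <p>A direct reading of this definition can be sketched as follows. All probability values here are invented placeholders, not parameters fitted from the paper's data.</p>

```python
import random

# The quintuple (Q, V, P(0), F, M) defined above, with the transition
# matrix M depending on the daily time slot. Only the NormalTime row
# is shown; other rows fall back to the initial state vector.
Q = ["NormalTime", "MedicationTime", "MealTime", "SleepTime", "Anomalous"]
F = {"Anomalous"}               # final state: reaching it stops execution
P0 = [1.0, 0.0, 0.0, 0.0, 0.0]  # initial state vector (start in NormalTime)

def M(time_slot):
    """Transition probabilities out of NormalTime for a given slot."""
    if time_slot == "noon":     # meals more likely at noon
        return {"NormalTime": [0.5, 0.1, 0.4, 0.0, 0.0]}
    return {"NormalTime": [0.8, 0.1, 0.0, 0.1, 0.0]}

def step(state, time_slot, rng):
    weights = M(time_slot).get(state, P0)
    return rng.choices(Q, weights=weights)[0]

rng = random.Random(0)
print(step("NormalTime", "noon", rng))
```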
        <p>Notice that, when the elder is in any state, the necessity
of changing to another state may arise. But this is not done
immediately; moreover, a number of different changes (i.e.
transitions) may be pending simultaneously. Thus, a list of
pending tasks (i.e. events) is maintained. Such tasks are
ordered by a static priority (e.g. going to the toilet goes before
cleaning).</p>
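        <p>The pending task list can be sketched as a priority queue; the numeric priority values below are invented for illustration.</p>

```python
import heapq

# The pending task list described above as a priority queue: lower
# value means more urgent, and the ordering is static.
PRIORITY = {"ToiletTime": 0, "MedicationTime": 1, "MealTime": 2,
            "CleanTime": 3}

pending = []
heapq.heappush(pending, (PRIORITY["CleanTime"], "CleanTime"))
heapq.heappush(pending, (PRIORITY["ToiletTime"], "ToiletTime"))
# Going to the toilet goes before cleaning, as in the example above:
print(heapq.heappop(pending)[1])  # ToiletTime
```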
        <p>For generating transitions in real time, probability
distribution functions are used according to the type of the
behaviour and its features. These distributions are members
of the exponential family [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Section IV defines all
distribution functions used.</p>
        <p>The state sets of the automata shown in figure 3 are:
(a) Q = {a0 = NormalTime, a1 = MedicationTime,
a2 = MealTime, a3 = SleepTime, a4 = Anomalous};
(b) Q = {a00 = SpareTime, a10 = MedicationTime,
a20 = MealTime, a30 = SleepTime, ax1 = ToiletTime,
a02 = ShowerTime, a03 = CleanTime};
(c) Q = {a200 = GoingToFridge, a201 = GoingToCooker,
a202 = Cooking, a203 = GoingToTable, a204 = Eating}.</p>
        <p>Notice that monotonous behaviours must be activated at
specific hours or within specific time intervals. For
example, in the case of MedicationTime, a new necessity of
changing to that state is generated exactly at the time to
take medicines. In the case of MealTime and SleepTime, the
necessity is generated within a time interval, and the distribution
that models these transitions is bounded to this time slot. It
must be ensured that the agent eats and sleeps every day; because
of that, if no transition is generated within the time slot, a
transition is generated at its end time.</p>
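        <p>The bounded-slot rule can be sketched as follows; the gamma shape and scale values are invented placeholders, not the paper's fitted parameters.</p>

```python
import random

# Sketch of the rule just described: a transition time is sampled
# inside the slot from a waiting-time distribution; if the sample
# falls past the slot's end, the transition is forced at the end time,
# guaranteeing the agent eats (or sleeps) every day.
def meal_transition(slot_start, slot_end, rng):
    t = slot_start + rng.gammavariate(2.0, 30.0)  # minutes into slot
    return min(t, slot_end)                       # forced at end time

rng = random.Random(3)
for _ in range(5):
    t = meal_transition(12 * 60, 14 * 60, rng)
    assert 12 * 60 <= t <= 14 * 60                # always inside slot
```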
        <p>To provide more realism, an automaton hierarchy is
introduced, representing each state by a lower-level automaton which
defines more specialized behaviours.</p>
        <p>As shown in figure 3(a), in level 0 the initial state
is a0 = NormalTime. For each of the other states
{a1 = MedicationTime, a2 = MealTime, a3 = SleepTime} there exists
a list of times generated by a probability function. When the
time counter reaches one of these times, a transition to the state
owning the list is added to the pending task list. When the action
is finished, the automaton returns to the initial state if there is no
other pending task. The final state is a4 = Anomalous; if it
is reached, the execution stops.</p>
        <p>In level 1 (figure 3(b)), non monotonous behaviours are
represented. There is an initial state in every refinement where
the person does the main task of the upper level (eating,
sleeping or taking medication). The state ax1 = ToiletTime is
considered in every refinement of level 0. However, the states
a02 = ShowerTime and a03 = CleanTime may only be activated
in the normal state of level 0.</p>
        <p>At the lowest level (level 2), new automata define
specialized actions refining every state of level 1. These new
states do not add relevant information, but the agent gains
more realistic behaviours.</p>
        <p>With the sequence of actions in figure 3(c), the person cooks
before eating. It refines state a20 of level 1: the agent is not
static in the kitchen, but instead goes to
different places and spends some time in every state.</p>
        <p>In figure 4, a base implementation of the agent's behaviour is
presented. First of all, all possible automaton states are iterated
to initialize lists which contain time instants (lines 13-24).
These time instants are generated by a probability distribution
function, and they represent when a transition to the associated
state is going to be launched. After that, the automaton
begins to run. At every time instant, if the current time equals
the first item of a time list, a transition is added to the pending
task list (lines 57-60). When there is a state with higher
priority than the current state in the pending task list, a transition is
also generated. Then, if the current state is unfinished, it is stored
as a pending task and the new state is reached (lines 46-50).
When this new state is finished, the left state may be resumed.</p>
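        <p>The loop just summarised can be condensed into a sketch. This is not the code of figure 4: state names, priorities and the time resolution are invented, and state completion/resumption is deliberately left out.</p>

```python
import heapq

# Condensed sketch of the base loop described above. Time lists are
# given per state; at each instant, due transitions join the pending
# task list, and a higher-priority pending state pre-empts the current
# one (the suspended state is pushed back as pending).
PRIORITY = {"ToiletTime": 0, "MealTime": 1, "NormalTime": 9}

def run(times, horizon):
    """times: state -> sorted list of transition instants (minutes)."""
    pending, trace = [], []
    current = "NormalTime"
    for now in range(horizon):
        for state, ts in times.items():       # fire due transitions
            if ts and now == ts[0]:
                ts.pop(0)
                heapq.heappush(pending, (PRIORITY[state], state))
        if pending and pending[0][0] < PRIORITY[current]:
            # Suspend the current state and switch to the urgent one.
            heapq.heappush(pending, (PRIORITY[current], current))
            current = heapq.heappop(pending)[1]
            trace.append((now, current))
    return trace

print(run({"MealTime": [3], "ToiletTime": [5]}, 10))
```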
        <p>The configuration parameters for the whole simulation
involve:
• the temporal limit in every room before entering the
anomalous state (tmax);
• the probability distribution parameters according to the
kind of behaviour;
• the time slots of routine temporal behaviours, like eating
or sleeping (ini_s, end_s, s ∈ {MealTime, SleepTime}).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>IV. VALIDATION OF THE APPROACH</title>
      <p>From section II, it is clear that user modelling is a means for
testing AmI services and applications without a real
environment. This task is performed in the third step of the AVA
methodology. Regarding validation, the other important
steps within AVA are steps 5 to 8. Model validation
is needed in order to show that the models are able to describe
the users' behaviours. The rest of the section shows how
model validation has been approached. Basically, activity
data from real users is compared (statistically) with the
same type of data produced by the artificial models.</p>
      <sec id="sec-3-1">
        <title>A. Data Preprocessing</title>
        <p>Activity data from real users were obtained within a pilot
project devoted to the validation of the Necesity system.
Around 25 users, all elderly people living independently,
took part in the pilot project. In this paper, data coming from
three users, monitored during two months, were
used. The data are in the form of a Necesity log. The logs make it
possible to know where the user is in the house at any moment
(including whether he leaves the house or is seated
or sleeping). Thus, three different data sets, with log entries
corresponding to sensor events, are available. These data sets
need further processing.</p>
        <p>Validating the artificial models means ensuring that the right
probability distribution function is used to reproduce the
transitions between the different states (i.e. behaviours). Thus,
data series are obtained from the log data as if they were random
number series generated by the corresponding probability
distribution. Such series are used afterwards in a goodness-of-fit
test. For example, in the case of non monotonous behaviours
and anytime behaviours, preprocessing the log data involves
the extraction of the time series of the moments at which each
event is produced. These time series only include values within
the typical awake period of the corresponding user. Obviously,
while the user is sleeping, his daytime routines do not apply.</p>
        <p>The data preprocessing for the monotonous behaviours is
slightly different. This kind of behaviour is usually produced
in bounded time slots. For example, having dinner, having
lunch or sleeping are behaviours which occur during specific
time periods of the day. So, the preprocessing involves the
extraction of the behaviour events inside the time slots. The
time slots can be slightly different for each person, but it is
possible to define an approximation of them which is
valid for all (see the configuration parameters in section III).
Finally, in each time slot, the time intervals between
occurred events are measured. The extracted time series are
composed of these time intervals.</p>
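        <p>The slot-based preprocessing can be sketched as follows; the slot bounds and the folding of events onto minutes-of-day are illustrative simplifications, not the actual Necesity log format.</p>

```python
# Sketch of the preprocessing described above: from a log of event
# timestamps (here, minutes of the day), keep those inside a
# behaviour's time slot and take the differences between consecutive
# occurrences to form the extracted time series.
def slot_intervals(timestamps, slot=(19 * 60, 22 * 60)):
    inside = [t for t in sorted(timestamps) if slot[0] <= t <= slot[1]]
    return [b - a for a, b in zip(inside, inside[1:])]

# Dinner-related events; the 08:00 event falls outside the slot:
events = [19 * 60 + 10, 20 * 60 + 5, 21 * 60, 8 * 60]
print(slot_intervals(events))  # [55, 55]
```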
      </sec>
      <sec id="sec-3-2">
        <title>B. Model Diagnosis</title>
        <p>
          Validation is one of the most important issues in a
simulation system. Validation consists in determining that
the simulated model is an acceptable representation of the
real system, for the particular objectives of the model [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
There are many techniques for validating simulations [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ],
[
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], and especially for validating agent-based simulation
models [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>
          Models which describe social processes, like the model
proposed here, are generally hard to validate. In this approach,
the behaviour is probabilistically modelled. However, some
statistical tests should be done to establish that a probabilistic
model reasonably explains the data. This process is called
model diagnosis, and this section is devoted to making the
diagnosis of the models presented above. The most serious
problem one usually faces in this kind of validation is the
lack of real data [
        </p>
        <p>From these preprocessed data, some histograms for different
behaviours and people are shown in Fig. 5. The sample
density is shown with a black line. The dashed line shows
the probability density function of the theoretical distribution
that models that behaviour.</p>
        <p>Fig. 5: (a) minutes between 21:00 and the instant attended C goes
to bed; (b) time between dinners for attended B; (c) time between uses of
the toilet for attended A; (d) time between spare-time events for attended A.</p>
        <p>Graphs 5 (a) and (b) show two monotonous behaviours,
sleeping and having dinner (having lunch is similar). In these
kinds of behaviours, the event that raises the behaviour occurs
during a time interval. The interval is usually the same for
each attended person, i.e., a person usually goes to sleep or
has dinner at the same hours. Because of this, the curve
of the behaviour can be fitted with a gamma distribution. The
gamma distribution is usually employed as a probability model
for waiting times; in this case, the waiting time until the next
event of the behaviour. So, a gamma distribution is suitable
for modelling monotonous behaviours.</p>
        <p>A non monotonous behaviour, going to the toilet,
is shown in Fig. 5 (c). In this case, the behaviour is fitted with
an exponential distribution. Non monotonous behaviours are
also waiting-time models, but they are defined over wake periods.
This is a special case of the gamma distribution, and can be
modelled with an exponential distribution.</p>
        <p>Finally, anytime behaviours are behaviours
which are often interrupted by the other behaviours.
Because of this, the intervals of an anytime behaviour
are small due to the interruptions. This causes the characteristic
heavy-tailed distribution; specifically, the Pareto II
distribution, also known as the Lomax distribution.</p>
        <p>
          Gamma, exponential and Lomax distributions are members
of the exponential family of probability distributions. Actually,
they are special cases of the beta prime distribution (also
known as beta distribution of the second kind, Beta II). The
Beta II distribution nests many important distributions, such as the
gamma, the exponential and the Lomax distributions. The gamma
and the Lomax distributions are special cases of Beta II. In turn, the
exponential distribution is a special case of the
gamma distribution Γ(α, β) when α = 1, and a special case
of the Lomax distribution under some restrictions [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
So, the Beta II distribution can be used here as a generalized
distribution for all group of behaviours: monotonous, non
monotonous and anytime behaviours. Then, each behaviour
can be specified according to its features in order to obtain a
better fitting.
        </p>
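<p>The nesting of the Lomax distribution inside Beta II can also be checked numerically: with its first shape parameter equal to 1, the beta prime density coincides with the Lomax density. A small sketch assuming SciPy (the value of c is arbitrary):</p>

```python
from scipy import stats

# Beta prime (Beta II) with first shape a = 1 reduces to the Lomax law:
# betaprime(1, c) and lomax(c) have identical densities.
c = 2.5
for x in [0.2, 1.0, 4.0]:
    bp = stats.betaprime.pdf(x, a=1.0, b=c)
    lo = stats.lomax.pdf(x, c=c)
    assert abs(bp - lo) < 1e-12
print("Beta II(1, c) equals Lomax(c)")
```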
        <p>In the rest of this section, empirical evidence
supporting the use of gamma, exponential and Lomax distributions for
monotonous, non-monotonous and anytime behaviours, respectively, is
given.</p>
        <p>
          Notice that it is possible to estimate the distance between
time series generated, as explained above, from real log
data and time series generated by simulation of theoretical
distributions. Such an estimation is done by using a statistical
test such as the Kolmogorov-Smirnov (K-S) test [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. The K-S
test is a nonparametric, distribution-free goodness-of-fit
test: it relies neither on parameter estimation
nor on precise distributional assumptions [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Since the model proposed
in this work assumes no concrete distribution and
requires no parameter estimation, these
properties make the K-S test suitable for the hypothesis test.
        </p>
        <p>
          The K-S test and the chi-square test are the most commonly
used, and for large sample sizes both tests have the same power.
However, the chi-square test requires a sufficiently large sample
in order to obtain a valid chi-square approximation [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>The K-S test is a goodness-of-fit test that indicates whether it is
reasonable to assume that a random sample comes from
a specific distribution. It is a form of hypothesis testing in which
the null hypothesis states that the sample data follow the stated
distribution. The hypothesis regarding the distributional form
is rejected if the test statistic, Dn, is greater than the critical
value obtained from a table or, equivalently, if the
p-value is lower than the significance level. The significance
level is fixed in this work at 0.05, the value usually
adopted in the statistical literature.</p>
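<p>The decision rule just described can be sketched with SciPy's one-sample K-S test; the sample below is synthetic and the gamma parameters are hypothetical, not those fitted in this work:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical waiting-time sample, assumed to come from a gamma law.
sample = rng.gamma(shape=2.0, scale=3.0, size=100)

# One-sample K-S test; H0: "the sample follows Gamma(a=2, loc=0, scale=3)".
stat, p_value = stats.kstest(sample, "gamma", args=(2.0, 0, 3.0))
print(f"D_n = {stat:.3f}, p-value = {p_value:.3f}")

# Reject H0 only if the p-value falls below the 0.05 significance level.
rejected = p_value < 0.05
```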
        <p>Table I shows the p-values obtained from the K-S test for
each validated behaviour with the adequate distribution. The
null hypothesis is that the behaviour sample data come from
the stated distribution, and it is rejected if the p-value is lower than
the significance level.</p>
        <p>[Table I: K-S p-values for attended persons A, B and C]</p>
        <p>From these results, none of the stated null hypotheses can
be rejected. Therefore, the behaviour of the attended people can
be fitted by the specified distributions, as described above.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>V. RELATED WORKS</title>
      <p>
        Various approaches have been proposed to create
autonomous characters. For example, in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] every character is
provided with a small KBS (Knowledge-Based System). Such
a method is very flexible, but defining the knowledge base is a
complex and time-consuming task. A reasoning system is also
used in [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <p>
        The approach to behavioural autonomy presented in section
III is based on [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]; in that approach, the idea is to develop
agents that act and choose the way actual humans do. The
agents are represented using parametrized decision algorithms,
which are chosen and calibrated so that the agents’
behaviour matches real human behaviour observed in the same
decision context. For this purpose, a parametrized
learning automaton is used, with an associated vector of actions that can
be weighted to choose actions over time the way humans
would.
      </p>
      <p>
        The decision to represent the automaton’s transitions
with probabilities instead of a vector of strengths is
based on [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], where the behaviour sequences are modelled
through probabilistic automata (Probabilistic Finite-State
Machine, PFSMs). Probabilistic personality influence implies that
one cannot fully predict how a character will react to a
stimulus.
      </p>
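<p>A minimal sketch of such a probabilistic finite-state machine (the states and transition probabilities below are hypothetical, not those of the actual model): each state maps to a probability distribution over successor states, so identical stimuli can yield different reactions.</p>

```python
import random

# Hypothetical behaviours and transition probabilities for illustration.
transitions = {
    "sleeping":   {"toilet": 0.2, "sleeping": 0.8},
    "toilet":     {"sleeping": 0.7, "spare_time": 0.3},
    "spare_time": {"spare_time": 0.6, "sleeping": 0.4},
}

def step(state, rng):
    """Sample the next behaviour according to the state's distribution."""
    successors = list(transitions[state])
    weights = [transitions[state][s] for s in successors]
    return rng.choices(successors, weights=weights, k=1)[0]

rng = random.Random(42)
state = "sleeping"
trace = [state]
for _ in range(5):
    state = step(state, rng)
    trace.append(state)
print(trace)
```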
      <p>
        In [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] and [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], the behavioural models described use
a hierarchical structure of finite state automata similar to
the model described in section III. Each behaviour of a behaviour
sequence is called a behaviour cell. At the top of the structure
there is a behaviour entity with a finite state automaton
composed of at least one behaviour cell. An elementary behaviour
is situated at the bottom of the hierarchical decomposition and
encapsulates a specialized behaviour which directly controls
one or more actions. The list of prioritized events is based on
[
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], where human agents have a pending task list. Priorities
give more realism to human behaviours.
      </p>
    </sec>
    <sec id="sec-5">
      <title>VI. CONCLUSION</title>
      <p>In this work, a general behaviour model for different people
is proposed. Adjusting the model to specific persons would
imply using the suitable configuration parameters for the
corresponding probability distributions governing transitions
between states of the probabilistic automata. This process is
part of a more general task which is testing AmI services
and applications. For such purpose, the AVA methodology
was presented. It has been applied for the validation of an
AAL system called Necesity. More specifically, this paper
focuses on producing models of humans (i.e., step 3 of AVA)
and on validating those models (steps 5 to 8).</p>
      <p>Such an approach requires, as a means to validate the
artificial behaviours, a method to check, for all the artificial models,
whether the gamma, exponential and Lomax distributions are well
suited. Using the K-S test, it is possible to quantify the distance
between the empirical distribution functions of two samples.
The K-S test is therefore used to validate the behaviours of the
artificial models against real data from real elders. A
p-value higher than the significance level means that the null
hypothesis that the behaviours are drawn from the same distribution
is not rejected. Notice that, in most cases, the
obtained p-values are higher than the significance level and
the null hypothesis is not rejected. It is also necessary to
remark that a good fit of the configuration parameters will
always be required, due to the inevitable heterogeneity
in the behaviour of people (including elders). In conclusion, it
can be said that the proposed model is suitable for probabilistically
modelling the behaviour of simulated people, as the
work states.</p>
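<p>The two-sample comparison described above can be sketched with SciPy's two-sample K-S test; both samples below are synthetic stand-ins for the real-log and simulated time series:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical stand-ins: waiting times from real logs and from simulation,
# drawn here from the same gamma law purely for illustration.
real = rng.gamma(shape=3.0, scale=2.0, size=150)
simulated = rng.gamma(shape=3.0, scale=2.0, size=150)

# Two-sample K-S statistic: the maximum distance between the two
# empirical distribution functions.
stat, p_value = stats.ks_2samp(real, simulated)
print(f"D = {stat:.3f}, p-value = {p_value:.3f}")
```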
      <p>Future work includes a deeper study of the source data in order
to generate a taxonomy of elders in terms of their behaviour
within their houses, depending on mobility and habits. Such a
taxonomy would be useful for automatic parameter tuning
of the elder models. The user of the simulator, instead of
configuring parameters by hand, would simply choose from
a catalogue of behaviour patterns of elderly people.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Gilbert</surname>
          </string-name>
          and
          <string-name>
            <given-names>K. G.</given-names>
            <surname>Troitzsch</surname>
          </string-name>
          ,
          <article-title>Simulation for the Social Scientist</article-title>
          . Open University Press,
          <year>February 2005</year>
          . [Online]. Available: http://www.amazon.com/exec/obidos/redirect?tag=citeulike07-20&amp;path=ASIN/0335216013
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Aarts</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Encarnação</surname>
          </string-name>
          ,
          <article-title>“True visions: Tales on the realization of ambient intelligence,” in Into Ambient Intelligence, Chapter 1</article-title>
          . Springer Verlag, Berlin, Heidelberg, New York,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Myers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sandler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Badgett</surname>
          </string-name>
          , and T. M. Thomas,
          <source>The Art of Software Testing, Second Edition</source>
          . Wiley,
          <year>June 2004</year>
          . [Online]. Available: http://www.amazon.ca/exec/obidos/redirect?tag=citeulike09-20&amp;path=ASIN/0471469122
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Botía</surname>
          </string-name>
          , A. Villa,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Palma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pérez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Iborra</surname>
          </string-name>
          , “
          <article-title>Detecting domestic problems of elderly people: simple and unobstrusive sensors to generate the context of the attended</article-title>
          .” in First International Workshop on Ambient Assisted Living, IWAAL
          , Salamanca, Spain,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] “Flowcharting techniques,
          <source>” IBM GC20-8152-1 edition</source>
          ,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Drogoul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vanbergue</surname>
          </string-name>
          , and T. Meurisse,
          <article-title>“Multi-agent based simulation: Where are the agents?</article-title>
          ”
          <source>in Proceedings of the Third International Workshop on Multi-Agent-Based Simulation MABS</source>
          <year>2002</year>
          , Bologna, Italy, ser. LNAI 2581,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Sichman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bousquet</surname>
          </string-name>
          , and P. Davidsson, Eds. Berlin Heidelberg: Springer Verlag,
          <year>July 2002</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>Edmonds</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Moss</surname>
          </string-name>
          , “
          <article-title>From kiss to kids - an 'anti-simplistic' modelling approach</article-title>
          ,” in MABS,
          <year>2004</year>
          , pp.
          <fpage>130</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rabin</surname>
          </string-name>
          , “
          <article-title>Probabilistic automata,” Information and Control</article-title>
          , vol.
          <volume>6</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>230</fpage>
          -
          <lpage>245</lpage>
          ,
          <year>1963</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Darmois</surname>
          </string-name>
          , “
          <article-title>Sur les lois de probabilité à estimation exhaustive,”</article-title>
          <source>CR Acad. Sci. Paris</source>
          , vol.
          <volume>260</volume>
          , pp.
          <fpage>1265</fpage>
          -
          <lpage>1266</lpage>
          ,
          <year>1935</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>Koopman</surname>
          </string-name>
          , “
          <article-title>On distributions admitting a sufficient statistic,”</article-title>
          <source>Transactions of the American Mathematical Society</source>
          , pp.
          <fpage>399</fpage>
          -
          <lpage>409</lpage>
          ,
          <year>1936</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Law</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kelton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Kelton</surname>
          </string-name>
          ,
          <article-title>Simulation modeling and analysis</article-title>
          .
          <source>McGraw-Hill New York</source>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Troitzsch</surname>
          </string-name>
          , “Validating simulation models,
          <source>” Networked Simulations and Simulated Networks</source>
          , pp.
          <fpage>265</fpage>
          -
          <lpage>270</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Richiardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leombruni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Saam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sonnessa</surname>
          </string-name>
          , “
          <article-title>A common protocol for agent-based social simulation</article-title>
          ,
          <source>” Journal of Artificial Societies and Social Simulation</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>15</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Klügl</surname>
          </string-name>
          , “
          <article-title>A validation methodology for agent-based simulations</article-title>
          ,”
          <source>in Proceedings of the 2008 ACM symposium on Applied computing. ACM</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>43</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. B.</given-names>
            <surname>McDonald</surname>
          </string-name>
          , “
          <article-title>Some generalized functions for the size distribution of income,” Econometrica</article-title>
          , vol.
          <volume>52</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>647</fpage>
          -
          <lpage>663</lpage>
          ,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>McDonald</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          , “
          <article-title>A generalization of the beta distribution with applications</article-title>
          ,
          <source>” Journal of Econometrics</source>
          , vol.
          <volume>66</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>152</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>H.</given-names>
            <surname>Neave</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Worthington</surname>
          </string-name>
          ,
          <article-title>Distribution-free tests</article-title>
          .
          <source>Routledge London</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sheskin</surname>
          </string-name>
          ,
          <article-title>Handbook of parametric and nonparametric statistical procedures</article-title>
          .
          <source>CRC Pr I Llc</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Massey</surname>
          </string-name>
          Jr, “
          <article-title>The Kolmogorov-Smirnov test for goodness of fit</article-title>
          ,
          <source>” Journal of the American Statistical Association</source>
          , vol.
          <volume>46</volume>
          , no.
          <issue>253</issue>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>78</lpage>
          ,
          <year>1951</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F.</given-names>
            <surname>David</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , “
          <article-title>The probability integral transformation when parameters are estimated from the sample</article-title>
          ,
          <source>” Biometrika</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>1-2</issue>
          , p.
          <fpage>182</fpage>
          ,
          <year>1948</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>G.</given-names>
            <surname>Anastassakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Panayiotopoulos</surname>
          </string-name>
          , and T. Ritchings, “
          <article-title>Virtual agent societies with the mVITAL intelligent agent system,” in Intelligent Virtual Agents</article-title>
          . Springer,
          <year>2001</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>125</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Noser</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Thalmann</surname>
          </string-name>
          , “
          <article-title>Towards autonomous synthetic actors,” Synthetic Worlds. TL Kunii and A</article-title>
          .
          <string-name>
            <surname>Luciani</surname>
          </string-name>
          , John Wiley and Sons, Ltd,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>W.</given-names>
            <surname>Arthur</surname>
          </string-name>
          , “
          <article-title>On designing economic agents that behave like human agents</article-title>
          ,
          <source>” Journal of Evolutionary Economics</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chittaro</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Serra</surname>
          </string-name>
          , “
          <article-title>Behavioral programming of autonomous characters based on probabilistic automata and personality</article-title>
          ,”
          <source>Computer Animation and Virtual Worlds</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>34</issue>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>326</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>D.</given-names>
            <surname>Thalmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Musse</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kallmann</surname>
          </string-name>
          , “Virtual Humans' Behaviour: Individuals, Groups, and Crowds,
          <source>” Proceedings of Digital Media Futures</source>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bécheiraz</surname>
          </string-name>
          and D. Thalmann, “
          <article-title>A behavioral animation system for autonomous actors personified by emotions</article-title>
          ,”
          <source>in Proceedings of the 1998 Workshop on Embodied Conversational Characters. Citeseer</source>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>L.</given-names>
            <surname>Temime</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pannet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kardas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Opatowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Guillemot</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Boëlle</surname>
          </string-name>
          , “
          <article-title>NOSOSIM: an agent-based model of pathogen circulation in a hospital ward,” in Proceedings of the 2009 Spring Simulation Multiconference</article-title>
          . Society for Computer Simulation International,
          <year>2009</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>