Human Behaviour Simulation in Ubiquitous Computing Environments

Teresa Garcia-Valverde, Francisco Campuzano, Emilio Serrano and Juan A. Botia
University of Murcia, Spain. E-mails: {mtgarcia,fjcampuzano,emilioserra,juanbot}@um.es

Abstract—The main goal of Ambient Assisted Living (AAL) systems is to improve the quality of life of elderly people by means of ICT-based systems. In this paper, we are concerned with the artificial reproduction of a physical environment (i.e. a house) and an elder (i.e. the attended) living in that environment. An agent-based social simulation system is used for this purpose. The simulator allows the integration of ubiquitous computing appliances, services and applications in the environment. A realistic reproduction of human behaviour in the simulator helps, in this context, in the validation of silent monitoring, diagnosis and action-based applications. Evidence is given in the paper which demonstrates the level of realism reached, by comparing artificial behaviours with real ones.

Index Terms—Ubiquitous computing, Ambient Assisted Living, behaviour simulation, user modelling.

I. INTRODUCTION

The main thesis presented in this paper is the following: agent-based social simulation (ABSS) [1] may help in the engineering of Ambient Intelligence (AmI) systems. ABSS is a simulation paradigm in which the focus is put on the definition of the separate components of the simulation in an isolated manner. In such simulation runs, the emergence of behaviours is the main subject under study, and the metaphor of the agent is used for the specification of single components and of the interactions among them. An ambient intelligence [2] system is a set of appliances, services and applications which silently surround and interact with the user in an intelligent manner. In this kind of system, the user is the central entity of the model: starting from the user, services and applications are built.

A main difficulty one may find in the development process of an AmI system is that of testing and validation. Testing is the process of executing a program with the intent of finding errors in the code [3]. Such errors must be debugged [3]. Some of the errors may be found by applying a unit testing approach to the system under test (SUT). Common errors found in this stage are related to typical programming mistakes (e.g. values of variables out of range, shoddy checking of return values from methods, and so on). Thus, robustness of the program is a must here. A more elaborate test set may also be defined in order to assess the functionality of the system (i.e. that it behaves as expected). But if the main issue in AmI systems is a smooth interaction with the user, a unit-based approach is no longer valid. It is clear that the user, or at least a model of the user, should be incorporated into the development process in order to measure to what extent the SUT behaves as expected when interacting with him. In this paper, an approach to test and validate AmI systems in a stage prior to deployment is presented. The main idea behind it is that the user is modelled with a computational model and integrated into an ABSS model which incorporates, as simulated artifacts, the environment and the hardware (i.e. mainly the sensors and the interfaces with the user), and which integrates the real software (i.e. services and applications). The real software is precisely the SUT here.

The proposal is articulated by means of a methodological work. Such a methodology is a set of procedures which guides the developer in the definition, creation, testing and validation of the AmI system. It is based on a methodology previously described by Gilbert et al. [1]. It comprises the creation of the necessary ABSS models and describes how they should be employed to find errors in AmI services. The application of the methodology is exemplified in a real domain: AAL (Ambient Assisted Living). An AAL system is an ICT-based solution devoted to improving the quality of life of elderly people. In this case, the interest is focused on a system called Necesity [4]. It is based on a sensor network deployed through the house and a central processing unit. Sensors include presence, pressure and open-door sensors. The system is designed to work in single-person environments (e.g. an elderly person who lives alone and independently in his house). It is in charge of monitoring the activity regime of the elder 24x7, in such a way that when his activity pattern is anomalous, an emergency response is started. The rest of the paper will demonstrate that an artificial reproduction of a house, sensors and the attended, integrated with the real software, helps in the fine tuning of the activity pattern management software.

The rest of the paper is structured as follows. Section II introduces the methodology used for the engineering of AmI services. Section III introduces the computational models employed to artificially reproduce the behaviour of the attended. Such behaviour is based on a probabilistic and hierarchical automaton which governs the activity and location of the modelled elder at each instant of time. In section IV, the validation approach is presented. It is based on the statistical contrast of artificial data traces, obtained by simulating the automaton just mentioned, with similar traces coming from real users in a similar context.

II. AVA, AN AGENT BASED METHODOLOGY FOR THE VALIDATION OF AMI SYSTEMS

This section explains an agent-based methodology for the validation of AmI systems called AVA. This methodology proposes the development of ABSS in order to validate AmI applications. AVA is an extension of the methodology promulgated by Gilbert et al. [1] for the development and use of general ABSS. The two main innovations with regard to the classical methodology by Gilbert et al. are: (1) the existence of a step to generate simpler simulations, and (2) the consideration of including real elements in the simulation to get more realistic results. This section will show the advantages of these innovations for the specific purpose of validating AmI applications. This paper discusses the performance of these steps for an AAL system for elderly people. The AVA methodology is expressed as a flowchart in figure 1¹.

Gilbert et al. [1] define the target of a social simulation as some "real world" phenomenon in which the researcher is interested. The AVA methodology is proposed to validate an AmI application, including its interactions with the environment and the users. Thus, AVA starts by considering an AmI target (step 1) which includes an environment, users and an AmI application, the latter either finished or at an advanced stage of development. Typically, the use of ABSS to generate knowledge involves a necessary familiarization with the domain in the first step. This familiarization is required to generate models of the target in the following steps of the methodology [1]. The main elements to be studied in an AmI system are: the environment (step 2), the users (step 3) and the AmI application (step 4). Note that while the environment and the users are inputs, the application is a process. In principle, the environment and the users are external and do not support changes. On the other hand, the AmI application can be modified. This application will be refined along the iterations of AVA to get a realistic validation.

The development of the AmI system model is performed in step 5. The model design associates the real system with a representation of it (the model) [6]. Here, not only the AmI application must be modelled, but also the users and the environment. These models are necessary to validate the AmI application because they interact with it. Moreover, a realistic validation of the application needs realistic models of the users and the environment. Therefore, models must describe reality before being as simple as possible [7]. Section III explains the construction of a user model for a specific AmI application.

Step 6 deals with the implementation of the AmI system model in a simulation language. The implementation from the concepts of the model is not a trivial task [6]. A general programming language, or one of the available frameworks for the development of ABSS, can be used for the construction of the simulation. The second option is much more convenient, because several of the typical tasks in the construction of ABSS have been included in this kind of software package [1]. Examples of these tasks are scheduling agents' actions or building basic environments. The web site of the Open Agent Based Modeling Consortium² nowadays lists 22 of these frameworks. Section III shows the use of a specific software package for the implementation of a realistic environment model: SweetHome 3D.

Fig. 1. AVA, an agent-based methodology for the validation of AmI systems, expressed as a flowchart.

After building the simulation, it must be executed (step 7). Quick, cheap and numerous experiments can be performed thanks to the ABSS. These executions produce large amounts of data regarding the behaviour of the user, environment and application models. Forensic analysis, step 8, is an offline analysis conducted on the data stored in the previous step. The analysis should consider whether the functionality of the AmI application is correct. Furthermore, this step must validate that the behaviour of the user and environment models is consistent with the observed reality (steps 2 and 3). Without this validation, the theories generated from the simulations have no relation to reality (as they are based on non-descriptive models). Therefore, the functionality of the AmI application model is linked to the user and environment models. Section IV deals with the validation of the user model for an AAL system.

One of the innovative points of the AVA methodology is the use of simple simulations as a means to validate descriptive simulations. Step 9 checks whether any element of the simulation is too complex. In this case, complexity means that it is difficult to assess whether the behaviour of an element is the expected one. For example, some behaviours of users can be so complex that they need to be evaluated in isolation. An example of this type of behaviour would be the resolution of collisions in the motion of a large number of agents. This behaviour is a problem to be studied in itself, and its validation would be much more complicated with additional elements in the simulation (more user behaviours, a realistic environment model, a realistic model of an AmI application, etc.). In these cases, the methodology proposes to consider these complex elements as objects of study in themselves (step 10) and to repeat the AVA methodology for them (step 11). The reuse of models and code in this new iteration is direct, because a more descriptive simulation is available as a result of the previous steps. Once the complex element is validated in a simpler simulation, which is the final result of the methodology, the next step for the overall simulation (step 12) can be performed.

Step 12 checks whether the developer has found errors in functionality. If that is the case, the AmI application of step 4 must be modified in order to correct these errors, and the process repeated.

¹ The flowchart uses standard elements of classic flowcharts [5]: flow of control (represented as arrows), processes (rectangles), decisions (rhombi), input/output (parallelograms), start and end symbols (ovals) and predefined processes (rectangles with vertical lines at the sides).
² OpenABM Consortium website: http://www.openabm.org/
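Read as a whole, steps 5 to 12 form a refine-and-validate loop. Purely as an illustration — AVA is a methodology, not a code library, and every function name below is a hypothetical placeholder — the loop can be sketched as:

```python
# Illustrative sketch of the AVA iteration (steps 5 to 12). All function
# names are hypothetical placeholders, not an actual AVA API.

def ava_iteration(target, build_models, implement, run,
                  forensic_analysis, validate_in_isolation,
                  max_iterations=10):
    """Drive AVA cycles until no functionality errors remain."""
    for _ in range(max_iterations):
        models = build_models(target)        # step 5: application, users, environment
        simulation = implement(models)       # step 6: implementation in an ABSS framework
        data = run(simulation)               # step 7: execution of the simulation
        report = forensic_analysis(data)     # step 8: offline forensic analysis
        for element in report["complex_elements"]:
            validate_in_isolation(element)   # steps 9-11: simpler, isolated simulations
        if not report["errors"]:             # step 12: no functionality errors left
            return True
        target = report["refined_target"]    # correct the application and repeat
    return False
```

The loop only terminates when a forensic analysis reports no remaining functionality errors, mirroring the decision at step 12.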
Besides the primary objective (validating the AmI application), it is typical to find bugs from previous steps at this point, in the form of implementation failures or unrealistic models.

The final decision, step 13, checks whether actual elements of the AmI system can be connected to the simulator. The AVA methodology proposes to inject or connect real elements into the simulation progressively, in order to make validations more realistic³. This process is called "reality injection", and the basic idea is that real elements can coexist with simulated elements. After connecting real elements, the methodology must be repeated from step 4 to improve the application and the models. The result is an exhaustive validation which is as realistic as possible. The obvious question is why models and simulations are necessary at all if real elements (such as real users) can be injected. The answer is that a model, by definition, is somewhat easier to study than the modelled reality. The purpose of including real elements in simulations is to improve the realism of the models. Then, in subsequent iterations, the real elements need not be included, because pure simulations allow faster and cheaper tests.

Finally, if the models are descriptive enough, the bugs found in the functionality of the AmI application model will correspond to failures of functionality in the real AmI application. These failures should have been corrected in each iteration of the AVA methodology. Therefore, the result of the methodology (step 14) is that the AmI target is exhaustively validated.

III. REALISTIC BEHAVIOUR MODELLING

In this section, the particular models used in the application of the AVA methodology to the AAL domain are introduced. Section III-A presents how the physical environment (i.e. the house and furniture) and the sensors were defined. Section III-B refers to the production of realistic computational models of elders living in that environment and making the sensors react to their presence. Having such models (i.e. the house, the sensors and the persons) within a simulation, integrated with the ubiquitous computing software, that software can be tested.

A. Environment Modelling

For Multi-Agent Based Simulation (MABS), UbikSim⁴ is available, a simulator developed at the University of Murcia which works on top of MASON⁵. It integrates an environment modelling tool based on SweetHome3D⁶, an application conveniently adapted for modelling attended people and their environment. It is possible to create houses on a 2D plane, and it also offers a navigable 3D view (figure 2). This view is used for the visualization of the simulation in real time.

Fig. 2. A plane model in Ubik's editor and its 3D representation.

In the physical environment generated with the Ubik editor, a simple house (see figure 2) with a kitchen, a bathroom, a bedroom and a living room is modelled. Presence sensors are included in every room of the house, and an open-door sensor (necessary for knowing when the elder leaves the home) is included at the outer door. When a simulation is run, the person moves around the house and stimulates the sensors when detected by them. Such sensors, through the ubiquitous computing software, generate events, and these events generate log entries. The simulated log entries are used afterwards to check whether the virtual elder behaves in a realistic manner (see section IV).

Notice that the log entries (both in the simulator and in the real setting) are generated by the same monitoring service, which continuously checks whether the elder may be suffering some problem, by applying a pattern recognition approach (more details on this may be found in [4]) to the events coming from the sensors. In the first case the person is virtual; in the second, real. But the monitoring service is the same.

B. Behaviour Modelling

The target of the modelling activity is a typical aged person who lives independently and alone in his own house.
As he lives alone, the following situation may occur: he may suffer some health problem and stay immobilised on the floor for too long before anybody comes and notices that something is wrong. But it is possible to develop a ubiquitous computing system which detects this and starts an emergency response process [4]. By following the AVA methodology, we may use a simulated elder within a simulated environment to test such a system before it is deployed in a real environment for pilot testing. The simulated elder must necessarily be simulated along the 24 hours of the day, repeatedly, for a determined number of weeks. For this, it is assumed that the day is divided into time slots (i.e. morning, noon, afternoon and night), and that in each time slot the simulated person behaves specifically for that slot.

The behaviour of simulated people is modelled probabilistically. In this approach, behaviours are defined as situations the agent should play at each moment, and transitions between behaviours are probabilistic. The underlying model is a hierarchical automaton (i.e. at a higher level there is a number of complex behaviours that the agent may play and, once it is in a concrete state, within that state there is another automaton with simpler behaviours). In this way, the modelling of each behaviour is treated separately and the modeller is abstracted from unnecessary details. At the lowest level (basic actions), each state is atomic. An agent never performs two behaviours of the same level simultaneously. The behaviours used for modelling elders are of three types:

• Monotonous behaviours: the kind of behaviour the elder always manifests at approximately the same time slot, on a daily basis (e.g. sleeping, having meals, taking medication and so on).
• Non monotonous behaviours: the kind of behaviour the elder usually manifests, not bounded to a concrete time slot, and repeated with a non-constant period (e.g. going to the toilet, having a shower, cleaning the house and so on).
• Anytime behaviours: behaviours which will sometimes interrupt others the elder is already doing, and which will be generated regardless of whether they were already generated in temporal proximity (e.g. spare-time activities).

A probabilistic automaton [8] is defined as the quintuple (Q, V, P(0), F, M), where Q is a finite set of states, V is a finite set of input symbols, P(0) is an initial state vector, F is a set of final states and M is a matrix that represents the transition probabilities for every state. In this definition, the transition probabilities depend on time.

For generating transitions in real time, probability distribution functions are used according to the type of each behaviour and its features. These distributions are members of the exponential family [9], [10]. Section IV defines all the distribution functions used.

Q = {a0 = NormalTime, a1 = MedicationTime, a2 = MealTime, a3 = SleepTime, a4 = Anomalous}
(a)

Q = {a00 = SpareTime, a10 = MedicationTime, a20 = MealTime, a30 = SleepTime, ax1 = ToiletTime, a02 = ShowerTime, a03 = CleanTime}
(b)

Q = {a200 = GoingToFridge, a201 = GoingToCooker, a202 = Cooking, a203 = GoingToTable, a204 = Eating}
(c)

Fig. 3. (a) Level 0 automaton, (b) Level 1 automata, (c) Level 2 automaton for state a20.

³ Notice that this does not necessarily involve a Participatory Multiagent Simulation. The real elements do not have to be humans playing the role of simulation components. These elements can be software applications, hardware or even parts of the environment.
⁴ UbikSim: http://ubiksim.sourceforge.net, last access: 20 May 2010.
⁵ MASON Toolkit: http://cs.gmu.edu/~eclab/projects/mason/, last access: 20 May 2010.
⁶ Sweet Home 3D: http://www.sweethome3d.eu/es/index.jsp, last access: 20 May 2010.
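As an illustration of how such time-dependent transitions can be generated, the following sketch draws waiting times for the three behaviour types from the distribution families used later in section IV (gamma, exponential and Lomax). All numeric parameters are illustrative placeholders, not the calibrated values of the models described in this paper:

```python
import random

# Sketch of transition-time generation for the three behaviour types,
# using the distribution families of section IV (gamma, exponential,
# Lomax). All parameter values are illustrative, not calibrated ones.

def monotonous_time(slot_start, slot_end, shape=2.0, scale=30.0):
    """Gamma-distributed waiting time inside a bounded time slot.
    If the sampled instant falls outside the slot, the transition is
    forced at the slot's end, so the agent eats and sleeps every day."""
    t = slot_start + random.gammavariate(shape, scale)
    return min(t, slot_end)

def non_monotonous_time(now, rate=1 / 180.0):
    """Exponentially distributed waiting time (e.g. going to the toilet)."""
    return now + random.expovariate(rate)

def anytime_time(now, alpha=1.5, lam=20.0):
    """Lomax (Pareto II) waiting time, sampled by inverting its CDF."""
    u = 1.0 - random.random()                # u in (0, 1]
    return now + lam * (u ** (-1.0 / alpha) - 1.0)
```

The clamping in the monotonous case mirrors the rule, described below, that a transition is forced at the end of the slot if none was generated inside it.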
According to the different daily time slots, the agent's behaviour follows a different pattern. There are time slots for eating, sleeping and taking medication (monotonous behaviours).

Notice that, when the elder is in any state, the necessity of changing to another state may arise. But this change is not made immediately. Moreover, a number of different changes (i.e. transitions) may be pending simultaneously. Thus, a list of pending tasks (i.e. events) is maintained. Such tasks are ordered by a static priority (e.g. going to the toilet goes before cleaning).

Notice also that monotonous behaviours must be activated at specific hours or within specific time intervals. For example, in the case of MedicationTime, a new necessity of changing to that state is generated exactly at the time to take the medicines. In the case of MealTime and SleepTime, the necessity is generated within a time interval, and the distribution that models these transitions is bounded to that time slot. It must be assured that the agent eats and sleeps every day; because of that, if no transition is generated inside the time slot, a transition is generated at its end time.

To provide more realism, an automata hierarchy is introduced, representing each state by lower level automata which define more specialized behaviours.

As shown in figure 3(a), in level 0 the initial state is a0 = NormalTime. For every one of the other states {a1 = MedicationTime, a2 = MealTime, a3 = SleepTime} there exists a list of times generated by a probability function. When the time counter reaches one of these times, a transition to the state owning the list is added to the pending tasks list. When the action is finished, the automaton returns to the initial state if there is no other pending task. The final state is a4 = Anomalous; if it is reached, the execution is stopped.

In level 1 (figure 3(b)), non monotonous behaviours are represented. There is an initial state in every refinement, in which the person does the main task of the upper level (eating, sleeping or taking medication). The state ax1 = ToiletTime is considered in every refinement of level 0. However, the states a02 = ShowerTime and a03 = CleanTime may only be activated in the normal state of level 0.

At the lowest level (level 2), the new automata define specialized actions refining every state of level 1. These new states do not provide relevant information, but the agent gains more realistic behaviours. With the sequence of actions in figure 3(c), the person cooks before eating. It refines state a20 of level 1: the agent is not going to stand still in the kitchen, because it must go to different places and spends some time in every state.

In figure 4, a base implementation of the agent's behaviour is presented. First of all, all the possible automata states are iterated to initialize the lists which contain time instants (lines 13-24). These time instants are generated by a probability distribution function, and they represent when a transition to the associated state is going to be launched. After that, the automaton begins to run. At every time instant, if the actual time equals the first item of a time list, a transition is added to the pending tasks list (lines 57-60). When there is a state with a higher priority than the actual state in the pending tasks list, a transition is also generated; then, if the actual state is unfinished, it is stored as a pending task and the new state is reached (lines 46-50). When this new state is finished, the state that was left may be resumed.

 1  Let pt be a list of pending tasks ordered by priority
 2  Let states be a list with all possible automata states
 3  Let times be a list of time instants when transitions
 4  are going to be generated
 5  Distribution functions to model the occurrence
 6  of behaviours:
 7  Let mb be the function to model monotonous behaviours
 8  Let nb be the function to model non monotonous behaviours
 9  Let ab be the function to model anytime behaviours
10
11  //Instants of time initialization
12
13  for all s in states do
14    if isAnytime(s) then
15      times(s) <- ab()
16    else if isMonotonous(s) then
17      //Initial and final instants of bounded time slot
18      i <- ini(s)
19      e <- end(s)
20      times(s) <- mb(i, e)
21    else if isNonMonotonous(s) then
22      times(s) <- nb()
23    endif
24  endfor
25
26  actualTime <- 0
27  //actual state is defined by 3 numbers, one per level
28  level0 <- 0
29  level1 <- 0
30  level2 <- 0
31  actualState <- newState(level0, level1, level2)
32
33  //Once the times are initialized, the automata begin to move
34
35  //If an anomalous state is reached, the execution stops
36  while (level0(actualState) <> 4)
37    //If the actual task ends, the initial state is activated
38    if timeLeft(actualState) = 0 then
39      actualState <- newState(0, 0, 0)
40    endif
41    //If the next state has higher priority than the actual state,
42    //the actual state becomes a pending task and it is stored
43    //in pt
44    if (size(pt)) > 0
45      nextState <- first(pt)
46      if priority(nextState) > priority(actualState) then
47        add(pt, actualState)
48        actualState <- nextState
49        remove(pt, nextState)
50      endif
51    endif
52    for all s in states do
53      time <- first(times(s))
54      //If the actual time matches the first time instant in
55      //times, the task associated with this time instant
56      //is added to pending tasks
57      if actualTime = time then
58        remove(times(s), time)
59        add(pt, s, priority(s))
60      endif
61    endfor
62    actualTime <- actualTime + 1
63    //Decrement remaining time for finishing the actual task
64    time <- timeLeft(actualState) - 1
65    setTimeLeft(actualState, time)
66  endwhile

Fig. 4. Realistic behaviour implementation.

The configuration parameters for the whole simulation involve:

• The temporal limit in every room before entering the anomalous state (tmax).
• The probability distribution parameters, according to the kind of behaviour.
• The time slots of routine temporal behaviours, like eating or sleeping (inis, ends, s ∈ {MealTime, SleepTime}).

IV. VALIDATION OF THE APPROACH

From section II, it is clear that user modelling is a means for testing AmI services and applications without a real environment. This task is performed in the third step of the AVA methodology. Regarding validation, other important steps within AVA are steps 5 to 8. Model validation is needed in order to show that the models are able to describe the users' behaviours. The rest of the section shows how the model validation has been approached. Basically, activity data from real users is compared (in statistical form) with the same type of data produced by the artificial models.

A. Data Preprocessing

Activity data from real users was obtained within a pilot project devoted to the validation of the Necesity system. Around 25 users, all elderly people living independently, took part in this pilot project. In this paper, data coming from three users, monitored during two months, was used. The data is in the form of a Necesity log. The logs make it possible to represent where the user is in the house at any moment (including whether he has left the house and whether he is seated or sleeping).
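The reduction of such a log to the inter-event time series used in the rest of this section can be sketched as follows. The (timestamp, event) log format is a simplifying assumption, since the exact Necesity log format is not described at this level of detail:

```python
# Sketch of the preprocessing of section IV-A: turning a sensor log into
# the inter-event time series used for goodness-of-fit testing. The
# (timestamp, event) log format is a hypothetical simplification.

def inter_event_intervals(log, event, slot=None):
    """Return the time intervals between consecutive occurrences of
    `event`, optionally restricted to a [start, end) time slot."""
    times = [t for (t, e) in log
             if e == event and (slot is None or slot[0] <= t < slot[1])]
    return [b - a for a, b in zip(times, times[1:])]
```

For monotonous behaviours the optional slot bound mirrors the bounded time slots described below; for non monotonous and anytime behaviours it would instead be used to restrict extraction to the user's awake period.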
Thus, three different data sets, with log entries corresponding to sensor events, are available. These data sets need further processing. Validating the artificial models amounts to assuring that the right probability distribution function is used to reproduce the transitions between the different states (i.e. behaviours). Thus, data series are obtained from the log data as if they were series of random numbers generated by the corresponding probability distribution. Such series are used afterwards in a goodness-of-fit test.

For example, in the case of the non monotonous behaviours and the anytime behaviours, preprocessing the log data involves extracting the time series of the moments at which each event is produced. These time series only include values within the typical awake period of the corresponding user; obviously, while the user is sleeping, his daytime routines do not apply.

The data preprocessing for the monotonous behaviours is slightly different. This kind of behaviour is usually produced within bounded time slots. For example, having dinner, having lunch or sleeping are behaviours which occur during specific time periods of the day. So the preprocessing involves the extraction of the behaviour events inside the time slots. The time slots can be slightly different for each person, but it is possible to define an approximation of them which is valid for all (see the configuration parameters in section III). Finally, within each time slot, the time intervals between consecutive events are measured. The extracted time series are composed of these time intervals.

B. Model Diagnosis

Validation is one of the most important issues in a simulation system. Validation consists in determining that the simulated model is an acceptable representation of the real system, for the particular objectives of the model [11]. There are many techniques for validating simulations [11], [12] and, especially, for validating agents based on simulated models [13]. The models which describe social processes, such as the model proposed here, are generally hard to validate. In this approach, the behaviour is probabilistically modelled. However, some statistical tests should be performed to assume that a probabilistic model is a reasonable explanation of the data. This process is called model diagnosis, and this section is devoted to the diagnosis of the models presented above.

The most serious problem one usually faces in this kind of validation is the lack of real data [14]. However, in this work the data from the Necesity project is available and can be used. From these preprocessed data, some histograms for different behaviours and people are shown in Fig. 5. The sample density is shown with a black line. The dashed line shows the probability density function of the theoretical distribution that models that behaviour.

Fig. 5. (a) Minutes between 21:00 and the instant attended C goes to bed, (b) time between dinners for attended B, (c) time between uses of the toilet for attended A, (d) time between spare-time events for attended A.

Graphs 5(a) and 5(b) show two monotonous behaviours, sleeping and having dinner (having lunch is similar). In these kinds of behaviours, the event that raises the behaviour occurs within a time interval. The interval is usually the same for each attended person, i.e., a person usually goes to sleep or has dinner at the same hours. Because of this, the curve of the behaviour can be fitted to a gamma distribution.
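This fitting step can be illustrated with a small, self-contained sketch of the goodness-of-fit check used in this section, shown for the exponential case because its CDF has a closed form. Two simplifications apply: a statistics package would normally be used instead of hand-coded routines, and fitting the rate from the same sample strictly calls for adjusted (Lilliefors-type) critical values, whereas 1.36/sqrt(n) is the standard large-sample K-S critical value at the 0.05 level:

```python
import math
import random

# Sketch of the model diagnosis of section IV-B: a one-sample
# Kolmogorov-Smirnov check of inter-event times against an exponential
# model (the non monotonous case). Illustrative only; a statistics
# package would normally be used instead.

def ks_statistic(sample, cdf):
    """Largest gap D_n between the empirical CDF and the model CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def fits_exponential(sample, critical=1.36):
    """Accept the exponential hypothesis if D_n stays below the
    large-sample 0.05-level critical value (1.36 / sqrt(n))."""
    mean = sum(sample) / len(sample)           # MLE of the mean waiting time
    cdf = lambda x: 1.0 - math.exp(-x / mean)
    return ks_statistic(sample, cdf) < critical / math.sqrt(len(sample))
```

A sample actually drawn from an exponential distribution passes the check, while a sample from a clearly different distribution fails it, which is the shape of the diagnosis reported in Table I.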
The is rejected if the test statistic, Dn , is greater than the critical gamma distribution is usually employed as a probability model value obtained from a table, or, which is the same, if the p- for waiting times, in this case, the waiting time until the next value is lower than the significance level. The significance event of the behaviour. So, a gamma distribution is suitable level is fixed in this work at 0.05, which it is the value usually for modelling monotonous behaviours. referred in statistical literature. A kind of non monotonous behaviour, i.e. going to the toilet, Table I shows the p-values obtained from the K-S test for is shown in Fig. 5 (c). In this case, the behaviour is fitted with each validated behaviour with the adequate distribution. The an exponential distribution. Non monotonous behaviours are null hypothesis is that the behaviour sample data come from also waiting time models, but they are defined in wake periods. the stated distribution and it is rejected if p-value is lower than This is a special case of the gamma distribution which can be the significance level. modelled with an exponential distribution. Finally, the group of anytime behaviours are behaviours Behaviour Person Sleep Dinner Eat Toilet Spare time which will be often interrupted for the other behaviours. A 0.404 0.311 0.361 0.111 0.086 Because of this, size of intervals in an anytime behaviour B 0.488 0.467 0.542 0.079 0.108 are small due the interruptions. This causes the characteristic C 0.337 0.489 0.575 0.137 0.103 curved heavy-tailed distribution, specifically, the Pareto II TABLE I distribution, also known as Lomax distribution. P- VALUES Gamma, exponential and Lomax distributions are members of the exponential family of probability distributions. Actually, they are special cases of the beta prime distribution (also From these results, none of the stated null hypothesis can known as beta distribution of the second kind, Beta II). The be rejected. 
Now, in the rest of the section, empirical evidence which supports using the gamma, exponential and Lomax distributions for monotonous, non-monotonous and anytime behaviours is given.

Notice that it is possible to estimate the distance between time series generated, as explained above, from real log data and time series generated by simulation of theoretical distributions. Such an estimation is done by using a statistical test, like the Kolmogorov-Smirnov (K-S) test [17]. The K-S test is a nonparametric and distribution-free goodness-of-fit test. This means that it does not rely on parameter estimation or precise distributional assumptions [18]. The proposed model in this work does not assume any concrete distribution and does not require parameter estimation, so the K-S properties make it suitable for the hypothesis test. The K-S test and the chi-square test are the most commonly used, and for large sample sizes both tests have the same power. However, the chi-square test requires a sufficient sample size in order to obtain a valid chi-square approximation [19], [20]. The K-S test indicates whether or not it is reasonable to assume that a random sample comes from a specific distribution. It is a form of hypothesis testing where the null hypothesis says that the sample data follow the stated distribution. The hypothesis regarding the distributional form is rejected if the test statistic, Dn, is greater than the critical value obtained from a table or, which is the same, if the p-value is lower than the significance level. The significance level is fixed in this work at 0.05, which is the value usually referred to in the statistical literature.

Table I shows the p-values obtained from the K-S test for each validated behaviour with the adequate distribution. The null hypothesis is that the behaviour sample data come from the stated distribution, and it is rejected if the p-value is lower than the significance level.

TABLE I
P-VALUES

Person | Sleep | Dinner | Eat   | Toilet | Spare time
A      | 0.404 | 0.311  | 0.361 | 0.111  | 0.086
B      | 0.488 | 0.467  | 0.542 | 0.079  | 0.108
C      | 0.337 | 0.489  | 0.575 | 0.137  | 0.103

From these results, none of the stated null hypotheses can be rejected. Therefore, the behaviour of the attended people can be fitted by the specified distributions, as described above.
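This diagnosis step can be sketched with SciPy's one-sample K-S test. The sample below is invented for illustration (a simulated set of inter-event times rather than the Necesity logs), and the scale parameter is an assumption of the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical preprocessed log: hours between uses of the toilet
# during wake periods, simulated here from an exponential distribution.
gaps = rng.exponential(scale=3.0, size=150)

# One-sample K-S test; the null hypothesis is that the sample follows
# the stated distribution, Exp(scale=3). The null hypothesis is
# rejected only if the p-value falls below the 0.05 significance level.
result = stats.kstest(gaps, stats.expon(scale=3.0).cdf)
print(f"Dn = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```

In the paper's setting, the stated distribution would instead be the gamma, exponential or Lomax distribution fitted to each behaviour, and `gaps` the inter-event times extracted from the real logs.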
V. RELATED WORKS

Various approaches have been proposed to create autonomous characters. For example, in [21] every character is provided with a small KBS (Knowledge-Based System). Such a method is very flexible, but defining the knowledge base is a complex and time-consuming task. A reasoning system is also used in [22].

The approach to behavioural autonomy presented in section III is based on [23]. In that approach, the idea is to develop agents that act and choose in the way actual humans do. The agents are represented using parametrized decision algorithms, and these algorithms are chosen and calibrated so that the agents' behaviour matches real human behaviour observed in the same decision context. For this purpose, they use a parametrized learning automaton with an associated vector of actions that can be weighted to choose actions over time the way humans would.

The decision of representing the automaton's transitions with probabilities instead of using a vector of strengths is based on [24], where behaviour sequences are modelled through probabilistic automata (Probabilistic Finite-State Machines, PFSMs). Probabilistic personality influence implies that one cannot fully predict how a character will react to a stimulus.

In [25] and [26], the behavioural models described use a hierarchical structure of finite state automata similar to the model described in section III. Each behaviour of a behaviour sequence is called a behaviour cell. At the top of the structure there is a behaviour entity with a finite state automaton composed of at least one behaviour cell. An elementary behaviour is situated at the bottom of the hierarchical decomposition and encapsulates a specialized behaviour which directly controls one or more actions.
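The probabilistic-automaton idea underlying these approaches can be sketched as follows. The states, transition probabilities and sampling loop are hypothetical illustrations, not the transition tables of the paper's actual model.

```python
import random

# Illustrative PFSM for behaviour selection: each behaviour maps to a
# list of (next behaviour, probability) pairs. Values are invented.
TRANSITIONS = {
    "sleep":      [("toilet", 0.2), ("breakfast", 0.8)],
    "breakfast":  [("spare_time", 0.7), ("toilet", 0.3)],
    "toilet":     [("spare_time", 0.6), ("sleep", 0.4)],
    "spare_time": [("toilet", 0.5), ("sleep", 0.5)],
}

def next_behaviour(current: str, rng: random.Random) -> str:
    """Sample the next behaviour according to the transition probabilities."""
    states, weights = zip(*TRANSITIONS[current])
    return rng.choices(states, weights=weights, k=1)[0]

# Generate a short behaviour sequence starting from "sleep".
rng = random.Random(42)
sequence = ["sleep"]
for _ in range(5):
    sequence.append(next_behaviour(sequence[-1], rng))
print(sequence)
```

In the paper's model, the timing of each transition would additionally be drawn from the gamma, exponential or Lomax distribution fitted to the corresponding behaviour.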
The list of prioritized events is based on [27], where human agents have a pending task list. Priorities give more realism to human behaviours.

VI. CONCLUSION

In this work, a general behaviour model for different people is proposed. Adjusting the model to specific persons would imply using the suitable configuration parameters for the corresponding probability distributions governing transitions between states of the probabilistic automata. This process is part of a more general task, which is testing AmI services and applications. For such a purpose, the AVA methodology was presented. It has been applied to the validation of an AAL system called Necesity. More specifically, this paper is focused on producing models of humans (i.e. step 3 of AVA) and on validating those models (steps from 5 to 8).

Such an approach requires, as a means to validate the artificial behaviours, a method to check, for all the artificial models, whether the gamma, exponential and Lomax distributions are well suited. Using the K-S test, it is possible to quantify the distance between the empirical distribution functions of two samples. The K-S test is therefore used to validate the behaviours of the artificial models against real data from real elders. A high p-value (higher than the significance level) means that the behaviours are drawn from the same distribution (the null hypothesis is not rejected). Notice that, in most cases, the obtained p-values are higher than the significance level and the null hypothesis is not rejected. However, it is also necessary to remark that a good fitting of the configuration parameters will always be required, due to the inevitable heterogeneity in the behaviour of people (including elders). As a conclusion, it can be said that the proposed model is suitable for modelling probabilistically the behaviour of simulated people, as this work states.

Future works include a deeper study of the source data in order to generate a taxonomy of elders in terms of their behaviour within their houses, depending on mobility and habits. Such a taxonomy would be useful for automatic parameter tuning of the models of elders. The user of the simulator, instead of configuring parameters by hand, would simply choose from a catalogue of elderly people patterns of behaviour.

REFERENCES

[1] N. Gilbert and K. G. Troitzsch, Simulation for the Social Scientist. Open University Press, February 2005.
[2] E. Aarts and J. L. Encarnação, "True visions: Tales on the realization of ambient intelligence," in Into Ambient Intelligence, Chapter 1. Springer Verlag, Berlin, Heidelberg, New York, 2005.
[3] G. J. Myers, C. Sandler, T. Badgett, and T. M. Thomas, The Art of Software Testing, Second Edition. Wiley, June 2004.
[4] J. A. Botía, A. Villa, J. T. Palma, D. Pérez, and E. Iborra, "Detecting domestic problems of elderly people: simple and unobtrusive sensors to generate the context of the attended," in First International Workshop on Ambient Assisted Living, IWAAL, Salamanca, Spain, 2009.
[5] "Flowcharting techniques," IBM GC20-8152-1 edition, 1969.
[6] A. Drogoul, D. Vanbergue, and T. Meurisse, "Multi-agent based simulation: Where are the agents?" in Proceedings of the Third International Workshop on Multi-Agent-Based Simulation MABS 2002, Bologna, Italy, ser. LNAI 2581, J. S. Sichman, F. Bousquet, and P. Davidsson, Eds. Berlin Heidelberg: Springer Verlag, July 2002, pp. 1-15.
[7] B. Edmonds and S. Moss, "From KISS to KIDS - an 'anti-simplistic' modelling approach," in MABS, 2004, pp. 130-144.
[8] M. Rabin, "Probabilistic automata," Information and Control, vol. 6, no. 3, pp. 230-245, 1963.
[9] G. Darmois, "Sur les lois de probabilité à estimation exhaustive," CR Acad. Sci. Paris, vol. 260, pp. 1265-1266, 1935.
[10] B. Koopman, "On distributions admitting a sufficient statistic," Transactions of the American Mathematical Society, pp. 399-409, 1936.
[11] A. Law, W. Kelton, and W. Kelton, Simulation Modeling and Analysis. McGraw-Hill, New York, 1991.
[12] K. Troitzsch, "Validating simulation models," Networked Simulations and Simulated Networks, pp. 265-270, 2004.
[13] M. Richiardi, R. Leombruni, N. Saam, and M. Sonnessa, "A common protocol for agent-based social simulation," Journal of Artificial Societies and Social Simulation, vol. 9, no. 1, p. 15, 2006.
[14] F. Klügl, "A validation methodology for agent-based simulations," in Proceedings of the 2008 ACM Symposium on Applied Computing. ACM, 2008, pp. 39-43.
[15] J. B. McDonald, "Some generalized functions for the size distribution of income," Econometrica, vol. 52, no. 3, pp. 647-663, 1984.
[16] J. McDonald and Y. Xu, "A generalization of the beta distribution with applications," Journal of Econometrics, vol. 66, no. 1, pp. 133-152, 1995.
[17] H. Neave and P. Worthington, Distribution-Free Tests. Routledge, London, 1989.
[18] D. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press, 2004.
[19] F. Massey Jr., "The Kolmogorov-Smirnov test for goodness of fit," Journal of the American Statistical Association, vol. 46, no. 253, pp. 68-78, 1951.
[20] F. David and N. Johnson, "The probability integral transformation when parameters are estimated from the sample," Biometrika, vol. 35, no. 1-2, p. 182, 1948.
[21] G. Anastassakis, T. Panayiotopoulos, and T. Ritchings, "Virtual agent societies with the mVITAL intelligent agent system," in Intelligent Virtual Agents. Springer, 2001, pp. 112-125.
[22] H. Noser and D. Thalmann, "Towards autonomous synthetic actors," in Synthetic Worlds, T. L. Kunii and A. Luciani, Eds. John Wiley and Sons, Ltd, 1995.
[23] W. Arthur, "On designing economic agents that behave like human agents," Journal of Evolutionary Economics, vol. 3, no. 1, pp. 1-22, 1993.
[24] L. Chittaro and M. Serra, "Behavioral programming of autonomous characters based on probabilistic automata and personality," Computer Animation and Virtual Worlds, vol. 15, no. 3-4, pp. 319-326, 2004.
[25] D. Thalmann, S. Musse, and M. Kallmann, "Virtual humans' behaviour: Individuals, groups, and crowds," in Proceedings of Digital Media Futures, 1999, pp. 13-15.
[26] P. Bécheiraz and D. Thalmann, "A behavioral animation system for autonomous actors personified by emotions," in Proceedings of the 1998 Workshop on Embodied Conversational Characters, 1998.
[27] L. Temime, Y. Pannet, L. Kardas, L. Opatowski, D. Guillemot, and P. Boëlle, "NOSOSIM: an agent-based model of pathogen circulation in a hospital ward," in Proceedings of the 2009 Spring Simulation Multiconference. Society for Computer Simulation International, 2009, pp. 1-8.