    Runtime Models Based on Dynamic Decision
   Networks: Enhancing the Decision-making in the
   Domain of Ambient Assisted Living Applications
              Luis H. Garcia Paucar, Nelly Bencomo                                    Kevin Kam Fung Yuen
                    ALICE, Aston University, UK                             Xi’an Jiaotong-Liverpool University, China
              Email: garciapl@aston.ac.uk,nelly@acm.org                          Email: kevin.yuen@xjtlu.edu.cn



   Abstract—Dynamic decision-making for self-adaptive systems (SAS) requires the runtime trade-off of multiple non-functional requirements (NFRs) -aka quality properties- and the cost-benefit analysis of the alternative solutions. Usually, it requires the specification of utility preferences for NFRs and decision-making strategies. Traditionally, these preferences have been defined at design time. In this paper we develop further our ideas on the re-assessment of NFR preferences given new evidence found at runtime, using dynamic decision networks (DDNs) as the runtime abstractions. Our approach uses the conditional probabilities provided by DDNs, the concept of Bayesian surprise and the Primitive Cognitive Network Process (P-CNP) for the determination of the initial preferences. Specifically, we present a case study in the domain of ambient assisted living (AAL). Based on the collection of runtime evidence, our approach allows the identification of situations that were unknown at the design stage.
   Index Terms—Self-adaptation; decision making; AHP; P-CNP; non-functional requirements trade-off; uncertainty

                       I. INTRODUCTION

   Dynamic decision-making is the core function of self-adaptation. It requires the runtime quantification and trade-off of multiple non-functional requirements (NFRs) and the cost-benefit analysis of alternative solution strategies. An important research issue has been the specification of the utility function to be used in the decision-making process. This utility function includes the utility preferences (aka weights) associated with the NFRs and solution strategies. These preferences may vary from stakeholder to stakeholder and from one envisaged situation to another. Furthermore, different priorities may imply different decisions to be performed by the system. Additionally, in self-adaptive systems (SAS), the assumptions made at design time are likely to change at runtime, causing changes to the defined priorities and therefore to the values of the utility preferences. We argue that modelling and reasoning with prioritization and preferences are research areas that require further efforts [1]. Different authors have approached these issues [2], [3], [4], [5], [6]. However, critical challenges still need to be explored. One of the issues is that current approaches focus on design-time activities and, even if effective, they are unlikely to be generalizable [7], [8], [1], [9]. Further, the need for uncovering relationships between NFRs and for updating utility preferences during runtime has been neglected [10], [6]. The steps of monitoring the environment, detecting the need for (self-)adaptation and deciding how to react are the challenges identified in the research area of SAS [2]. We argue that these challenges should involve the role of preferences and the re-prioritization of NFRs due to new evidence found at runtime. We believe the role of runtime models in meeting these challenges is crucial [11].
   The main contribution of this paper is the combination of conditional probabilities (using Bayesian inference) based on DDN models with Bayesian surprises and the Primitive Cognitive Network Process (P-CNP), an improved version of the Analytic Hierarchy Process (AHP) [12], for the determination of the initial preferences, to therefore allow the reassessment of NFR preferences during runtime. The paper is organized as follows: Section II presents the background on P-CNP, DDNs and Bayesian surprise, where a brief review of related work is provided and the research gap is identified. In Section III, preliminary results that fill the identified research gap are shown and discussed. In Section IV we explain the background of the domain problem and the case study. In Section V we show and explain the experiments performed. Finally, in Section VI, we conclude with respect to our findings, and identify and discuss future research work.

                        II. BACKGROUND

   This section briefly overviews Multi-Criteria Decision Analysis (MCDA) methods, DDN models and Bayesian surprises. We briefly explain how they are relevant to runtime decision-making in SAS.

A. MCDA in SAS

   When we make decisions, a natural approach is to evaluate our different alternatives and choose the best one(s) with respect to some given criteria. In SAS we must build intelligent systems that are able to apply this way of reasoning to deal with uncertain environmental conditions. How to ensure a reliable decision that trades off multiple factors constantly affected by changing external conditions is the field of action of a well-known set of methods: Multi-Criteria Decision Analysis (MCDA) [14]. MCDA methods are currently applied in different fields and especially in self-adaptation. Different MCDA techniques are used for both decision-making and preference specification in SAS.
Some MCDA approaches, such as the Primitive Cognitive Network Process (P-CNP) [15], [16], are used for the specification of quality attribute preferences (i.e. NFRs), and others, such as the Analytic Hierarchy Process (AHP) [12], are used for specifying quality attribute preferences and reasoning at runtime based on the prioritization of a set of alternative decisions. For example, in [17] Pimentel et al. have implemented a routing protocol by using AHP at runtime for video dissemination over a Flying Ad-Hoc Network. The approach takes into account multiple types of NFRs such as link quality, residual energy and buffer state, as well as geographic information and node mobility in a 3D space. It uses Bayesian networks and AHP to adjust the NFR priorities based on instantaneous values obtained during system operation.
   As an ideal alternative to AHP, P-CNP replaces the AHP paired ratio scale and performs paired comparisons by using a paired differential scale [18]: bij = vi − vj, where bij represents the result of the paired differential comparison between the alternative values vi and vj. For example, in Table I, row 1, the comparison between the alternative values v1 (i.e., vi) and v2 (i.e., vj) is represented as 3 = v1 − v2.
   Paired differential scales and the use of pairwise opposite matrices (POM) [15], [19], [16], [20] are the foundations of P-CNP, allowing a more precise and natural representation of stakeholders' perception of paired comparisons [20]. P-CNP is our selected approach for the determination of the initial preferences of the case study, and it involves the following steps:
   • Problem cognition process: the idea is to formulate a decision problem as a measurable Structural Assessment Network (SAN) model. Fig. 1 shows a SAN with its main elements: the goal (aka functional requirement), a criteria structure (i.e., NFRs) and a set of alternatives An.
   • Weight assessment and quality assessment with respect to criteria: the weight assessment is performed by using differential pairwise comparisons for the criteria Minimize Energy Cost (MEC) and Maximize Reliability (MR) (see Fig. 1). The quality assessment is performed by using differential pairwise comparisons between the alternatives An and each criterion. Table I shows an assessment form for the comparison between the MEC criterion and the alternatives A1...A8.
   • Cognitive prioritization process: the idea is to compute the priority, vi, of each alternative Ai. The Row Average plus the normal Utility (RAU) prioritization method is used to derive the priority values from the POM [15]. As a common practice, the values are rescaled to [0,1]. Table II shows the vector of normalized values: 0.1633, 0.1394, 0.1051, 0.0919, 0.1622, 0.1304, 0.1215, 0.0662. These values will be used as an input for the utility node U of a runtime model based on DDNs, explained in Section II-B (see Fig. 2). A small sketch of this prioritization step is shown below.
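   To illustrate the prioritization step, the following Python sketch derives priority values from a small pairwise opposite matrix. Only the row-average part follows RAU as described above; the shift constant kappa and the final normalization are simplifying assumptions that stand in for the exact normal-utility operator of [15].

    import numpy as np

    def rau_priorities(B, kappa=None):
        # B[i][j] ~ vi - vj is the paired differential comparison (a POM).
        B = np.asarray(B, dtype=float)
        row_avg = B.mean(axis=1)              # estimates each vi up to a constant
        if kappa is None:                     # assumed shift so all utilities are
            kappa = 1.0 - row_avg.min()       # positive (placeholder for the
        utilities = row_avg + kappa           # normal-utility constant of P-CNP)
        return utilities / utilities.sum()    # rescale so the priorities sum to 1

    # Illustrative 3x3 POM: A1 is preferred to A2 by 3 units and to A3 by 5 units.
    B = [[0, 3, 5],
         [-3, 0, 2],
         [-5, -2, 0]]
    print(rau_priorities(B))                  # -> [0.6 0.3 0.1]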
          Fig. 1. Structural Assessment Network (SAN) [20]

                                TABLE I
            P-CNP ALTERNATIVE IMPORTANCE ASSESSMENT FORM

                                TABLE II
                   ALTERNATIVE PREFERENCES TABLE

B. DDNs Model for Decision-Making in SAS

   We have shown in [21], [22] how dynamic decision networks (DDNs) offer abstractions that serve the purpose of modelling beliefs about the world, linking preferences and observation models (to obtain evidence from the operational environment at runtime) with states of the world in order to make informed decisions. DDNs have been used as a mechanism which allows SAS to keep track of the current state and the trade-off of NFRs [21], [22]. They are abstractions for reasoning about the world over time [23]. DDNs provide a set of random variables that represent the NFRs. Fig. 2 shows a DDN over several time slices, where Xi denotes a set of state variables, which are unobservable, and E denotes the observable evidence variables. A DDN links the decision maker's preferences U (i.e. utility nodes), state and evidence variables to make informed decisions D (i.e. decision nodes).

                   Fig. 2. Example of DDN Structure

   The expected utility (EU) is computed using Equation 1 as follows:
          EU(dj | e) = Σ_{xi ∈ X} U(xi, dj) × P(xi | e, dj)          (1)

   In Equation 1 above, P(xi | e, dj) is the conditional probability of X = xi given the evidence E = e and the decision D = dj. The random variables X (i.e. the state nodes in the DDN) correspond to the levels of satisficement of the NFRs. Solving a decision network (DN) refers to finding the decision that maximizes the EU.
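   As a concrete illustration of Equation 1, the following Python sketch scores each candidate decision and selects the one with the maximum expected utility. The probability and utility tables are illustrative placeholders (they are not the values of the case study); in our setting they would be filled with the posterior distributions computed by the DDN and the normalized P-CNP preferences of Table II.

    from itertools import product

    STATES = list(product([True, False], repeat=2))   # joint satisficement of (MEC, MR)
    DECISIONS = ["SD", "FD"]

    def expected_utility(decision, posterior, utility):
        # Equation (1): EU(d | e) = sum over x of U(x, d) * P(x | e, d)
        return sum(utility[(x, decision)] * posterior[(x, decision)] for x in STATES)

    def solve(posterior, utility):
        # Solving the decision network: return the decision maximizing the EU.
        return max(DECISIONS, key=lambda d: expected_utility(d, posterior, utility))

    # Illustrative inputs only: posterior[(x, d)] stands for P(x | e, d) once the
    # evidence e of the current time slice has been propagated; utility[(x, d)]
    # plays the role of the preference weights attached to the utility node U.
    posterior = {(x, d): 1.0 / len(STATES) for x in STATES for d in DECISIONS}
    utility = {(x, d): (0.2 if d == "SD" else 0.1) + 0.3 * x[0] + 0.2 * x[1]
               for x in STATES for d in DECISIONS}
    print(solve(posterior, utility))                  # -> "SD" for these numbers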

C. Bayesian Surprises to Quantify Deviations from Expected Behaviour

   A surprise value means that the evidence provided by the environment has caused a difference between the prior and posterior probabilities of an event. A Bayesian surprise measures how observed data affect the models or assumptions of the world during runtime [24]. The surprise S represents the divergence between the prior and posterior distributions of an NFR and is calculated by using the Kullback-Leibler divergence (KL) [25]. Let us have a non-functional requirement NFRi, and let E represent the evidence provided by the properties monitored as variables in the execution environment. P(NFRi) is the prior probability of the non-functional requirement NFRi being partially satisficed and P(NFRi | E) is the posterior probability of NFRi being partially satisficed given the evidence E.

     S(NFRi, E) = KL(P(NFRi | E), P(NFRi)) = Σ P(NFRi | E) log [ P(NFRi | E) / P(NFRi) ]          (2)
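   The sketch below computes this surprise for an NFR whose satisficement is modelled as a Boolean variable, i.e. the KL divergence between two Bernoulli distributions. The example probabilities are merely illustrative.

    import math

    def bayesian_surprise(posterior, prior):
        # Equation (2) for a Boolean NFR: both arguments are P(NFR = true) and
        # the complementary outcome is 1 - p. Terms with p = 0 contribute 0.
        return sum(p * math.log(p / q)
                   for p, q in ((posterior, prior), (1 - posterior, 1 - prior))
                   if p > 0)

    # Example: the belief that an NFR is satisficed drops from 0.48 to 0.11
    # after the evidence of a time slice has been observed.
    print(bayesian_surprise(posterior=0.11, prior=0.48))   # ~0.32 (nats)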
D. Research Gap

   In [26] we show that, even if scarce, there have been important research efforts towards decision-making for SAS taking NFRs into account. However, relevant results about the dynamic reassessment and update of utility preferences are still a challenge. The approaches studied show that different MCDA techniques stand out as common techniques used for reasoning optimization [8], [27]. Some approaches use ad-hoc methods for collecting users' preferences, while others use techniques such as MCDA [8], [7], [27]. In [7], [9], [28] support for preference updates exists but requires user intervention. Some approaches offer the potential to support autonomic preference updating. For example, [29] proposed an approach for mining users' behaviour while [27] used an autonomic preference tuning algorithm. [28] and [21] highlighted the relevance of using models that need to be learned and refined at runtime during the operation of the system. By using an MCDA technique (i.e., P-CNP) and a runtime model which involves DDNs and Bayesian surprises, we contribute to filling the identified research gap with a method for the reassessment of NFRs given new evidence found at runtime.

                         III. PROPOSAL

A. Towards Reassessment of Utility Preferences

   Bayesian surprises have been exploited during runtime to enable better informed decision-making at runtime [30]. The approach supports the quantification of uncertainty over different time slices at runtime and helps the system to improve its behaviour on the basis of learning during the operation of the system. This learning process has been shown to be memory-intensive and has therefore presented scalability and memory issues in the past [22]. In this paper, in addition to our novel approach, we have also improved the DDN models used in the past in order to mitigate the scalability issues. Currently, the experiments can be run over a larger number of time slices.
   Our method aims to improve the decision making by allowing access to new information and evidence about possible adverse effects of the utility preferences during execution by:
   • Allowing the identification of a range of scenarios during the execution of the system and the corresponding effects they have on the satisfaction of the relevant NFRs.
   • Highlighting the monitored environmental properties which have the highest, and possibly unknown at design time, effects on the satisfaction of the NFRs.
   The method involves the following steps:
   • At runtime, for each time slice, a Bayesian surprise is computed for each state variable (i.e., each NFR).
   • If a surprise is detected, the next step is to evaluate the current level of satisfaction of the NFRs (by using Bayesian inference) and to compare it with the decision suggested by the model (i.e., the decision to adapt or not suggested by the DDN). It is important to highlight that the probability distribution of each NFR is not influenced by the utility nodes of the model (i.e., the user preferences).
   • If the decision taken by the model (which is influenced by the utility nodes) is not contributing to the satisfaction of the NFRs, the detected situation is highlighted as a possible scenario needing preference reassessment.

          Fig. 3. Approach for preference reassessment at runtime

   Fig. 3 shows a graphic representation of the process. By using the surprises and conditional probabilities provided by the DDNs to revise the initial utility preferences during runtime, the approach contributes to a better understanding of the execution environment while assessing the corresponding responses of the running system.
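   The following Python sketch condenses the steps above into a per-time-slice check. The surprise threshold and the helper evidence_favours, which maps the satisfaction levels to the decision they favour on their own, are assumptions introduced only for illustration; in the real setting these values come from the DDN.

    def reassessment_needed(surprises, satisfaction, ddn_decision, evidence_favours,
                            threshold=0.1):
        # surprises:    dict NFR -> Bayesian surprise for this time slice
        # satisfaction: dict NFR -> P(NFR | evidence), not influenced by utilities
        # ddn_decision: decision suggested by the DDN (utility nodes applied)
        if not any(s > threshold for s in surprises.values()):
            return False        # no surprise: nothing to flag in this time slice
        # Surprise detected: flag the slice when the choice driven by the
        # design-time preferences disagrees with what the evidence alone favours.
        return ddn_decision != evidence_favours(satisfaction)

    # Example in the spirit of Experiment 1, time slice 11: MEC is likely to be
    # satisficed, MR is not, the evidence favours the cheaper FD, yet the DDN keeps SD.
    favours = lambda sat: "FD" if sat["MEC"] > sat["MR"] else "SD"    # assumed rule
    print(reassessment_needed({"MEC": 0.4, "MR": 0.3},               # illustrative surprises
                              {"MEC": 0.708, "MR": 0.25}, "SD", favours))   # -> True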
            IV. AMBIENT ASSISTED LIVING (AAL)

   We conducted a case study originally provided by Fraunhofer IESE 1. It was partially developed further during the execution of the RELAX research work shown in [31].
   The case study is related to Mary, an elderly person who can benefit from an Ambient Assisted Living (AAL) system. Mary is a widow who is 65 years old, overweight and has high blood pressure and cholesterol levels. Mary will be provided with a new AAL system that offers an intelligent fridge. The fridge comes with 4 temperature and 2 humidity sensors and is able to read, store, and communicate RFID information on food packages. The fridge communicates with the AAL system in the house and embeds itself in the system. Specifically, the intelligent fridge can detect the presence of spoiled food and discover and receive a diet plan to be monitored on the basis of what food items Mary consumes. The intelligent fridge also contributes to an important part of Mary's diet, which is to ensure a minimum liquid intake. A complete description of the case study is shown in [31].
   A specification of the requirements of the AAL at different levels has been extracted from the initial description in the document referenced above [31]. At the highest level, there is an implicit goal of keeping Mary healthy. The goal of the AAL is therefore: "The system SHALL monitor Mary's health and SHALL notify emergency services in case of emergency." Different subgoals (i.e., functional requirements) have also been identified:
   • R1.1: The fridge SHALL detect and communicate with food packages.
   • R1.2: The fridge SHALL monitor and adjust the diet plan.
   • R1.3: The system SHALL ensure a minimum of liquid intake.
   Further, softgoals (i.e. NFRs) have also been identified. For example:
   • R1.4: The system SHALL minimize energy consumption during normal operation.
   • R1.5: The system SHALL maximize reliability during normal operation.
   Let us focus on R1.1. For this functional requirement we have identified two realization strategies:
   • Strict Detection (SD): it implies using all the available sensors and the computational resources available to process and fuse the collected sensor data. The fridge will be able to maximise the detection of the number of food packages and the collation of information about those food packages.
   • Flexible Detection (FD): it implies that the system should be able to tolerate incomplete information about food packages. It will require techniques to deal with uncertainty and the identification of a range of suitable sensor types to monitor the food in the fridge.
   This case study is implemented in a runtime model taking into account the requirements R1.1, R1.4 and R1.5, specifically identifying at runtime the need for a preference reassessment of the NFRs R1.4 and R1.5. The inclusion of the following NFR will be part of our future work: R1.6 The system SHALL minimize latency when an alarm has been raised.

   1 http://www.iese.fraunhofer.de/en//press/press archive/press 2012 /PM 2012 16 200912 optimaal.html

                         V. EXPERIMENTS

   The experiments are based on the application of our approach to the case study of an Ambient Assisted Living (AAL) application. The AAL system is a smart home for the assisted living of elderly people and relies on adaptivity to work properly [31]. The AAL can be configured in different ways, for example in terms of detecting and transmitting information about food packages, flexible detection (FD) vs. strict detection (SD), in terms of monitoring and adjusting diet plans, or in terms of ensuring a minimum of liquid intake.
   This research focuses on detecting and transmitting information about food packages. Different strategies can be used to implement this requirement and they offer different costs and benefits that need to be traded off. An SD strategy offers a higher level of reliability than an FD strategy. However, the energy consumption of the sensors and computational techniques related to this strategy may be prohibitive.
An assessment of the trade-off between these two choices and the satisfaction levels of the related NFRs needs to be made at design time and revisited at runtime in the light of new evidence found (see Table II).

A. Initial Setup of Experiments

   For the experiments of this paper, a DDN for the AAL application has been designed according to the two alternatives for food package detection described above: SD and FD. Each configuration provides different levels of reliability and energy cost, which are the NFRs Maximize Reliability (MR) and Minimize Energy Consumption (MEC). Fig. 5 shows, as an example, a DDN for the NFR Minimize Energy Consumption.

                 Fig. 5. Example of DDN for AAL System

   The scenario that has been used to perform the experiments, based on information provided by the system's experts, is described as follows: the states of two monitored variables, REC = "Ranges of Energy Consumption" and ALD = "Accuracy Level of Detection", are monitored during runtime. The value of ALD can be in three different ranges, represented by ALD < A, ALD in [A,B>, and ALD >= B. The values for REC are in different possible ranges, represented by the following expressions: REC < A, REC in [A,B>, REC in [B,C>, and REC >= C. At design time, ALD has been considered >= B and REC >= C.
   In order to evaluate the DDN shown in Fig. 5, we have considered the following initial conditional probabilities provided by the system's stakeholders:
   • P(MEC = true | FD) = 0.55,
   • P(MEC = false | FD) = 0.45,
   • P(MEC = true | SD) = 0.48,
   • P(MEC = false | SD) = 0.52,
   • P(MR = true | FD) = 0.49,
   • P(MR = false | FD) = 0.51,
   • P(MR = true | SD) = 0.55,
   • P(MR = false | SD) = 0.45,
   • P(ALD < A | MR = true) = 0.15,
   • P(ALD in [A,B> | MR = true) = 0.35,
   • P(ALD >= B | MR = true) = 0.50,
   • P(REC < A | MEC = true) = 0.48,
   • P(REC in [A,B> | MEC = true) = 0.38,
   • P(REC in [B,C> | MEC = true) = 0.08,
   • P(REC >= C | MEC = true) = 0.06.
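   Written as plain Python tables, the probabilities above could be prepared for the DDN as follows. The variable names are ours and the snippet does not show the DDN library itself; the probabilities for MEC = false and MR = false are simply the complements of the listed values.

    # P(NFR = true | decision), as provided by the stakeholders
    P_MEC_GIVEN_DECISION = {"FD": 0.55, "SD": 0.48}
    P_MR_GIVEN_DECISION  = {"FD": 0.49, "SD": 0.55}

    # Observation models: P(observed range | NFR = true)
    P_ALD_GIVEN_MR_TRUE  = {"<A": 0.15, "[A,B>": 0.35, ">=B": 0.50}
    P_REC_GIVEN_MEC_TRUE = {"<A": 0.48, "[A,B>": 0.38, "[B,C>": 0.08, ">=C": 0.06}

    # Each observation model must sum to 1 over the possible ranges.
    assert abs(sum(P_ALD_GIVEN_MR_TRUE.values()) - 1.0) < 1e-9
    assert abs(sum(P_REC_GIVEN_MEC_TRUE.values()) - 1.0) < 1e-9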
   The weights associated with the possible combinations of nodes are given in Table II. These weights express the preferences that represent the relative importance of each combination of effects of the detection strategy used on the NFRs. For this case study there is a preference for the detection strategy SD. For example, the 3rd row in Table II has a weight value of 0.1051 and the 7th row has a weight value of 0.1215. Both alternatives have an equivalent effect on the two NFRs Minimize Energy Cost and Maximize Reliability (see the values T and F for the two NFRs); however, the alternative related to the strategy SD is the most preferred.
   Two experiments have been implemented and for each one surprises have been applied. Consider the situation where the prior models for the surprise computation are P(MEC_t) and P(MR_t), and the posterior models, once evidence has been observed over time, are P(MEC_t+1 | REC) and P(MR_t+1 | ALD) (see Fig. 4). We have computed the surprises based on the KL divergence between the prior and the posterior probabilities during 13 time slices.

      Fig. 4. Example of Computing Surprises - Exp. 01 and Exp. 02
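   The posterior models can be obtained with a standard Bayesian update of the prior using the observation models listed above. The sketch below illustrates this for MEC and REC; since the paper only lists P(REC | MEC = true), the complementary model LIK_FALSE used here is an assumed placeholder.

    LIK_TRUE  = {"<A": 0.48, "[A,B>": 0.38, "[B,C>": 0.08, ">=C": 0.06}   # from above
    LIK_FALSE = {"<A": 0.10, "[A,B>": 0.20, "[B,C>": 0.30, ">=C": 0.40}   # assumed

    def posterior_mec(prior_true, rec_range, lik_true=LIK_TRUE, lik_false=LIK_FALSE):
        # Bayes' rule: P(MEC = true | REC) from P(MEC = true) and P(REC | MEC)
        num = lik_true[rec_range] * prior_true
        den = num + lik_false[rec_range] * (1.0 - prior_true)
        return num / den

    # Observing very high energy consumption (REC >= C) sharply lowers the belief
    # that "Minimize Energy Consumption" is satisficed (prior 0.48 -> roughly 0.12),
    # which is the kind of change the Bayesian surprise then quantifies.
    print(posterior_mec(0.48, ">=C"))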
B. Experiment 1

   Surprises take place in several time slices where different specific situations have been identified. Fig. 8 shows the observed values of the REC and ALD variables and the surprises S1 and S2. S1 and S2 are the divergences between the prior and posterior distributions of the non-functional requirements MEC and MR, respectively. Both S1 and S2 are computed for each time slice during the experiment.
   Fig. 6. Prob. distribution of NFR Minimize Energy Cost - Exp. 1

   Fig. 7. Prob. distribution of NFR Maximize Reliability - Exp. 1

              Fig. 8. Surprises and monitored values - Exp. 1

   1) Surprises and adaptation: In time slice 2 we can observe two surprises and an adaptation that is suggested by the DDN (see Fig. 8, column adaptation). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC < A, ALD < A) = 82.5% (see Fig. 6, time slice 2) and
   • P(MR = true | REC < A, ALD < A) = 25% (see Fig. 7, time slice 2)
   We can observe that while the probability for Minimize Energy Cost is high, the probability for Maximize Reliability is low. The selected choice, i.e. to adapt from SD to FD, certainly sounds like a good selection given the current situation: high probability for Minimize Energy Cost and low probability for Maximize Reliability. Using FD would avoid unnecessary energy costs, as the complementary information provided by the conditional probabilities suggests using the less costly strategy FD. The surprises and the conditional probabilities help us to identify this situation. This situation is an example in which the surprises generated, the conditional probabilities and the adaptation performed by the system agree, supporting the same behaviour by the system and improving confidence.
   In time slice 7 we can observe two surprises and that an adaptation is suggested by the DDN (see Fig. 8). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC >= C, ALD >= B) = 10.9% (see Fig. 6, time slice 7) and
   • P(MR = true | REC >= C, ALD >= B) = 70.6% (see Fig. 7, time slice 7)
   We can observe that the probability for Minimize Energy Cost is low while, on the other hand, the probability for Maximize Reliability is high. The selected choice, i.e. to adapt from FD to SD, certainly may be a good selection for the current situation: low probability for Minimize Energy Cost and high probability for Maximize Reliability. The complementary information provided by the conditional probabilities suggests using the strategy FD. The surprises and the conditional probabilities help us in flagging up this situation. Again, this situation is an example in which the surprises generated, the conditional probabilities and the adaptation performed by the system agree.
   2) Surprises and needed adaptations: We can observe that in time slice 11 there are surprises; however, the DDN has not suggested any adaptation (see Fig. 8). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC in [A,B>, ALD < A) = 70.8% (see Fig. 6, time slice 11) and
   • P(MR = true | REC in [A,B>, ALD < A) = 25.0% (see Fig. 7, time slice 11)
   We can observe that the probability for Minimize Energy Cost is high while, on the other hand, the probability for Maximize Reliability is low. The selected choice, i.e. not to adapt, certainly may not be the best choice given the current situation: high probability for Minimize Energy Cost and low probability for Maximize Reliability. Continuing to use SD as the configuration would create unnecessary energy costs, as the complementary information provided by the conditional probabilities suggests the use of the less costly strategy FD. The surprises and the conditional probabilities, which crucially are not influenced by the stakeholders' preferences, help us to flag up this situation. The situation identified is an example of how surprises and the conditional probabilities of the DDN can flag up the need for adaptation. Crucially, the situation detected implies the need to revisit the preferences previously defined by the stakeholders, providing the opportunity to improve the behaviour of the system.
C. Experiment 2

   The observed values of the REC and ALD variables and the surprises S1 and S2 are shown in Fig. 9.

            Fig. 9. Surprises and monitored values - Exp. 2

  Fig. 10. Prob. distribution of NFR Minimize Energy Cost - Exp. 2

   Fig. 11. Prob. distribution of NFR Maximize Reliability - Exp. 2

   1) Surprises and adaptation: In time slice 2 we can observe surprises and that an adaptation is suggested by the DDN (see Fig. 9). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC < A, ALD < A) = 82.5% (see Fig. 10, time slice 2) and
   • P(MR = true | REC < A, ALD < A) = 25.0% (see Fig. 11, time slice 2)
   We can observe that the probability for Minimize Energy Cost is high. On the other hand, the probability for Maximize Reliability is low. The selected choice, i.e. to adapt from SD to FD, certainly looks to be a good selection given the current situation: high probability for Minimize Energy Cost and low probability for Maximize Reliability. Crucially, the complementary information provided by the conditional probabilities suggests using the strategy FD. The surprises and the conditional probabilities help us in identifying this situation. The situation is therefore an example of agreement between the surprises generated, the conditional probabilities and the adaptation performed by the system.
   2) Surprises and unneeded adaptation: We can see that in time slice 3 there are surprises and an adaptation is suggested by the DDN (see Fig. 9). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC in [A,B>, ALD < A) = 64.7% (see Fig. 10, time slice 3) and
   • P(MR = true | REC in [A,B>, ALD < A) = 20.8% (see Fig. 11, time slice 3)
   We can see that the probability for Minimize Energy Cost is high. On the other hand, the probability for Maximize Reliability is low. The selected choice, i.e. to adapt, certainly may not be a good selection for the current situation: high probability for Minimize Energy Cost and low probability for Maximize Reliability. Using SD would create unnecessary energy costs, as the complementary information provided by the conditional probabilities suggests using the less costly strategy FD. The surprises and the conditional probabilities supported flagging up the situation. The situation is an example of how surprises and conditional probabilities can highlight the need to avoid unnecessary adaptations. The previous findings imply the need to reassess the quality preferences defined by the stakeholders at design time.
   3) Surprises as a false positive: In time slice 6 we can observe surprises and the fact that there is no adaptation recommended by the DDN (see Fig. 9). Studying the conditional probabilities provided by the DDN under the current conditions:
   • P(MEC = true | REC in [B,C>, ALD >= B) = 21.3% (see Fig. 10, time slice 6) and
   • P(MR = true | REC in [B,C>, ALD >= B) = 75.3% (see Fig. 11, time slice 6)
   We can see that the probability for Minimize Energy Cost is low. On the other hand, the probability for Maximize Reliability is high.
The selected choice, i.e. not to adapt, certainly looks to be a good selection for the current situation: low probability for Minimize Energy Cost and high probability for Maximize Reliability. The complementary information provided by the conditional probabilities suggests that using SD is a better option than using FD. This situation is an example of a false positive: there are surprises but no adaptation is needed. However, the conditional probabilities help us to flag up this situation, providing a better informed decision making.
   4) Surprises and needed adaptation: In time slice 10 there are surprises; however, the DDN has not suggested any adaptation (see Fig. 9). This situation and its interpretation are equivalent to Experiment 1, time slice 11, i.e., it is an example of how surprises and the conditional probabilities can flag up the need for adaptation.

D. Analysis of Results

   Using our approach we have been able to identify four (4) scenarios that provide opportunities to enhance the decision making of the system:
   • Scenario 01 - surprises and needed adaptation. There are surprises, there is no adaptation, and the conditional probabilities suggest making an adaptation.
   • Scenario 02 - surprises and unneeded adaptation. There are surprises, there is an adaptation, and the conditional probabilities suggest not making an adaptation.
   • Scenario 03 - surprises and adaptation. There are surprises, there is an adaptation, and the conditional probabilities suggest making an adaptation.
   • Scenario 04 - surprises as a false positive. There are surprises, there is no adaptation, and the conditional probabilities also suggest no adaptation.
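   The mapping from the three runtime signals (surprise, adaptation suggested by the DDN, and what the conditional probabilities favour) to these four scenarios can be summarised in a few lines of Python; this classifier is our own illustration of the analysis, which in the paper was performed manually.

    def classify_time_slice(surprise, ddn_adapts, probabilities_favour_adaptation):
        # Returns the scenario label for a time slice, or None when no surprise occurs.
        if not surprise:
            return None
        if probabilities_favour_adaptation and not ddn_adapts:
            return "Scenario 01: needed adaptation missed"
        if ddn_adapts and not probabilities_favour_adaptation:
            return "Scenario 02: unneeded adaptation suggested"
        if ddn_adapts and probabilities_favour_adaptation:
            return "Scenario 03: agreement on adapting"
        return "Scenario 04: false positive, no adaptation needed"

    # Experiment 1, time slice 11: surprises occur, the DDN does not adapt,
    # but the conditional probabilities favour switching to FD.
    print(classify_time_slice(True, ddn_adapts=False,
                              probabilities_favour_adaptation=True))   # Scenario 01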
   Scenarios 01 and 02 have been identified to flag up the need for revisiting the NFR preferences previously defined by the stakeholders using an MCDM method (i.e. P-CNP), and they provide an opportunity to improve the decision making and behaviour of the system. Scenario 03 shows an agreement between the suggested adaptation and the surprises, providing more confidence in the decision making of the SAS. Scenario 04 is a false positive for surprises; however, the conditional probabilities allow us to highlight the fact that the DDN was triggering the correct behaviour, allowing a better informed decision making and the possibility of providing a system with self-explanation capabilities [32].
   It was possible to explore all these scenarios only by using surprises and Bayesian inference (conditional probabilities) at runtime. Now that we can evaluate NFR preferences at runtime, the next possible step will be to explore mechanisms to use this information for autonomic NFR preference updating. Differently from previous initial experiments [24], [22], [33], we have used monitorables (i.e. evidence nodes) with a greater level of granularity to allow us to explore further potential situations that suggest the need for a reassessment of NFR preferences. These new experiments showed how the values monitored as evidence have different impacts on the satisfaction level of the NFRs, allowing better reasoning. The newly implemented model is an improved version of the one used in previous experiments and provides better scalability.

                       VI. CONCLUSIONS

   In this paper we have used P-CNP, a better alternative to AHP, for the definition of preferences at design time and have shown its integration into our DDN-based approach. The approach can be used for preference updating at runtime. The P-CNP method will provide a structured technique for runtime decision-making problems with multiple criteria (i.e., NFRs) by performing pairwise comparisons during system operation between numerical values collected from sensors related to the NFRs and their relative importance, in order to adjust preferences at runtime.
   The experiments performed required the setting of the utility preferences associated with the NFRs. Those preferences were initially provided by the domain experts during the sensitivity analysis at design time. However, the experiments performed demonstrate how these utility preferences, even if meeting specific requirements identified at design time, may not be ideal for specific cases found at runtime. When preferences do not agree with specific situations identified at runtime and unknown at design time, the system may either suggest unnecessary adaptations or miss adaptations. These situations can potentially degrade the behaviour of the running system. The obtained results confirm the validity of our approach defined in our previous work [26]. Currently, to our knowledge, there is no related work on this specific issue in SAS.
   Our approach takes advantage of Bayesian learning to collect evidence to improve the understanding of the environment and the decision-making process of the running system. Furthermore, we have shown the power of runtime abstractions based on DDN-based runtime models to allow a better understanding of contexts that were not fully captured during the requirements elicitation. Challenges for future work still remain; specifically, we are working on how to optimize and scale reasoning techniques to perform dynamic updating of NFR preferences when inappropriate NFR preferences have been identified. The use of machine learning techniques and Bayesian surprise for NFR preference learning and NFR relaxation, respectively, may be a promising path in our future work.

                      ACKNOWLEDGMENT

   The research work reported in this paper is partially supported by Research Grants from the National Natural Science Foundation of China (Project Number 61503306) and the Natural Science Foundation of Jiangsu Province (Project Number BK20150377), China.

                        REFERENCES

 [1] S. Liaskos, S. A. McIlraith, S. Sohrabi, and J. Mylopoulos, "Representing and reasoning about preferences in requirements engineering," Requir. Eng., vol. 16, no. 3, pp. 227–249, Sep. 2011. [Online]. Available: http://dx.doi.org/10.1007/s00766-011-0129-9
 [2] M. Salehie and L. Tahvildari, "Self-adaptive software: Landscape and research challenges," ACM Trans. Auton. Adapt. Syst., vol. 4, no. 2, pp. 14:1–14:42, May 2009. [Online]. Available: http://doi.acm.org/10.1145/1516533.1516538
 [3] B. H. Cheng et al., "Software engineering for self-adaptive systems," B. H. Cheng, R. Lemos, H. Giese, P. Inverardi, and J. Magee, Eds. Berlin, Heidelberg: Springer-Verlag, 2009, ch. Software Engineering for Self-Adaptive Systems: A Research Roadmap, pp. 1–26.
 [4] E. Yuan, N. Esfahani, and S. Malek, "A systematic survey of self-protecting software systems," ACM Trans. Auton. Adapt. Syst., vol. 8, no. 4, pp. 17:1–17:41, Jan. 2014. [Online]. Available: http://doi.acm.org/10.1145/2555611
 [5] M. Salama, R. Bahsoon, and N. Bencomo, "Managing trade-offs in self-adaptive software architectures: A systematic mapping study," in Managing Trade-offs in Adaptable Software Architectures, I. Mistrk, N. Ali, J. Grundy, R. Kazman, and B. Schmerl, Eds. Elsevier, 2016.
 [6] C. Krupitzer, F. M. Roth, S. VanSyckel, G. Schiele, and C. Becker, "A survey on engineering approaches for self-adaptive systems," Pervasive and Mobile Computing, vol. 17, Part B, pp. 184–206, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S157411921400162X
 [7] G. Elahi and E. Yu, "Requirements trade-offs analysis in the absence of quantitative measures: A heuristic method," in Proceedings of the 2011 ACM Symposium on Applied Computing, ser. SAC '11. New York, NY, USA: ACM, 2011, pp. 651–658. [Online]. Available: http://doi.acm.org/10.1145/1982185.1982331
 [8] E. Letier, D. Stefan, and E. T. Barr, "Uncertainty, risk, and information value in software requirements and architecture," in Proceedings of ICSE, ser. ICSE 2014. New York, NY, USA: ACM, 2014, pp. 883–894.
 [9] H. Song, S. Barrett, A. Clarke, and S. Clarke, "Self-adaptation with end-user preference: Using run-time models and constraint solving," in the Intrl. Conference MODELS, USA, 09/2013 2013.
[10] N. Bencomo, "Quantun: Quantification of uncertainty for the reassessment of requirements," in 23rd IEEE International Requirements Engineering Conference, RE 2015, Ottawa, ON, Canada, August 24-28, 2015, 2015, pp. 236–240.
[11] G. Blair, N. Bencomo, and R. B. France, "Models@run.time," Computer, vol. 42, no. 10, pp. 22–27, 2009.
[12] T. Saaty, "Decision making with the analytic hierarchy process," Inter. Journal of Services Sciences, 2008.
[13] A. Ishizaka and P. Nemery, Multi-criteria Decision Analysis: Methods and Software. Chichester: J. Wiley & Sons, 2013. [Online]. Available: http://opac.inria.fr/record=b1135342
[14] J. Figueira, S. Greco, and M. Ehrogott, Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, 2005.
[15] K. K. F. Yuen and W. Wang, "Towards a ranking approach for sensor services using primitive cognitive network process," 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, IEEE-CYBER 2014, pp. 344–348, 2014.
[16] K. Yuen, "Cognitive network process with fuzzy soft computing technique in collective decision aiding," The Hong Kong Polytechnic University, PhD thesis, 2009.
[17] L. Pimentel, D. Ros, and M. Seruffo, "Wired/Wireless Internet Communications," vol. 8458, pp. 122–135, 2014. [Online]. Available: http://link.springer.com/10.1007/978-3-319-13174-0
[18] K. K. F. Yuen, "Software-as-a-Service evaluation in cloud paradigm: Primitive cognitive network process approach," 2012 IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC 2012, pp. 119–124, 2012.
[19] K. K. F. Yuen, "The pairwise opposite matrix and its cognitive prioritization operators: the ideal alternatives of the pairwise reciprocal matrix and analytic prioritization operators," Journal of the Operational Research Society, no. 63, pp. 322–338, 2012.
[20] K. K. F. Yuen, "The Primitive Cognitive Network Process in healthcare and medical decision making: Comparisons with the Analytic Hierarchy Process," Applied Soft Computing Journal, vol. 14, no. PART A, pp. 109–119, 2014. [Online]. Available: http://dx.doi.org/10.1016/j.asoc.2013.06.028
[21] N. Bencomo and A. Belaggoun, "Supporting decision-making for self-adaptive systems: From goal models to dynamic decision networks," in REFSQ - Best Paper Award, 2013.
[22] N. Bencomo, A. Belaggoun, and V. Issarny, "Dynamic decision networks to support decision-making for self-adaptive systems," in SEAMS, 2013.
[23] S. J. Russell and P. Norvig, Artificial Intelligence - A Modern Approach: The Intelligent Agent Book, ser. Prentice Hall series in artificial intelligence. Prentice Hall, 1995.
[24] N. Bencomo and A. Belaggoun, "A world full of surprises: Bayesian theory of surprise to quantify degrees of uncertainty," in ICSE, 2014, pp. 460–463.
[25] S. Kullback, Information Theory and Statistics. New York: Wiley, 1959.
[26] L. H. G. Paucar and N. Bencomo, "The reassessment of preferences of non-functional requirements for better informed decision-making in self-adaptation," AIRE - 3rd International Workshop on Artificial Intelligence for Requirements Engineering, 2016.
[27] X. Peng, B. Chen, Y. Yu, and W. Zhao, "Self-tuning of software systems through goal-based feedback loop control," in Requirements Engineering Conference (RE), Sept 2010, pp. 104–107.
[28] W. E. Walsh, G. Tesauro, J. O. Kephart, and R. Das, "Utility functions in autonomic systems," in Autonomic Computing, 2004. Proceedings. International Conference on, May 2004, pp. 70–77.
[29] J. García-Galán, L. Pasquale, P. Trinidad, and A. Ruiz-Cortés, "User-centric adaptation of multi-tenant services: Preference-based analysis for service reconfiguration," in SEAMS, ser. SEAMS 2014. USA: ACM, 2014, pp. 65–74. [Online]. Available: http://doi.acm.org/10.1145/2593929.2593930
[30] S. Hassan, N. Bencomo, and R. Bahsoon, "Minimize nasty surprises with better informed decision-making in self-adaptive systems," in 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2015.
[31] J. Whittle, P. Sawyer, N. Bencomo, B. H. C. Cheng, and J. M. Bruel, "RELAX: A language to address uncertainty in self-adaptive systems requirement," Requirements Engineering, vol. 15, no. 2, pp. 177–196, 2010.
[32] N. Bencomo, K. Welsh, P. Sawyer, and J. Whittle, "Self-explanation in adaptive systems," in Engineering of Complex Computer Systems (ICECCS), 2012 17th International Conference on, July 2012.
[33] L. Garcia-Paucar and N. Bencomo, "A survey on preferences of quality attributes in the decision-making for self-adaptive and self-managed systems: the bad, the good and the ugly," Aston University, Tech. Rep., 2016.