Caching Strategies for Run-time Probabilistic Model Checking

Hiroyuki Nakagawa, Kento Ogawa, Tatsuhiro Tsuchiya
Graduate School of Information Science and Technology, Osaka University
1-5 Yamadaoka, Suita, Osaka, Japan
Email: {nakagawa, o-kento, t-tutiya}@ist.osaka-u.ac.jp

Abstract—For software systems that need to adapt to their environment at run-time, run-time verification is useful to guarantee the correctness of their behaviors. Probabilistic model checking using the discrete time Markov chain (DTMC) model has been applied to implement run-time verification. An existing approach provides an efficient run-time verification mechanism by pre-generating expressions for model checking at design time. When the system model is changed, however, the system is required to re-generate the expressions. In order to expand the applicability of the approach, we propose three strategies, caching, prediction, and reduction, for reducing the computational time for re-generating expressions at run-time. We conduct preliminary experiments and demonstrate that our approach could expand the applicability of run-time verification by reducing the computational cost at run-time.

I. Introduction

Software systems are often required to maintain high performance and high reliability even though their environments are changeable or uncertain at design time. Traditional software systems have difficulty demonstrating the best performance, or even running correctly, under such changeable or uncertain environments. Constructing systems as self-adaptive systems [1], [2], which enable run-time adaptation, is an effective way of dealing with such environments. Since self-adaptive systems modify their behaviors for adaptation, first, the systems should be able to autonomously control their behaviors. Model-based approaches [3], in particular the models@run.time approach [4], which focuses on dynamically changeable models of the systems, provide a promising way of modifying behaviors at runtime. Second, the correctness of their behavior after adaptation should be verified. Such systems can benefit from a run-time verification mechanism [5], [6], [7] that checks whether a new behavior satisfies the system requirements.

Unlike verification techniques at design time, run-time verification techniques should be executed efficiently so as not to degrade system performance. Several studies on efficient verification for self-adaptive systems have been conducted. Filieri et al. [8] proposed an efficient run-time verification technique that divides the verification process into pre-computation at design time and value calculation at run-time. Pre-computation constructs expressions for verification from a system model described as a discrete time Markov chain (DTMC) [9], which associates state transitions with probabilities represented by fixed values or parameter variables, and from requirements described in probabilistic computational tree logic (PCTL), at design time; value calculation assigns the values measured by monitoring the environment to the variables in the expressions at run-time. In such a verification process, the only verification activity during run-time is substituting the variables with values obtained by monitoring the environment. Although the process allows efficient run-time verification, this method assumes that the structure of the system model is not changed, that is, states or transitions are not added or deleted. If such changes happen, the system has to update (re-generate) the expressions at run-time. This re-generation usually requires a large computational time.
In expression generation, variables in DTMC models, which represent uncertain transition probabilities, lead to a large computational cost. DTMC models are represented as matrices, and expression generation requires the computation of determinants. The computation of determinants is performed by using Laplace expansion and LU-decomposition. We previously evaluated the computational cost of generating expressions and concluded that the number of Laplace expansions performed considerably affects the computational cost [10].

In this study, we aim to expand the applicability of the efficient run-time verification technique by reducing the computational time spent on Laplace expansions. We reuse intermediate expressions obtained in pre-computation. Our approach is based on caching: we store pairs of a partial matrix of the DTMC and the corresponding intermediate expression as key-value pairs in a cache. Moreover, to increase the cache hit ratio and decrease the cache size, we apply two additional strategies, prediction and reduction.

We conduct preliminary experiments on randomly generated DTMC models. The experimental results demonstrate that our approach could expand the applicability of run-time verification by reducing the computational cost at run-time while keeping memory usage from increasing rapidly.

This paper is organized as follows: Section II describes the theoretical background of our study. Section III explains our approach to efficient run-time verification with three strategies. Section IV evaluates our approach based on preliminary experimental results. Section V presents related work, and Section VI concludes this paper.

II. Theoretical Background

Since systems that need to adapt to their environment have to face uncertainty, we deal with the systems under development as probabilistic systems. In many cases, the behaviors of such systems are described as a Discrete Time Markov Chain (DTMC) [9]. Probabilistic model checking [11] is a verification technique for probabilistic systems. Probabilistic Computation Tree Logic (PCTL) [12] is a probabilistic temporal logic that extends CTL to express probabilistic properties, which can be interpreted over the DTMC model. As in Filieri's study [8], on which our study is based, we assume that the system model is described as a DTMC and that requirements are expressed in PCTL. Daws [13] also gives a theoretical background of parametric model checking of PCTL formulae over DTMC models. In this section, we first present the verification process that Filieri et al. proposed and then briefly explain DTMC and PCTL. The last subsection explains how Filieri et al. construct expressions at design time.

Fig. 1. Filieri's approach [8].
Fig. 2. An example of DTMC model: a cleaning robot.
Fig. 3. The matrix P and its sub-matrices Q, R, O, and I of the DTMC model for the cleaning robot illustrated in Figure 2.
A. Efficient Run-time Verification

Figure 1 illustrates the efficient run-time verification process that Filieri et al. proposed in [8]. To realize on-the-fly analysis, the process separates the model checking activity into two steps, one executed at design time and one at run-time. In the design-time step, expressions for verification are generated from a system model represented as a DTMC and requirements expressed in PCTL. Transition probabilities in the DTMC are represented as real values in [0, 1] or as variables. A variable in the DTMC model indicates that a transition probability is unknown at design time. In the run-time step, the values of the variables are identified by monitoring the system or its environment, and then the system evaluates the expression by substituting the values for the transition variables to verify whether the system model satisfies the requirements or not.

B. Discrete Time Markov Chain

A DTMC model is a state transition diagram whose state transitions are labeled with probabilities. A DTMC is a stochastic process [14] that has transitions from one state to another on a state space. A DTMC must respect the following property of Markov chains: the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In this paper we consider finite and time-homogeneous DTMC models, i.e., the state space is finite and the transition probabilities between states do not change with time.

Formally, a DTMC is represented as a tuple (S, S_0, P, L), where S is a set of states; S_0 (⊆ S) is a set of initial states; P : S × S → [0, 1] is a stochastic matrix, in which an element P(s_i, s_j) represents the probability that the next state of the process will be s_j given that the current state is s_i; and L : S → 2^AP is a labeling function, where AP is a set of atomic propositions that represent basic properties. A state s ∈ S with P(s, s) = 1 is called an absorbing state. A state that is not an absorbing state is called a transient state. A DTMC is said to be absorbing if it contains at least one absorbing state and from every state it is possible to reach an absorbing state. In order to apply Filieri's approach, the DTMC must be absorbing. It should be noted that the approach can sometimes be applied to a non-absorbing DTMC with some modifications to the model. For example, if the DTMC contains an absorbing connected component that has no absorbing state, then replacing the component with an absorbing state yields an alternative DTMC model that is absorbing.

We show an example of a DTMC model in Figure 2. This model represents a behavioral model of a cleaning robot. Since some of the transition probabilities are determined by monitoring the environment at run-time and cannot be determined at design time, these transitions are described with variables. For example, the cleaning robot detects obstacles by using its sensors and decides the direction in which to move by using the collected sensor data. The robot moves in different directions by executing go-forward and turn commands. Since which sensor is used is adjusted at run-time, the transition probabilities to the sensing states are represented as variables x_0, ..., x_3.
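To make these definitions concrete, the following minimal Java sketch (our own illustration rather than code from the paper; the class and method names are hypothetical) stores a DTMC as a transition matrix whose entries are either fixed probabilities or variable names, and separates absorbing from transient states.

```java
import java.util.*;

/** A DTMC whose entries are fixed probabilities or symbolic variables (e.g. "x0"). */
final class Dtmc {
    final int n;            // number of states s0..s(n-1)
    final String[][] p;     // p[i][j]: "0.2", "x0", "1-x0", or "0"

    Dtmc(String[][] p) { this.n = p.length; this.p = p; }

    /** A state is absorbing if it loops to itself with probability 1. */
    boolean isAbsorbing(int s) { return "1".equals(p[s][s]); }

    List<Integer> transientStates() {
        List<Integer> ts = new ArrayList<>();
        for (int s = 0; s < n; s++) if (!isAbsorbing(s)) ts.add(s);
        return ts;
    }
}

class DtmcExample {
    public static void main(String[] args) {
        // A 3-state toy model in the spirit of Figure 2: s2 is absorbing,
        // and the probability of moving from s0 to s1 is the unknown x0.
        String[][] p = {
            {"0",   "x0", "1-x0"},
            {"0.5", "0",  "0.5"},
            {"0",   "0",  "1"}
        };
        Dtmc m = new Dtmc(p);
        System.out.println("transient states: " + m.transientStates()); // [0, 1]
    }
}
```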
A DTMC model can be represented as a matrix. For example, the matrix illustrated in Figure 3 represents the DTMC model of our cleaning robot example (Figure 2). An absorbing DTMC that has r absorbing states and t transient states is represented by a matrix P in the following canonical form:

P = \begin{pmatrix} Q & R \\ O & I \end{pmatrix}   (1)

where I is an r × r identity matrix, O is an r × t zero matrix, R is a t × r nonzero matrix, and Q is a t × t matrix. The element q_{ij} of the matrix Q is the transition probability from the transient state s_i to the transient state s_j. The element r_{ik} of the matrix R is the transition probability from the transient state s_i to the absorbing state s_k. An absorbing state transitions to itself with probability 1; therefore, the self-transition probabilities of the absorbing states are represented by the identity matrix I. Since the transition probability from an absorbing state to a transient state is 0, O is a zero matrix.

The transition probability from the transient state s_i to the transient state s_j in two steps is calculated as Σ_{s_x ∈ S} P(s_i, s_x) · P(s_x, s_j) = Σ_{s_x ∈ S} Q(s_i, s_x) · Q(s_x, s_j). The k-step transition probability, with which the state s_j is reached from the state s_i in k steps, is the element q^k_{ij} of the matrix Q^k. Since the element q^0_{ij} (of the matrix Q^0) represents the transition probability from the state s_i to the state s_j in zero steps, the matrix Q^0 is a t × t identity matrix. Because R must be a nonzero matrix, Q has uniform norm strictly less than 1; thus Q^n → 0 as n → ∞, which implies that the process will eventually be absorbed with probability 1.
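The canonical form (1) can be used directly in code. The sketch below is our own, purely numeric illustration (not the paper's implementation); it assumes the variables have already been replaced by monitored values and that the t transient states come first, extracts Q and R from P, and computes the k-step matrix Q^k.

```java
/** Minimal numeric sketch of the canonical form (1): partition P into Q and R
 *  and compute k-step transient-to-transient probabilities Q^k. */
class CanonicalForm {
    static double[][] q(double[][] p, int t) {
        double[][] q = new double[t][t];
        for (int i = 0; i < t; i++) System.arraycopy(p[i], 0, q[i], 0, t);
        return q;
    }
    static double[][] r(double[][] p, int t) {
        int rCols = p.length - t;
        double[][] r = new double[t][rCols];
        for (int i = 0; i < t; i++) System.arraycopy(p[i], t, r[i], 0, rCols);
        return r;
    }
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int x = 0; x < k; x++)
                for (int j = 0; j < m; j++) c[i][j] += a[i][x] * b[x][j];
        return c;
    }
    /** Element (i,j) of Q^k is the probability of reaching transient s_j from s_i in exactly k steps. */
    static double[][] power(double[][] q, int k) {
        int t = q.length;
        double[][] result = new double[t][t];
        for (int i = 0; i < t; i++) result[i][i] = 1.0;   // Q^0 = identity
        for (int step = 0; step < k; step++) result = multiply(result, q);
        return result;
    }
}
```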
C. Probabilistic Computational Tree Logic

The requirements are expressed in probabilistic computational tree logic (PCTL), which can be used to describe the properties of the system. Filieri et al. proposed an efficient approach to verify whether DTMC models satisfy requirements expressed in PCTL. PCTL is an extension of Computational Tree Logic (CTL) that allows probabilistic quantification of the described properties. A PCTL formula [11] is defined by the following grammar:

Φ ::= true | a | Φ ∧ Φ | ¬Φ | P_{▷◁p}(φ)   (2)
φ ::= X Φ | Φ U Φ | Φ U^{≤t} Φ   (3)

where a is an atomic proposition. The probabilistic operator P_{▷◁p}(φ) expresses whether the path formula φ satisfies the constraint ▷◁ p, where p ∈ [0, 1] and ▷◁ ∈ {≤, <, ≥, >}. The operators X and U, which appear in path formulae φ, represent "next" and "until", respectively. For example, P_{<0.01}(X s = 1) means that the probability of reaching the state s_1 from the current state in the next step is less than 0.01.

In Section IV, we conduct an experiment to evaluate our approach by observing the computational time for the determination of a reachability property. A reachability property states that a target state is eventually reached from a given initial state. Many safety properties can be checked by reachability property verification. In most cases, the state to be reached is one of the absorbing states. An example of a PCTL formula that represents a reachability property is as follows:

P_{≤0.01}(true U s = 6) (= P_{≤0.01}(F s = 6))   (4)

Expression (4) states that the probability of reaching the absorbing state s_6 from an initial state has to be less than 0.01. Since many practical requirements can be specified as the reachability of an absorbing state [15], we focus on reachability properties in this paper.

D. Expression Generation

As described above, the probability of moving from the transient state s_i to the absorbing state s_k in one step is represented by the element p_{ik} of the matrix P. For more than one step, we have to compute the probability of the transitions from a transient state to another transient state and of the transition from that transient state to the destination state. The transition probability in two steps is the element q^2_{ij} of the matrix Q^2 (= Q × Q), and the transition probability in k steps is the element q^k_{ij} of the matrix Q^k. Therefore, the expected number of visits to the transient state s_j before arriving at an absorbing state, starting from the state s_i, is the (i, j)-th element of the matrix N = I + Q + Q^2 + Q^3 + · · · = Σ_{k=0}^{∞} Q^k. The probability of reaching the absorbing state s_k from the transient state s_i in some number of steps is defined as the element b_{ik} of the matrix B:

B = N × R   (5)

The computation of the matrix B requires the computation of the matrix N. We can rewrite N as follows:

N = I + Q + Q^2 + Q^3 + · · · = Σ_{k=0}^{∞} Q^k = (I − Q)^{−1}   (6)

Since the matrix N is the inverse of (I − Q), we calculate the element n_{ij} of the matrix N as follows:

n_{ij} = (1 / det(W)) · α_{ji}(W)   (7)

where W is I − Q and α_{ij} is the cofactor. Using Expressions (5) and (7), the element b_{ik} of the matrix B is calculated as follows:

b_{ik} = Σ_{x ∈ 0...t−1} n_{ix} · r_{xk} = (1 / det(W)) Σ_{x ∈ 0...t−1} α_{xi}(W) · r_{xk}   (8)
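For a model without variables, Equations (5) and (6) reduce to one matrix inversion and one multiplication. The following Java sketch is our own hedged illustration of that numeric case, using a plain Gauss-Jordan inverse; the symbolic case of Equation (7), where entries still contain variables, is what the paper's expression generation handles instead.

```java
/** Numeric sketch of Equations (5)-(6): B = (I - Q)^{-1} * R. */
class AbsorptionProbabilities {
    /** Gauss-Jordan inverse with partial pivoting (I - Q of an absorbing DTMC is invertible). */
    static double[][] inverse(double[][] a) {
        int n = a.length;
        double[][] aug = new double[n][2 * n];
        for (int i = 0; i < n; i++) {
            System.arraycopy(a[i], 0, aug[i], 0, n);
            aug[i][n + i] = 1.0;
        }
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int row = col + 1; row < n; row++)
                if (Math.abs(aug[row][col]) > Math.abs(aug[pivot][col])) pivot = row;
            double[] tmp = aug[col]; aug[col] = aug[pivot]; aug[pivot] = tmp;
            double d = aug[col][col];
            for (int j = 0; j < 2 * n; j++) aug[col][j] /= d;
            for (int row = 0; row < n; row++) {
                if (row == col) continue;
                double f = aug[row][col];
                for (int j = 0; j < 2 * n; j++) aug[row][j] -= f * aug[col][j];
            }
        }
        double[][] inv = new double[n][n];
        for (int i = 0; i < n; i++) System.arraycopy(aug[i], n, inv[i], 0, n);
        return inv;
    }

    /** b[i][k]: probability of eventually being absorbed in absorbing state k from transient state i. */
    static double[][] absorptionMatrix(double[][] q, double[][] r) {
        int t = q.length, cols = r[0].length;
        double[][] w = new double[t][t];                  // W = I - Q
        for (int i = 0; i < t; i++)
            for (int j = 0; j < t; j++) w[i][j] = (i == j ? 1.0 : 0.0) - q[i][j];
        double[][] n = inverse(w);                        // N = (I - Q)^{-1}, Equation (6)
        double[][] b = new double[t][cols];               // B = N x R, Equation (5)
        for (int i = 0; i < t; i++)
            for (int x = 0; x < t; x++)
                for (int j = 0; j < cols; j++) b[i][j] += n[i][x] * r[x][j];
        return b;
    }
}
```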
Here, the computation of b_{ik} requires the computation of determinants. Determinants are calculated by applying Laplace expansion [16] and LU-decomposition [17] (Figure 4). Laplace expansion removes variables from a matrix; LU-decomposition, on the other hand, computes the determinants of matrices that contain no variables. The determinant |A| of an n × n matrix A is the sum of the products of the elements a_{ij} and the determinants of n sub-matrices of A of size (n−1) × (n−1). The expansion of |A| along the i-th row is as follows:

|A| (= det(A)) = Σ_{j=1}^{n} (−1)^{i+j} a_{ij} |A′_{ij}|   (9)

where A′_{ij} is the (n − 1) × (n − 1) sub-matrix generated by removing the i-th row and the j-th column of the matrix A.

Fig. 4. Laplace expansion and LU-decomposition. (The figure expands a 4 × 4 matrix containing the variables x_1 and x_2 into variable-free sub-determinants, which LU-decomposition then evaluates, yielding the partial expression −0.04503x_2 − 0.05x_1 + 0.01973.)

As mentioned, checking a reachability property requires determinant computation. The determinant computation is performed using LU-decomposition and Laplace expansion; while the former requires O(n^3) computational cost, the latter requires O(n!), a considerably larger cost. The rows that include variables representing unknown parameters are expanded by Laplace expansion, so a high number of Laplace expansions leads to a huge computational cost. Models of systems that have to deal with uncertain environments, such as self-adaptive systems, usually have variables represented as unknown parameters in their matrices. A large number of variables leads to a large number of Laplace expansions and a large computational time.
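The recursive structure behind Equation (9), and the reason naive expansion costs O(n!), can be seen in the following small Java sketch. It is our own numeric illustration only; the paper's implementation instead expands just the rows that contain variables and evaluates the variable-free sub-determinants by LU-decomposition.

```java
/** Illustration of Equation (9): full recursive Laplace (cofactor) expansion along the first row. */
class LaplaceExpansion {
    static double det(double[][] a) {
        int n = a.length;
        if (n == 1) return a[0][0];
        double sum = 0.0;
        for (int j = 0; j < n; j++) {
            if (a[0][j] == 0.0) continue;                 // zero entries contribute nothing
            sum += Math.pow(-1, j) * a[0][j] * det(minor(a, 0, j));
        }
        return sum;
    }
    /** The (n-1)x(n-1) sub-matrix A'_{ij}: A with row i and column j removed. */
    static double[][] minor(double[][] a, int row, int col) {
        int n = a.length;
        double[][] m = new double[n - 1][n - 1];
        for (int i = 0, mi = 0; i < n; i++) {
            if (i == row) continue;
            for (int j = 0, mj = 0; j < n; j++) {
                if (j == col) continue;
                m[mi][mj++] = a[i][j];
            }
            mi++;
        }
        return m;
    }
}
```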
III. Our Approach

The advantage of Filieri's approach is that it shifts the large computational cost from run-time to design time. This process provides an efficient run-time verification mechanism under the assumption that the structure of the system model, i.e., the DTMC model, is not changed. On the other hand, if the structure is changed, for example by adding states and transitions, the pre-computed expressions no longer correspond to the new model.

For example, Figure 6 shows a model after an adaptation that adds a state and transitions to the model illustrated in Figure 2. In such a case, the system has to re-execute the pre-computation process at run-time. This re-execution usually requires a large computational time; in particular, the Laplace expansions in the pre-computation process are time-consuming.

Fig. 5. Our extension: efficient reuse of the pairs of matrix and expression with three strategies, i.e., caching, prediction, and reduction.
Fig. 6. The DTMC model of our cleaning robot after adaptation. A new state s_11 for sensing other data from the environment and the relevant transitions are introduced.

The goal of this paper is to reduce the number of Laplace expansions executed at run-time. Figure 5 illustrates the overview of our approach. We propose the following three strategies:

• Caching: To avoid the re-execution of Laplace expansion, we store pairs of a sub-matrix and the corresponding expression in memory (a cache) for re-use. When the system needs to re-generate an expression, it searches the cache for the matrix. If the system finds it, the matrix is replaced with the stored expression without any calculation.

• Prediction: Since the DTMC model is changed by adaptation, the cache hit ratio may decrease when we search for sub-matrices of the new DTMC model. In order to increase the hit ratio of caching, we store additional pairs that are similar to the pairs stored by the caching strategy.

• Reduction: The caching and prediction strategies require additional memory space, and we need to limit this increase. In this paper, we realize the reduction on the basis of grouping.

The following subsections briefly explain these strategies; a combined sketch of the prediction and reduction steps appears at the end of this section.

A. Caching

In many cases, self-adaptive systems change only a part of their behaviors. Even when a system requires re-generation of an expression for verification at run-time, most of the system model is often unchanged. Figure 7 illustrates the caching strategy. We store pairs of a sub-matrix and the corresponding (sub-)expression at design time. Since LU-decomposition does not require a large computational cost, we store only the sub-matrices that require Laplace expansion. When the system needs to re-generate expressions at run-time, it searches the cache for the matrix to be processed. If the system finds the same matrix in the cache, the matrix is replaced with the paired sub-expression.

Fig. 7. Caching.

B. Prediction

The effectiveness of caching depends on the size of the matrix to be replaced with an expression. Even though we can replace a small sub-matrix with the corresponding expression, the improvement in computational cost is limited; we should find as large a matrix as possible. In order to match a large matrix, we additionally register matrices that are similar to the matrix representing the system model before adaptation.

In this paper, we focus on behavioral changes in which new states and transitions are added. The model shown in Figure 6 is an example of such a case, and Figure 8 is the corresponding matrix. The prediction strategy prepares such matrices at design time by adding one state and one transition to the existing matrix. These matrices are constructed as follows:

1) Insert a new row and column at the rear of Q. Let the inserted row and column be the i-th row and the j-th column. All elements in the added row and column are 0.
2) Choose a non-zero element of Q. Let the selected element be a_{kl} and its value be α_{kl}.
3) Introduce a new variable x_{new} and put this variable at a_{kj}. Since the sum of the transition probabilities in each row should be 1, we change the value at a_{kl} from α_{kl} to α_{kl} − x_{new}. We regard this matrix as a matrix similar to the existing one.
4) Repeat Steps 2 and 3, generating a matrix for each non-zero element of Q chosen at Step 2, until all of the non-zero elements of Q have been selected.

For example, our prediction strategy can produce the matrix illustrated in Figure 8 from the matrix illustrated in Figure 3, that is, from the model before adaptation.

Fig. 8. The matrix of the DTMC model for the cleaning robot after adaptation. Our prediction strategy makes such a matrix from the existing matrix.

C. Reduction

The caching and prediction strategies are expected to require a large amount of memory space. To restrain the increase in memory usage, we reduce the number of candidate matrices on the basis of grouping. The reduction strategy defines sets of states in the DTMC model as groups. It removes the matrices that the prediction strategy generated by injecting transitions across different groups. This restriction avoids a certain amount of matrix generation.

Considering the cleaning robot example illustrated in Figure 2, we can define groups for sensing (the states s_0 to s_3 and the state s_6) and migration (the states s_4, s_5, s_7, and s_8). When we prepare a similar matrix that adds a new state to the sensing group, we consider only new transitions from the states within the sensing group, i.e., s_0, s_1, s_2, s_3, and s_6, to the new state and transitions from the new state to these states. The reduction strategy thus prevents the generation of similar matrices that have little possibility of re-use.
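The construction steps 1)-4) and the group-based filter can be combined in a single generation routine. The sketch below is our own reading of those steps, not the paper's implementation: the class names, the string representation of matrix entries, and the cacheKey helper are all hypothetical. It enumerates candidate similar matrices for a new state joining a given group and shows how the cache could be keyed by a canonical string of the sub-matrix.

```java
import java.util.*;

/** Sketch of prediction (steps 1-4) with the group-based reduction filter. */
class SimilarMatrixGenerator {
    /** groupOf[k] gives the group id of transient state k; newStateGroup is the group
     *  the hypothetical new state is assumed to join. */
    static List<String[][]> generate(String[][] q, int[] groupOf, int newStateGroup) {
        int t = q.length;
        List<String[][]> candidates = new ArrayList<>();
        for (int k = 0; k < t; k++) {
            if (groupOf[k] != newStateGroup) continue;          // reduction: stay within the group
            for (int l = 0; l < t; l++) {
                if ("0".equals(q[k][l])) continue;              // step 2: pick a non-zero element a_kl
                String[][] m = new String[t + 1][t + 1];
                for (int i = 0; i <= t; i++)                    // step 1: new zero row and column
                    for (int j = 0; j <= t; j++)
                        m[i][j] = (i < t && j < t) ? q[i][j] : "0";
                m[k][t] = "xnew";                               // step 3: transition to the new state
                m[k][l] = "(" + q[k][l] + ")-xnew";             //          keep the row sum at 1
                candidates.add(m);                              // step 4: one candidate per choice of a_kl
            }
        }
        return candidates;
    }

    /** Caching then amounts to keying intermediate expressions by a canonical string of the matrix. */
    static String cacheKey(String[][] m) { return Arrays.deepToString(m); }
}
```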
IV. Evaluation

In order to evaluate the effectiveness of our strategies, we conducted preliminary experiments on randomly generated DTMC models. We implemented the three strategies in Java and compared the computational time and memory size of the following four re-generation methods:

• Method 1: baseline (no caching), denoted as None in Figures 9 to 12;
• Method 2: caching, denoted as C in the figures;
• Method 3: caching and prediction, denoted as CP in the figures;
• Method 4: caching, prediction, and reduction, denoted as CPR in the figures.

As problem instances, we randomly generated DTMC models whose size ranges from 20 to 40 states in increments of five, including two absorbing states. Some generated models contain loops. In these DTMC models, the number of outgoing transitions of each state follows a Gaussian distribution with mean 10 and standard deviation 2. The initial DTMC models, that is, the system models before adaptation, contain one or two variables as transition probabilities, and adaptation introduces two further variables into the models. We divided all states into groups of size 10. We compared the averages of 10 runs for each configuration.
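As a rough illustration of the problem instances described above, the following sketch is our own guess at such a generator (the authors' generator is not shown in the paper); it neither injects variables nor guarantees that every state can reach an absorbing state, and simply draws the out-degree of each transient state from a Gaussian distribution with mean 10 and standard deviation 2.

```java
import java.util.*;

/** Hypothetical generator of random DTMC instances with n states, the last two absorbing. */
class RandomDtmcGenerator {
    static double[][] generate(int n, long seed) {
        Random rnd = new Random(seed);
        double[][] p = new double[n][n];
        int absorbing = 2;
        for (int s = 0; s < n - absorbing; s++) {
            // out-degree ~ Gaussian(10, 2), clamped to a legal range
            int outDegree = Math.max(1, Math.min(n - 1, (int) Math.round(10 + 2 * rnd.nextGaussian())));
            List<Integer> targets = new ArrayList<>();
            for (int t = 0; t < n; t++) if (t != s) targets.add(t);
            Collections.shuffle(targets, rnd);
            double total = 0;
            double[] w = new double[outDegree];
            for (int i = 0; i < outDegree; i++) { w[i] = rnd.nextDouble(); total += w[i]; }
            for (int i = 0; i < outDegree; i++) p[s][targets.get(i)] = w[i] / total;   // rows sum to 1
        }
        for (int s = n - absorbing; s < n; s++) p[s][s] = 1.0;   // absorbing states loop on themselves
        return p;
    }
}
```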
Figures 9 and 10 show the computational time needed to re-generate an expression. We can see that caching (Method 2) speeds up expression re-generation (it is faster than Method 1), although it sometimes has no effect on the computational time. Prediction (Method 3) and reduction (Method 4) make the computation faster still.

Fig. 9. Computational time when the model before adaptation has one variable and the model after adaptation has three variables. The X-axis represents the number of states in the DTMC model, while the Y-axis represents the computational time [ms].
Fig. 10. Computational time when the model before adaptation has two variables and the model after adaptation has four variables.

As for the cache size, illustrated in Figures 11 and 12, prediction (Method 3), on the other hand, requires the largest memory size. Reduction (Method 4) partly reduces the memory size used for caching.

Fig. 11. Cache size when the model before adaptation has one variable and the model after adaptation has three variables. The X-axis represents the number of states in the DTMC model, while the Y-axis represents the number of stored expressions on a logarithmic scale. Since Method 1, represented as None in the legend, does not use the cache, it does not appear in this graph (its cache size is zero).
Fig. 12. Cache size when the model before adaptation has two variables and the model after adaptation has four variables. Since Method 1, represented as None in the legend, does not use the cache, it does not appear in this graph (its cache size is zero).

The results of the experiments show that the prediction strategy, built on top of caching, is effective in reducing the computational time, and that the reduction strategy works to limit the increase in memory size for caching. These strategies partially work; however, we also find that their effects are still limited.

The current design of prediction with reduction prepares similar matrices for all elements of Q in the same group. We could make more precise predictions by using the semantics of the system model. As for the memory size, our current reduction strategy is effective when behavioral changes are limited to within a group. Since the effectiveness of the reduction strategy differs between the cases illustrated in Figures 11 and 12, we should improve the applicability of the reduction strategy. For further reduction of memory usage, we could discard small matrices that do not substantially affect the computational time, as well as redundant ones.

When using the proposed strategies, software developers should estimate their benefits, because the benefits are still limited: the time required to generate expressions is reduced by 10 to 30%. Memory consumption should also be taken into account. Even though the prediction and reduction strategies are effective in reducing computational time, they consume a large amount of memory; this trade-off should be considered carefully.

V. Related Work

Runtime verification, such as that in [7], plays a key role in dynamically adaptive software systems. Some work, in particular, focuses on the efficiency of runtime verification. Gerasimou et al. [18] evaluated the effectiveness of caching, look-ahead, and nearly-optimal techniques on the continuous-time Markov chain (CTMC) model. While their technique is designed on the assumption that verifications are continuously executed at run-time using previous results, our approach is designed on the assumption that the computational results to be re-used have been prepared at design time.

Kwiatkowska et al. [19] proposed an incremental quantitative verification of probabilistic models. This method uses previous verification results acquired at run-time to speed up run-time verification, and it can also deal with structural changes of models. While that approach requires an analysis of the change impact on the model before the previous verification results can be re-used, our approach determines where re-use can be applied simply by checking for cache hits, which can be implemented by a simpler mechanism.

VI. Conclusion and Directions for Future Work

We presented the current state of our approach, which supports run-time verification when the structure of the system model is changed by adaptation. In order to reduce the computational time for re-generating expressions, we proposed three strategies, i.e., caching, prediction, and reduction. The preliminary evaluation demonstrated the potential of these strategies.

From the experimental results, we identified directions for future work. First, we should address the reduction of the cache size. Some Markov model simplification techniques, such as those based on SCC (strongly connected component) reduction [20], partial order reduction [21], and bisimulation [22], can be beneficial for reducing the cache size; however, excessive reduction could prevent our strategies from reusing stored matrices when regenerating expressions. Second, we plan to improve the precision of the prediction strategy; better prediction helps to keep the cache size from increasing. We also plan to improve the applicability of the reduction strategy; one direction is to discard small matrices and redundant ones. Third, after sufficiently improving our strategies, we will take into account different adaptation patterns, such as state removal or larger changes to the system model. In order to evaluate our approach on real-world examples, we plan to embed the verification mechanism in our programming framework [23] for self-adaptive systems.

Acknowledgment

This work was supported by JSPS Grants-in-Aid for Scientific Research (No. 15K00097). The Telecommunications Advancement Foundation also supported this work.
References

[1] B. H. Cheng, R. de Lemos, H. Giese, P. Inverardi, J. Magee, et al., "Software engineering for self-adaptive systems: A research road map," in Dagstuhl Seminar Proceedings 08031, 2008, pp. 1–13.
[2] R. de Lemos, H. Giese, H. A. Müller, M. Shaw, et al., "Software engineering for self-adaptive systems: A second research roadmap (draft version of May 20, 2011)," in Dagstuhl Seminar 10431, 2011.
[3] J. Zhang and B. H. C. Cheng, "Model-based development of dynamically adaptive software," in Proceedings of the 28th International Conference on Software Engineering (ICSE '06). ACM, 2006, pp. 371–380.
[4] G. Blair, N. Bencomo, and R. B. France, "Models@ Run.Time," Computer, vol. 42, no. 10, pp. 22–27, Oct. 2009. [Online]. Available: http://dx.doi.org/10.1109/MC.2009.326
[5] R. Calinescu, C. Ghezzi, M. Kwiatkowska, and R. Mirandola, "Self-adaptive software needs quantitative verification at runtime," Communications of the ACM, vol. 55, no. 9, pp. 69–77, Sep. 2012. [Online]. Available: http://doi.acm.org/10.1145/2330667.2330686
[6] R. Calinescu, L. Grunske, M. Kwiatkowska, R. Mirandola, and G. Tamburrelli, "Dynamic QoS management and optimization in service-based systems," IEEE Transactions on Software Engineering, vol. 37, no. 3, pp. 387–409, May 2011.
[7] M. U. Iftikhar and D. Weyns, "ActivFORMS: Active Formal Models for Self-adaptation," in Proc. of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '14). ACM, 2014, pp. 125–134. [Online]. Available: http://doi.acm.org/10.1145/2593929.2593944
[8] A. Filieri, G. Tamburrelli, and C. Ghezzi, "Supporting self-adaptation via quantitative verification and sensitivity analysis at run time," IEEE Transactions on Software Engineering, vol. 42, no. 1, pp. 75–99, Jan. 2016.
[9] K. Goševa-Popstojanova and K. S. Trivedi, "Architecture-based approach to reliability assessment of software systems," Performance Evaluation, vol. 45, no. 2-3, pp. 179–204, Jul. 2001. [Online]. Available: http://dx.doi.org/10.1016/S0166-5316(01)00034-7
[10] K. Ogawa, H. Nakagawa, and T. Tsuchiya, "An experimental evaluation on runtime verification of self-adaptive systems in the presence of uncertain transition probabilities," in Proc. of Software Engineering and Formal Methods - SEFM 2015 Workshops, ser. LNCS, vol. 9509, 2015, pp. 253–265.
[11] C. Baier and J.-P. Katoen, Principles of Model Checking (Representation and Mind Series). The MIT Press, 2008.
[12] H. Hansson and B. Jonsson, "A logic for reasoning about time and reliability," Formal Aspects of Computing, vol. 6, no. 5, pp. 512–535, 1994. [Online]. Available: http://dx.doi.org/10.1007/BF01211866
[13] C. Daws, "Symbolic and parametric model checking of discrete-time Markov chains," in Proc. of the First International Conference on Theoretical Aspects of Computing (ICTAC '04). Springer-Verlag, 2005, pp. 280–294. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-31862-0_21
[14] H. E. Taylor and S. Karlin, An Introduction to Stochastic Modeling. Academic Press, 1998.
[15] L. Grunske, "Specification patterns for probabilistic quality properties," in Proc. of the 30th International Conference on Software Engineering (ICSE '08). ACM, 2008, pp. 31–40. [Online]. Available: http://doi.acm.org/10.1145/1368088.1368094
[16] H. Rose, Linear Algebra: A Pure Mathematical Approach. Springer, 2002. [Online]. Available: https://books.google.co.jp/books?id=mTdAj-Yn4L4C
[17] G. H. Golub and C. F. Van Loan, Matrix Computations (3rd Ed.). Baltimore, MD, USA: Johns Hopkins University Press, 1996.
[18] S. Gerasimou, R. Calinescu, and A. Banks, "Efficient runtime quantitative verification using caching, lookahead, and nearly-optimal reconfiguration," in Proc. of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '14). ACM, 2014, pp. 115–124. [Online]. Available: http://doi.acm.org/10.1145/2593929.2593932
[19] M. Kwiatkowska, D. Parker, and H. Qu, "Incremental quantitative verification for Markov decision processes," in Proc. of the 2011 IEEE/IFIP 41st International Conference on Dependable Systems & Networks (DSN '11). IEEE Computer Society, 2011, pp. 359–370. [Online]. Available: http://dx.doi.org/10.1109/DSN.2011.5958249
[20] F. Ciesinski, C. Baier, M. Grosser, and J. Klein, "Reduction techniques for model checking Markov decision processes," in Proc. of the Fifth International Conference on Quantitative Evaluation of Systems (QEST '08), 2008, pp. 45–54.
[21] H. Hansen, M. Kwiatkowska, and H. Qu, "Partial order reduction for model checking Markov decision processes under unconditional fairness," in Proc. of the 2011 Eighth International Conference on Quantitative Evaluation of Systems (QEST '11). IEEE Computer Society, 2011, pp. 203–212. [Online]. Available: http://dx.doi.org/10.1109/QEST.2011.35
[22] J.-P. Katoen, T. Kemna, I. Zapreev, and D. N. Jansen, "Bisimulation minimisation mostly speeds up probabilistic model checking," in Proc. of the 13th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '07). Berlin, Heidelberg: Springer-Verlag, 2007, pp. 87–101. [Online]. Available: http://dl.acm.org/citation.cfm?id=1763507.1763519
[23] H. Nakagawa, A. Ohsuga, and S. Honiden, "Towards dynamic evolution of self-adaptive systems based on dynamic updating of control loops," in Proc. of the IEEE Sixth International Conference on Self-Adaptive and Self-Organizing Systems (SASO '12). IEEE, Sept. 2012, pp. 59–68.