Gained and Excluded Classified Actions by Dynamic Security Policies ⋆

Damas P. Gruska
Institute of Informatics, Comenius University
Mlynska dolina, 842 48 Bratislava, Slovakia
gruska@fmph.uniba.sk

⋆ Work supported by the grant VEGA 1/1333/12.

Abstract. Dynamic security policies and formalisms for expressing information on private actions obtained by observing public ones are presented. Two sets of private actions are considered: the set of actions whose execution is guaranteed according to observations of public actions, and the set of actions whose execution is excluded according to such observations. Moreover, we also consider intruders who have only a limited memory capacity to record these sets during an attack, as well as policies which change with the elapsing of time.

Key words: dynamic security policy, gained and excluded actions, information flow, security, non-interference

1 Introduction

Information flow based security properties assume an absence of any information flow between private and public system activities. This means that systems are considered secure if no information about private activities can be deduced from observations of their public activities. Such security properties can be quantified, to avoid the unrealistic requirement of no information flow at all, and hence to express the amount of possibly "leaked secrecy" in several ways. For example, Shannon's information theory was applied to simple imperative languages (see [CHM07,CMS09]) and to process algebras (see [Gru08]). Another possibility is to exploit probability theory, as was done for process algebras in [Gru09], or to express the subsets of private actions whose occurrence (and non-occurrence) can be deduced by an intruder observing the public behaviour of the system [Gru11]. All the above mentioned concepts are based on a static security policy, i.e. a policy which does not change during execution. This approach seems rather restrictive for applications whose security policies (classification, declassification, etc.) change dynamically at runtime. Hence there is growing research and a number of papers devoted to dynamic security policies tailored to various formalizations and computational paradigms. For instance, in the case of imperative programs, a security policy requires that values of classified variables cannot be obtained by observing public ones, which can be formalized by an equivalence relation on the values of the program's variables. In the case of a dynamic security policy, this relation can change during a computation (see, for example, [DHS15]). In general, a dynamic security property permits different information flows at different points of a program's or system's execution.

The aim of this paper is to formulate dynamic security policies for the security concepts based on the sets of gained and excluded private actions [Gru11]. In this setting, a dynamic security policy defines the set of private actions at a given state of execution. That means that the set of private actions is not fixed but can change dynamically during the system's execution. We study the resulting concepts and show how they are related to traditional static security properties. Later, we also consider so-called limited intruders (intruders with limited storage for recording obtained information), who always try to be prepared for declassifications of private actions but also have to take into account new classifications of non-public, so-called invisible, actions.
We will also define a special class of dynamic security policies which change only with the elapsing of time.

The paper is organized as follows. In Section 2 we describe the timed process algebra TPA which will be used as the basic formalism. In Section 3 we formalize the sets of gained and excluded private actions with respect to a static security policy. In Section 4 we define gained and excluded private actions with respect to dynamic security policies. The next section is devoted to limited intruders and to time dynamic security policies. Section 6 contains a discussion and plans for future work.

2 Timed Process Algebra

In this section we define the Timed Process Algebra, TPA for short. TPA is based on Milner's CCS (see [Mil89]), but a special time action $t$ expressing the elapsing of (discrete) time is added (see also [Gru10]). The presented language is a slight simplification of the Timed Security Process Algebra (tSPA) introduced in [FGM00]. We omit the explicit idling operator $\iota$ used in tSPA and instead allow implicit idling of processes. Hence processes can perform either "enforced idling", by performing $t$ actions which are explicitly expressed in their descriptions, or "voluntary idling". In both cases internal communications have priority over the action $t$ in the case of the parallel operator. Moreover, we do not divide actions into private and public ones as in tSPA. TPA also differs from tCryptoSPA (see [GM04]): TPA does not use value passing and strictly preserves time determinacy in the case of the choice operator $+$, which is not the case for tCryptoSPA.

To define the language TPA, we first assume a set of atomic action symbols $A$ not containing the symbols $\tau$ and $t$, and such that for every $a \in A$ there exists $\bar a \in A$ and $\bar{\bar a} = a$. We define $Act = A \cup \{\tau\}$ and $Actt = Act \cup \{t\}$. We assume that $a, b, \ldots$ range over $A$, $u, v, \ldots$ range over $Act$, and $x, y, \ldots$ range over $Actt$. Assume the signature $\Sigma = \bigcup_{n \in \{0,1,2\}} \Sigma_n$, where

$$\Sigma_0 = \{Nil\}$$
$$\Sigma_1 = \{x. \mid x \in A \cup \{t\}\} \cup \{[S] \mid S \text{ is a relabeling function}\} \cup \{\backslash M \mid M \subseteq A\}$$
$$\Sigma_2 = \{|, +\}$$

with the agreement to write unary action operators in prefix form, the unary operators $[S]$ and $\backslash M$ in postfix form, and the remaining operators in infix form. Relabeling functions $S : Actt \to Actt$ are such that $S(\bar a) = \overline{S(a)}$ for $a \in A$, $S(\tau) = \tau$ and $S(t) = t$.

The set of TPA terms over the signature $\Sigma$ is defined by the following BNF notation:

$$P ::= X \mid op(P_1, P_2, \ldots, P_n) \mid \mu X P$$

where $X \in Var$, $Var$ is a set of process variables, $P, P_1, \ldots, P_n$ are TPA terms, $\mu X$ is the binding construct, and $op \in \Sigma$. The set of CCS terms consists of the TPA terms without the $t$ action. We will use the usual definition of open and closed terms, where $\mu X$ is the only binding operator. Closed terms which are $t$-guarded (each occurrence of $X$ is within some subexpression $t.A$, i.e. between any two $t$ actions only finitely many untimed actions can be performed) are called TPA processes. Note that $Nil$ will often be omitted from process descriptions; hence, for example, instead of $a.b.Nil$ we will write just $a.b$.
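For readers who prefer an executable view, the abstract syntax above can be rendered as a small Python encoding. This is a minimal illustrative sketch, not part of the calculus: all class and field names (Nil, Prefix, Choice, Par, Restrict) are our own, and relabeling and recursion ($\mu X$) are omitted.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import FrozenSet, Union

@dataclass(frozen=True)
class Nil:                      # the deadlocked process Nil
    pass

@dataclass(frozen=True)
class Prefix:                   # x.P for x in A ∪ {tau, t}
    action: str
    cont: "Term"

@dataclass(frozen=True)
class Choice:                   # P + Q
    left: "Term"
    right: "Term"

@dataclass(frozen=True)
class Par:                      # P | Q
    left: "Term"
    right: "Term"

@dataclass(frozen=True)
class Restrict:                 # P \ M for M ⊆ A
    proc: "Term"
    actions: FrozenSet[str]

Term = Union[Nil, Prefix, Choice, Par, Restrict]

# Example: the process l1.h.l2.Nil + l1.l2.Nil used later in Example 1
P = Choice(Prefix("l1", Prefix("h", Prefix("l2", Nil()))),
           Prefix("l1", Prefix("l2", Nil())))
```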
We give a structural operational semantics of terms by means of labeled transition systems. The set of terms represents the set of states, and labels are actions from $Actt$. The transition relation $\to$ is a subset of $TPA \times Actt \times TPA$. We write $P \xrightarrow{x} P'$ instead of $(P, x, P') \in \to$, and $P \not\xrightarrow{x}$ if there is no $P'$ such that $P \xrightarrow{x} P'$. The meaning of the expression $P \xrightarrow{x} P'$ is that the term $P$ can evolve to $P'$ by performing action $x$; by $P \xrightarrow{x}$ we denote that there exists a term $P'$ such that $P \xrightarrow{x} P'$. We define the transition relation as the least relation satisfying the inference rules of CCS (see [Mil89]) plus the following ones:

$$Nil \xrightarrow{t} Nil \;(A1) \qquad u.P \xrightarrow{t} u.P \;(A2)$$

$$\frac{P \xrightarrow{t} P' \quad Q \xrightarrow{t} Q' \quad P|Q \not\xrightarrow{\tau}}{P|Q \xrightarrow{t} P'|Q'} \;(Pa1) \qquad \frac{P \xrightarrow{t} P' \quad Q \xrightarrow{t} Q'}{P+Q \xrightarrow{t} P'+Q'} \;(S)$$

Here we mention only the rules that are new with respect to CCS. Axioms A1 and A2 allow arbitrary idling. Concurrent processes can idle only if there is no possibility of an internal communication (rule Pa1). The run of time is deterministic (rule S). In the definition of the labeled transition system we have used negative premises (see Pa1). In general this may lead to problems, for example with the consistency of the defined system. We avoid these dangers by making derivations of $\tau$ independent of derivations of $t$. For an explanation and details see [Gro90].

Regarding behavioral relations, we will work with the timed version of weak trace equivalence. Note that here we will also use a concept of observations which contains complete information, i.e. which includes also $\tau$ actions, and not just actions from $A$ and the action $t$ as in [FGM00]. For $s = x_1.x_2.\ldots.x_n$, $x_i \in Actt$, we write $P \xrightarrow{s}$ instead of $P \xrightarrow{x_1} \xrightarrow{x_2} \cdots \xrightarrow{x_n}$, and we say that $s$ is a trace of $P$. The set of all traces of $P$ will be denoted by $Tr(P)$.

We will write $P \stackrel{x}{\Rightarrow}_M P'$ for $M \subseteq A$ iff $P \xrightarrow{s_1} \xrightarrow{x} \xrightarrow{s_2} P'$ for some $s_1, s_2 \in (M \cup \{\tau\})^\star$, and $P \stackrel{s}{\Rightarrow}_M$ instead of $P \stackrel{x_1}{\Rightarrow}_M \stackrel{x_2}{\Rightarrow}_M \cdots \stackrel{x_n}{\Rightarrow}_M$. Instead of $\Rightarrow_\emptyset$ we will write $\Rightarrow$, and instead of $\Rightarrow_{\{h\}}$ we will write $\Rightarrow_h$. By $Succ(P)$ we will denote the set of all successors of $P$, i.e. the processes $P'$ for which $P \xrightarrow{s} P'$ for some $s \in Actt^\star$. By $s|_B$ we will denote the sequence obtained from $s$ by removing all actions not belonging to $B$. By $s' \sqsubseteq s$ we will denote that $s'$ is a subsequence of $s$, i.e. that $s'$ can be obtained from $s$ by removing some actions. We conclude this section with the definitions of M-bisimulation and M-trace equivalence.

Definition 1. Let $(TPA, Actt, \to)$ be a labelled transition system (LTS). A relation $\Re \subseteq TPA \times TPA$ is called an M-bisimulation if it is symmetric and satisfies the following condition: if $(P, Q) \in \Re$ and $P \xrightarrow{x} P'$, $x \in Actt$, then there exists a process $Q'$ such that $Q \stackrel{\hat x}{\Rightarrow}_M Q'$ and $(P', Q') \in \Re$. Two processes $P, Q$ are M-bisimilar, abbreviated $P \approx_M Q$, if there exists an M-bisimulation relating $P$ and $Q$.

Definition 2. The set of M-traces of a process $P$ for $M \subseteq A \cup \{t\}$ is defined as $Trt_M(P) = \{s \in (A \cup \{t\})^\star \mid \exists P'. P \stackrel{s}{\Rightarrow}_M P'\}$. Instead of $Trt_\emptyset(P)$ we will write $Trt(P)$. Two processes $P$ and $Q$ are M-trace equivalent ($P \approx t_M Q$) iff $Trt_M(P) = Trt_M(Q)$.

Note that for $M = \emptyset$ these two notions correspond to weak bisimulation and weak trace equivalence (see [Mil89]); they will be denoted by $\approx$ and $\approx_t$, respectively.
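The operational semantics just given can likewise be sketched in Python on top of the term encoding above. The following is an illustrative approximation, assuming the string conventions "tau" and "t" for $\tau$ and $t$ and an `_bar` suffix for complementary actions; it implements the prefix, choice, parallel and restriction rules together with A1, A2, Pa1 (priority of internal communication over $t$) and S (time determinacy), and enumerates depth-bounded traces.

```python
def bar(a: str) -> str:
    """Complementary action; we encode ā as 'a_bar' (our own convention)."""
    return a[:-4] if a.endswith("_bar") else a + "_bar"

def step(p) -> set:
    """One-step transitions of a TPA term: a set of (action, successor) pairs."""
    if isinstance(p, Nil):
        return {("t", p)}                                # axiom A1: Nil idles
    if isinstance(p, Prefix):
        moves = {(p.action, p.cont)}                     # x.P --x--> P
        if p.action != "t":
            moves.add(("t", p))                          # axiom A2: u.P idles
        return moves
    if isinstance(p, Choice):
        moves = {(x, q) for (x, q) in step(p.left) | step(p.right) if x != "t"}
        lt = [q for (x, q) in step(p.left) if x == "t"]
        rt = [q for (x, q) in step(p.right) if x == "t"]
        if lt and rt:                                    # rule S: deterministic time
            moves.add(("t", Choice(lt[0], rt[0])))
        return moves
    if isinstance(p, Par):
        ls, rs = step(p.left), step(p.right)
        moves = {(x, Par(q, p.right)) for (x, q) in ls if x != "t"}
        moves |= {(x, Par(p.left, q)) for (x, q) in rs if x != "t"}
        # communication: complementary actions synchronise into tau
        moves |= {("tau", Par(q1, q2)) for (x, q1) in ls if x not in ("tau", "t")
                  for (y, q2) in rs if y == bar(x)}
        lt = [q for (x, q) in ls if x == "t"]
        rt = [q for (x, q) in rs if x == "t"]
        # rule Pa1: idle only if no internal communication is possible
        if lt and rt and not any(x == "tau" for (x, _) in moves):
            moves.add(("t", Par(lt[0], rt[0])))
        return moves
    if isinstance(p, Restrict):
        blocked = p.actions | {bar(a) for a in p.actions}
        return {(x, Restrict(q, p.actions))
                for (x, q) in step(p.proc) if x not in blocked}
    raise TypeError(p)

def traces(p, depth: int) -> set:
    """All traces of length <= depth (sequences over Act ∪ {t}), incl. prefixes."""
    if depth == 0:
        return {()}
    return {()} | {(x,) + s for (x, q) in step(p) for s in traces(q, depth - 1)}
```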
3 Gained and Excluded Classified Actions

In this section we recall (with slightly modified notation) the concepts of gained and excluded classified actions (see [Gru11]). We suppose that all actions are divided into two groups, namely public (low level) actions $L$ and private (high level) actions $H$, i.e. $A = L \cup H$, $L \cap H = \emptyset$. Moreover, we suppose that $H \neq \emptyset$ and $L \neq \emptyset$ and that for every $h \in H$ and $l \in L$ we have $\bar h \in H$ and $\bar l \in L$. Hence the division is given by the set of high level actions. To denote sequences of public actions, i.e. sequences consisting of actions from $L \cup \{t\}$, and sequences of private actions from $H$, we will use the notation $\tilde l, \tilde l', \ldots$ for sequences from $(L \cup \{t\})^\star$ (note that the elapsing of time, i.e. the action $t$, is also a public action) and $\tilde h, \tilde h', \ldots$ for sequences from $H^\star$, respectively. The set of actions could be divided into more than two subsets, which would correspond to more levels of classification; all the following concepts can be naturally extended to such a setting.

First we define the set of private actions whose occurrence can be learned by an intruder who sees a process perform a sequence of public actions $\tilde l$ (we will call such actions gained actions).

Definition 3. Let $P \in TPA$ and $\tilde l \in Trt_H(P)$. Then the set of private actions whose occurrence can be gained about $P$ by publicly observing $\tilde l$ is defined as follows:

$$g_H(P, \tilde l) = \{h \mid h \in H,\ P \not\stackrel{\tilde l}{\Rightarrow}_{H \setminus \{h\}}\}.$$

According to Definition 3, the set of private actions $g_H(P, \tilde l)$ is the one which has to be performed by $P$ if an intruder sees $P$ perform the public actions $\tilde l$.

Example 1. Let $P = l_1.h.l_2.Nil + l_1.l_2.Nil$ and $P' = l_1.h.h'.l_2.Nil + l_1.h.l_2.Nil$. Let $\tilde l = l_1.l_2$; then we have $g(P, \tilde l) = \emptyset$ and $g(P', \tilde l) = \{h\}$.

Definition 4. Let $P \in TPA$. Then the set of private actions whose occurrence can be excluded by observing $P$ perform the public actions $\tilde l$ (i.e. $\tilde l \in Trt_H(P)$) is defined as follows:

$$e_H(P, \tilde l) = \bigcap_{P \stackrel{\tilde l}{\Rightarrow}_M} H \setminus M.$$

If $e_H(P, \tilde l) = \emptyset$, this means that an intruder, after observing $\tilde l$, cannot exclude the occurrence of any private action.

There is no direct correlation between the sets $g(P, \tilde l)$ and $e(P, \tilde l)$, since there are processes for which the former set is empty and the latter nonempty, and vice versa. If both of them are empty, this means that an intruder can learn practically nothing about private actions by observing process $P$ and seeing it perform $\tilde l$. In some sense $g(P, \tilde l)$ and $e(P, \tilde l)$ are complementary, as stated in the following proposition (see [Gru11]).

Proposition 1. For every process $P$ and every $\tilde l \in Trt_H(P)$ it holds that $g(P, \tilde l) \cap e(P, \tilde l) = \emptyset$ and $\emptyset \subseteq g(P, \tilde l) \cup e(P, \tilde l) \subseteq H$.

Now we are prepared to formulate how much information can be gained by observing the public activities $O$, $O \subseteq L^\star$, of a process. The formal definition follows.

Definition 5. Let $P \in TPA$. By $g^O_H(P)$ we denote the set of private actions whose occurrence in $P$ can be gained (detected) by an intruder observing sequences of public actions from $O$:

$$g^O_H(P) = \bigcup_{\tilde l \in O} g_H(P, \tilde l).$$

We say that no private information can be gained by observing $P$ through $O$ if $g^O_H(P) = \emptyset$.

Now we can define the set of private actions whose occurrence can be excluded by the set of observations $O$.

Definition 6. Let $P \in TPA$. Then the set of private actions whose execution can be excluded by the set of observations $O$, $O \subseteq Trt_H(P)$, is defined as follows:

$$e^O_H(P) = \bigcup_{\tilde l \in O} e_H(P, \tilde l).$$

For a given process $P$, the sizes of the sets $g^O_H(P)$ and $e^O_H(P)$ with respect to the sizes of $H$ and $O$ give us another quantification of security. For example, a small $|O|$ together with a big $|g^O_H(P)|$ and/or $|e^O_H(P)|$ indicates a rather low level of security.
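To make Definitions 3 and 4 concrete, the following sketch computes both sets extensionally by brute-force enumeration of depth-bounded traces (reusing `traces` from the sketch in Section 2): a private action is gained if every execution compatible with the observation contains it, and excluded if no compatible execution contains it. This is only an illustration for finite behaviours, not the paper's $\Rightarrow_M$-based definition; `h2` stands in for $h'$.

```python
def project(s, visible):
    """Projection of a trace onto the visible actions (L and t)."""
    return tuple(x for x in s if x in visible)

def gained(p, l_obs, H, visible, depth=8):
    """g_H(P, l~) read extensionally: private actions occurring in *every*
    execution whose visible projection equals l_obs (Definition 3)."""
    compat = [r for r in traces(p, depth) if project(r, visible) == tuple(l_obs)]
    return {h for h in H if compat and all(h in r for r in compat)}

def excluded(p, l_obs, H, visible, depth=8):
    """e_H(P, l~) read extensionally: private actions occurring in *no*
    execution whose visible projection equals l_obs (Definition 4)."""
    compat = [r for r in traces(p, depth) if project(r, visible) == tuple(l_obs)]
    return {h for h in H if all(h not in r for r in compat)}

# Example 1: P = l1.h.l2.Nil + l1.l2.Nil and P' = l1.h.h'.l2.Nil + l1.h.l2.Nil
P  = Choice(Prefix("l1", Prefix("h", Prefix("l2", Nil()))),
            Prefix("l1", Prefix("l2", Nil())))
P2 = Choice(Prefix("l1", Prefix("h", Prefix("h2", Prefix("l2", Nil())))),
            Prefix("l1", Prefix("h", Prefix("l2", Nil()))))
vis = {"l1", "l2", "t"}
print(gained(P,  ["l1", "l2"], {"h"},       vis))   # set(): nothing gained
print(gained(P2, ["l1", "l2"], {"h", "h2"}, vis))   # {'h'}: h must have occurred

# A nonempty excluded set: observing l1 rules out h in l1.Nil + h.l2.Nil
Q = Choice(Prefix("l1", Nil()), Prefix("h", Prefix("l2", Nil())))
print(excluded(Q, ["l1"], {"h"}, vis))              # {'h'}
```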
4 Dynamic Security Policies

The division of actions into public and private ones above is based on a fixed (static) security policy which does not change during the system's computation. This approach seems rather restrictive for applications whose security policies (classification, declassification, etc.) change dynamically at runtime. In the presented framework, by a policy we mean a set $M$ of actions which are supposed to be private at the given point of the program's or system's execution. By a dynamic security policy $D$ we mean a partial mapping which assigns to every process $P$ and sequence of actions some policy, i.e. a subset of actions. Moreover, we require that $D$ is uniquely defined with respect to weak bisimulation.

Definition 7. By a dynamic security policy we mean a partial mapping $D$, $D : TPA \times Actt^\star \to 2^{Actt}$, such that $D(P, s) = D(P', s)$ whenever $P \approx P'$.

Hence by $D(P, s)$ we denote the set of actions which are private after the execution of $s$ by $P$ if $P \xrightarrow{s}$; otherwise $D(P, s)$ is undefined.

Now we can define the set of gained actions with respect to a dynamic security policy $D$. In this case an intruder gains the private actions given by the security policy valid at the moment, and from this set removes the actions which have been declassified. To do so we divide every execution trace into (maximal) intervals during which the policy does not change. The formal definition follows.

Definition 8. Let $P \in TPA$. Let $s \in Trt(P)$ and $s = s_1.\ldots.s_k$ with $s_i = x_{i_1}.\ldots.x_{i_{n_i}}$ for $i = 1, \ldots, k$, such that $P_i \stackrel{s_i}{\to} P_{i+1}$ where $P = P_1$, i.e. $P_i \xrightarrow{x_{i_1}} P_{i_1}$, $P_{i_1} \xrightarrow{x_{i_2}} P_{i_2}$, \ldots, $P_{i_{n_i-1}} \xrightarrow{x_{i_{n_i}}} P_{i+1}$, such that $D(P_i, \epsilon) = H_i$ and also $D(P_i, x_{i_1}.\ldots.x_{i_j}|_{Actt \setminus H_i}) = H_i$ for every $j$, $1 \le j < n_i$. Moreover, we suppose that $H_i \neq H_{i+1}$. Let $\tilde l = \tilde l_1.\ldots.\tilde l_k$ where $\tilde l_i = s_i|_{Actt \setminus H_i}$. Then we define

$$g_D(P, \tilde l_1) = g_{H_1}(P, \tilde l_1)$$
$$g_D(P, \tilde l_1.\ldots.\tilde l_{n+1}) = (g_D(P, \tilde l_1.\ldots.\tilde l_n) \cap H_{n+1}) \cup g_{H_{n+1}}(P_{n+1}, \tilde l_{n+1}).$$

Definition 9. Let $P \in TPA$. By $g^O_D(P)$ we denote the set of private actions whose occurrence in $P$ can be gained (detected) by an intruder observing sequences of public actions from $O$ under the dynamic security policy $D$:

$$g^O_D(P) = \bigcup_{\tilde l \in O} g_D(P, \tilde l).$$

We say that no private information can be gained by observing $P$ through $O$ if $g^O_D(P) = \emptyset$.

We can define a partial ordering between dynamic security policies.

Definition 10. Let $D, D'$ be two dynamic security policies. We say that $D$ is stronger than $D'$ (denoted by $D' \preceq D$) if for every $P$ and $s$ it holds that $D'(P, s) \subseteq D(P, s)$.

Both the ordering of dynamic security policies and the ordering (by inclusion) of sets of observations influence the resulting sets of gained actions, as stated by the following proposition.

Proposition 2. For every process $P$, sets of observations $O, O'$ and dynamic policies $D, D'$ such that $O \subseteq O'$ and $D' \preceq D$, it holds that $g^O_D(P) \subseteq g^{O'}_D(P)$ and $g^O_{D'}(P) \subseteq g^O_D(P)$, respectively.

Proof. Sketch. The first part follows directly from Definition 9. The second part follows from Definitions 8 and 10.

Now we can show how the property "no private information can be gained by observing $P$" under a dynamic security policy is related to a persistent variant of the absence-of-information-flow property, so-called Strong Nondeterministic Non-Interference (SNNI for short). We recall its definition (see [FGM00]). A process $P$ has the SNNI property (we write $P \in SNNI$) if $P \setminus H$ behaves like $P$ with all high level actions hidden from an observer. To express this hiding we introduce the hiding operator $P/M$, $M \subseteq A$, for which it holds that if $P \xrightarrow{a} P'$ then $P/M \xrightarrow{a} P'/M$ whenever $a \notin M \cup \bar M$, and $P/M \xrightarrow{\tau} P'/M$ whenever $a \in M \cup \bar M$. Moreover, a process has the persistent SNNI property (denoted by PSNNI) if all its successors also have the SNNI property. The formal definitions of SNNI and PSNNI follow.

Definition 11. Let $P \in TPA$. Then $P \in SNNI_H$ iff $P \setminus H \approx_t P/H$, and $P \in PSNNI_H$ iff $P' \in SNNI_H$ for every $P' \in Succ(P)$.
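Definition 11 also admits a simple bounded approximation: compare the observable traces of $P \setminus H$ (executions avoiding $H$) with those of $P/H$ (executions with $H$ turned into $\tau$). A sketch, again over depth-bounded traces from the Section 2 code, ignoring complementary actions of $H$ for brevity:

```python
def snni(p, H, depth=8):
    """Bounded check of P \\ H ~t P / H (Definition 11).  Only an
    approximation: traces are cut off at `depth`."""
    ts = traces(p, depth)
    # traces of P \ H: executions of P that avoid H, with tau abstracted away
    restricted = {tuple(x for x in s if x != "tau")
                  for s in ts if not any(x in H for x in s)}
    # traces of P / H: all executions, with H-actions hidden like tau
    hidden = {tuple(x for x in s if x != "tau" and x not in H) for s in ts}
    return restricted == hidden

print(snni(Prefix("h", Prefix("l", Nil())), {"h"}))          # False
print(snni(Choice(Prefix("l", Nil()),
                  Prefix("h", Prefix("l", Nil()))), {"h"}))  # True
```

In the first process the hidden variant can exhibit $l$ while the restricted one cannot; in the second, the $l$-branch masks the $h$-branch, so both sides show the same observable traces.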
The persistent variant of the SNNI property is stronger than SNNI itself, as expressed by the next proposition.

Proposition 3. $PSNNI_H \subset SNNI_H$.

Proof. Clearly $PSNNI_H \subseteq SNNI_H$. Let $P = (l.l.Nil + h.e.(h.l.Nil + l.Nil))$. Then it is easy to check that $P \notin PSNNI_H$ but $P \in SNNI_H$.

Now we are ready to formulate the relationship between the PSNNI property and the set of gained actions under a constant dynamic security policy.

Proposition 4. If $P \in PSNNI_H$ then $g^O_{D_H}(P) = \emptyset$ for the constant dynamic policy $D_H$ which assigns the set $H$ to every process and sequence of actions.

Proof. The main idea. Let $P \in PSNNI_H$ and suppose that $g^O_{D_H}(P) \neq \emptyset$. Hence there exist $P' \in Succ(P)$, a subsequence $o'$ of some observation $o$ from $O$ with $o' \in Trt_H(P')$, and $h \in H$ such that $P' \not\stackrel{o'}{\Rightarrow}_{H \setminus \{h\}}$. But then there exists a sequence $s$ which contains $o'$ and $h$ such that $s \in Trt(P'/H)$ but $s \notin Trt(P' \setminus H)$, i.e. $P' \setminus H \not\approx_t P'/H$, i.e. $P' \notin SNNI_H$, and hence $P \notin PSNNI_H$.

The inverse of the previous proposition does not hold, as the following example shows.

Example 2. Let $P = \sum_{1 \le i \le n} h_i.l.Nil$ and $H = \{h_1, \ldots, h_n\}$. Then $g^O_{D_H}(P) = \emptyset$, but $P$ does not have the PSNNI property, since $P \setminus H \not\approx_t P/H$. Indeed, $P \setminus H$ cannot perform the sequence of actions $\tau.l$ while $P/H$ can perform it, and an intruder seeing $l$ can deduce that a private action was performed.

Corollary. Let $D \preceq D_H$ and $P \in PSNNI_H$. Then $g^O_D(P) = \emptyset$.

Proof. The proof follows from Propositions 2 and 4.

Now we define the sets of excluded private actions, first for a single observation and later for a set of observations.

Definition 12. Let $P \in TPA$, let $s \in Trt(P)$, and let $s = s_1.\ldots.s_k$ be divided into intervals exactly as in Definition 8, with $D(P_i, \epsilon) = H_i$, $H_i \neq H_{i+1}$, and $\tilde l = \tilde l_1.\ldots.\tilde l_k$ where $\tilde l_i = s_i|_{Actt \setminus H_i}$. Then we define

$$e_D(P, \tilde l_1) = e_{H_1}(P, \tilde l_1)$$
$$e_D(P, \tilde l_1.\ldots.\tilde l_{n+1}) = (e_D(P, \tilde l_1.\ldots.\tilde l_n) \cap H_{n+1}) \cup e_{H_{n+1}}(P_{n+1}, \tilde l_{n+1}).$$

Definition 13. Let $P \in TPA$. By $e^O_D(P)$ we denote the set of private actions whose occurrence in $P$ can be excluded by an intruder observing sequences of public actions from $O$ under the dynamic security policy $D$:

$$e^O_D(P) = \bigcup_{\tilde l \in O} e_D(P, \tilde l).$$

We say that no private information can be excluded by observing $P$ through $O$ if $e^O_D(P) = \emptyset$.

For excluded actions we can formulate a property similar to the one which holds for gained actions; the proof is also similar.

Proposition 5. For every $P$ and dynamic policies $D, D'$ such that $D' \preceq D$ it holds that $e^O_{D'}(P) \subseteq e^O_D(P)$.

There is no direct correlation between the sets $g_D(P, \tilde l)$ and $e_D(P, \tilde l)$, since there are processes for which the former set is empty and the latter nonempty, and vice versa. If both of them are empty, this means that an intruder can learn practically nothing about private actions by observing process $P$ perform $\tilde l$ under the dynamic security policy $D$. In some sense $g_D(P, \tilde l)$ and $e_D(P, \tilde l)$ are complementary, as stated in the following proposition.

Proposition 6. For every process $P$ it holds that $g_D(P, \tilde l) \cap e_D(P, \tilde l) = \emptyset$ and $\emptyset \subseteq g_D(P, \tilde l) \cup e_D(P, \tilde l) \subseteq \bigcup_{s \sqsubseteq \tilde l} D(P, s)$.

Proof. Let $h \in g_D(P, \tilde l)$. We know that every execution of the sequence of visible actions $\tilde l$ has to contain $h$, i.e. if $P' \stackrel{\tilde l}{\Rightarrow}_M$ for some $P' \in Succ(P)$ then $h \in M$. That means $h \notin H_i \setminus M$, i.e. $h \notin e_D(P, \tilde l)$. As regards the second part of the proposition, let us consider the process $P = l.Nil + h.l.Nil$. We have $g(P, l) = e(P, l) = \emptyset$. If we consider $H = \{h\}$ then we see that $g_{D_H}(P, l) \cup e_{D_H}(P, l) = \emptyset \neq H$, i.e. the inclusions cannot be replaced by equalities in general.
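The recurrences of Definitions 8 and 12 share one shape: at each policy change, previously accumulated actions survive only if they are still classified under the new policy, and the actions learned in the new interval are added. A small sketch of this fold, taking the per-interval step sets as given (they could be computed, e.g., with the `gained` function above):

```python
def fold_dynamic(segments):
    """Fold of Definitions 8/12: `segments` is a list of (H_i, step_set) pairs,
    where step_set is the set gained (or excluded) in the i-th interval under
    the then-current policy H_i."""
    acc = set()
    for H_i, step_set in segments:
        # acc & H_i: declassified actions drop out of the intruder's knowledge
        acc = (acc & H_i) | step_set
    return acc

# Illustrative run: h1 gained in interval 1, then declassified in interval 2
print(fold_dynamic([({"h1", "h2"}, {"h1"}),
                    ({"h2"},       set())]))   # set(): h1 was declassified
```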
We could further quantify levels of security by relating the sizes of $g^O_D(P)$ and $e^O_D(P)$ to the sizes of $O$ and $D$, as suggested at the end of the previous section.

5 Limited Intruders and Variants of Dynamic Security Policies

In this section we assume intruders who are aware of the dynamicity of the security policy and try to learn as much as possible with a limited amount of memory in which the obtained information can be recorded. That means that the intruder tries to record also invisible but declassified actions, for the case that they become classified in the future. First, let us consider three types of actions: low level (public) actions, which are always visible, and the rest of the actions, which are called invisible ($I$). Moreover, the invisible actions may contain the private (high level) actions, i.e. $A = L \cup I$, $L \cap I = \emptyset$ and $H \subseteq I$. Now we consider dynamic security policies for which $D(P, s) \subseteq I$. That means that neither the set of visible nor the set of invisible actions is changed by $D$; only the set of high level actions can change, but these actions are always invisible. We assume that $|I| \ge n$.

We suppose intruders who try to learn information about all classified actions, but also about declassified actions which are not visible at the moment, for the case that they become classified in the future. Moreover, we assume that the intruder has only limited storage to record the obtained information; hence, if the amount of this information exceeds the memory capacity, some part of the already obtained information has to be forgotten. To model this we define a mapping which assigns to a given set of classified actions $M$, a storage capacity $n$ and a given set $N$ a set of subsets of $N$ of size $n$ (or less, if the storage capacity is not reached), preferably containing actions from $M$. The formal definition follows.

Definition 14. $F^n_M(N) = \{N' \mid N' = N$ if $|N| \le n$; otherwise, if $|N \cap M| \ge n$ then $N' \subseteq N \cap M$ with $|N'| = n$, and if $|N \cap M| < n$ then $N' \subseteq N$ with $N \cap M \subseteq N'$ and $|N'| = n\}$.

The mapping $F^n_M$ can be naturally extended to sets of sets. Let $T \subseteq 2^I$; then $F^n_M(T) = \bigcup_{N \in T} F^n_M(N)$.

Now we reformulate Definitions 3, 8 and 9 for limited intruders.

Definition 15. Let $P \in TPA$ and $\tilde l \in Trt_I(P)$. Then the set of private actions which can be gained about $P$ by publicly observing $\tilde l$ is defined as follows:

$$g_H(P, \tilde l, n) = F^n_H(\{h \mid h \in I,\ P \not\stackrel{\tilde l}{\Rightarrow}_{I \setminus \{h\}}\}).$$

Definition 16. Let $P \in TPA$, let $s \in Trt(P)$, and let $s = s_1.\ldots.s_k$ be divided into intervals exactly as in Definition 8, with $D(P_i, \epsilon) = H_i$, $H_i \neq H_{i+1}$, and $\tilde l = \tilde l_1.\ldots.\tilde l_k$ where $\tilde l_i = s_i|_{Actt \setminus H_i}$. Then we define

$$g_D(P, \tilde l_1, n) = g_{H_1}(P, \tilde l_1)$$
$$g_D(P, \tilde l_1.\ldots.\tilde l_{n+1}, n) = F^n_{H_{n+1}}\big((g_D(P, \tilde l_1.\ldots.\tilde l_n) \cap H_{n+1}) \cup g_{H_{n+1}}(P_{n+1}, \tilde l_{n+1})\big).$$

Definition 17. Let $P \in TPA$. By $g^O_D(P, n)$ we denote the set of private actions whose occurrence in $P$ can be gained (detected) by an intruder observing sequences of public actions from $O$ under the dynamic security policy $D$:

$$g^O_D(P, n) = \bigcup_{\tilde l \in O} g_D(P, \tilde l, n).$$

We say that no private information can be gained by observing $P$ through $O$ if $g^O_D(P, n) = \emptyset$.
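Definition 14 is directly implementable. The following sketch returns all admissible memory contents for a given classified set $M$, capacity $n$ and learned set $N$, using frozensets of action names:

```python
from itertools import combinations

def F(M, n, N):
    """F^n_M(N) from Definition 14: all size-n memory contents an intruder
    can keep from N, preferring currently classified actions (those in M)."""
    N = frozenset(N)
    if len(N) <= n:
        return {N}                       # everything fits in memory
    classified = N & frozenset(M)
    if len(classified) >= n:             # classified actions alone overflow
        return {frozenset(c) for c in combinations(sorted(classified), n)}
    # keep all classified actions, pad with declassified-but-invisible ones
    pad = n - len(classified)
    rest = sorted(N - frozenset(M))
    return {classified | frozenset(c) for c in combinations(rest, pad)}

print(F({"h1", "h2"}, 2, {"h1", "h2", "h3"}))
# {frozenset({'h1', 'h2'})}: both classified actions are kept, h3 is forgotten
```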
Note that in Definition 17 we do not apply $F^n_M$ again, since the whole concept is based on one-time attacks. If $M \in g^O_D(P, n)$ then (there is a possibility that) an intruder can gain the actions from $M$. Hence $E = \bigcup_{M \in g^O_D(P, n)} M$ is the set of possibly gained actions.

Clearly, with a bigger memory an intruder can learn more, as stated by the following proposition.

Proposition 7. Let $n \le m$. Then for every $M \in g^O_D(P, n)$ there exists $M' \in g^O_D(P, m)$ such that $M \subseteq M'$.

Proof. The main idea. It is clear from Definition 14 that the mapping $F^n_M$ is monotonic with respect to the parameter $n$.

If $n$ is sufficiently large, limited and unlimited intruders can learn the same, as stated by the following proposition.

Proposition 8. Let $|\bigcup_s D(P, s)| \le n$ for every $P$ and $s$. Then $g^O_D(P, n) = g^O_D(P)$.

Proof. The main idea. From Definition 14 we see that $F^n_M$ is the identity function if $|\bigcup_s D(P, s)| \le n$.

For $|\bigcup_s D(P, s)| > n$ we cannot say, in general, whether $g^O_D(P, n)$ is equal to $g^O_D(P)$ or not. We cannot say this even if $|D(P, s)| > n$ for some $s$.

We could also define sets of excluded actions for limited intruders, in a way similar to what was done for gained actions. Instead of that, we concentrate on special cases of dynamic security policies: first we define finite dynamic security policies and then time dynamic security policies.

Definition 18. We say that a dynamic security policy is finite iff there exists a set $F = \{H_1, \ldots, H_m\}$ such that for every $P$ and $s$ we have $D(P, s) \in F$.

Definition 19. By a time dynamic security policy we mean a dynamic security policy such that for every $P$ and every $s, s'$ with $s|_{\{t\}} = s'|_{\{t\}}$ it holds that $D(P, s) = D(P, s')$.

In the case of a finite dynamic security policy $D$, we can design a process $P_D$ which "computes" both the sets of excluded and gained private actions for finite state processes, in the following sense: $h.g \in Trt(C[P]|P_D)$ iff $h \in g^O_D(P, n)$ and $h.e \in Trt(C[P]|P_D)$ iff $h \in e^O_D(P, n)$, where $C[\,]$ is an appropriate process context (see [Gru10]). In this case we obtain decidability of the sets $g^O_D(P)$ and $e^O_D(P)$ (and their derivatives) for finite state processes, while they are undecidable in general. Time dynamic security policies are useful for the formalization of systems which allow partial classification/declassification of actions within some time windows. We leave the study of these two specific classes of dynamic security policies for further research.
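A time dynamic security policy (Definition 19) can be pictured as a function of the number of elapsed $t$ actions only. The following sketch is one illustrative encoding; the `windows` table mapping a tick count to the set classified from that tick on is our own device, not the paper's notation:

```python
def time_policy(windows):
    """Build D(P, s) that depends only on s|{t}, i.e. on elapsed time
    (Definition 19).  `windows` maps a number of elapsed ticks to the set
    of actions classified from that tick on."""
    changes = sorted(windows)
    def D(p, s):
        ticks = sum(1 for x in s if x == "t")   # s|{t}: only ticks matter
        current = set()
        for k in changes:
            if ticks >= k:
                current = windows[k]
        return current
    return D

# h1 is classified for the first two time units, h2 afterwards
D = time_policy({0: {"h1"}, 2: {"h2"}})
print(D(None, ("l", "h1", "t")))       # {'h1'}: one tick elapsed
print(D(None, ("t", "l", "t", "t")))   # {'h2'}: three ticks elapsed
```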
6 Conclusions

We have presented several security concepts based on information flow and dynamic security policies. They express which set of private actions was performed (gained sets) or which set of private actions can be excluded (excluded sets) by an intruder observing the system's public actions, taking into account dynamic security policies which can change the set of high level actions during runtime. The concepts offer a finer security notion with respect to traditional ones, which usually only ensure that an intruder cannot learn that some private action was performed (for example, the persistent variant of SNNI). Moreover, the sets of excluded and gained actions can be used for the reduction of the space of possible private actions, and if this reduction is significant then it really threatens the system's security. The concepts of gained and excluded private actions are complementary. Roughly speaking, only systems for which both the gained and the excluded sets of private actions are empty can be considered fully secure with respect to a given dynamic security policy $D$ and set of observations $O$. But since this is a very rare situation, we have suggested how to express the corresponding level of security numerically, by relating the sizes of the sets of gained or excluded actions to the set of all appropriate actions and to the size of $O$. That means that if the resulting measure is small enough, the system can still be considered secure with respect to some given requirements. Later we introduced the concept of limited intruders, i.e. intruders who have only limited storage to record the information obtained by observing public system activities. Such intruders may try to record also declassified but invisible actions, for the case that they become classified during execution. We had to resolve the case when not all information on invisible actions can be recorded, due to the lack of memory space at the intruder's disposal.

In the future we plan, besides the already mentioned research, to investigate also additional covert channels which could be exploited by an intruder. Particularly interesting are termination and divergence channels. They can be exploited by an intruder who can learn that a system is still working but does not react (for example, by power consumption). It might happen, for example, that a system is completely secure if an intruder cannot see termination (or divergence), and insecure otherwise.

References

[CHM07] Clark, D., S. Hunt and P. Malacaria: A Static Analysis for Quantifying the Information Flow in a Simple Imperative Programming Language. Journal of Computer Security, 15(3), 2007.
[CMS09] Clarkson, M.R., A.C. Myers and F.B. Schneider: Quantifying Information Flow with Beliefs. Journal of Computer Security, to appear, 2009.
[DHS15] van Delft, B., S. Hunt and D. Sands: Very Static Enforcement of Dynamic Policies. In Principles of Security and Trust, LNCS 9036, 2015.
[FGM00] Focardi, R., R. Gorrieri and F. Martinelli: Information Flow Analysis in a Discrete-Time Process Algebra. Proc. 13th Computer Security Foundations Workshop, IEEE Computer Society Press, 2000.
[GM04] Gorrieri, R. and F. Martinelli: A Simple Framework for Real-Time Cryptographic Protocol Analysis with Compositional Proof Rules. Science of Computer Programming, 50(1-3), 2004.
[GM82] Goguen, J.A. and J. Meseguer: Security Policies and Security Models. Proc. of IEEE Symposium on Security and Privacy, 1982.
[Gro90] Groote, J.F.: Transition System Specifications with Negative Premises. In Baeten, J.C.M. and Klop, J.W. (eds.), CONCUR '90, LNCS 458, Springer-Verlag, Berlin, 1990.
[Gru11] Gruska, D.P.: Gained and Excluded Private Actions by Process Observations. Fundamenta Informaticae, 109(3), 2011.
[Gru10] Gruska, D.P.: Process Algebra Contexts and Security Properties. Fundamenta Informaticae, 102(1), 2010.
[Gru09] Gruska, D.P.: Quantifying Security for Timed Process Algebras. Fundamenta Informaticae, 93(1-3), 2009.
[Gru08] Gruska, D.P.: Probabilistic Information Flow Security. Fundamenta Informaticae, 85(1-4), 2008.
[Mil89] Milner, R.: Communication and Concurrency. Prentice-Hall International, New York, 1989.