=Paper= Test Based Security. CEUR-WS Vol-928, https://ceur-ws.org/Vol-928/0135.pdf (dblp: https://dblp.org/rec/conf/csp/Gruska12a)
                         Test based security ?

                                 Damas P. GRUSKA

                     Institute of Informatics, Comenius University,
                      Mlynska dolina, 842 48 Bratislava, Slovakia,
                                 gruska@fmph.uniba.sk.



        Abstract. Different techniques for expressing the amount of information
        on secret data which can be obtained by process testing are presented.
        They are based on (probabilistic) process algebra and (probabilistic) test-
        ing. Besides tested noninterference they express certainty about sets of
        private actions whose execution is guaranteed by a given test and sets of
        actions whose execution can be excluded by a given test. Moreover, we
        relate the obtained information to the size of the test.

        Keywords: probabilistic process algebras, information flow, security


1     Introduction

We propose formalisms for the analysis of systems of various nature specified by
process algebras. They allow us to formalize security properties based on an
absence of information flow between private and public system activities.
    The presented approach combines several ideas that emerged from security theory.
We exploit the idea of an absence of information flow between public and private
system behaviour (see [GM82]). This concept has been exploited many times
in various formalisms. For example, the security property called Bisimulation Strong
Nondeterministic Non-Interference requires that forbidding and hiding of private
actions cannot be distinguished (by bisimulation). In our approach we exploit this
idea but we weaken it somewhat by the more realistic requirement that forbidding
and hiding of private actions cannot be distinguished by a (finite state) test, i.e.
we exploit a kind of testing equivalence. In [Gru11] we have studied information
flows by defining two sets: the set of private actions whose execution is guaranteed
by a given observation of public actions and the set of actions whose execution is
excluded by a given observation. Here we apply this idea and define the sets of
gained and excluded actions by a given test instead of an observation.
    Traditional security properties could be criticized for being either too restric-
tive or too benevolent.
    Even in the testing scenario, a standard access control process turns out to be
insecure, since there is always some (even very small) information flow to an
attacker who tries to learn a password - at least (s)he can learn what is not
?
    Work supported by the grant VEGA 1/1333/12.
the correct password. On the other hand, it can happen that the set of excluded
actions or the set of gained actions (or both, viewed as sets of possible passwords)
are empty, but the membership of some possible password in one of them is very
likely, though not certain. There
are several ways to overcome these disadvantages. An amount of leaked
information can be expressed by means of Shannon's information theory, as
was done, for example, in [CHM07,CMS09] for simple imperative languages
and in [Gru08] for process algebra. Another possibility is to exploit probability
theory, as was done for process algebras in [Gru09]. The resulting techniques
lead to quantifications of how many bits of private information can leak, or how
probable it is that an intruder can learn some secret property of processes. Here
we exploit probabilistic process algebras (either for the test, the tested process, or
both) to express probabilities of passing a test or probabilities of membership
of elements in the sets of gained or excluded actions. Moreover, we relate the obtained
information to the size of the test. The presented testing approach is strictly stronger
than that of [Gru11], which is based on traces. On the other hand, the bisimulation
based approach, which is stronger still, is often too strong and does not correspond
to realistic intruders. Moreover, testing allows us, besides other advantages,
to express the security of a system with respect to the size of the test which can
jeopardize this security. Hence the resulting level of security gives us relevant
information on real (practical) system security.
    The paper is organized as follows. In Section 2 we define the probabilistic
process algebra and in Section 3 our testing scenario. In Sections 4 and 5 we define
test based noninterference and probabilistic noninterference, respectively.


2   Probabilistic Process Algebra

In this section we define the Probabilistic Process Algebra, pCCS for short, which
is based on Milner's CCS (see [Mil89]). First we assume a set of atomic action
symbols A not containing the symbol τ and such that for every a ∈ A there exists
a complementary action ā ∈ A, where complementation is an involution (the
complement of ā is again a). We define Act = A ∪ {τ}. We assume that a, b, . . .
range over A and u, v, . . . range over Act. Assume the signature Σ = ∪_{n∈{0,1,2}} Σn, where

             Σ0 = {Nil}
             Σ1 = {x. | x ∈ Act} ∪ {[S] | S is a relabeling function}
                  ∪ {\M | M ⊆ A}
             Σ2 = {|, +}

with the agreement to write unary action operators in prefix form, the unary
operators [S], \M in postfix form, and the rest of the operators in infix form.
Relabeling functions S : Act → Act are such that S(ā) is the complement of S(a)
for a ∈ A and S(τ) = τ.
   The set of CCS terms over the signature Σ is defined by the following BNF
notation:
                     P ::= X | op(P1, P2, . . . , Pn) | µXP
where X ∈ Var, Var is a set of process variables, P, P1, . . . , Pn are CCS terms,
µX− is the binding construct, and op ∈ Σ.
    We will use the usual definition of open and closed terms, where µX is the
only binding operator. Closed terms which are guarded (each occurrence of X
is within some subexpression u.A) are called CCS processes. Note that Nil will
often be omitted from process descriptions; for example, instead of a.b.Nil we
will write just a.b. Structural operational semantics for processes is given by
labeled transition systems. The set of terms represents the set of states and
labels are actions from Act (see [Mil89]).
    The transition relation → is a subset of CCS × Act × CCS. We write
P −x→ P′ instead of (P, x, P′) ∈ → and say that P cannot perform x if there
is no P′ such that P −x→ P′. The meaning of the expression P −x→ P′ is that
the term P can evolve to P′ by performing action x; by P −x→ we denote that
there exists a term P′ such that P −x→ P′.
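A small concrete illustration of this relation may help: the following Python sketch (our own, not part of the paper's formalism) stores a finite LTS as a set of (state, action, state) triples, with state names standing for the subterms they represent.

```python
# Illustrative sketch: a finite LTS as a set of (state, action, state) triples.
def successors(lts, state, action):
    """All P' with state -action-> P'."""
    return {q for (p, a, q) in lts if p == state and a == action}

def can_perform(lts, state, action):
    """True iff there exists P' with state -action-> P'."""
    return bool(successors(lts, state, action))

# LTS of a.b.Nil + a.c.Nil; states are named by the subterms they represent
LTS = {
    ("a.b.Nil + a.c.Nil", "a", "b.Nil"),
    ("a.b.Nil + a.c.Nil", "a", "c.Nil"),
    ("b.Nil", "b", "Nil"),
    ("c.Nil", "c", "Nil"),
}
print(successors(LTS, "a.b.Nil + a.c.Nil", "a"))  # {'b.Nil', 'c.Nil'}
print(can_perform(LTS, "Nil", "a"))               # False
```

The choice + shows up as two a-labeled transitions out of the same state.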
    Now we add probabilities to the CCS calculus. We will follow the alternating
model (the approach presented in [HJ90]) which is neither reactive nor generative
nor stratified (see [LN04]); instead it is based on a separation of probabilistic
and nondeterministic transitions and states. Probabilistic transitions are not
associated with actions but are labeled with probabilities. In so called
probabilistic states the next transition is chosen according to a probability
distribution. For example, the process a.(0.3.b.Nil ⊕ 0.7.(a.Nil + b.Nil)) can
perform action a, after which it reaches a probabilistic state; from this state it
can reach with probability 0.3 the state where only action b can be performed,
or with probability 0.7 the state where it can perform either a or b.
    Formally, to add probabilities to the CCS calculus we introduce a new operator
⊕_{i∈I} qi.Pi, the qi being real numbers in (0, 1] such that Σ_{i∈I} qi = 1.
Processes which can perform a probabilistic transition as their first action will be
called probabilistic processes or states (to stress that P is a non-probabilistic
process we will sometimes write PN if necessary). Hence we require that all the
processes Pi in ⊕_{i∈I} qi.Pi and in P1 + P2 are non-probabilistic ones. By pCCS
we will denote the set of all probabilistic and non-probabilistic processes, and all
definitions and notations for CCS processes are extended to pCCS ones. We need
new transition rules for pCCS processes.

                PN −1→ PN   (A1)          ⊕_{i∈I} qi.Pi −qi→ Pi   (A2)

                       P −q→ P′,  Q −r→ Q′
                       ───────────────────   (Pa)
                       P | Q −q·r→ P′ | Q′


    For probabilistic choice we have the rule A2 and for a probabilistic transition
of two processes running in parallel we have the rule Pa. The technical rule A1
enables the parallel run of probabilistic and non-probabilistic processes by
allowing non-probabilistic processes to perform a −1→ transition, so that the
rule Pa can be applied.
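A quick way to see rules A1 and Pa at work: represent the probabilistic branching of a state as a dictionary from successor states to probabilities (rule A2), give a non-probabilistic component the trivial distribution {Q: 1} (rule A1), and multiply componentwise for the parallel composition (rule Pa). The following Python sketch is our own illustration under these assumptions.

```python
def parallel_step(dist_p, dist_q):
    """Rule Pa: P|Q -q*r-> P'|Q' whenever P -q-> P' and Q -r-> Q'."""
    return {(p, q): rp * rq for p, rp in dist_p.items()
                            for q, rq in dist_q.items()}

# 0.3.P1 (+) 0.7.P2 running in parallel with a non-probabilistic Q
P = {"P1": 0.3, "P2": 0.7}
Q = {"Q": 1.0}  # rule A1: Q -1-> Q
combined = parallel_step(P, Q)
print(combined[("P1", "Q")])                 # 0.3
print(round(sum(combined.values()), 10))     # 1.0
```

The probabilities of the combined transitions again sum to 1, as the rules require.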
    Introducing probabilities to process algebras usually causes several technical
complications. For example, an application of the restriction operator to a
probabilistic process may lead to unwanted deadlock states, or to a situation
where the sum of the probabilities of all outgoing transitions is less than 1. A
normalization is usually applied to overcome such situations. We do not need to
resolve them at the level of the pCCS calculus, since we will use only relative
probabilities of sets of computations. To compute these probabilities
normalization will also be exploited, but only as the very last step.

    To express what an observer can see from system behaviour we will define
modified transitions ⇒M which hide actions from M (together with τ and
probabilities). Formally, we will write P =x⇒M P′ for M ⊆ A iff
P −s1→ −x→ −s2→ P′ for some s1, s2 ∈ (M ∪ {τ} ∪ (0, 1])*, and for
s = x1.x2. . . . .xn we will write P =s⇒M instead of P =x1⇒M =x2⇒M . . . =xn⇒M.
Instead of ⇒∅ we will write ⇒ and instead of ⇒{h} we will write ⇒h. By ε we
will denote the empty sequence of actions and by s ⊑ s′, s, s′ ∈ (Act ∪ (0, 1])*,
we will denote that s is a prefix of s′. By Sort(P) we will denote the set of
actions from A which can be performed by P, i.e. Sort(P) = {x | P =s.x⇒ for
some s ∈ (Act ∪ (0, 1])* and x ∈ A}. By x̂ we mean x for x ≠ τ and ε otherwise.
    Let s ∈ (Act ∪ (0, 1])*. By s|B we will denote the sequence obtained from s
by removing all actions not belonging to B, and we will write x ∈ s if the
sequence s contains x as an element. We will write s̄ for the sequence obtained
from s by replacing every action with its complementary action, if it exists, and
leaving it as it is otherwise. For example, if s = a.b̄.τ.c we have s̄ = ā.b.τ.c̄.

Definition 1. The set of weak traces of process P with respect to the set M,
M ⊆ A, is defined as TrwM(P) = {s ∈ A* | ∃P′. P =s⇒M P′}. Instead of
Trw∅(P) we will write Trw(P).
    Two processes P and Q are weakly trace equivalent with respect to M
(P ≈wM Q) iff TrwM(P) = TrwM(Q). Again we will write ≈w instead of ≈w∅.
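For a finite acyclic LTS, the weak traces of Definition 1 can be enumerated directly: walk all paths and record only the actions that are neither τ nor in M. A small Python sketch (our own illustration; it assumes an acyclic transition set, otherwise the recursion would not terminate):

```python
from collections import defaultdict

def weak_traces(lts, start, hidden):
    """TrwM: visible action sequences, hiding tau and actions in `hidden`."""
    out = defaultdict(list)
    for p, a, q in lts:
        out[p].append((a, q))
    traces = set()
    def walk(state, acc):
        traces.add(acc)
        for a, q in out[state]:
            walk(q, acc if (a == "tau" or a in hidden) else acc + (a,))
    walk(start, ())
    return traces

# P = h.l.Nil + l.Nil: hiding h makes the two branches indistinguishable
LTS = {("P", "h", "P1"), ("P1", "l", "Nil"), ("P", "l", "Nil")}
print(weak_traces(LTS, "P", {"h"}))   # {(), ('l',)}
print(weak_traces(LTS, "P", set()))   # {(), ('h',), ('h', 'l'), ('l',)}
```

With h hidden both branches produce the same visible traces; with h visible they differ, which is exactly the distinction the noninterference properties below exploit.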

Definition 2. Let (CCS, Act, →) be a labelled transition system (LTS). A
relation < ⊆ CCS × CCS is called a weak bisimulation if it is symmetric and
satisfies the following condition: if (P, Q) ∈ < and P −x→ P′, x ∈ Act, then
there exists a process Q′ such that Q =x̂⇒ Q′ and (P′, Q′) ∈ <. Two processes
P, Q are weakly bisimilar, abbreviated P ≈ Q, if there exists a weak bisimulation
relating P and Q. If it is not required that the relation < is symmetric we call it
a simulation and we say that process P simulates process Q, abbreviated P ≺ Q,
if there exists a simulation relating P and Q.


3   Testing

In this section we define the basic testing scenario which will be applied in the
next sections. We start with M-simulations. We say that process P passes test T
if whenever the test performs an action then the process can perform the
complementary action (optionally together with τ actions and actions from M)
in such a way that the resulting process again passes the resulting test. By a
test T we mean a process which does not contain the τ action and such that
Sort(T) ∩ M = ∅.

Definition 3. Let M ⊂ A. We say that process P passes test T with respect to
M iff there exists a relation <, called an M-simulation, such that whenever
(T, P) ∈ < then for every a ∈ A, if T −a→ T′ then there exists a process P′
such that P =ā⇒M P′ and (T′, P′) ∈ <. We will denote the union of all
M-simulations as ⊲M. If M = ∅ we will write ⊲ instead of ⊲∅.
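On finite-state systems a passing relation of this kind can be computed as a greatest fixpoint: start from all (test state, process state) pairs and repeatedly discard pairs where some test action cannot be answered by a weak complementary move of the process. The Python sketch below is our own finite-state illustration; complementary actions are encoded by a '_bar' suffix and weak moves may interleave τ and actions from the hidden set M.

```python
def comp(a):
    """Complementary action, encoded here by a '_bar' suffix."""
    return a[:-4] if a.endswith("_bar") else a + "_bar"

def weak_succ(lts, state, action, hidden):
    """States reachable by `action` with tau/hidden moves before and after."""
    def closure(states):
        seen, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for p, a, q in lts:
                if p == s and (a == "tau" or a in hidden) and q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen
    pre = closure({state})
    return closure({q for p, a, q in lts if p in pre and a == action})

def passes(test, t0, proc, p0, hidden):
    """Greatest simulation-style passing check in the spirit of Definition 3."""
    t_states = {t0} | {s for tr in test for s in (tr[0], tr[2])}
    p_states = {p0} | {s for tr in proc for s in (tr[0], tr[2])}
    rel = {(t, p) for t in t_states for p in p_states}
    changed = True
    while changed:
        changed = False
        for (t, p) in list(rel):
            for (x, a, t2) in test:
                if x == t and not any((t2, p2) in rel
                        for p2 in weak_succ(proc, p, comp(a), hidden)):
                    rel.discard((t, p))
                    changed = True
                    break
    return (t0, p0) in rel

TEST = {("T", "a", "T1")}                        # the test performs a
PROC = {("P", "h", "P1"), ("P1", "a_bar", "0")}  # process: h, then a-bar
print(passes(TEST, "T", PROC, "P", {"h"}))       # True: h may be skipped over
print(passes(TEST, "T", PROC, "P", set()))       # False: visible h blocks the answer
```

The demo matches the intuition of the definition: with h in the hidden set the process can answer the test's a by a weak ā move; with nothing hidden it cannot.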

   Now we define when two processes are equivalent in the sense that they
cannot be distinguished by a test T.

Definition 4. We define test equivalence with respect to test T and a set M,
M ⊂ A (we will denote it as ≈T,M) as

                        P ≈T,M Q iff (T ⊲M P ⟺ T ⊲M Q).

   We will also need a stronger variant of testing, which requires that every
response to the test passes the testing.

Definition 5. Let M ⊂ A. We say that process P strongly passes test T with
respect to M iff there exists a relation <, called an M-strong simulation, such
that whenever (T, P) ∈ < then for every a ∈ A, if T −a→ T′ then for every P′
such that P =ā⇒M P′ we have (T′, P′) ∈ <.
    We will denote the union of all M-strong simulations as ⊲sM.

Definition 6. We define strong test equivalence with respect to test T and a set
M, M ⊂ A (we will denote it as ≈s,T,M) as

                      P ≈s,T,M Q iff (T ⊲sM P ⟺ T ⊲sM Q).

Example 1. Let T = a.(b.N il + c.N il) and M = {d, d0 }. Then we have P ≈T,M
P 0 and P 6≈s,T,M P 0 for P = τ.d.ā.(d0 .b.a.b.N il + τ.c̄.N il) + a.d.b.d.c.d.N il,
P 0 = τ.d.ā.(d0 .b.a.b.N il + τ.c̄.N il). Note that in general it holds ≈s,T,M ⊂≈T,M .


4    Non-interference
To define non-interference in the process algebra setting we suppose that all
actions are divided into two groups, namely public (low level) actions L and
private (high level) actions H, i.e. A = L ∪ H, L ∩ H = ∅. Moreover, we suppose
that H ≠ ∅ and L ≠ ∅ and that for every h ∈ H, l ∈ L we have h̄ ∈ H, l̄ ∈ L.
To denote sequences of public actions, i.e. sequences consisting of actions from
L, and sequences of private actions from H, we will use the notation l̃, l̃′, . . .
for sequences from L* and h̃, h̃′, . . . for sequences from H*, respectively. The
set of actions could be divided into more than two subsets, which would
correspond to more levels of classification. All the following concepts can be
naturally extended to such a setting.
    First we formally define an absence-of-information-flow property: Bisimulation
Strong Nondeterministic Non-Interference (BSNNI for short, see [FGM00]).
    Process P has the BSNNI property (we will write P ∈ BSNNI) if P\H behaves
like P with all high level actions hidden from an observer. To express this
hiding we introduce the hiding operator P/M, M ⊆ A, for which it holds that if
P −a→ P′ then P/M −a→ P′/M whenever a ∉ M ∪ M̄, and P/M −τ→ P′/M
whenever a ∈ M ∪ M̄. The formal definition of BSNNI follows.

Definition 7. Let P ∈ CCS. Then P ∈ BSN N I iff P \ H ≈ P/H.
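On the LTS level the two operators in Definition 7 are simple transformations: restriction P\M removes all transitions labeled by M ∪ M̄, while hiding P/M relabels them to τ. A hedged Python sketch of these transformations (our own encoding; complements again use a '_bar' suffix):

```python
def comp(a):
    return a[:-4] if a.endswith("_bar") else a + "_bar"

def restrict(lts, m):
    """P \\ M: transitions via M and its complements are forbidden."""
    forbidden = set(m) | {comp(a) for a in m}
    return {(p, a, q) for (p, a, q) in lts if a not in forbidden}

def hide(lts, m):
    """P / M: transitions via M and its complements become internal tau moves."""
    hidden = set(m) | {comp(a) for a in m}
    return {(p, ("tau" if a in hidden else a), q) for (p, a, q) in lts}

# P = h.l.Nil + l.Nil with H = {h}
P = {("P", "h", "P1"), ("P1", "l", "0"), ("P", "l", "0")}
print(sorted(restrict(P, {"h"})))  # only the l transitions survive
print(sorted(hide(P, {"h"})))      # the h transition is relabeled to tau
```

BSNNI then asks whether the two transformed systems are weakly bisimilar; for this particular P they are, since both can weakly perform exactly l.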

    Inspired by the BSNNI property we define our security tests. A process
passes a test if the tester cannot detect communication of the tested process via
actions from M (see Fig. 1; here we assume that Sort(Q) ⊆ M).

Definition 8. Let T be a test and M ⊂ A. We say that CCS process P passes
security test with respect to M (we will write P ∈ STT,M ) if

                                P ≈T,M P/M.



                                                      
                            Fig. 1. Testing scenario




Lemma 1. Let P ∈ BSN N I. Then we have P ∈ STT,H for every test T .

Proof. Sketch. Let P ∈ BSNNI. Then P\H and P/H cannot be distinguished
by weak bisimulation and hence cannot be distinguished by test equivalence with
respect to any test and the set of high level actions.

Lemma 2. Let P 6∈ BSN N I. Then there exists finite state test T such that
P 6∈ STT,H .

Proof. Sketch. Suppose P ∉ BSNNI. This means that P\H ≉ P/H, i.e. if one
tries to establish the corresponding bisimulation then this has to fail. The
required test mimics that failure.

   In a similar way we can prove the following lemma.

Lemma 3. There does not exist a finite state test T such that P ∈ ST_{T,H}
would imply P ∈ BSNNI.
4.1    Gained and excluded private actions

First we define the set of private actions whose occurrence can be learned by a
test T.

Definition 9. We say that test T can gain information on the occurrence of high
level actions of a CCS process P (we will denote their set as g(P, T)) where

                 g(P, T) = ∪_{x∈H} {x | P ∈ ST_{T,H\{x}} ∧ P ∉ ST_{T,H}}.

    According to Definition 9 the set g(P, T) represents the private actions whose
performance can be deduced by test T. By a stronger test we can learn more, as
stated by the following proposition.

Proposition 1. Let T, T′ be two tests such that T′ ≺ T. Then we have
g(P, T′) ⊆ g(P, T) for every process P.

Proof. Main idea. Let T′ ≺ T; then every action performed by T′ can be simulated
by T. Hence every action whose occurrence can be detected by T′ can be detected
also by T.

    In [Gru11] we have defined the set of actions gained by observing a sequence
of public actions l̃. We repeat its definition.

Definition 10. Let P ∈ CCS and l̃ ∈ TrwH(P). Then the set of private actions
whose occurrence can be gained about P by the public observation l̃ is defined as
follows:

                        g(P, l̃) = {h | h ∈ H, not P =l̃⇒H\{h}}.
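On finite acyclic systems this definition can be evaluated directly: h is gained by observing l̃ iff P has no weak l̃-trace once every private action except h is hidden. A Python sketch of this check (our own illustration, reusing a simple trace enumeration; it assumes an acyclic LTS, and h′ is written h2):

```python
from collections import defaultdict

def weak_traces(lts, start, hidden):
    """Visible action sequences of an acyclic LTS, hiding tau and `hidden`."""
    out = defaultdict(list)
    for p, a, q in lts:
        out[p].append((a, q))
    traces = set()
    def walk(state, acc):
        traces.add(acc)
        for a, q in out[state]:
            walk(q, acc if (a == "tau" or a in hidden) else acc + (a,))
    walk(start, ())
    return traces

def gained(lts, start, high, obs):
    """g(P, obs): private h whose occurrence is forced by observing obs."""
    return {h for h in high
            if obs not in weak_traces(lts, start, high - {h})}

# P' = l1.h.h2.l2.Nil + l1.h.l2.Nil, as in Example 2 below
P1 = {("P", "l1", "A"), ("A", "h", "B"), ("B", "h2", "C"), ("C", "l2", "0"),
      ("P", "l1", "D"), ("D", "h", "E"), ("E", "l2", "1")}
print(gained(P1, "P", {"h", "h2"}, ("l1", "l2")))  # {'h'}
```

Observing l1.l2 forces h (both branches perform it) but not h2 (the second branch avoids it), matching the example in the text.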

    As is clear from the following lemma and example, the notion presented here
is strictly stronger.

Lemma 4. Let l̃ ∈ TrwH(P). Then it holds that g(P, l̃̄.Nil) = g(P, l̃), where
l̃̄ denotes the complementary sequence of l̃.

Proof. Sketch. In the case of the simple test T = l̃̄.Nil we can obtain exactly the
same information as by observing l̃.

Corollary 1. Let l̃ ∈ TrwH(T); then g(P, l̃) ⊆ g(P, T).

Proof. Let T′ = l̃̄.Nil. Then we have from Proposition 1 and Lemma 4 that
g(P, l̃) ⊆ g(P, T).

Example 2. Let P = l1.h.l2.Nil + l1.l2.Nil and P′ = l1.h.h′.l2.Nil + l1.h.l2.Nil.
Let l̃ = l1.l2; then we have g(P, l̃) = ∅ and g(P′, l̃) = {h}. Let P″ = l.l1.Nil +
l.(h.l1.Nil + l2.Nil) and T = l̄.(l̄1.Nil + l̄2.Nil); then we have g(P″, T) = {h}
but g(P″, l̃) = ∅ for every l̃.
   By testing a process one can obtain information not only about actions which
had to be performed but also about actions which can be excluded (they could
not have been performed). We start with a motivating example taken from [Gru11].

Example 3 (Access control process). Let Psw be a set of all possible passwords.
Let us consider a simple access control process defined as follows (the set of
high level actions HPsw consists of the actions hw, w ∈ Psw, and the actions
l̄login, l̄access_denied, lw, w ∈ Psw, are low level actions).

            P = lv.hv.l̄login.Nil + Σ_{u∈Psw, u≠v} lu.hv.l̄access_denied.Nil

   This process could represent, for example, access to a safe-deposit box where
no name of a bank client is required, just a private key (or pin code, i.e. some
password in general). An attacker tries to guess the correct password. (S)he
enters u, which is modeled by performing the low level action lu ((s)he can
see/observe what (s)he tries - the public action lu can be "observed"). The
guessed password u is compared with the correct one (v, represented by the high
level action hv, which is unknown to the attacker). The attacker behaves as the
test

                   T = Σ_{w∈V} l̄w.(llogin.Nil + laccess_denied.Nil)

where V ⊆ Psw. If |V| is much smaller than |Psw| it is very unlikely that V
contains the correct password, but even if V really does not contain the correct
password, we can gain some information about the correct one by the test T -
since the correct one has to be from the reduced set Psw \ V. Note that we still
have g(P, T) = ∅ and hence to describe the knowledge obtained by test T we need
a new concept.
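The knowledge reduction in this example can be illustrated numerically: every unsuccessful guess from V removes one candidate, so after the test the attacker's candidate set shrinks from Psw to Psw \ V. A small Python sketch (our own toy attacker model; all names are hypothetical):

```python
def remaining_candidates(psw, guesses, secret):
    """Candidates still possible after access_denied for each failed guess."""
    denied = {u for u in guesses if u != secret}
    return set(psw) - denied

PSW = {"p1", "p2", "p3", "p4", "p5"}
print(sorted(remaining_candidates(PSW, {"p1", "p2"}, "p5")))  # ['p3', 'p4', 'p5']
```

If the reduction from Psw to the remaining candidates is significant, the attack is close to succeeding even though the gained set is empty.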

    To obtain negative information on action performance we need to exploit
strong testing (see Definition 5).

Definition 11. Let T be a test and M ⊂ A. We say that CCS process P passes
strong security test with respect to M (we will write P ∈ sSTT,M ) if

                                      P ≈s,T,M P/M.

     Now we can define the set of actions whose occurrence can be excluded by a
test T.

Definition 12. Let T be a test and P a process. We define the set of high level
actions (denoted as e(P, T)) whose occurrence can be excluded by test T as

                e(P, T) = ∪_{x∈H} {x | P ∈ sST_{T,H\{x}} ∧ P ∉ sST_{T,H}}.
    Note that for sets of excluded actions one can formulate properties similar to
those for gained ones. The concepts of the gained and excluded sets of private
actions are complementary, as stated in the following proposition. Roughly
speaking, only systems for which both sets - gained and excluded private
actions - are empty could be considered fully secure.
Proposition 2. For every process P and every test T it holds that
g(P, T) ∩ e(P, T) = ∅ and ∅ ⊆ g(P, T) ∪ e(P, T) ⊆ H.
Proof. Main idea. Suppose that there exists h, h ∈ g(P, T) ∩ e(P, T), for some
P and T. This high level action is both excluded and gained by the test, which
contradicts the way these sets are defined. On the other hand, we can find a
process P and a test T such that no action is gained or excluded, and also a
process and a test such that every private action is either gained or excluded,
so both inclusions can be attained.
    Let us return to Example 3. The security of process P (how much information
on P can be obtained by a test T) depends on the "size" of T. For example, if T
could try all possible passwords then P would not be secure with respect to such
a test. On the other hand, if it can try only a small fragment of the passwords,
then P is usually considered to be reasonably secure. Formally we define the size
of a process as follows.
Definition 13. Size of process P (denoted |P |) is defined as follows:


                                   |N il| = 0
                                   |x.P | = 1 + |P |
                               |P + Q| = |P | + |Q|
                                  |P |Q| = |P | + |Q|
                               |P \ M | = 1 + |P |
                                 |P [S]| = 1 + |P |
                                |µXP | = 1 + |P |

    Now we are ready to define quantifications of levels of security.
Definition 14. The level of secrecy of P with respect to test T for gained
(excluded) actions (denoted by LSg and LSe, respectively) can be expressed as
LSg = (|g(P, T)| · |H|)/|T| and LSe = (|e(P, T)| · |H|)/|T|.
Example 4. Let us return to Example 3. We have LSe = |Psw|, i.e. the level of
secrecy of P with respect to test T for excluded actions is given by the size of
the set of passwords.


5    Probabilistic noninterference
The above mentioned properties are qualitative ones. In Definition 14 we proposed
a quantification as a numerical expression of how much information can be
obtained with respect to the sizes of H and T, but it is based on a qualitative
basis: by a test we either obtain a piece of information or we do not. In this
section we extend this approach by directly employing probabilistic tests and
testing.
     First we need some preparatory work. Let P be a pCCS process and let
P −x1→ P1 −x2→ P2 −x3→ . . . −xn→ Pn, where xi ∈ Act ∪ (0, 1] for every i,
1 ≤ i ≤ n. The sequence P.x1.P1.x2. . . . .xn.Pn will be called a finite
computational path of P (path, for short); its label is the subsequence of
x1. . . . .xn consisting of those elements which belong to Act, i.e.
label(P.x1.P1.x2. . . . .xn.Pn) = x1. . . . .xn|Act, and its probability is defined as
the product of all probabilities contained in it, i.e.
Prob(P.x1.P1.x2. . . . .xn.Pn) = 1 × q1 × . . . × qk where x1. . . . .xn|(0,1] =
q1. . . . .qk. The multiset of finite paths of P will be denoted by Path(P). For
example, the path (0.5.a.Nil ⊕ 0.5.a.Nil).0.5.(a.Nil).a.(Nil) is contained in
Path(0.5.a.Nil ⊕ 0.5.a.Nil) two times. There exist a few techniques for defining
this multiset. For example, in [SL95] a technique of schedulers is used to resolve
the nondeterminism, and in [GSS95] all transitions are indexed and hence paths
can be distinguished by different indexes. In the former case, every scheduler
defines (schedules) a particular computation path and hence two different
schedulers determine different paths; in the latter case, the index records which
transition was chosen in the case of several possibilities. The set of indexes for
a process P consists of sequences i1. . . . .ik where ij ∈ {0, . . . , n} ∪
{0, . . . , n} × {0, . . . , n}, where n is the maximal cardinality of I for subterms
of P of the form ⊕_{i∈I} qi.Pi.
An index records how a computation path of P can be derived, i.e. it records
which process was chosen in the case of several nondeterministic possibilities.
If there is only one possible successor, transitions are indexed by 1 (i.e. the
corresponding il = 1). If the transition P −x→ P′ is indexed by k (i.e. the
corresponding il = k) then the transition P + Q −x→ P′ is indexed by k.1 and
the transition Q + P −x→ P′ is indexed by k.2. If the transition Pi −x→ P′ is
indexed by k then the transition ⊕_{i∈I} qi.Pi −x→ P′ is indexed by k.i, and if
the transitions P −x→ P′ and Q −x→ Q′ are indexed by k and l, respectively,
then the transitions of P|Q have indexes from {(k, 0), (0, l), (k, l)}, depending
on which transition rule for parallel composition was applied. Every index
defines at most one path and the set of all indexes defines the multiset of paths
Path(P). Let C, C ⊆ Path(P), be a finite multiset. We define
Pr(C) = Σ_{c∈C} Prob(c) if C ≠ ∅ and Pr(∅) = 0. Now we are ready to associate
probabilities with the relations −a→ and =s⇒M.
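Prob and Pr(C) above amount to multiplying the numeric labels of a path and summing over the multiset, as in this Python sketch (our own; a path is represented only by its label sequence):

```python
def prob(labels):
    """Prob of a path: product of its probabilistic labels (1 if there are none)."""
    p = 1.0
    for x in labels:
        if isinstance(x, float):
            p *= x
    return p

def pr(paths):
    """Pr(C) = sum of Prob over the multiset C; Pr(empty) = 0."""
    return sum(prob(c) for c in paths)

# Path(0.5.a.Nil (+) 0.5.a.Nil) contains the label sequence 0.5.a twice
C = [[0.5, "a"], [0.5, "a"]]
print(pr(C))   # 1.0
print(pr([]))  # 0
```

Working with the multiset matters here: the two identical-looking paths both contribute, which is why their probabilities sum to 1.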
Definition 15. Let P ∈ pCCS. We define Pr(P, a, O) = Pr(C), where C =
{c | label(c) = a, P −a→ P′, P′ ∈ O}, and PrM(P, a, O) = Pr(C), where C =
{c | label(c) = s, s|L = a, s ∈ ({a} ∪ M)*, P =s⇒M P′, P′ ∈ O}.
    Probabilistic testing with a given accuracy ε is similar to non-probabilistic
testing but it requires that the probabilities with which the test and the process
perform the required actions do not differ by more than ε.
Definition 16. Let M ⊂ A. We say that process P passes test T with respect to
M with accuracy ε iff there exists an M-strong ε-simulation < such that
whenever (T, P) ∈ < then Pr(T, a, O) × PrM(P, a, O′) ≠ 0,
|Pr(T, a, O) − PrM(P, a, O′)| ≤ ε and (O, O′) ∈ pCCS × pCCS/<.
We will denote the union of all M-strong simulations with accuracy ε as ⊲sMε.
   Finally we can define strong test ε-probabilistic equivalence.
Definition 17. We define strong test ε-probabilistic equivalence with respect to
test T for pCCS processes and a set M, M ⊂ A (we will denote it as ≈s,T,M,ε) as

                   P ≈s,T,M,ε Q iff (T ⊲sMε P ⟺ T ⊲sMε Q).
   Here we have a simple observation with a straightforward proof.

Lemma 5. ≈s,T,M,1 = ≈s,T,M.
    As regards tests, we assume that they cannot perform, at any state, the same
action via two different probabilistic steps. Formally:

Definition 18. We say that a process P is probabilistically deterministic if for
every P′ ∈ Succ(P) and every action a there is at most one probability q such
that P′ −q→ −a→.
   Finally we are ready to define the probabilistic test with a given accuracy.

Definition 19. Let T be a probabilistically deterministic test and M ⊂ A. We
say that a CCS process P passes the security test with respect to M and
accuracy ε if

                                  P ≈s,T,M,ε P/M.

   In this case we write P ∈ ST_{T,M,ε}.
    We could extend Definitions 9 and 12 to the probabilistic setting, as well as
Definition 14 of the quantification of levels of probabilistic security.
    To do so we need a more accurate measure of probabilistic test size, which
takes into account also the probability distribution in the case of the choice
operator. Basically, in the case of the process ⊕_{i∈I} qi.Pi we multiply the sum
of the sizes of the processes Pi by the entropy of the distribution (qi)_{i∈I}.
Definition 20. The size of a probabilistic process P (denoted |P|) is defined as
for a non-probabilistic one (see Definition 13) with

                 |⊕_{i∈I} qi.Pi| = (Σ_{i∈I} |Pi|) × Σ_{i∈I} qi · log2(1/qi)
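Under this definition the probabilistic choice contributes the sum of the branch sizes weighted by the Shannon entropy of the distribution; for a uniform two-way choice the entropy factor is 1. A quick Python sketch (our own illustration):

```python
import math

def prob_choice_size(branch_sizes, probs):
    """|(+)_i qi.Pi| = (sum |Pi|) * sum qi * log2(1/qi)."""
    entropy = sum(q * math.log2(1.0 / q) for q in probs)
    return sum(branch_sizes) * entropy

# two branches of size 1 each, uniform distribution: entropy 1, size 2
print(prob_choice_size([1, 1], [0.5, 0.5]))  # 2.0
# a biased distribution lowers the entropy factor and hence the size
print(prob_choice_size([1, 1], [0.9, 0.1]) < 2.0)  # True
```

A skewed distribution encodes a preliminary belief of the tester and therefore counts as a smaller (more informed) test.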
   Now, level of secrecy in case of probabilistic test can be defined in the same
way as it was done for non-probabilistic one (see Definition 14).
Example 5. Let us return to Example 3. The probabilistic test T would then be

                T = ⊕_{w∈V} qw.l̄w.(llogin.Nil + laccess_denied.Nil)

   The maximal size of T is reached if all the probabilities qw are equal, i.e.
when there is no preliminary belief expressed by the test.
    Note that the presented formalism covers all combinations of testing: neither
process is probabilistic, both processes are probabilistic, only the test is
probabilistic, or only the tested process is probabilistic. Each kind of testing
has its meaning depending on the security property we would like to check.
6   Conclusions
We have presented several security concepts based on the information flow
observed by the testing of public actions. They express process security, the sets
of private actions which have to be performed (the gained sets) and the sets of
private actions which can be excluded by a given test. The notion of excluded
actions can be used for a reduction of the space of possible private actions, and
if the reduction is significant then it really threatens system security.
     Moreover, our formalism also includes probabilistic tests of probabilistic
processes. In this way we can also capture cases when membership in the sets
of gained and excluded actions is not certain but only very likely, which is
information that can help an intruder very significantly. Note that without such
a "quantification" of set membership, we could consider a system to be secure
despite the fact that a successful attack is possible. We have also defined a way
to measure the size of a test and to relate it to the size of the obtained private
information on processes. We consider such quantification one of the most
important advantages of the presented testing approach.
References
[CHM07] Clark D., S. Hunt and P. Malacaria: A Static Analysis for Quantifying the
        Information Flow in a Simple Imperative Programming Language. The Jour-
        nal of Computer Security, 15(3). 2007.
[CMS09] Clarkson, M.R., A.C. Myers, F.B. Schneider: Quantifying Information Flow
        with Beliefs. Journal of Computer Security, to appear, 2009.
[FGM00] Focardi, R., R. Gorrieri, and F. Martinelli: Information flow analysis in
        a discrete-time process algebra. Proc. 13th Computer Security Foundation
        Workshop, IEEE Computer Society Press, 2000.
[GSS95] Glabbeek R. J. van, S. A. Smolka and B. Steffen: Reactive, Generative and
        Stratified Models of Probabilistic Processes. Inf. Comput. 121(1): 59-80, 1995.
[GM82] Goguen J.A. and J. Meseguer: Security Policies and Security Models. Proc.
        of IEEE Symposium on Security and Privacy, 1982.
[Gru11] Gruska D.P.: Gained and Excluded Private Actions by Process Observations.
        To appear in Fundamenta Informaticae, 2011.
[Gru09] Gruska D.P.: Quantifying Security for Timed Process Algebras, Fundamenta
        Informaticae, vol. 93, Numbers 1-3, 2009.
[Gru08] Gruska D.P.: Probabilistic Information Flow Security. Fundamenta Infor-
        maticae, vol. 85, Numbers 1-4, 2008.
[Gru07] Gruska D.P.: Observation Based System Security. Fundamenta Informaticae,
        vol. 79, Numbers 3-4, 2007.
[HJ90]  Hansson, H. and B. Jonsson: A Calculus for Communicating Systems with
        Time and Probabilities. In Proceedings of the 11th IEEE Real-Time Systems
        Symposium, Orlando, 1990.
[LN04]  López, N. and M. Núñez: An Overview of Probabilistic Process Algebras
        and their Equivalences. In Validation of Stochastic Systems, LNCS 2925,
        Springer-Verlag, Berlin, 2004.
[Mil89] Milner, R.: Communication and concurrency. Prentice-Hall International,
        New York, 1989.
[SL95]  Segala R. and N. Lynch: Probabilistic Simulations for Probabilistic
        Processes. Nord. J. Comput. 2(2): 250-273, 1995.