CSP 2019, CEUR Workshop Proceedings Vol-2571, https://ceur-ws.org/Vol-2571/CSP2019_paper_6.pdf (dblp: https://dblp.org/rec/conf/csp/GruskaR19)
            Observations, Testing and Security

                      Damas P. Gruska1 and M. Carmen Ruiz2
                       1 Comenius University, Slovakia
                2 Universidad de Castilla-La Mancha, Spain



        Abstract. Testing of security for multi-agent systems is proposed and
        studied. We assume an attacker, acting as an agent, who can accumulate
        and exploit knowledge about other agents. This is captured by an
        observation function and a security property called process opacity.
        Unfortunately, this property is undecidable in general, so we propose
        a more realistic variant based on tests and testing. Here we consider
        systems to be secure if they cannot be compromised by a given test or
        set of tests. In the end, we state a decidability result for security
        testing.

        Keywords: multi-agent systems, process algebras, information flow,
        security, testing


1     Introduction

Formal methods play an important role in guaranteeing software quality. On the
other hand, their application is frequently either rather expensive or practically
impossible due to complexity issues. On top of this, there are properties which
cannot be formally verified at all due to their undecidability. In such cases,
tests and testing offer a more realistic approach.
    In this paper, we address security properties of systems, particularly multi-agent
systems (MAS). We consider an attacker, as one of the agents, who can accumulate
some knowledge about the behaviour of other agents. We propose corresponding
security properties and means to test them. In a sense, we combine formal
verification with testing. The presented approach combines several ideas that
emerged from security theory. We exploit the idea of (an absence of) information
flow between a system's public and private behaviour (see [GM82]). This
concept has been exploited many times in various formalisms. For example, the
security property called Bisimulation Strong Nondeterministic Non-Interference
requires that forbidding and hiding of private actions cannot be distinguished
(by means of bisimulation). In [Gru13] we exploited this idea, but weakened it
by requiring that forbidding and hiding of the private actions cannot be
distinguished by a given test, i.e. we exploited a kind of testing
equivalence (see also [NH84,SL95]).
    Work supported by the grant VEGA 1/0778/18. Copyright © 2019 for this paper
    by its authors. Use permitted under Creative Commons License Attribution 4.0
    International (CC BY 4.0).
    Here we start with the security concept called opacity. To explain the opacity
principle, suppose that we have some property φ over sequences of actions.
Such a property might be an execution of one or more classified actions, an
execution of actions in a particular classified order which should be kept hidden,
etc. We would like to know whether an observer can deduce the validity of the
property φ just by partially observing (not all actions are visible) the sequences
of actions (traces) performed by the given process. The observer cannot deduce
the validity of φ if there are two traces w, w′ such that φ(w) ∧ ¬φ(w′) holds and
the traces cannot be distinguished by the observer. In [Gru15] opacity is modified
(the result is called process opacity) in such a way that instead of a process's
traces we focus on properties of reachable states, and on attackers which can see
only some of the process's actions. Hence we assume an intruder who is not
primarily interested in whether some sequence of actions performed by a given
process has some given property, but who wants to discover whether this process
reaches a state which satisfies some given (classified) predicate. It turned out
that in this way we can capture many new security flaws. On the other hand,
some security flaws, particularly important for multi-agent systems, are covered
neither by this state-based security property nor by its variant called initial
state opacity studied in [Gru17,GR19b].
    In this paper we extend process opacity to reflect attackers which can
accumulate some knowledge about the rest of the system's behaviour. This approach
is particularly appealing for multi-agent systems, where one of the agents could be
an attacker. Qualitative security properties are often criticized for being either
too restrictive or too benevolent. For example, a strict qualitative property
would consider a standard access control process insecure, since there always
exists some (even very small) information flow which could help an attacker who
tries to learn a password: by every attempt an attacker learns, at least, what
the password is not. There are several ways to overcome these disadvantages,
i.e. to either quantify information flow or to put some restrictions on the
attacker's capabilities. The amount of leaked information can be expressed by
means of Shannon information theory, as was done, for example, in [CHM07,CMS09]
for simple imperative languages. In this way we obtain a quantification of
information flow either as the number of bits of private information which could
leak or as the probability that an intruder can learn some secret property.
Here we exploit a different approach. We define tests and testing of security
properties. Hence instead of requiring general security we require security with
respect to a given set of tests. Each test represents a possible scenario of an
attacker as well as his or her capabilities. For example, an access control
system with a strong password policy should be considered reasonably secure with
respect to "small" attackers (tests), which can try only a few passwords.
Moreover, testing allows us, besides other advantages, to express the security
of a system with respect to the size of the test which could jeopardize it.
Hence the resulting level of security gives us relevant information on the real
(practical) security of the system.
    In this paper we also exploit the idea of an observation function which
expresses the capability of an attacker to accumulate some knowledge about MAS
behaviour. The attacker's observations may not be just simple ones, say that some
actions are visible for her and others are not (see [GR19a]). Visibility of a
particular action could depend on previous actions as well. Hence the presented
testing approach is strictly stronger than that of [Gru11], which is based on
simple observations of processes.
    The contribution of the work can be summarized as follows: 1. definition of
security with respect to a given test, 2. modeling observations by processes,
3. modeling predicates by processes, 4. reducing security checking to checking
of process traces, 5. decidability results for the practically most interesting
case of finite state processes.
    The paper is organized as follows. Our working formalism is introduced in
Section 2. In Sections 3 and 4 we describe the information flow security
properties of interest. In Section 5 we define tests and testing and relate
them to security.


2    Working Formalism

As a working formalism we will use Milner's CCS (see [Mil89]). Let A be a
set of atomic action symbols not containing the symbol τ and such that for every
a ∈ A there exists a complementary action ā ∈ A, complementation being an
involution (the complement of ā is a again). We define Act = A ∪ {τ}. We assume
that a, b, u, v, ... range over A and x, y, ... range over Act. For
s = x₁.x₂. ... .xₙ, xᵢ ∈ Act, we write P →^s instead of P →^{x₁} →^{x₂} ... →^{xₙ},
and we say that s is a trace of P. The set of all traces of P will be denoted by
Tr(P). By ε we will denote the empty sequence of actions; by Succ(P) we will
denote the set of all successors of P, i.e. Succ(P) = {P′ | P →^s P′, s ∈ Act*}.
If the set Succ(P) is finite we say that P is a finite state process. We define
modified transitions ⇒_M which "hide" actions from M. Formally, we will write
P ⇒^x_M P′ for M ⊆ Act iff P →^{s₁} →^x →^{s₂} P′ for some s₁, s₂ ∈ M*, and
P ⇒^s_M instead of P ⇒^{x₁}_M ⇒^{x₂}_M ... ⇒^{xₙ}_M. We will write P ⇒^x_M if
there exists P′ such that P ⇒^x_M P′, and P ⇒^x̂_M P′ instead of P ⇒^x_M P′ if
x ∈ M. Note that ⇒_M is defined for an arbitrary action x, but in the definitions
of security properties we will use it for actions (or sequences of actions) not
belonging to M. We can extend the definition of ⇒_M to sequences of actions
similarly to →. By s|_B we will denote the sequence obtained from s by removing
all actions not belonging to B.
    We define two equivalences which are modifications of trace equivalence and
weak bisimulation, respectively (see [Mil89]).

Definition 1. The set of weak traces of process P with respect to the set M,
M ⊆ A, is defined as Tr_wM(P) = {s ∈ A* | ∃P′. P ⇒^s_M P′}. Instead of
Tr_w∅(P) we will write Tr_w(P).
    Two processes P and Q are weakly trace equivalent with respect to M
(P ≈_wM Q) iff Tr_wM(P) = Tr_wM(Q). We will write ≈_w instead of ≈_w∅.

Definition 2. Let (S, Act, →) be a labelled transition system (LTS) with the
set of states S. A relation ℜ ⊆ S × S is called an M-bisimulation if it is
symmetric and satisfies the following condition: if (P, Q) ∈ ℜ and P →^x P′,
x ∈ Act, then there exists a process Q′ such that Q ⇒^x̂_M Q′ and (P′, Q′) ∈ ℜ.
Two processes P, Q are M-bisimilar, abbreviated P ≈_M Q, if there exists an
M-bisimulation relating P and Q.


3   Opacity

To formalize information flow we do not divide actions into public and private
ones at the system description level, as is done for example in [GM04,BG04],
but we use the more general concepts of observation and opacity. These concepts
were exploited in [BKR04] and [BKMR06] in the framework of Petri nets and
transition systems, respectively. Firstly we define observation functions on
sequences from Act*.

Definition 3 (Observation). Let Θ be a set of elements called observables.
Any function O : Act* → Θ* is an observation function. It is called static
/ dynamic / orwellian / m-orwellian (m ≥ 1) if the following conditions hold,
respectively (below we assume w = x₁ ... xₙ):

 – static: there is a mapping O′ : Act → Θ ∪ {ε} such that for every w ∈ Act*
   it holds that O(w) = O′(x₁) ... O′(xₙ),
 – dynamic: there is a mapping O′ : Act* → Θ ∪ {ε} such that for every
   w ∈ Act* it holds that O(w) = O′(x₁).O′(x₁.x₂) ... O′(x₁ ... xₙ),
 – orwellian: there is a mapping O′ : Act × Act* → Θ ∪ {ε} such that for every
   w ∈ Act* it holds that O(w) = O′(x₁, w).O′(x₂, w) ... O′(xₙ, w),
 – m-orwellian: there is a mapping O′ : Act × Act* → Θ ∪ {ε} such that for
   every w ∈ Act* it holds that O(w) = O′(x₁, w₁).O′(x₂, w₂) ... O′(xₙ, wₙ),
   where wᵢ = x_{max{1,i−m+1}}.x_{max{1,i−m+1}+1} ... x_{min{n,i+m−1}}.

    In the case of a static observation function, each action is observed
independently of its context. In the case of a dynamic observation function, the
observation of an action depends on the previous ones; in the case of the
orwellian and m-orwellian observation functions, the observation of an action
depends on all actions of the sequence and on a window of m actions around it,
respectively. The static observation function is a special case of the m-orwellian
one for m = 1. Note that from the practical point of view the m-orwellian
observation functions are the most interesting ones. An observation expresses
what an observer (eavesdropper) can see of a system's behaviour, and we will use
both terms (observation, observer) interchangeably. Note that the same action can
be seen differently during an observation (except for a static observation
function), and this expresses the intruder's possibility of accumulating
knowledge. For example, an action not visible at the beginning could later
become observable.
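The observation-function classes of Definition 3 can be sketched in code as follows. This is a minimal illustration over strings of one-letter actions (the alphabet, the hidden action 'h' and the "key" action 'k' are hypothetical, not from the paper); the empty string plays the role of the empty observable ε.

```python
def static_obs(o0):
    """Static: each action is observed independently, via o0: Act -> Theta ∪ {ε}."""
    return lambda w: "".join(o0(x) for x in w)

def dynamic_obs(o0):
    """Dynamic: the observation of x_i may depend on the whole prefix x_1...x_i."""
    return lambda w: "".join(o0(w[:i + 1]) for i in range(len(w)))

def m_orwellian_obs(o0, m):
    """m-orwellian: the observation of x_i depends on a window of radius m - 1."""
    def obs(w):
        n = len(w)
        return "".join(
            o0(w[i - 1], w[max(1, i - m + 1) - 1: min(n, i + m - 1)])
            for i in range(1, n + 1)  # 1-based indexing, as in Definition 3
        )
    return obs

# Action 'h' is hidden statically, but in the dynamic case it becomes
# observable once the action 'k' has occurred earlier in the trace.
O_static = static_obs(lambda x: "" if x == "h" else x)
O_dynamic = dynamic_obs(lambda p: "" if p[-1] == "h" and "k" not in p[:-1] else p[-1])
```

Note that `m_orwellian_obs(o0, 1)` degenerates to the static case, matching the remark above that a static observation function is a 1-orwellian one.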
    Now suppose that we have some security property. This might be an execution
of one or more classified actions, an execution of actions in a particular
classified order which should be kept hidden, etc. Suppose that this property is
expressed by a predicate φ over process traces. We would like to know whether an
observer can deduce the validity of the property φ just by observing the
sequences of actions from Act* performed by a given process. The observer cannot
deduce the validity of φ if there are two traces w, w′ ∈ Act* such that
φ(w) ∧ ¬φ(w′) holds and the traces cannot be distinguished by the observer,
i.e. O(w) = O(w′). We formalize this concept by opacity.

Definition 4 (Opacity). Given process P, a predicate φ over Act* is opaque
w.r.t. the observation function O if for every sequence w, w ∈ Tr(P), such that
φ(w) holds and O(w) ≠ ε, there exists a sequence w′, w′ ∈ Tr(P), such that
¬φ(w′) holds and O(w) = O(w′). The set of processes for which the predicate φ
is opaque with respect to O will be denoted by Op^φ_O.

    A predicate is opaque if for any trace of a system for which it holds, there
exists another trace for which it does not hold, and both traces are indistin-
guishable for an observer (as expressed by an observation function). This
means that the observer (intruder) cannot say whether a trace for which the
predicate holds has been performed or not.
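On a finite set of traces, the condition of Definition 4 can be checked directly. Below is a sketch (all names hypothetical); traces and observables are strings, `phi` is a Python predicate, and the empty string stands for ε.

```python
def is_opaque(traces, phi, obs, eps=""):
    """phi is opaque iff every trace w with phi(w) and O(w) != ε is matched
    by some trace w2 with the same observation and with phi(w2) false."""
    return all(
        any(not phi(w2) and obs(w2) == obs(w) for w2 in traces)
        for w in traces
        if phi(w) and obs(w) != eps
    )

# The classified action 's' is invisible; phi holds when 's' occurred.
obs = lambda w: "".join(x for x in w if x != "s")
phi = lambda w: "s" in w
```

For the trace set {"asb", "ab"} the predicate is opaque (the trace "ab" looks the same as "asb"), while for {"asb"} alone it is not.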


4   Process Opacity

Now let us assume a different scenario, namely that an intruder is not interested
in traces and their properties, but tries to discover whether a given process has
reached a state with some given property, expressed by a (total) predicate. This
property might be a process deadlock, the capability to execute only traces with
some given actions, the capability to perform actions from a given set, the
incapacity to idle (to perform the action τ), etc. We do not put any restriction
on such predicates; we only assume that they are consistent with some suitable
behavioral equivalence. The formal definition follows.

Definition 5. We say that a predicate φ over processes is consistent with
respect to a relation ≅ if whenever P ≅ P′ then φ(P) ⇔ φ(P′).

    As the consistency relation ≅ we could take bisimulation (≈∅), weak bisim-
ulation (≈{τ}) or any other suitable equivalence.

Definition 6 (Process Opacity). Given process P, a predicate φ over pro-
cesses is process opaque w.r.t. the observation function O if whenever P →^w P′
for w ∈ Act* and φ(P′) holds, then there exists P″ such that P →^{w′} P″ for
some w′ ∈ Act* and ¬φ(P″) holds, and moreover O(w) = O(w′). The set of
processes for which the predicate φ is process opaque w.r.t. O will be denoted
by POp^φ_O.

    Schematically, Fig. 1 depicts process opacity. In [Gru15] process opacity is
defined for a static observation function, namely a set of public actions which
can be observed and a set of hidden (not necessarily private) actions are
assumed. To model such observations we exploit the relation ⇒^s_M, where the
actions from M are those which cannot be seen by the observer.

                    P  ⇒^w   P′,   φ(P′),    O(w)
                                              ‖
                    P  ⇒^{w′} P″,  ¬φ(P″),   O(w′)

                      Fig. 1. Process opacity with respect to O
Definition 7. Given process P, a predicate φ over processes is process opaque
w.r.t. the set M if whenever P ⇒^s_M P′ for s ∈ (Act \ M)* and φ(P′) holds,
then there exists P″ such that P ⇒^s_M P″ and ¬φ(P″) holds. The set of
processes for which the predicate φ is process opaque w.r.t. M will be denoted
by POp^φ_M.
    Schematically, Fig. 2 depicts process opacity w.r.t. the set M. Note that if
P ≅ P′ then P ∈ POp^φ_M ⇔ P′ ∈ POp^φ_M whenever φ is consistent with respect
to ≅ and ≅ is a subset of trace equivalence (defined as ≈_w but with ⇒_∅ used
instead of ⇒_{τ}).


                    P  ⇒^s_M  P′,   φ(P′)

                    P  ⇒^s_M  P″,  ¬φ(P″)

                      Fig. 2. Process opacity with respect to M


   A relation between these two security properties is stated by the following
lemma.
Lemma 1. Let O : Act → Act ∪ {ε} be such that O(x) = ε iff x ∈ M and
O(x) = x otherwise. Then P ∈ POp^φ_O iff P ∈ POp^φ_M.

Proof. Directly from Definitions 6 and 7.
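For a finite state process, Definition 7 can be decided by a subset construction: hide the actions in M, and after every observable sequence the set of reachable states must not consist of φ-states only. The sketch below assumes the LTS is given as a dict mapping states to lists of (action, successor) pairs (a hypothetical encoding, not the paper's CCS syntax).

```python
from collections import deque

def closure(lts, states, M):
    """All states reachable from `states` using hidden actions (those in M) only."""
    seen, stack = set(states), list(states)
    while stack:
        q = stack.pop()
        for a, q2 in lts.get(q, []):
            if a in M and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return frozenset(seen)

def process_opaque(lts, init, M, phi):
    """P ∈ POp^φ_M iff no observable sequence leads to phi-states only."""
    start = closure(lts, {init}, M)
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        if all(phi(q) for q in S):          # every matching P' satisfies phi
            return False
        visible = {a for q in S for a, _ in lts.get(q, []) if a not in M}
        for a in visible:                   # one subset-construction step
            T = closure(lts, {q2 for q in S for b, q2 in lts.get(q, []) if b == a}, M)
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return True
```

For instance, if the secret state 3 is reachable only as h.a while no a is possible otherwise, the observable "a" betrays the secret; adding an alternative visible a-branch restores opacity.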
    Process opacity is defined for arbitrary predicates and observation func-
tions. Now we will reformulate it for those which can be expressed by process
algebras. Firstly we start with the observation function O. Suppose that
Act ∩ Θ = ∅. We extend the set of actions A by Θ and we model O by a special
process.

Definition 8. Process O is called a process definition of the observation
function O if for every P and s ∈ Act* it holds that P →^s P′ iff
(P|O) \ A ⇒^o (P′|O′) \ A for o ∈ Θ* such that O(s) = o.
    Some observation functions cannot be computed at all (we can prove this by
defining a function which returns a specific symbol iff a given Turing machine
halts). But some observation functions can be emulated by finite state processes,
as is stated by the following lemma.

Lemma 2. For any static or m-orwellian observation function O there exists a
finite state process O which is a process definition of the observation function O.

Proof. Sketch. Any static or m-orwellian observation function can be simulated
by a finite-state transducer, which in turn can be simulated by a finite state
process.

    Note that some dynamic and orwellian observation functions can also be de-
fined by finite state systems, in case their computation does not need unlimited
memory.
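The transducer idea behind Lemma 2 can be sketched as a Mealy-style machine: a finite control remembers just enough of the recent past to decide an action's visibility. The concrete observation below (action 'h' visible only immediately after a 'k') is a hypothetical illustration, not a function from the paper.

```python
def make_transducer(step, init=0):
    """step: (state, action) -> (next_state, emitted observable or '')."""
    def run(w):
        state, out = init, []
        for x in w:
            state, o = step(state, x)
            out.append(o)
        return "".join(out)
    return run

# 'h' is visible only immediately after a 'k'; one bit of state suffices.
def step(state, x):
    visible = x if (x != "h" or state == 1) else ""
    return (1 if x == "k" else 0), visible

O = make_transducer(step)
```

The state set here is {0, 1}, so the observation function is realised with a finite control, matching the lemma's claim for windowed (m-orwellian-style) observations.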
    Now we can relate observation functions defined by processes to the defi-
nition of process opacity.

Lemma 3. Let it hold that for every o ∈ Θ* such that (P|O) \ A ⇒^o (P′|O′) \ A
and φ(P′) holds, there exists P″ such that (P|O) \ A ⇒^o (P″|O′) \ A and ¬φ(P″)
holds. Then P ∈ POp^φ_O, and vice versa.

Proof. Directly from Definitions 6 and 8.

    An observation function defines what an attacker can see of a process's be-
haviour. One attacker could see more than another attacker. There are several
ways to express this situation. In the following definition we suppose that the
set of observables of one attacker is a subset of the observables of another at-
tacker. This gives an ordering between observation functions.

Definition 9. Let O1, O2 be two observation functions with a common set
of observables Θ. We define an ordering between them (denoted by ≺) as follows:
O1 ≺ O2 iff for every w ∈ Act* we have O1(w)|_U = O2(w) for some set
U, U ⊆ Θ.

Lemma 4. Let O1, O2 be two observation functions such that O1 ≺ O2. Then
for every w, w′ ∈ Act* it holds that O1(w) = O1(w′) implies O2(w) = O2(w′).

Proof. Let O1(w) = O1(w′). Then also O1(w)|_U = O1(w′)|_U for arbitrary U.
Since O1 ≺ O2 we have O1(w)|_U = O2(w) and O1(w′)|_U = O2(w′), i.e.
O2(w) = O2(w′).
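A concrete instance of Definition 9 and Lemma 4 (hypothetical observables, not from the paper): take O2 to be the projection of O1's output onto U = {'a'}. Then O1 ≺ O2 by construction, and traces indistinguishable for O1 stay indistinguishable for the weaker observer O2.

```python
def project(s, U):
    """s|_U : keep only the observables belonging to U."""
    return "".join(x for x in s if x in U)

O1 = lambda w: "".join(x for x in w if x in {"a", "b"})  # sees a and b
U = {"a"}
O2 = lambda w: project(O1(w), U)                          # O1 ≺ O2 by construction
```

The converse fails: O2 may identify traces that O1 still distinguishes, which is exactly why the inclusion in Lemma 5 goes in one direction only.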

    The ordering between observation functions corresponds to stronger and
weaker process opacity properties, as is stated by the following lemma.

Lemma 5. Let O1, O2 be two observation functions such that O1 ≺ O2. Then
POp^φ_{O1} ⊆ POp^φ_{O2}.

Proof. Main idea. Let P ∈ POp^φ_{O1}. Then from Definition 6 we know that when-
ever P →^w P′ for w ∈ Act* and φ(P′) holds, then there exists P″ such that
P →^{w′} P″ for some w′ ∈ Act* and ¬φ(P″) holds, and moreover O1(w) = O1(w′).
But since O1 ≺ O2, by the previous lemma we also have O2(w) = O2(w′) and
hence P ∈ POp^φ_{O2}.

    The ordering on observation functions can be related to the relations ⇒_M,
as is stated by the following lemma.

Lemma 6. Let O1, O2 be process definitions of the observation functions O1, O2,
respectively, and O1 ≺ O2. Then for every P and o ∈ Θ* it holds that
(P|O1) \ A ⇒^o (P′|O1′) \ A iff (P|O2) \ A ⇒^o_U (P′|O2′) \ A for some U, U ⊆ Θ.

Proof. Sketch. Since O1 ≺ O2, from Definition 9 we have O1(w)|_U = O2(w) for
some set U, U ⊆ Θ, and from Definition 8 we obtain the translation of this fact
to traces.

    Note that there are also other possibilities for ordering intruders with respect
to their capabilities to observe process behaviour as expressed by an observation
function. For example, the observations of one attacker can be included in the
observations of another one, one attacker may be completely unable to see some-
thing which can be seen by another one, etc. We leave the investigation of such
orderings to future work.


5        Testing

Process opacity is an undecidable property, since already its special variant (see
Definition 7) is undecidable (see [Gru15]). The undecidability has several sources,
one of them being the too general notion of the observer's capabilities. Now we
use the concept of testing, which will help us to define security with respect to
a given test or set of tests.
    A test examines a process's capability to perform some set of actions. It is
required that every sequence of the test's actions can be emulated also by the
tested process, i.e. for a test T and a tested process P it should hold that
Tr_w(P_T) ⊆ Tr_w(P), where P_T is the behaviour of P under the test T. But
since we will later incorporate also the visibility of such traces, as well as
properties of the resulting processes, we need a more elaborate definition of
testing. We suppose that a test communicates with the tested process by means
of new alternative actions, which are the original ones indexed by a (i.e. the
alternative actions to x, y, z are depicted as x_a, y_a, z_a). The formal
definition follows.

Definition 10. Process T is called a test for process P iff
Tr_w((T|P[f]) \ A_a) ⊆ Tr_w(P), where f maps every action of Sort(P) to its
alter ego, depicted by the index a. Alter ego actions will be denoted by A_a.

    In the following example we present a test which examines all process traces.

Example 1. Let T = μX. Σ_{x∈A} (x.x̄_a.X + x̄.x_a.X). Then we have
Tr_w((T|P[f]) \ A_a) = Tr_w(P) for every P, and hence T will be called a
simple complete test.
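On the trace level, Definition 10 and Example 1 can be sketched as follows. This is an abstraction of the CCS composition (T|P[f]) \ A_a with hypothetical encodings: the test is run as an automaton that filters the traces of P it can follow, so the trace inclusion of Definition 10 holds by construction, and the simple complete test follows everything.

```python
def traces_under_test(p_traces, t_step, t_init=0):
    """Traces of P that the test can synchronise with; t_step returns the
    next test state, or None when the test refuses the action."""
    followed = set()
    for w in p_traces:
        q = t_init
        for x in w:
            q = t_step(q, x)
            if q is None:
                break                # the test refuses; trace is dropped
        else:
            followed.add(w)
    return followed

# The simple complete test of Example 1 never refuses an action.
complete_test = lambda q, x: q
```

A test that only follows a then b keeps "ab" but drops "ac", whereas the complete test preserves the whole trace set.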

    Now we are prepared to combine testing with observation functions. This
reduces general process opacity to a property which requires that the tested
process P is secure with respect to an observer/tester expressed by T.

Definition 11. Let process O be a process definition of the observation function
O and let T be a test for process P. We say that P passes test T under O and φ
(denoted by P ∈ POp^φ_O(T)) iff for every o ∈ Θ* such that
((T|P[f]) \ A_a | O) \ A ⇒^o ((T′|P′[f]) \ A_a | O′) \ A and φ(P′) holds, there
exists P″ such that ((T|P[f]) \ A_a | O) \ A ⇒^o ((T′|P″[f]) \ A_a | O′) \ A and
¬φ(P″) holds.

    Testing by a test T and an observation function defined by a process O is
depicted in Fig. 3. Actions of process P are firstly tested by T, and then the
same sequence of actions is taken as an input for O to produce its visible part,
i.e. a sequence of observables from Θ.

                            O ←− T ←− P

                          Fig. 3. Testing scenario


    If a process P is process opaque with respect to φ and O, then it should
pass any test T, as is stated by the following lemma.

Lemma 7. Let process O be a process definition of the observation function O,
let T be a test for process P, and let P ∈ POp^φ_O. Then P passes test T
under O.

Proof. Directly from Definitions 6 and 11.

    Now we can show how a natural ordering between tests defines an ordering
on the security of processes with respect to process opacity.

Lemma 8. Let T1, T2 be two tests for process P such that Tr(T1) ⊆ Tr(T2).
Then if P ∈ POp^φ_O(T2) then P ∈ POp^φ_O(T1).

Proof. Sketch. Let P ∈ POp^φ_O(T2), i.e. P passes test T2, which represents,
basically, a set of traces. Since test T1 produces fewer traces, P passes this
test as well, and hence P ∈ POp^φ_O(T1).

    As a direct consequence of the previous lemma we have the following one.

Lemma 9. Let T1, T2 be two tests for process P. Then if P ∈ POp^φ_O(T1 + T2)
then P ∈ POp^φ_O(T1) and P ∈ POp^φ_O(T2).

Proof. The proof follows from the previous lemma and the facts that
Tr(T1) ⊆ Tr(T1 + T2) and Tr(T2) ⊆ Tr(T1 + T2).

    A property similar to the one expressed by Lemma 8 holds for predicates,
i.e. security with respect to a weaker predicate implies security with respect
to a stronger one.

Lemma 10. Let φ1, φ2 be two predicates such that φ1 ⇒ φ2, and let T be a test
for process P. Then if P ∈ POp^{φ2}_O(T) then P ∈ POp^{φ1}_O(T).

Proof. Let P ∈ POp^{φ2}_O(T) and let P →^w P′ for w ∈ Act* such that φ1(P′)
holds. Since φ1 ⇒ φ2, also φ2(P′) holds, and since P ∈ POp^{φ2}_O(T) there
exists P″ such that P →^{w′} P″ for some w′ ∈ Act*, ¬φ2(P″) holds and moreover
O(w) = O(w′). Since ¬φ2 ⇒ ¬φ1, also ¬φ1(P″) holds and hence
P ∈ POp^{φ1}_O(T).

    Now we show how the checking of process opacity can be reduced to checking
standard process algebra properties. Firstly we define also a predicate over
processes by a special process.

Definition 12. Predicate φ is called process definable if there exists a process
P_φ such that φ(P) holds iff (P|P_φ) \ A ⇒^√, where √ is a new action, √ ∉ A.
Process P_φ is called a process definition of φ.

Example 2. Let φ(Q) hold iff Q can perform action a and then later action b.
Process P_φ defined as P_φ = μX. Σ_{x≠a} x.X + a.P′, where
P′ = μX. Σ_{x≠b} x.X + b.√.Nil, is a process definition of φ.
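Read over traces, Example 2's predicate can be sketched as a two-state automaton (a hypothetical encoding, not the CCS process P_φ itself): φ(Q) holds iff some trace of Q performs a and, later, b.

```python
def phi_a_then_b(traces):
    """True iff some trace performs 'a' and later 'b' (the success of P_phi)."""
    for w in traces:
        state = 0                 # 0: waiting for a; 1: a seen, waiting for b
        for x in w:
            if state == 0 and x == "a":
                state = 1
            elif state == 1 and x == "b":
                return True       # here the √ action of P_phi would fire
    return False
```

The two automaton states mirror the two recursive subprocesses P_φ and P′ of the example, and returning True corresponds to reaching the √ action.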

    Now we can reduce process opacity checking to checking of trace inclusion,
as is stated by the following theorem.

Theorem 1. Let O, P_φ, P_¬φ be process definitions of the observation function
O and of the predicates φ, ¬φ, respectively. Then P ∈ POp^φ_O iff for every
o ∈ Θ* it holds that if o.√ ∈ Tr_w((P|O|P_φ) \ A) then
o.√ ∈ Tr_w((P|O|P_¬φ) \ A).

Proof. The proof follows from Definitions 6, 8 and 12.

    Now, thanks to the above-mentioned reduction, we obtain a decidable variant
of process opacity.

Theorem 2. Let O, T, P_φ, P_¬φ be finite state process definitions of the
observation function O, a test, and the predicates φ, ¬φ, respectively. Then
process opacity is decidable for any finite state test.

Proof. Sketch. The proof follows from Definitions 6, 8 and 12 and Theorem 1.
    The security property process opacity assumes an attacker who can just ob-
serve a process's traces but cannot interact with the process. In many cases this
does not cover real attacks and attackers, particularly in the case of multi-agent
systems, where a possible attacker could be one of the agents. Hence we gener-
alize process opacity in such a way that every successor of a process has to be
process opaque as well. The formal definition follows.

Definition 13 (Persistent Process Opacity). We say that process P is per-
sistently process opaque w.r.t. the observation function O and φ iff
Succ(P) ⊆ POp^φ_O. The set of persistently process opaque processes will be
denoted by PPOp^φ_O.

Proposition 1. PPOp^φ_O ⊆ POp^φ_O for every φ and O. Moreover, there exist φ
and O such that PPOp^φ_O ⊂ POp^φ_O.

Proof. The main idea. Let P ∈ PPOp^φ_O. Then directly from Definition 13 we
have that P ∈ POp^φ_O. To show that the inclusion can be proper, let us consider
the following example. Let P = a.b.b.Nil + a.c.c.Nil, let O be an observation
function such that O(a) = a, O(b) = O(c) = ε, and let φ be a predicate such
that φ(Q) holds iff Q can perform a or b. Then P ∈ POp^φ_O but P ∉ PPOp^φ_O,
since b.b.Nil ∉ POp^φ_O.
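For a finite LTS and a static observation hiding the actions in M, persistence can be checked by running an opacity check (built on the subset-construction idea behind Definition 7) from every reachable state. A sketch with hypothetical names and an illustrative example of our own (a secret state reached silently is opaque from the initial state, but not from the secret state itself):

```python
from collections import deque

def hidden_closure(lts, states, M):
    """States reachable using hidden actions (those in M) only."""
    seen, stack = set(states), list(states)
    while stack:
        for a, q2 in lts.get(stack.pop(), []):
            if a in M and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return frozenset(seen)

def opaque(lts, init, M, phi):
    """No observable sequence from `init` may reach phi-states only."""
    seen = {hidden_closure(lts, {init}, M)}
    queue = deque(seen)
    while queue:
        S = queue.popleft()
        if all(phi(q) for q in S):
            return False
        for a in {a for q in S for a, _ in lts.get(q, []) if a not in M}:
            T = hidden_closure(lts, {t for q in S for b, t in lts.get(q, []) if b == a}, M)
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return True

def persistently_opaque(lts, init, M, phi):
    """Definition 13: every reachable state must itself be process opaque."""
    states, stack = {init}, [init]
    while stack:
        for _, q2 in lts.get(stack.pop(), []):
            if q2 not in states:
                states.add(q2)
                stack.append(q2)
    return all(opaque(lts, q, M, phi) for q in states)
```

This matches the spirit of Proposition 1: opacity of the initial process does not prevent some successor from giving the secret away.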

    For persistent process opacity, properties similar to those of process opacity
hold, but some modifications are needed. For example, in Lemma 8 we have to
replace trace inclusion by simulation, which is an asymmetric variant of
bisimulation.


6   Conclusions

We have presented the security concept called process opacity and its stronger
variant, persistent process opacity. We have shown how to model, by processes,
observation functions, which express the capability to observe a system as well
as the capability to accumulate some knowledge about its behaviour, and also
security predicates. We have proposed a way to (partially) verify these properties
by means of tests and testing. Each test represents a scenario for an attacker as
well as his or her capabilities. Instead of verifying process opacity we define
system security with respect to a given set of tests. We have shown that, under
some restrictions, this testing is feasible, or at least decidable.
    As future work, we plan to define and study a minimal set of tests which
have to be passed to guarantee some security property. In this way we would
simplify the overall testing. Moreover, we plan to work with different orderings
on observation functions as well as on tests.
    The presented approach also allows us to exploit process algebras enriched
by operators expressing other "parameters" (space, distribution, networking ar-
chitecture, processor or power consumption and so on). Hence we could obtain
security properties which have an even higher practical value.
References
[BKR04] Bryans J., M. Koutny and P. Ryan: Modelling non-deducibility using Petri
        Nets. Proc. of the 2nd International Workshop on Security Issues with Petri
        Nets and other Computational Models, 2004.
[BKMR06] Bryans J., M. Koutny, L. Mazare and P. Ryan: Opacity Generalised to
        Transition Systems. In Proceedings of the Formal Aspects in Security and
        Trust, LNCS 3866, Springer, Berlin, 2006.
[BG04]  Busi N. and R. Gorrieri: Positive Non-interference in Elementary and Trace
        Nets. Proc. of Application and Theory of Petri Nets 2004, LNCS 3099,
        Springer, Berlin, 2004.
[CHM07] Clark D., S. Hunt and P. Malacaria: A Static Analysis for Quantifying
        the Information Flow in a Simple Imperative Programming Language. The
        Journal of Computer Security, 15(3). 2007.
[CMS09] Clarkson, M.R., A.C. Myers, F.B. Schneider: Quantifying Information Flow
        with Beliefs. Journal of Computer Security, to appear, 2009.
[NH84]  De Nicola R. and M. C. B. Hennessy: Testing Equivalences for Processes,
        Theoretical Computer Science, 34, 1984.
[GM04]  Gorrieri R. and F. Martinelli: A simple framework for real-time crypto-
        graphic protocol analysis with compositional proof rules. To appear in
        Science of Computer Programming.
[GM82]  Goguen J.A. and J. Meseguer: Security Policies and Security Models. Proc.
        of IEEE Symposium on Security and Privacy, 1982.
[GR19a] Gruska D.P. and M. C. Ruiz: Security Testing for Multi-Agent Systems.
        IWANN 2019, LNCS 11506, Springer, 2019.
[GR19b] Gruska D.P. and M. C. Ruiz: Security of Low Level IoT. ICCS 2019, LNCS
        11538, Springer, 2019.
[Gru18] Gruska D.P. and M. C. Ruiz: Opacity-enforcing for Process Algebras.
        CS&P'2018, 2018.
[Gru17] Gruska D.P. and M. C. Ruiz: Initial process security. In Specification
        and Verification, CS&P'2017, 2017.
[Gru15] Gruska D.P.: Process Opacity for Timed Process Algebra. In Perspectives
        of System Informatics, LNCS 8974, 2015.
[Gru13] Gruska D.P.: Information Flow Testing. Fundamenta Informaticae, Vol. 128,
        No. 1-2, 2013.
[Gru11] Gruska D.P.: Gained and Excluded Private Actions by Process Observa-
        tions. Fundamenta Informaticae, Vol. 109, No. 3, 2011.
[Mil89] Milner, R.: Communication and Concurrency. Prentice-Hall International,
        New York, 1989.
[SL95]  Segala R. and N. Lynch: Probabilistic Simulations for Probabilistic Pro-
        cesses. Nord. J. Comput. 2(2): 250-273, 1995.