Process Opacity and Insertion Functions
Damas P. Gruska1 , M. Carmen Ruiz2
1 Comenius University, Slovak Republic
2 Universidad de Castilla-La Mancha, Spain


                                         Abstract
                                         Time insertion functions are proposed and studied as a way to guarantee state-based security with
                                         respect to timing attacks. As the security property, we work with the property called process opacity.
                                         First we define timing attacks, and later we show how security with respect to them can be enforced
                                         by time insertion functions. A time insertion function can alter the time behaviour of the original
                                         system by inserting time delays to guarantee its security. We investigate conditions under which
                                         such functions exist, as well as some of their properties.

                                         Keywords
                                         state-based security, process opacity, process algebras, information flow, insertion function, timing attack




1. Introduction
Formal methods allow us, in many cases, to show, or even prove, that a given system is not secure.
Then we have a couple of options. We can either re-design its behavior, which might be costly,
difficult or even impossible when it is already part of a hardware solution, proprietary firmware
and so on, or we can use supervisory control (see [1]) to restrict the system's behaviour in such
a way that the system becomes secure. A supervisor can see (some of) the system's actions and
can control (disable or enable) some set of them; in this way it restricts the system's behaviour
to guarantee its security (see also [2]). This is a trade-off between security and functionality.
The situation is different in the case of timing attacks. As side-channel attacks, they represent
a serious threat for many systems: they allow intruders to “break” “unbreakable” systems,
algorithms, protocols, etc. For example, by carefully measuring the amount of time
required to perform private key operations, attackers may be able to find fixed Diffie-Hellman
exponents, factor RSA keys, and break other cryptosystems (see [3]). This idea was developed
in [4], where a timing attack against a smart card implementation of RSA was conducted. In [5], a
timing attack on the RC5 block encryption algorithm is described. The analysis is motivated
by the possibility that some implementations of RC5 could result in data-dependent rotations
taking a time that is a function of the data. In [6], the vulnerability of two implementations of
the Data Encryption Standard (DES) cryptosystem under a timing attack is studied. It is shown
that a timing attack yields the Hamming weight of the key used by both DES implementations.
Moreover, the attack is computationally inexpensive. A timing attack against an implementation
of AES candidate Rijndael is described in [7], and the one against the popular SSH protocol in
29th International Workshop on Concurrency, Specification and Programming (CS&P’21)
" damas.gruska@fmph.uniba.sk (D. P. Gruska); MCarmen.Ruiz@uclm.es (M. C. Ruiz)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
[8]. Even the relatively recently discovered attacks on most currently used processors
(Meltdown and Spectre) also belong to timing attacks. To protect systems against timing attacks
we propose the application of insertion functions (see [9, 10, 11, 12]). Such functions can add
some idling between actions to enforce the process's security. In this paper we investigate
conditions under which such functions exist, as well as their properties.
   As regards formalism, we will work with a timed process algebra and opacity, a security
property based on the absence of information flow. This formalism enables us to formalize
timing attacks. In [13] we introduced time insertion functions to guarantee language-based
security and showed some of their properties. In [14] we studied conditions under which a time
insertion function exists for a given process and language-based security property, and we
presented some decidability and undecidability results. In this paper we define and study time
insertion functions for the state-based security property called process opacity.
   The paper is organized as follows. In Section 2 we describe the timed process algebra TPA,
which will be used as the basic formalism, and the state-based information-flow security property
process opacity. The next section is devoted to time insertion functions as a way to guarantee
this security property with respect to timing attacks.


2. Working Formalism
In this section we briefly recall Timed Process Algebra, TPA for short (for more details see [15]).
TPA is based on Milner's Calculus of Communicating Systems (CCS for short, see [16]), but
the special time action 𝑡, which expresses the elapsing of (discrete) time, is added, and hence the
set of actions is extended from 𝐴𝑐𝑡 to 𝐴𝑐𝑡𝑡. The presented language is a slight simplification of
the Timed Security Process Algebra (tSPA) introduced in [17]. We omit the explicit idling operator
𝜄 used in tSPA and instead allow implicit idling of processes. Hence processes can
perform either "enforced idling", by performing 𝑡 actions which are explicitly expressed in their
descriptions, or "voluntary idling" (for example, the process 𝑎.𝑁𝑖𝑙 can perform the 𝑡 action
despite the fact that this action is not formally expressed in the process specification). In
both cases internal communications have priority over the action 𝑡 in parallel composition.
Moreover, we do not divide actions into private and public ones as in tSPA. TPA also differs
from tCryptoSPA (see [18]): TPA does not use value passing and strictly preserves time
determinacy in the case of the choice operator +, which is not the case for tCryptoSPA (see [15]).
   To define the language TPA, we first assume a set of atomic action symbols 𝐴 not containing
the symbols 𝜏 and 𝑡, and such that for every 𝑎 ∈ 𝐴 there exists a complementary action ā ∈ 𝐴,
where complementation is an involution (the complement of ā is again 𝑎). We define
𝐴𝑐𝑡 = 𝐴 ∪ {𝜏}, 𝐴𝑡 = 𝐴 ∪ {𝑡}, 𝐴𝑐𝑡𝑡 = 𝐴𝑐𝑡 ∪ {𝑡}. We assume that 𝑎, 𝑏, . . . range over 𝐴,
𝑢, 𝑣, . . . range over 𝐴𝑐𝑡, and 𝑥, 𝑦, . . . range over 𝐴𝑐𝑡𝑡.
   We give a structural operational semantics of terms by means of labeled transition systems.
The set of terms represents the set of states; labels are actions from 𝐴𝑐𝑡𝑡. The transition relation
→ is a subset of TPA × 𝐴𝑐𝑡𝑡 × TPA. We write 𝑃 →ˣ 𝑃′ instead of (𝑃, 𝑥, 𝑃′) ∈ → and 𝑃 ↛ˣ if
there is no 𝑃′ such that 𝑃 →ˣ 𝑃′. The meaning of the expression 𝑃 →ˣ 𝑃′ is that the term 𝑃
can evolve to 𝑃′ by performing the action 𝑥; by 𝑃 →ˣ we denote that there exists a term 𝑃′
such that 𝑃 →ˣ 𝑃′. We define the transition relation as the least relation satisfying the inference
rules for CCS plus the following inference rules:

                ───────────── (𝐴1)                ───────────── (𝐴2)
                 𝑁𝑖𝑙 →ᵗ 𝑁𝑖𝑙                        𝑢.𝑃 →ᵗ 𝑢.𝑃

       𝑃 →ᵗ 𝑃′,  𝑄 →ᵗ 𝑄′,  𝑃 | 𝑄 ↛𝜏               𝑃 →ᵗ 𝑃′,  𝑄 →ᵗ 𝑄′
       ────────────────────────────── (𝑃𝑎)        ───────────────────── (𝑆)
              𝑃 | 𝑄 →ᵗ 𝑃′ | 𝑄′                     𝑃 + 𝑄 →ᵗ 𝑃′ + 𝑄′
   For 𝑠 = 𝑥1.𝑥2. . . . .𝑥𝑛, 𝑥𝑖 ∈ 𝐴𝑐𝑡𝑡, we write 𝑃 →ˢ instead of 𝑃 →ˣ¹ →ˣ² . . . →ˣⁿ and we say that
𝑠 is a trace of 𝑃. The set of all traces of 𝑃 will be denoted by 𝑇𝑟(𝑃). By 𝜖 we denote the
empty sequence and by 𝑀* we denote the set of sequences of elements from 𝑀. We use ⇒ˣ for
transitions including 𝜏 actions (see [16]). By 𝑠|𝐵 we denote the sequence obtained from 𝑠
by removing all actions not belonging to 𝐵. By 𝐿(𝑃) we denote the set of actions which can
be performed by 𝑃, i.e. 𝐿(𝑃) = {𝑥 | 𝑃 →ˢ·ˣ, 𝑠 ∈ 𝐴𝑐𝑡𝑡*}.
   We now define two behavioural equivalences: weak trace equivalence and weak bisimulation
(see [16]).
                                                                                                    𝑠
Definition 1. The set of weak traces of process 𝑃 is defined as 𝑇 𝑟𝑤 (𝑃 ) = {𝑠 ∈ 𝐴𝑡⋆ |∃𝑃 ′ .𝑃 ⇒
𝑃 ′ }.
   Two processes 𝑃 and 𝑄 are weakly trace equivalent iff 𝑇 𝑟𝑤 (𝑃 ) = 𝑇 𝑟𝑤 (𝑄).

Definition 2. Let (TPA, 𝐴𝑐𝑡𝑡, →) be a labelled transition system (LTS). A relation ℜ ⊆ TPA × TPA
is called a weak bisimulation if it is symmetric and it satisfies the following condition: if (𝑃, 𝑄) ∈
ℜ and 𝑃 →ˣ 𝑃′, 𝑥 ∈ 𝐴𝑐𝑡𝑡, then there exists a process 𝑄′ such that 𝑄 ⇒ 𝑄′ by 𝑥̂ (where 𝑥̂ = 𝜖
if 𝑥 = 𝜏 and 𝑥̂ = 𝑥 otherwise) and (𝑃′, 𝑄′) ∈ ℜ.

Two processes 𝑃, 𝑄 are weakly bisimilar, abbreviated 𝑃 ≈ 𝑄, if there exists a weak bisimulation
relating 𝑃 and 𝑄.

   To formalize information flow we do not divide actions into public and private ones at the
system description level, as is done for example in [18]; instead we use the more general concepts
of observation and opacity. These concepts were exploited in [19] and [20] in the frameworks of
Petri nets and transition systems, respectively. First we define observation functions on sequences
from 𝐴𝑐𝑡𝑡⋆. The various variants of observation functions differ according to the context which
they take into account. For example, the observation of an action can depend on the previous actions.
Definition 3 (Observation). Let Θ be a set of elements called observables. Any function 𝒪 :
𝐴𝑐𝑡𝑡⋆ → Θ⋆ is an observation function. It is called static / dynamic / orwellian / m-orwellian
(𝑚 ≥ 1) if the following conditions hold, respectively (below we assume 𝑤 = 𝑥1 . . . 𝑥𝑛):

    • static if there is a mapping 𝒪′ : 𝐴𝑐𝑡𝑡 → Θ ∪ {𝜖} such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it holds
      that 𝒪(𝑤) = 𝒪′(𝑥1) . . . 𝒪′(𝑥𝑛),
    • dynamic if there is a mapping 𝒪′ : 𝐴𝑐𝑡𝑡⋆ → Θ ∪ {𝜖} such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it
      holds that 𝒪(𝑤) = 𝒪′(𝑥1).𝒪′(𝑥1.𝑥2) . . . 𝒪′(𝑥1 . . . 𝑥𝑛),
    • orwellian if there is a mapping 𝒪′ : 𝐴𝑐𝑡𝑡 × 𝐴𝑐𝑡𝑡⋆ → Θ ∪ {𝜖} such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆
      it holds that 𝒪(𝑤) = 𝒪′(𝑥1, 𝑤).𝒪′(𝑥2, 𝑤) . . . 𝒪′(𝑥𝑛, 𝑤),
    • m-orwellian if there is a mapping 𝒪′ : 𝐴𝑐𝑡𝑡 × 𝐴𝑐𝑡𝑡⋆ → Θ ∪ {𝜖} such that for every 𝑤 ∈
      𝐴𝑐𝑡𝑡⋆ it holds that 𝒪(𝑤) = 𝒪′(𝑥1, 𝑤1).𝒪′(𝑥2, 𝑤2) . . . 𝒪′(𝑥𝑛, 𝑤𝑛), where
      𝑤𝑖 = 𝑥𝑚𝑎𝑥{1,𝑖−𝑚+1}.𝑥𝑚𝑎𝑥{1,𝑖−𝑚+1}+1 . . . 𝑥𝑚𝑖𝑛{𝑛,𝑖+𝑚−1}.

   In the case of a static observation function, each action is observed independently of its
context. In the case of a dynamic observation function, the observation of an action depends
on the previous ones; in the case of the orwellian and m-orwellian observation functions, the
observation of an action depends on the whole sequence and on a bounded window of
neighbouring actions, respectively. The static observation function is a special case of the
m-orwellian one for 𝑚 = 1. Note that from the practical point of view the m-orwellian
observation functions are the most interesting ones. An observation expresses what an observer
(eavesdropper) can see of a system's behaviour, and we will use both terms (observation,
observer) with the same meaning. Note that the same action can be seen differently during an
observation (except for static observation functions), and this expresses the possibility for the
intruder to accumulate knowledge: for example, an action not visible at the beginning could
later become observable. From now on we will assume that Θ ⊆ 𝐴𝑐𝑡𝑡.
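
   The classes of Definition 3 can be illustrated by a small sketch in Python. This is our own
illustration, not part of the paper's formal development: actions are encoded as strings, and an
action mapped to the empty string plays the role of the empty observation 𝜖; all function names
are ours.

```python
# Observation functions over sequences of actions (actions encoded as strings).
# An observable mapped to '' is invisible (it plays the role of eps).

def static_obs(obs_map):
    """Static observer: each action is observed independently via obs_map."""
    def O(w):
        return [obs_map.get(x, x) for x in w if obs_map.get(x, x) != '']
    return O

def m_orwellian_obs(obs2, m):
    """m-orwellian observer: the observation of the i-th action may depend on
    a window of at most m-1 neighbouring actions on each side (Definition 3)."""
    def O(w):
        out = []
        for i, x in enumerate(w):
            lo, hi = max(0, i - m + 1), min(len(w), i + m)  # context window
            o = obs2(x, tuple(w[lo:hi]))
            if o != '':
                out.append(o)
        return out
    return O

# A static observer that sees only the elapsing of time (the action 't'):
O = static_obs({'a': '', 'b': '', 't': 't'})
print(O(['a', 't']))       # ['t']
print(O(['b', 't', 't']))  # ['t', 't']
```

Note that for 𝑚 = 1 the window contains only the action itself, so the m-orwellian observer
collapses to the static one, matching the remark above.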
   Now let us assume that an intruder is interested in whether a given process has reached a
state with some given property, expressed by a (total) predicate. This property might be a
process deadlock, the capability to execute only traces containing some given actions, the
capability to perform actions from a given set at the same time, the incapacity to idle (to
perform the 𝑡 action), etc. We do not put any restriction on such predicates; we only assume
that they are consistent with some suitable behavioral equivalence. The formal definition follows.

Definition 4. We say that a predicate 𝜑 over processes is consistent with respect to a relation ≅
if whenever 𝑃 ≅ 𝑃′ then 𝜑(𝑃) ⇔ 𝜑(𝑃′).


   As the consistency relation ≅ we could take bisimulation, weak bisimulation, weak trace
equivalence or any other suitable equivalence.
   An intruder cannot learn the validity of a predicate 𝜑 by observing a process's behaviour iff
there are two traces, indistinguishable for him/her (under the observation function 𝒪), such
that one leads to a state which satisfies 𝜑 and the other leads to a state for which ¬𝜑 holds.
The formal definition follows.

Definition 5 (Process Opacity). Given a process 𝑃, a predicate 𝜑 over processes is process opaque
w.r.t. an observation function 𝒪 if, whenever 𝑃 →ʷ 𝑃′ for
𝑤 ∈ 𝐴𝑐𝑡𝑡* and 𝜑(𝑃′) holds, then there exists 𝑃′′ such that 𝑃 →ʷ′ 𝑃′′ for some 𝑤′ ∈ 𝐴𝑐𝑡𝑡*
with ¬𝜑(𝑃′′) and, moreover, 𝒪(𝑤) = 𝒪(𝑤′). The set of processes for which the predicate 𝜑
is process opaque w.r.t. 𝒪 will be denoted by 𝑃𝑂𝑝𝜑𝒪.
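
   For a finite-state process and a bounded trace length, Definition 5 can be checked by plain
enumeration. The following sketch is again our own illustration (not the paper's algorithm):
a process is encoded as a dict mapping a state to a list of (action, successor) pairs, and voluntary
idling by extra 𝑡 actions is deliberately omitted to keep the example finite.

```python
def traces(lts, state, depth):
    """All traces of length <= depth from `state`, paired with the reached state."""
    result = [((), state)]
    if depth > 0:
        for action, succ in lts.get(state, []):
            for w, s in traces(lts, succ, depth - 1):
                result.append(((action,) + w, s))
    return result

def process_opaque(lts, init, phi, O, depth):
    """Bounded check of Definition 5: every trace w reaching a state satisfying
    phi must have an O-equivalent trace w2 reaching a state where phi fails."""
    tr = traces(lts, init, depth)
    return all(
        any(O(w2) == O(w) and not phi(s2) for w2, s2 in tr)
        for w, s in tr if phi(s)
    )

# A two-branch process: P -a-> R, P -b-> Q, with phi holding exactly in R.
lts = {'P': [('a', 'R'), ('b', 'Q')]}
phi = lambda s: s == 'R'
invisible = lambda w: [x for x in w if x not in ('a', 'b')]  # sees neither a nor b
identity  = lambda w: list(w)                                # sees everything
print(process_opaque(lts, 'P', phi, invisible, 1))  # True
print(process_opaque(lts, 'P', phi, identity, 1))   # False
```

Under the invisible observer, the trace 𝑎 reaching the 𝜑-state is covered by the observationally
equal trace 𝑏 reaching a ¬𝜑-state; under the identity observer no such cover exists.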


3. Insertion functions
Timing attacks belong to powerful tools for attackers who can observe or interfere with systems
in real time. On the other side these techniques is useless for off-line systems and hence they
could be consider safe with respect to attackers who cannot observe (real) time behaviour. By
the presented formalism we have a way how to distinguish these two cases. First we define
untimed version of an observation function, i.e. a function which does not take into account time
information. From now on we will consider observation functions 𝒪 : 𝐴𝑐𝑡𝑡⋆ → 𝐴𝑐𝑡𝑡⋆ for which
there exists untimed variants 𝒪𝑡 . Function 𝒪𝑡 is untimed variant of 𝒪 iff 𝒪(𝑤) = 𝒪𝑡 (𝑤|𝐴𝑐𝑡 ),
i.e. untimed variant represents an observer who does not see elapsing of time since both traces,
with and without actions 𝑡, are seen equally. Now we can formally describe situation when a
process could be jeopardized by a timing attack i.e. is secure only if an observer cannot see
elapsing of time.

Definition 6 (Timing Attacks). We say that a process 𝑃 is prone to timing attacks with respect
to 𝜑 and 𝒪 iff 𝑃 ∉ 𝑃𝑂𝑝𝜑𝒪 but 𝑃 ∈ 𝑃𝑂𝑝𝜑𝒪𝑡.

Example 1. Let us assume an intruder who tries to learn whether a given process has reached a
state satisfying 𝜑, while (s)he cannot observe the actions 𝑎 and 𝑏 themselves but only the elapsing
of time. The process 𝑃 = 𝑎.𝑡.𝑅 + 𝑏.𝑡.𝑡.𝑄 is not process opaque for the static observation function
𝒪(𝑥) = 𝜖 for 𝑥 ≠ 𝑡, 𝒪(𝑡) = 𝑡, with 𝜑(𝑅), ¬𝜑(𝑄) holding, i.e. 𝑃 ∉ 𝑃𝑂𝑝𝜑𝒪. But if an observer
cannot see the elapsing of time, this process is opaque, i.e. 𝑃 ∈ 𝑃𝑂𝑝𝜑𝒪𝑡.

   From now on we will consider only processes which are prone to timing attacks (see Definition
6) and, moreover, we will assume that the predicate 𝜑 is decidable. There are basically three
ways to remove vulnerability to timing attacks, besides putting a system off-line: redesign
the system; add a monitor or supervisor which prevents dangerous behaviour that could
leak classified information (see, for example, [2]); or hide this leakage by inserting some
time delays between the system's actions (see [12, 10] for general insertion functions for
nondeterministic finite automata). We will now define and study the last possibility. First we
need some notation. For 𝑤, 𝑤′ ∈ 𝐴𝑐𝑡𝑡* and 𝑤 = 𝑥1.𝑥2 . . . 𝑥𝑛 we will write 𝑤 ≪𝑆 𝑤′ for 𝑆 ⊂ 𝐴𝑐𝑡𝑡
iff 𝑤′ = 𝑥1.𝑖1.𝑥2 . . . 𝑥𝑛.𝑖𝑛 where 𝑖𝑘 ∈ 𝑆* for every 𝑘, 1 ≤ 𝑘 ≤ 𝑛. In general, an insertion
function inserts additional actions between the original process's actions (given by a trace 𝑤) such
that for the resulting trace 𝑤′ we have 𝑤 ≪𝑆 𝑤′ and 𝑤′ is still a possible trace of the process.
In our case we consider insertion functions (called time insertion functions) which insert
only time actions, i.e. functions such that 𝑤 ≪{𝑡} 𝑤′. The result of an insertion function depends
on the previous behaviour of the process. We can define this dependency similarly to how it is
defined for observation functions.
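
   The relation 𝑤 ≪𝑆 𝑤′ can be decided by a simple backtracking scan. The following sketch is
our own encoding (traces as tuples of strings, 𝑆 = {'t'}); the function name is ours.

```python
def leq_ins(w, w2, S=frozenset({'t'})):
    """Decide w <<_S w2, i.e. w2 = x1.i1.x2.i2 ... xn.in with every ik in S*.
    Insertions are allowed only after actions of w, never before the first one."""
    if not w:
        return not w2
    if not w2 or w2[0] != w[0]:
        return False
    rest, rest2 = w[1:], w2[1:]
    k = 0
    while True:
        # try an inserted block i1 of length k (all elements of S), then recurse
        if leq_ins(rest, rest2[k:], S):
            return True
        if k < len(rest2) and rest2[k] in S:
            k += 1
        else:
            return False

print(leq_ins(('a', 'b'), ('a', 't', 'b', 't')))  # True:  i1 = t, i2 = t
print(leq_ins(('a', 'b'), ('t', 'a', 'b')))       # False: insertion before x1
```

The backtracking is needed because an original 𝑡 action and an inserted one are indistinguishable
in 𝑤′, so a greedy scan could consume the wrong block.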

Definition 7 (Time Insertion Function). Any function ℱ : 𝐴𝑐𝑡𝑡⋆ → 𝐴𝑐𝑡𝑡⋆ is a time insertion
function iff for every 𝑤 ∈ 𝐴𝑐𝑡𝑡* we have 𝑤 ≪{𝑡} ℱ(𝑤). It is called static / dynamic / orwellian /
m-orwellian (𝑚 ≥ 1) if the following conditions hold, respectively (below we assume 𝑤 = 𝑥1 . . . 𝑥𝑛):

    • static if there is a mapping 𝑓 : 𝐴𝑐𝑡𝑡 → {𝑡}* such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it holds that
      ℱ(𝑤) = 𝑥1.𝑓(𝑥1).𝑥2.𝑓(𝑥2) . . . 𝑥𝑛.𝑓(𝑥𝑛),
    • dynamic if there is a mapping 𝑓 : 𝐴𝑐𝑡𝑡⋆ → {𝑡}* such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it holds that
      ℱ(𝑤) = 𝑥1.𝑓(𝑥1).𝑥2.𝑓(𝑥1.𝑥2) . . . 𝑥𝑛.𝑓(𝑥1. . . . .𝑥𝑛),
    • orwellian if there is a mapping 𝑓 : 𝐴𝑐𝑡𝑡 × 𝐴𝑐𝑡𝑡⋆ → {𝑡}* such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it
      holds that ℱ(𝑤) = 𝑥1.𝑓(𝑥1, 𝑤).𝑥2.𝑓(𝑥2, 𝑤) . . . 𝑥𝑛.𝑓(𝑥𝑛, 𝑤),
    • m-orwellian if there is a mapping 𝑓 : 𝐴𝑐𝑡𝑡 × 𝐴𝑐𝑡𝑡⋆ → {𝑡}* such that for every 𝑤 ∈ 𝐴𝑐𝑡𝑡⋆ it
      holds that ℱ(𝑤) = 𝑥1.𝑓(𝑥1, 𝑤1).𝑥2.𝑓(𝑥2, 𝑤2) . . . 𝑥𝑛.𝑓(𝑥𝑛, 𝑤𝑛), where
      𝑤𝑖 = 𝑥𝑚𝑎𝑥{1,𝑖−𝑚+1}.𝑥𝑚𝑎𝑥{1,𝑖−𝑚+1}+1 . . . 𝑥𝑚𝑖𝑛{𝑛,𝑖+𝑚−1}.
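
   A static time insertion function in the sense of Definition 7 can be sketched as follows.
This is our own encoding, in which 𝑓 is given by the number of 𝑡 actions to append after each
action; applied to traces of the process 𝑃 = 𝑎.𝑡.𝑅 + 𝑏.𝑡.𝑡.𝑄 of Example 1, it equalizes the
observable timing of the two branches.

```python
def static_insertion(f):
    """Static time insertion function: F(w) = x1.f(x1).x2.f(x2) ... xn.f(xn),
    where f(x) is realised as a number of 't' actions inserted after x."""
    def F(w):
        out = []
        for x in w:
            out.append(x)
            out.extend(['t'] * f(x))   # the inserted block f(x) in {t}*
        return tuple(out)
    return F

# Insert one extra time unit after the action 'a' (cf. P = a.t.R + b.t.t.Q):
F = static_insertion(lambda x: 1 if x == 'a' else 0)
print(F(('a', 't')))        # ('a', 't', 't') -- now as slow as the b-branch
print(F(('b', 't', 't')))   # ('b', 't', 't') -- left unchanged
```

By construction 𝑤 ≪{𝑡} ℱ(𝑤) holds for every trace, since blocks of 𝑡 actions are only ever
appended after actions of 𝑤.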

    Note that, in contrast to general insertion functions (see [12, 10]), inserting time actions is much
simpler due to the transition rules 𝐴1, 𝐴2 and 𝑆. The purpose of a time insertion function is to
guarantee security with respect to process opacity. Let 𝑃 ∉ 𝑃𝑂𝑝𝜑𝒪 but 𝑃 ∈ 𝑃𝑂𝑝𝜑𝒪𝑡, i.e. the
process 𝑃 is prone to timing attacks with respect to 𝒪 and 𝜑. If 𝒪 and 𝜑 are clear from the
context we will omit them. Now we define what it means that a process can be immunized by a
time insertion function.

Definition 8. We say that a process 𝑃 can be immunized for process opacity with respect to a
predicate 𝜑 over processes and an observation function 𝒪 if for every 𝑃′ with 𝑃 →ʷ 𝑃′ such that
𝜑(𝑃′) holds and there does not exist 𝑃′′ such that 𝑃 →ʷ′ 𝑃′′ for some 𝑤′ with 𝒪(𝑤) = 𝒪(𝑤′) and
¬𝜑(𝑃′′), there exists 𝑤𝑡, 𝑤 ≪{𝑡} 𝑤𝑡, such that 𝑃 →ʷᵗ 𝑃′, and there exist 𝑃′′′ and 𝑤′′ such that
𝑃 →ʷ′′ 𝑃′′′, ¬𝜑(𝑃′′′) holds and 𝒪(𝑤𝑡) = 𝒪(𝑤′′).

  In Fig. 1 process immunization is depicted.

                                                𝑃 ⇒ʷ 𝑃′, 𝜑(𝑃′)             𝒪(𝑤)
                                                                            ‖
                                                𝑃 ⇏ʷ′ 𝑃′′, ¬𝜑(𝑃′′)         𝒪(𝑤′)

                                                𝑃 ⇒ʷᵗ 𝑃′, 𝜑(𝑃′)            𝒪(𝑤𝑡)
                                                                            ‖
                                                𝑃 ⇒ʷ′′ 𝑃′′′, ¬𝜑(𝑃′′′)      𝒪(𝑤′′)


Figure 1: Process immunization
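
   Definition 8 suggests a bounded search for a suitable 𝑤𝑡: enumerate insertions of at most 𝑘
time actions into the leaking trace 𝑤 and look for one whose observation coincides with that of
some trace leading to a ¬𝜑 state. The sketch below is our own illustration (names and the
encoding are ours); by rules 𝐴1, 𝐴2 and 𝑆, an inserted 𝑡 is a possible behaviour of a sequential
process, so only the insertion counts matter here.

```python
from itertools import product

def insertions(w, k):
    """All w_t with w <<_{t} w_t and at most k inserted 't' actions."""
    for counts in product(range(k + 1), repeat=len(w)):
        if sum(counts) <= k:
            wt = []
            for x, c in zip(w, counts):
                wt.append(x)
                wt.extend(['t'] * c)   # block of c inserted time actions
            yield tuple(wt)

def immunize(w, covers, O, k=3):
    """Return some w_t whose observation equals that of a trace in `covers`
    (traces leading to a state where phi does not hold), or None."""
    for wt in insertions(w, k):
        if any(O(wt) == O(w2) for w2 in covers):
            return wt
    return None

O = lambda w: [x for x in w if x == 't']   # the observer sees only time
# Leaking trace a.t vs covering trace b.t.t (cf. Example 1):
print(immunize(('a', 't'), [('b', 't', 't')], O))  # ('a', 't', 't')
```

The returned trace 𝑎.𝑡.𝑡 is observed as 𝑡.𝑡, exactly like the covering trace 𝑏.𝑡.𝑡, so the
intruder can no longer tell the two branches apart.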


 Now we will study the existence of time insertion functions. First we need some notation.
We begin with observation functions which do not see 𝜏 actions.

Definition 9. We say that an observation function 𝒪 is not sensitive to the 𝜏 action iff 𝒪(𝑤) =
𝒪(𝑤|𝐴𝑡) for every 𝑤 ∈ 𝐴𝑐𝑡𝑡*. Otherwise we say that 𝒪 is sensitive to the 𝜏 action.

Example 2. The process 𝑃 = 𝑡.𝑅 + (𝑎.𝑄 | ā.𝑁𝑖𝑙) ∖ {𝑎} cannot be immunized if 𝒪 is sensitive to the 𝜏
action and 𝜑(𝑅), ¬𝜑(𝑄) hold. An immunization would have to put a time delay into the trace performed
by the right part of the process 𝑃, but this subprocess cannot perform the 𝑡 action before the
communication over channel 𝑎, due to the inference rule 𝑃𝑎.

Lemma 1. Let 𝑃 be prone to timing attacks with respect to 𝒪 and 𝜑, let 𝜏 ∉ 𝐿(𝑃), let 𝑃 be sequential
(i.e. not contain parallel composition) and let 𝒪 be static. Then 𝑃 can be immunized.

   Another problem which may prevent a process from being immunized is related to the observation
of time, namely when this observation is context sensitive, as stated by the following example.

Example 3. The process 𝑃 = ℎ.𝑅 + 𝑡.𝑄 cannot be immunized for a dynamic 𝒪 such that
𝒪(𝑤.ℎ.𝑡*.𝑤′) = 𝒪(𝑤.𝑤′), 𝒪(𝑤.𝑙.𝑤′) = 𝒪(𝑤).𝑙.𝒪(𝑤′), and, if 𝑤 does not end with the action ℎ,
𝒪(𝑤.𝑡.𝑤′) = 𝒪(𝑤).𝑡.𝒪(𝑤′), where 𝜑(𝑅) and ¬𝜑(𝑄) hold.

  Now we define time contextuality formally.

Definition 10. We say that an observation function 𝒪 is time non-contextual if 𝒪𝑡(𝑤) = 𝒪𝑡(𝑤′)
for every 𝑤, 𝑤′ such that 𝑤 ≪{𝑡} 𝑤′.

Proposition 1. Let a process 𝑃 be prone to timing attacks with respect to 𝜑 and a time non-contextual
observation function 𝒪 which does not see 𝜏. Then 𝑃 can be immunized for opacity with respect
to timing attacks.

Proof 1. The main idea. Let 𝑃 ∉ 𝑃𝑂𝑝𝜑𝒪 but 𝑃 ∈ 𝑃𝑂𝑝𝜑𝒪𝑡. This means that there exist a
sequence 𝑤 and 𝑃′ such that 𝑃 →ʷ 𝑃′, 𝜑(𝑃′) holds and there does not exist 𝑃′′ such that 𝑃 →ʷ′ 𝑃′′
for some 𝑤′ such that 𝒪(𝑤) = 𝒪(𝑤′) and 𝜑(𝑃′′) does not hold.
   Suppose that 𝑃 cannot be immunized, i.e. for every 𝑤𝑡, 𝑤 ≪{𝑡} 𝑤𝑡, if 𝑃 →ʷᵗ 𝑃′′′ and 𝜑(𝑃′′′) holds,
then there does not exist 𝑃′′′′ such that 𝑃 →ʷ′′ 𝑃′′′′ for some 𝑤′′ such that 𝒪(𝑤𝑡) = 𝒪(𝑤′′) and
¬𝜑(𝑃′′′′). But due to our assumption we have 𝒪𝑡(𝑤𝑡) = 𝒪𝑡(𝑤), and hence this is in contradiction
with 𝑃 being prone to timing attacks.

Corollary 1. Let 𝒪 be a static observation function such that 𝒪(𝜏) = 𝜖 and 𝒪(𝑡) = 𝑡. Then a
process 𝑃 which is prone to timing attacks with respect to 𝜑 and the observation function 𝒪 can
be immunized for process opacity with respect to timing attacks.

   Under some special conditions time insertion functions can be computed effectively, as stated
by the following proposition.

Proposition 2. Let a process 𝑃 be prone to timing attacks with respect to 𝜑 and a time non-contextual
observation function 𝒪 which does not see 𝜏. Then 𝑃 can be immunized for opacity with respect
to timing attacks by an m-orwellian insertion function, moreover one which can be emulated
by a finite-state process.

Proof 2. Sketch. The proof follows the ideas of Proposition 3, Theorem 4.10 and Lemma
4.5 in [13], where insertion functions are modeled by processes run in parallel with 𝑃.

  Now we define what it means that a predicate is time sensitive.

Definition 11. We say that a predicate 𝜑 is time sensitive iff whenever 𝜑(𝑃) holds for 𝑃 then there
exists 𝑛, 𝑛 > 0, such that 𝑃 →ᵗⁿ 𝑃′ and 𝜑(𝑃′) does not hold.

Proposition 3. Let a process 𝑃 be prone to timing attacks with respect to a time sensitive predicate
¬𝜑 and a time non-contextual observation function 𝒪 which does not see 𝜏. Then 𝑃 can be immu-
nized for opacity with respect to timing attacks, 𝒪 and 𝜑.

Proof 3. Sketch. By inserting some time actions after performing a sequence we can always reach
a state for which 𝜑 holds, and hence 𝑃 becomes safe with respect to 𝑃𝑂𝑝¬𝜑𝒪.

Corollary 2. Let a process 𝑃 be prone to timing attacks with respect to 𝜑 and a time non-contextual
observation function 𝒪 which does not see 𝜏. If ¬𝜑 is not a time sensitive predicate, then 𝑃 can
be immunized for opacity with respect to timing attacks, 𝒪 and 𝜑.

   In general, we cannot decide whether a process can be immunized, as stated by the following
proposition. Fortunately, immunizability is decidable for the most important cases, namely static
and m-orwellian observation functions.
Proposition 4. Immunizability is undecidable, i.e. it cannot be decided whether 𝑃 can be immu-
nized for opacity with respect to timing attacks.

Proof 4. Sketch. Let 𝑇𝑖 represent the i-th Turing machine under some numbering of all Turing
machines. We start with a generalization of the process from Example 3. Let 𝑃 = ℎ.𝑙.𝑅 + Σ𝑖∈ℕ 𝑡ⁱ.𝑙.𝑄.
Let 𝒪(𝑤.ℎ.𝑡ⁱ.𝑤′) = 𝒪(𝑤.𝑤′) whenever 𝑇𝑖 halts on the empty tape, and 𝒪(𝑤.ℎ.𝑡ⁱ.𝑤′) =
𝒪(𝑤).𝑡ⁱ.𝒪(𝑤′) otherwise. It is easy to check that immunizability of 𝑃 is undecidable.
Proposition 5. Immunizability is decidable for static and m-orwellian observation functions 𝒪.

Proof 5. According to Proposition 1 it is enough to show that the observation function 𝒪 is time
non-contextual and does not see 𝜏. Clearly, both of these properties are decidable for static and
m-orwellian observation functions.


4. Conclusions
We have investigated time insertion functions for timed process algebra which enforce the
security with respect to timing attack formalized process opacity. Time insertion functions
add some delays to system’s behaviour to prevent a timing attack. We study an existence of
an insertion function for a given process, given observational function and a predicate over
processes. In future work we plan to investigate minimal insertion functions, i.e. such functions
which add as little as possible time delays to guarantee process’s security with respect to timing
attacks. The presented approach allows us to exploit also process algebras enriched by operators
expressing other "parameters" (space, distribution, networking architecture, power consumption
and so on). Hence we could obtain security properties which have not only theoretical but
also practical value. Moreover, we can use similar techniques as in [21] to minimize time,
as well as other resources, added to process’s behaviour. Moreover, we plan to model both
observation functions as well as predicates over processes by processes themselves, to obtain
some complexity results as it was done in [13] for trace based variant of opacity.


Acknowledgement
This work was supported by the Slovak Research and Development Agency under the Contract
no. APVV-19-0220 (ORBIS) and by the Slovak VEGA agency under Contract no. 1/0778/18
(KATO).
References
 [1] P. Ramadge, W. Wonham, The control of discrete event systems, Proceedings of the IEEE
     77 (1989) 81–98. doi:10.1109/5.21072.
 [2] D. P. Gruska, M. C. Ruiz, Opacity-enforcing for process algebras, in: B. Schlingloff, S. Akili
     (Eds.), Proceedings of the 27th International Workshop on Concurrency, Specification and
     Programming, Berlin, Germany, September 24-26, 2018, volume 2240 of CEUR Workshop
     Proceedings, CEUR-WS.org, 2018. URL: http://ceur-ws.org/Vol-2240/paper1.pdf.
 [3] P. C. Kocher, Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other
     systems, in: N. Koblitz (Ed.), Advances in Cryptology — CRYPTO ’96, Springer Berlin
     Heidelberg, Berlin, Heidelberg, 1996, pp. 104–113.
 [4] J.-F. Dhem, F. Koeune, P.-A. Leroux, P. Mestré, J.-J. Quisquater, J.-L. Willems, A practical
     implementation of the timing attack, volume 1820, 1998, pp. 167–182. doi:10.1007/
     10721064_15.
 [5] H. Handschuh, H. M. Heys, A timing attack on RC5, in: Proceedings of the Selected Areas
     in Cryptography, SAC ’98, Springer-Verlag, Berlin, Heidelberg, 1998, p. 306–318.
 [6] A. Hevia, M. Kiwi, Strength of two data encryption standard implementations under
     timing attacks, ACM Trans. Inf. Syst. Secur. 2 (1999) 416–437. URL: https://doi.org/10.1145/
     330382.330390. doi:10.1145/330382.330390.
 [7] F. Koeune, J.-J. Quisquater, A timing attack against Rijndael, Technical Report, 1999.
 [8] D. X. Song, D. Wagner, X. Tian, Timing analysis of keystrokes and timing attacks on SSH,
     in: Proceedings of the 10th Conference on USENIX Security Symposium - Volume 10,
     SSYM’01, USENIX Association, USA, 2001.
 [9] R. Jacob, J.-J. Lesage, J.-M. Faure, Overview of discrete event systems opacity: Models,
     validation, and quantification, Annual Reviews in Control 41 (2016) 135–146. URL: https:
     //www.sciencedirect.com/science/article/pii/S1367578816300189. doi:https://doi.org/
     10.1016/j.arcontrol.2016.04.015.
[10] C. Keroglou, L. Ricker, S. Lafortune, Insertion functions with memory for opacity enforce-
     ment, IFAC-PapersOnLine 51 (2018) 394–399. URL: https://www.sciencedirect.com/science/
     article/pii/S240589631830661X. doi:https://doi.org/10.1016/j.ifacol.2018.06.
     331, 14th IFAC Workshop on Discrete Event Systems WODES 2018.
[11] C. Keroglou, S. Lafortune, Embedded insertion functions for opacity enforcement,
     IEEE Transactions on Automatic Control 66 (2021) 4184–4191. doi:10.1109/TAC.2020.
     3037891.
[12] Y.-C. Wu, S. Lafortune, Enforcement of opacity properties using insertion functions, in:
     2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 2012, pp. 6722–6728.
     doi:10.1109/CDC.2012.6426760.
[13] D. P. Gruska, Security and time insertion, Proceedings of the 23rd Pan-Hellenic Conference
     on Informatics (2019).
[14] D. P. Gruska, Time insertion functions, in: Proceedings CSMML 2021, to appear, 2021.
[15] D. P. Gruska, Process opacity for timed process algebra, in: A. Voronkov, I. B. Virbit-
     skaite (Eds.), Perspectives of System Informatics - 9th International Ershov Informatics
     Conference, PSI 2014, St. Petersburg, Russia, June 24-27, 2014. Revised Selected Papers,
     volume 8974 of Lecture Notes in Computer Science, Springer, 2014, pp. 151–160. URL: https:
     //doi.org/10.1007/978-3-662-46823-4_13. doi:10.1007/978-3-662-46823-4\_13.
[16] R. Milner, Communication and Concurrency, Prentice-Hall, Inc., USA, 1989.
[17] R. Focardi, R. Gorrieri, F. Martinelli, Information flow analysis in a discrete-time process
     algebra, in: Proceedings 13th IEEE Computer Security Foundations Workshop. CSFW-13,
     2000, pp. 170–184. doi:10.1109/CSFW.2000.856935.
[18] R. Gorrieri, F. Martinelli, A simple framework for real-time cryptographic protocol
     analysis with compositional proof rules, Science of Computer Programming 50 (2004)
     23–49. doi:10.1016/j.scico.2004.01.001.
[19] J. W. Bryans, M. Koutny, P. Y. Ryan, Modelling opacity using Petri nets, Electronic Notes
     in Theoretical Computer Science 121 (2005) 101–115. URL: https://www.sciencedirect.
     com/science/article/pii/S1571066105000277. doi:https://doi.org/10.1016/j.entcs.
     2004.10.010, proceedings of the 2nd International Workshop on Security Issues with
     Petri Nets and other Computational Models (WISP 2004).
[20] J. Bryans, M. Koutny, L. Mazare, P. Ryan, Opacity generalised to transition systems,
     volume 7, 2008, pp. 421–435. doi:10.1007/11679219_7.
[21] Y. Ji, X. Yin, S. Lafortune,             Enforcing opacity by insertion functions un-
     der multiple energy constraints,           Automatica 108 (2019) 108476. URL: https://
     www.sciencedirect.com/science/article/pii/S0005109819303243. doi:https://doi.org/
     10.1016/j.automatica.2019.06.028.