<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Proceedings of the Formal Methods for Security Workshop</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Proceedings Editors:</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Riadh Robbana, Professor in LIP2 &amp; INSAT, University of Carthage</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Tunis</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Véronique Cortier, CNRS Researcher at LORIA, Lorraine University</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2014</year>
      </pub-date>
      <fpage>37</fpage>
      <lpage>78</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-3">
      <title>Formal Methods for Security Workshop</title>
      <p>
This volume contains the proceedings of the 5th Workshop on Formal Methods for Security
(FMS), held in Tunis, Tunisia, on June 23, 2014, as a satellite event of Petri Nets 2014.
FMS is dedicated to the advancement of the theory and practice of formal methods for security.
We received 9 submissions, which went through a rigorous review process: each
submission was reviewed by at least 3 Program Committee members. The Program Committee
selected 5 papers for publication. In addition to these five papers, two more
papers were selected for informal presentation at the workshop.</p>
      <p>We also had the pleasure of welcoming an invited talk by Mark Ryan, “Du-Vote: Remote
Electronic Voting with Untrusted Computers” (joint work with Gurchetan Grewal, Michael
Clarkson, and Liqun Chen).</p>
      <sec id="sec-3-1">
        <title>We hope you will enjoy the FMS 2014 edition!</title>
      </sec>
      <sec id="sec-3-2">
        <title>Véronique Cortier and Riadh Robbana</title>
        <p>Program Committee</p>
        <sec id="sec-3-2-1">
          <title>Members</title>
          <p>Myrto Arapinis (University of Birmingham)
Kamel Barkaoui (Cedric-Cnam)
Narjes Ben Rajeb (LIP2 - INSAT - University of Carthage)
Vincent Cheval (School of Computer Science, University of Birmingham)
Stephen Chong (Harvard University)
Veronique Cortier (CNRS, Loria)
Stephanie Delaune (CNRS, LSV)
Susanna Donatelli (Dipartimento di Informatica, Università di Torino)
Sibylle Froeschle (University of Oldenburg)
Pierre-Cyrille Heam (LSV, ENS Cachan, INRIA-CNRS)
Béchir Ktari (Laval University)
Yassine Lakhnech (VERIMAG)
Mahjoub Langar (LIP2 - ENIT - Elmanar University)
Mohamed Mejri (Laval University)
Riadh Robbana (LIP2 - INSAT - University of Carthage)
Hassen Saidi (SRI International)
Lilia Sfaxi (INSAT - University of Carthage)
Jacques Traore (Orange Labs)</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>Additional Reviewers</title>
          <p>Berrima, Mouhebeddine (University of Sousse)
Yu, Jiangshan (University of Birmingham)</p>
          <p>A Timestamping Scheme with Eternal Security in the Bounded Storage Model (p. 68)
Assia Ben Shil and Kaouther Blibech Sinaoui</p>
          <p>Du-Vote: Remote Electronic Voting with Untrusted Computers (Invited Talk)
Mark Ryan, Gurchetan Grewal, Michael Clarkson, and Liqun Chen</p>
          <p>University of Birmingham, UK</p>
          <p>m.d.ryan@cs.bham.ac.uk
Abstract. Du-Vote is a new remote electronic voting protocol that
eliminates the often-required assumption that voters trust general-purpose
computers. Trust is distributed in Du-Vote between a simple hardware
token issued to the voter, the voter’s computer, and a server run by
election authorities. Verifiability is guaranteed with statistically high
probability even if all these machines are controlled by the adversary, and
privacy is guaranteed as long as at least either the voter’s computer or
the server is not controlled by the adversary. The design of the Du-Vote
protocol is presented in this paper. A new non-interactive zero-knowledge
proof is employed to verify the server’s computations. The security of the
protocol is analyzed to determine the extent to which, when components
of the system are malicious, privacy and verifiability are maintained.
Extraction of Insider Attack Scenarios from a</p>
          <p>Formal Information System Modeling
Amira Radhouani (1,2,3,5), Akram Idani (1,2), Yves Ledru (1,2), Narjes Ben Rajeb (3,4)
1 Univ. of Grenoble Alpes, LIG, F-38000 Grenoble, France
2 CNRS, LIG, F-38000 Grenoble, France
3 LIP2-LR99ES18, 2092, Tunis, Tunisia
4 INSAT - Carthage University, Tunisia
5 FST - Tunis-El Manar University, Tunisia
Abstract. The early detection of potential threats during the modelling
phase of a Secure Information System is required because it favours the
design of a robust access control policy and the prevention of malicious
behaviours during the system execution. This paper deals with internal
attacks which can be made by people inside the organization. Such
attacks are difficult to find because insiders have authorized system access
and may also be familiar with system policies and procedures. We are
interested in finding attacks which conform to the access control policy, but
lead to unwanted states. These attacks are favoured by policies involving
authorization constraints, which grant or deny access depending on the
evolution of the functional Information System state. In this context, we
propose to model functional requirements and their Role Based Access
Control (RBAC) policies using B machines and then to formally reason
on both models. In order to extract insider attack scenarios from these
B specifications, our approach first investigates symbolic behaviours. The
use of a model-checking tool makes it possible to exhibit, from a symbolic behaviour,
an observable concrete sequence of operations that can be followed by
an attacker. In this paper, we show how this combination of symbolic
execution and model-checking allows us to find such insider attack
scenarios.
Keywords: Information System, B-Method, RBAC, attack scenario, Model Checking, Symbolic Search.</p>
          <p>1 Introduction</p>
          <p>
Developing secure Information Systems remains an active research area
addressing a wide range of challenges, mostly concerned with how to prevent external
attacks such as intrusion, code injection, denial of service, identity fraud, etc.</p>
          <p>Insider attacks are less studied even though they may cause much more damage,
because an insider is above all a trusted entity: it is intrinsically given the means
to violate a security policy, either by using legitimate access or by obtaining
unauthorized access. This paper deals especially with Role Based Access
Control (RBAC) concerns, with the aim of exhibiting potential insider threats from a
formal modelling of secure Information Systems. We are interested in finding
attacks which conform to the access control policy, but lead to unwanted states.</p>
          <p>These attacks are favoured by policies involving authorization constraints, which
grant or deny access depending on the evolution of the functional Information
System state. This reveals, on the one hand, the need to link the security model
to the functional model of the information system and, on the other hand, the need
to build tools taking into account the dynamic evolution of the IS state.</p>
          <p>
            Tools such as SecureMova [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] and USE [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ] are dedicated to validating security
policies related to a functional model. However, these tools do not take into account the
dynamic evolution of the functional state. In [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ], we discussed the shortcomings of
existing approaches in this context, and showed the advantages of using a formal
specification assisted by animation tools. This paper goes a step further than our
previous work by taking advantage of model-checking and proof tools in order
to automatically find insider attack scenarios composed of a sequence of actions
modifying the functional state and breaking the authorization constraint.
          </p>
          <p>This paper is organized as follows: Section 2 gives an overview of our approach
and its underlying methodology. In Section 3 we present a simple example that
illustrates our contribution. Section 4 defines semantics and technical aspects.
In Section 5 we propose a symbolic search that automates the generation of attack
scenarios, and we discuss the results of its application on the given example. Finally,
we draw conclusions and perspectives.
</p>
          <p>2 Overall Approach</p>
          <p>Bridging the gap between formal techniques (e.g. Z, B, VDM, . . .) and graphical
languages such as UML has been a challenge for many years. On the one
hand, formal techniques allow automatic reasoning assisted by proof and
model-checking tools; on the other hand, graphical techniques allow visualization
and a better understanding of the system structure. These complementary aspects
are useful to ensure a software development process based on notations with
precise syntax and semantics that allow a system to be structured graphically.</p>
          <p>
            Most existing research works [
            <xref ref-type="bibr" rid="ref1 ref5 ref7 ref13">1, 5, 7, 13</xref>
            ] in this context have focused only
on the modelling and validation of functional aspects, which are initially described
by various kinds of UML diagrams (class, state/transition, sequence, . . .) and
then translated into a formal specification. These works have shown the
value of linking formal and graphical paradigms, and also the feasibility of such
translations.
          </p>
          <p>
            In our work, we adopt a similar approach in order to graphically model
and formally reason on both functional and security models. We developed the
B4MSecure6 platform [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ] in order to translate a UML class diagram associated
with a SecureUML model into B specifications. The resulting B specifications,
illustrated in figure 1, follow the separation-of-concerns principle, making it possible
to validate both models separately and then validate their interactions.
          </p>
          <p>The functional B model on the left-hand side of figure 1 is derived from a
conceptual class diagram. It integrates all automatically generated basic operations
(constructors, destructors, setters, getters, . . .) as well as additional user-defined
operations which are integrated into the graphical model and specified using
the B syntax. This functional specification can be further improved by adding
invariants and carrying out correctness proofs with the help of the AtelierB prover.
(Footnote 6: http://b4msecure.forge.imag.fr/)</p>
          <p>The security model, on the right-hand side of figure 1, is dedicated to controlling
access to functional operations with respect to the access control rules defined
in the SecureUML model. In our approach, we do not deal with administration
operations because we make the simplifying assumption that access control rules
do not evolve during system execution. The security formal model makes it possible
to validate RBAC well-formedness rules, such as the absence of role hierarchy cycles, and
separation of duty (SoD) properties, such as the assignment of conflicting roles to
users.</p>
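          <p>These well-formedness checks are easy to picture concretely. The sketch below is our own illustration (hypothetical role and user data, not the B4MSecure implementation): it detects a cycle in a role hierarchy and reports users assigned two conflicting roles.</p>
```python
# Sketch of two RBAC well-formedness checks (illustrative only; the paper
# performs these checks on the formal B model, not with this code).

def has_hierarchy_cycle(inherits):
    """Detect a cycle in a role hierarchy given as {role: [parent roles]}."""
    visited, on_path = set(), set()

    def visit(role):
        if role in on_path:
            return True          # back edge: cycle found
        if role in visited:
            return False
        visited.add(role)
        on_path.add(role)
        if any(visit(parent) for parent in inherits.get(role, [])):
            return True
        on_path.remove(role)
        return False

    return any(visit(r) for r in inherits)

def sod_violations(user_roles, conflicting_pairs):
    """Return users assigned two roles declared as conflicting (SSD)."""
    return {u for u, roles in user_roles.items()
            if any(a in roles and b in roles for a, b in conflicting_pairs)}

# Hypothetical policy: SystemAdministrator conflicts with SystemUser.
roles = {"SystemUser": ["SystemAdministrator"]}        # acyclic hierarchy
users = {"John": {"SystemAdministrator", "SystemUser"},
         "Alice": {"SystemUser"}}
print(has_hierarchy_cycle(roles))                      # False
print(sod_violations(users, [("SystemAdministrator", "SystemUser")]))
```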
          <p>This paper assumes that the validation of both models in isolation has been done:
operations of the functional model do not violate invariant properties, and the security
model is robust. Such validation activities are widely discussed in the literature
[12]. However, currently available validation approaches do not sufficiently take
into account interactions between both models, which result from the fact that
constraints expressed in the security model also refer to information from the
functional model. In fact, security policies often depend on dynamic properties based
on the functional system state. For example, a bank customer may transfer funds
from his account, but if the amount is greater than some limit the transfer must
be approved by his account manager. Access control decisions then depend on
the satisfaction of authorization constraints in the current system state.
Dynamic evolution of the functional state impacts these constraints and may lead
to a security vulnerability if it opens an unexpected access. In this paper we
use validation tools (a prover and a model-checker) in order to search for malicious
sequences of operations by analysing authorization constraints.
</p>
          <p>3 A Simple Example</p>
          <p>
            In this section we use a running example taken from [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ], which deals with a
SecureUML model associated with a functional UML class diagram.
          </p>
          <p>3.1 Functional Model</p>
          <p>The functional UML class diagram (presented in figure 2) describes a meeting
scheduler dedicated to managing data about two entities: Persons and Meetings.</p>
          <p>A meeting has one and only one owner (association MeetingOwner), a list of
participants (association MeetingParticipants), a duration, and a starting date.</p>
          <p>
            A person can be the owner of several meetings and may participate in several
meetings. The operations notify and cancel are user-defined; they respectively allow
messages to be sent to participants and a meeting to be deleted after notifying its
participants by e-mail. Constructors, setters and getters are implicitly defined
for both classes and both associations.
This example is validated in [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] based on a set of static queries
that inspect a given system state in order to extract useful information such as
“which user can perform an action on a concrete resource in a given state”.
          </p>
          <p>The authorization constraint associated with OwnerMeetingPerm requires
information from the functional model because it deals with the MeetingOwner
association. In the rest of this article, we consider three users, John, Alice and
Bob, whose user assignments are as defined in figure 4, and a given initial
state in which Alice is the owner of meeting m1 and Bob is a participant of m1. In such
a state, the above static query establishes that only Alice is allowed to modify
or delete m1, because she is the owner of m1.</p>
          <p>
            In [
            <xref ref-type="bibr" rid="ref10 ref9">10, 9</xref>
            ], a dynamic analysis approach based on the animation of a formal
specification showed that validation should not only be based on a given static state,
but should search for sequences of actions modifying this state and breaking the
authorization constraint. For example, starting from the above state, a static
query would only report that John, and also Bob, cannot modify m1 because neither
of them satisfies the authorization constraint. A dynamic analysis would ask whether
there exists a sequence of operations enabled by John, or Bob, that allows them
to modify m1. This paper contributes towards automatically finding these
malicious sequences. To perform these analyses, we applied the B4MSecure tool to
the UML and SecureUML diagrams and generated a B specification comprising
946 lines. This tool automatically generates a specification for all basic
functional operations, which is enriched manually with some user-defined operations
(i.e. cancel, notify).
          </p>
          <p>4 Proposed Approach</p>
          <p>4.1 Trace Semantics for B Specifications</p>
          <p>
In order to find malicious behaviours of an operational secure IS model,
we rely on the set of finite observable traces of our B specifications. Indeed, B
specifications can be approached by means of a trace semantics composed of an
initialization substitution init, a set of operations O, and a set of state variables
V. We write val for a possible state predicate allowed by the invariant and op for an
operation from O. A functional behaviour is an observable sequence Q</p>
          <p>Q ≙ init ; op1 ; op2 ; . . . ; opm
such that ∀ i.(i ∈ 1..m ⇒ opi ∈ O) and there exists a sequence S of state
predicates which does not violate invariant properties:</p>
          <p>S ≙ val0 ; val1 ; . . . ; valm
in which val0 is an initial state, opi is enabled from state vali−1, and state
vali is reached by opi, starting from state vali−1.</p>
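          <p>This trace semantics can be mirrored directly in code. The following sketch is our own illustration (states as dictionaries, operations as precondition/action pairs; the names are hypothetical): it replays a sequence Q from an initial valuation and checks that every reached state satisfies the invariant, producing the sequence S.</p>
```python
# Replaying a functional behaviour Q = init ; op1 ; ... ; opm and checking that
# every reached state satisfies the invariant (our illustrative sketch: states
# are dicts, operations are (precondition, action) pairs of Python functions).
from copy import deepcopy

def replay(init, operations, invariant):
    """Return the sequence S = val0 ; val1 ; ... ; valm, or raise on failure."""
    state = init()
    trace = [deepcopy(state)]
    for pre, action in operations:
        if not pre(state):                  # op_i must be enabled in val_{i-1}
            raise RuntimeError("operation not enabled")
        action(state)                       # val_i is reached by op_i
        if not invariant(state):
            raise RuntimeError("invariant violated")
        trace.append(deepcopy(state))
    return trace

# Hypothetical mini-model: a set of persons; the invariant keeps Alice registered.
init = lambda: {"person": {"Alice"}}
add_bob = (lambda s: "Bob" not in s["person"],          # precondition
           lambda s: s["person"].add("Bob"))            # action
trace = replay(init, [add_bob], invariant=lambda s: "Alice" in s["person"])
print(len(trace))   # 2
```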
          <p>The security model filters functional behaviours by analysing access control
premises, which are triplets (u, R, c) where u is a user, R is a set of possible
roles assigned to u, and c is an authorization constraint. An observable secure
behaviour is a sequence Q where, for every step i, premise (ui, Ri, ci) is valid
(expressed as (ui, Ri, ci) |= true). This means that the roles Ri activated by user ui
grant him the right to run operation opi, and if a constraint ci exists, then
it must be satisfied. The following premise sequence P must be valid for Q:</p>
          <p>P ≙ (u1, R1, c1) ; (u2, R2, c2) ; . . . ; (um, Rm, cm)</p>
          <p>4.2 Tools to Exhibit Behaviours from B Specifications</p>
          <p>
Model-checking and symbolic proof techniques are of interest in order to exhibit
a relevant behaviour from an operational B specification. Proof techniques deal
with infinite systems and can prove constraint satisfiability, or establish that
some operation can be enabled from an abstract state predicate. Model-checking
is based on model exploration of finite systems, and can be used to find a sequence
of actions leading to a given state or property. In our approach, we combine both
techniques in order to overcome their shortcomings: complexity of proofs for the
first one, and state explosion for the second one. In this sub-section, we illustrate
both tools.
Model checking and animation (the ProB tool). ProB [11] is an animator and
model-checker for B specifications that explores the concrete state space of
the specification and generates accessibility graphs. Every predicate vali
(where i ∈ {0, 1, . . . , m}) of sequence S is then a valuation of the variables from
V. For example, starting from an initial state val0 where:
V = {person, meeting, meetingOwner, meetingParticipants} and such that:
val0 ≙ person = ∅
∧ meeting = ∅
∧ meetingOwner = ∅
∧ meetingParticipant = ∅
and having O = {personNew, meetingNew, meetingAddParticipants, . . .}, the
scenario of table 1 is successfully animated using the ProB tool. Column “reached
states” gives only the B variables modified since the previous step.
(Table 1: the four-step scenario; columns: step, sequence Q, reached states.)</p>
          <p>In step 1, the tool animates operation personNew, which modifies variable
person (initially equal to ∅); this action was performed by user John
using role SystemAdministrator, without needing an authorization constraint. In
step 4, the tool adds participant Bob to the meeting m1 by animating operation
meetingAddParticipants, after checking that the authorization constraint holds
for Alice using role SystemUser. Indeed, Alice is the owner of m1.</p>
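          <p>The animation performed on this scenario can be approximated by a small interpreter. The sketch below is our own illustration (not ProB itself; operation, user and role names follow the example): it replays the steps and checks the access control premise, when present, before applying each operation.</p>
```python
# A toy replay of the table 1 scenario: each step is (user, role, operation,
# constraint); the constraint, when present, is checked against the current
# state before the operation is applied (illustrative sketch, not ProB).

state = {"person": set(), "meeting": set(),
         "meetingOwner": {}, "meetingParticipant": {}}

def person_new(s, u):      s["person"].add(u)
def meeting_new(s, m, p):  s["meeting"].add(m); s["meetingOwner"][m] = p

steps = [
    ("John",  "SystemAdministrator", lambda s: person_new(s, "Alice"), None),
    ("John",  "SystemAdministrator", lambda s: person_new(s, "Bob"),   None),
    ("Alice", "SystemUser", lambda s: meeting_new(s, "m1", "Alice"),   None),
    # meetingAddParticipants: only the owner of m1 may add participants.
    ("Alice", "SystemUser",
     lambda s: s["meetingParticipant"].setdefault("m1", set()).add("Bob"),
     lambda s, u: s["meetingOwner"].get("m1") == u),
]

for user, role, op, constraint in steps:
    assert constraint is None or constraint(state, user), "access denied"
    op(state)

print(state["meetingOwner"])        # {'m1': 'Alice'}
print(state["meetingParticipant"])  # {'m1': {'Bob'}}
```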
          <p>
            Symbolic proof (the GeneSyst tool). ProB is useful for animating scenarios
identified during requirements analysis, or for exhaustively exploring a finite subset of
the state space. As we are interested in finding malicious scenarios that exhibit a
potential internal attack, the ProB technique is useful only if it explores
the right subset of the state space in the right direction, which is not obvious for
infinite systems. Symbolic proof techniques, such as that of GeneSyst [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], are
more interesting because they can produce symbolic transition systems that
represent a potentially infinite set of values. Such tools reason about the reachability
properties of a symbolic state F by some operation op from a symbolic state E.
          </p>
          <p>
            In [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], three reachability properties are defined in terms of the following proof
obligations, where E and F are two disjoint state predicates:
(1) possibly reached: E ∧ Pre(op) ⇒ ¬[Action(op)]¬F
(2) not reachable: E ∧ Pre(op) ⇒ [Action(op)]¬F
(3) always reached: E ∧ Pre(op) ⇒ [Action(op)]F
          </p>
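          <p>Over a finite state space these three proof obligations read as quantified checks: “possibly reached” is existential over the outcomes of op, while the other two are universal. The following sketch is our own finite approximation (GeneSyst discharges these as proof obligations over possibly infinite state spaces) for a nondeterministic operation:</p>
```python
# Finite-state reading of the three reachability properties (sketch).
# op(state) returns the set of possible successor states (nondeterminism).

def possibly_reached(states_E, op, in_F):
    """E ∧ Pre(op) ⇒ ¬[Action(op)]¬F : some outcome of op lands in F."""
    return all(any(in_F(t) for t in op(s)) for s in states_E)

def not_reachable(states_E, op, in_F):
    """E ∧ Pre(op) ⇒ [Action(op)]¬F : no outcome of op lands in F."""
    return all(all(not in_F(t) for t in op(s)) for s in states_E)

def always_reached(states_E, op, in_F):
    """E ∧ Pre(op) ⇒ [Action(op)]F : every outcome of op lands in F."""
    return all(all(in_F(t) for t in op(s)) for s in states_E)

# Hypothetical operation: from n, nondeterministically go to n+1 or n+2.
op = lambda n: {n + 1, n + 2}
E = [0]                       # states satisfying E ∧ Pre(op)
in_F = lambda n: n == 2       # F

print(possibly_reached(E, op, in_F))  # True  (0 -> 2 is possible)
print(always_reached(E, op, in_F))    # False (0 -> 1 escapes F)
```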
          <p>In the generalized substitution theory, formula [S]R means that substitution
S always establishes predicate R, and ¬[S]¬R means that substitution S may
establish predicate R. Hence, proof (1) means that state F can be reached by
actions of operation op, when operation precondition is true in state E. Proof (2)
means that F is never reached by actions of operation op from state E. Finally,
proof (3) means that F is always reached by op from E. Let us consider, for
example, the functional operation meetingNew:</p>
          <p>meetingNew(m, p) ≙
PRE m ∉ meeting ∧ p ∈ person THEN
meeting := meeting ∪ {m}
|| meetingOwner := meetingOwner ∪ {(m ↦ p)}
END</p>
          <p>This operation adds a new meeting m and links it to an owner p. If we define
states E and F such that:</p>
          <p>E ≙ meetingOwner[{m1}] = ∅</p>
          <p>F ≙ meetingOwner[{m1}] ≠ ∅
then the proof obligation produced by GeneSyst for property (1) is successfully
proved, showing that operation meetingNew, when enabled from a state where
m1 does not exist and at least one person exists in the system, may lead
to a state where m1 is created and has an owner.</p>
          <p>Our work focuses on proof (1), which states the reachability of a target
state from an initial one, as illustrated above, because it is sufficient to decide
whether an operation is potentially useful for a malicious behaviour. Proofs (2)
and (3) can be used to establish that a state can never be reached,
or is always reached, by an operation.</p>
          <p>4.3 Malicious Behaviour</p>
          <p>
Based on the security requirements, several operations are identified as
critical. For example, if the requirements identify the integrity of meeting
information as critical, then operations which perform unauthorized
modifications of meetings are considered critical.</p>
          <p>A malicious behaviour executed by a user u, with respect to authorization
constraints, is an observable secure behaviour Q with m steps such that:
– opm is a critical operation to which an authorization constraint cm is associated;
– user u is malicious and would like to run opm by misusing his roles Ru;
– val0 is an initial state where (u, Ru, cm) |= false;
– for every step i (i ∈ 1..m), premise (u, Ru, ci) |= true.</p>
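          <p>The conditions above translate into a simple check over a candidate trace. The sketch below is our own illustration using the John scenario of the running example (names from the paper; the checking code is ours): the critical premise is false in val0 but becomes true after the trace.</p>
```python
# Checking the malicious-behaviour conditions on John's trace (names from the
# paper's example; the checking code itself is our illustrative sketch).

def premise_set_start(state, user):
    # c_m: the authorization constraint of the critical operation
    # meetingSetStart -- only the owner of m1 may run it.
    return state["meetingOwner"].get("m1") == user

def apply_op(state, op):
    """Effect of the two functional operations used in the trace."""
    s = {"person": set(state["person"]),
         "meetingOwner": dict(state["meetingOwner"])}
    if op == "personNew":
        s["person"].add("John")
    elif op == "personAddMeetingOwner":
        s["meetingOwner"]["m1"] = "John"
    return s

val0 = {"person": {"Alice", "Bob"}, "meetingOwner": {"m1": "Alice"}}

# Condition (u, Ru, cm) |= false in val0: John may not run meetingSetStart.
assert not premise_set_start(val0, "John")

# After the trace, the critical operation becomes permitted: Q is malicious.
state = apply_op(apply_op(val0, "personNew"), "personAddMeetingOwner")
print(premise_set_start(state, "John"))   # True
```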
          <p>In other words, malicious user u is not initially allowed to execute a critical
operation, but he is able to run a sequence of operations leading to a state from
which he can execute this operation. In our investigation we suppose that user u
executes this malicious sequence without collusion. This problem will be tackled
in future work.</p>
          <p>Section 3.3 gave an example where neither Bob nor John is allowed to run
a modification operation, such as meetingSetStart which modifies attributes of
class Meeting, from the initial state, due to the authorization constraint. This
initial state is:
val0 ≙ person = {Alice, Bob}
∧ meeting = {m1}
∧ meetingOwner = {(Alice ↦ m1)}
∧ meetingParticipant = {(m1 ↦ Bob)}</p>
          <p>In the following, we denote by init0 the sequence of operations leading to
val0, such as the one presented in table 1. We used the model-checking facility of
ProB in order to exhaustively explore the state space and automatically find a
path starting from val0 and leading to a state where operation meetingSetStart
becomes permitted to John. We asked ProB to find a sequence where John
becomes the owner of m1:</p>
          <p>meetingOwner(m1) = John
After exploring more than 1000 states, ProB found a scenario in which John
executes sequentially the operations personNew, personAddMeetingOwner and
meetingSetStart. Indeed, this dynamic analysis showed that John, as a system
administrator, has full access to entity Person. This permission allows him to
create, modify, read and delete any instance of class Person. First, he creates
an instance John of class Person that corresponds to himself by running
operation personNew(John). Then he adds meeting m1 to the set of meetings owned
by John, by running operation personAddMeetingOwner(John, m1), which is a
basic modification operation of class Person. These two actions allowed him to
become the owner of m1, and he was then able to modify Alice’s meeting.</p>
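          <p>The search ProB performs here can be miniaturised as a breadth-first exploration. The sketch below is our own toy model (a state is just the set of registered persons and the owner of m1, not the full 946-line specification): it finds the same two-operation prefix that makes John the owner.</p>
```python
# A miniature of the model-checking search: breadth-first exploration of the
# reachable states until John satisfies the authorization constraint
# meetingOwner(m1) = John (illustrative sketch, not ProB).
from collections import deque

def successors(state):
    person, owner = state
    if "John" not in person:                    # personNew(John)
        yield "personNew", (person | {"John"}, owner)
    if "John" in person:                        # personAddMeetingOwner(John, m1)
        yield "personAddMeetingOwner", (person, "John")

def find_path(start, goal):
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if goal(state):
            return path
        for op, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [op]))
    return None

start = (frozenset({"Alice", "Bob"}), "Alice")  # (person, owner of m1)
path = find_path(start, lambda s: s[1] == "John")
print(path)   # ['personNew', 'personAddMeetingOwner']
```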
          <p>Like all model-checking techniques, when ProB exhaustively explores the state
space, it faces the combinatorial explosion problem, which depends on the number
of operations provided to the tool and on the state space size. In order to address
this problem, our approach proposes a symbolic search which finds a sequence
of potentially useful operations on which the model-checker should focus.
The proposed symbolic search is performed by an algorithm that looks for an
observable sequence Q ≙ init0 ; op1 ; . . . ; opm executed by a user u, where
(u, Ru, cm) is not valid for a critical operation opm in the initial state val0 but
becomes valid in state valm−1 from which opm can be enabled. It is a backward search
algorithm, starting from the goal state valm−1 from which the critical operation
opm can be enabled: valm−1 ≙ cm ∧ Pre(opm), and working backwards until the
initial state val0 is encountered. The algorithm ends when sequence Q is found
or when all operations have been examined without encountering the initial state. We
consider that val0 is a completely valuated state, such as the one where Alice is the
owner of m1 and Bob is a participant of m1. This prevents the initial state from
being included in both states valm−1 and valm−2, which would never verify the
condition of the while loop. Note that each operation occurs at most once in a
computed sequence, which ensures the termination of our algorithm.</p>
          <p>1. Q ≙ opm ;
2. valm−1 ≙ cm ∧ Pre(opm) ;
3. valm−2 ≙ ¬valm−1 ;
4. while val0 ⇏ valm−1 do
5.   choose any oi ∈ O where
6.     (u, Ru, ci) |= true ∧
7.     valm−2 ∧ Pre(oi) ⇒ ¬[Action(oi)]¬valm−1
8.   do
9.     Q ≙ oi ; Q ;
10.    valm−1 ≙ valm−2 ∧ Pre(oi) ;
11.    valm−2 ≙ valm−2 ∧ ¬Pre(oi) ;
12.  else
13.    raise exception: no sequence found
14.  enddo
15. endwhile
16. Q ≙ init ; Q ;</p>
          <p>We take advantage of abstraction and, step by step, we refine the valm−2 symbolic
state:
1. At the first step of the algorithm, the state space is represented by two
symbolic states: the first one, valm−1, includes all states where the authorization
constraint cm is true and which enable opm; the second one, valm−2,
is the negation of valm−1, which is thus ¬cm ∨ ¬Pre(opm). As they are two
disjoint state predicates, we conduct proof (1) in order to find an operation
oi that belongs to O, which possibly reaches the first state valm−1 from
the second one valm−2, and such that premise (u, Ru, ci) is valid. If oi does
not exist, then no sequence can be found for the expected attack, and we
can try proof (2) for each operation, attesting that no operation ever reaches
valm−1 from valm−2.
2. At the second step of the algorithm, if proof (1) succeeds for some
operation opm−1, then there may exist an observable sequence leading to the critical
operation where access control premise (u, Ru, cm) is valid, and hence a
potential symbolic attack scenario can be found. The algorithm looks inside
state valm−2 in order to find the previous operations that can be
invoked in the attack scenario. State valm−2 is partitioned into two sub-states,
which are:
valm−2 ∧ Pre(opm−1) ≡ ¬(cm ∧ Pre(opm)) ∧ Pre(opm−1)
valm−2 ∧ ¬Pre(opm−1) ≡ ¬(cm ∧ Pre(opm)) ∧ ¬Pre(opm−1)
Then, we look for operations that reach the first sub-state from the second
one.
3. The algorithm proceeds iteratively by partitioning the second state into two
sub-states until it finds a state that includes the initial state. In the best
case, our algorithm yields some symbolic attack scenario, which consists of a
sequence (init0 ; opn ; opn+1 ; . . . ; opm) invoked by the same user u, where:
valn−1 ≙ ¬(cm ∧ Pre(opm)) ∧ ¬Pre(opm−1) ∧ ¬Pre(opm−2) ∧ . . . ∧ Pre(opn)
and such that val0 ⇒ valn−1 ∧ ∀ i.(i ∈ n..m ⇒ (u, Ru, ci) |= true)
We apply our algorithm to the meeting scheduler example starting from the
following initial state val0:
val0 ≙ person = {Alice, Bob}
∧ meeting = {m1}
∧ meetingOwner = {(Alice ↦ m1)}
∧ meetingParticipant = {(m1 ↦ Bob)}</p>
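          <p>When the state space is small enough to enumerate, the backward search can be prototyped directly. The sketch below is our own simplified rendering of the algorithm (the “possibly reached” proof (1) is replaced by an explicit reachability test over an enumerated state space, whereas the paper discharges it with GeneSyst and AtelierB):</p>
```python
# Backward construction of a symbolic attack sequence, prototyped over an
# enumerated state space (our simplification of the paper's algorithm).

# A state is (john_registered, owner_of_m1).
STATES = [(j, o) for j in (False, True) for o in ("Alice", "John")]

OPS = {  # name: (precondition, effect)
    "personNew":             (lambda s: not s[0], lambda s: (True, s[1])),
    "personAddMeetingOwner": (lambda s: s[0],     lambda s: (s[0], "John")),
}

def backward_search(goal, val0):
    """Prepend operations until the current region contains val0."""
    region = {s for s in STATES if goal(s)}          # val_{m-1}
    seq = []
    while val0 not in region:
        for name, (pre, eff) in OPS.items():
            if name in seq:
                continue                             # each op used at most once
            outside = {s for s in STATES if s not in region and pre(s)}
            if any(eff(s) in region for s in outside):   # proof (1) analogue
                seq.insert(0, name)
                region = outside                     # new val_{m-1}
                break
        else:
            return None                              # no sequence found
    return seq

# cm ∧ Pre(meetingSetStart): John owns m1.
print(backward_search(lambda s: s[1] == "John", val0=(False, "Alice")))
```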
          <p>In this state, user John is not allowed to modify meeting m1 because the
authorization constraint allows modification only by the owner of m1. A malicious
scenario would lead to a state where John becomes able to execute a
modification operation, such as meetingSetStart, on meeting m1. In this state
we have to verify:</p>
          <p>Pre(meetingSetStart(m1, start)) ≙ m1 ∈ meeting ∧ start ∈ NAT</p>
          <p>and (John, SystemUser, MeetingOwner(m1) = John) |= true
1. First iteration: considering the following symbolic states
valm−2 = ¬valm−1
valm−1 = (MeetingOwner(m1) = John) ∧ m1 ∈ Meeting ∧ start ∈ NAT
we have:
– val0 ⇏ valm−1 because, in state val0, MeetingOwner(m1) = Alice, and
– proof (1) succeeds for the operations meetingNew and personAddMeetingOwner.</p>
          <p>Then, we may move on to the second iteration of the algorithm for each of these
operations.
2. Second iteration: we partition state valm−2 into two sub-states:
valm−2 = ¬valm−1 ∧ Pre(opm−1)
valm−3 = ¬valm−1 ∧ ¬Pre(opm−1)
– case 1: we choose opm−1 = meetingNew, and then we have:
– Pre(meetingNew) ≙ m1 ∉ meeting ∧ John ∈ person, and
– val0 ⇏ valm−2 because John ∉ person
In this case, the algorithm does not find an operation leading to a state
where operation meetingNew becomes enabled. Indeed, no operation
satisfies proof obligation (1). Our algorithm concludes that no
attack scenario invoking meetingNew exists at this step.</p>
          <p>(The candidate scenario init0 ; meetingNew ; meetingSetStart is thus rejected.)
3. Third iteration: we partition state valm−3 into two sub-states:
valm−3 = ¬valm−1 ∧ ¬Pre(personAddMeetingOwner) ∧ Pre(personNew)
valm−4 = ¬valm−1 ∧ ¬Pre(personAddMeetingOwner) ∧ ¬Pre(personNew)
This terminates the algorithm normally because in this case val0 ⇒ valm−3.</p>
          <p>Indeed, Pre(personNew) ≙ John ∉ person, and in the initial state John
does not belong to the set person. Figure 6 presents the full symbolic scenario
that allows John to modify Alice’s meeting:
init0 ; personNew ; personAddMeetingOwner ; meetingSetStart.
Technically, our approach applies the GeneSyst tool in order to produce proof
obligations and then asks the AtelierB prover to discharge them automatically.</p>
          <p>As the resulting scenarios are symbolic and based on “possibly reached proofs”,
the analyst can conclude that attacks may exist, but he cannot attest their
feasibility for the concrete system. An interesting contribution of our proof-based
symbolic sequences, besides the fact that they draw the analyst’s attention to
potential flaws, is that they give useful inputs to the model-checker. Indeed, a
model-checking tool can be used to exhibit, from a symbolic behaviour, an
observable concrete sequence of operations that can be followed by an attacker. To
significantly reduce the state space, we can ask ProB to explore only
operations found in the symbolic malicious scenarios. For our example, when
trying only the operations personNew, personAddMeetingOwner and meetingSetStart,
ProB exhibits a concrete attack scenario after visiting a dozen states, which
shows a significant speed-up with respect to our initial ProB attempts (involving
more than 1000 states).</p>
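          <p>The guided exploration can be illustrated with a toy model (our own simplification; the state representation, operation guards and goal predicate are assumptions for illustration, not the actual B machine):</p>

```python
from collections import deque

# Toy model of the meeting scheduler, restricted to the three operations
# of the symbolic scenario. A state is (persons, owner of m1, start time);
# each operation returns the successor state, or None when disabled.
def person_new(s):
    persons, owner, start = s
    if "John" not in persons:
        return (persons | frozenset(["John"]), owner, start)

def add_meeting_owner(s):
    persons, owner, start = s
    if "John" in persons:
        return (persons, "John", start)

def set_start(s):
    persons, owner, start = s
    if owner == "John":
        return (persons, owner, start + 1)

OPS = {"personNew": person_new,
       "personAddMeetingOwner": add_meeting_owner,
       "meetingSetStart": set_start}

def find_attack(init, goal):
    """Breadth-first search over the restricted operation set only."""
    queue, seen = deque([(init, [])]), {init}
    while queue:
        state, trace = queue.popleft()
        if goal(state):
            return trace
        for name, op in OPS.items():
            nxt = op(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [name]))

init = (frozenset(["Alice"]), "Alice", 0)
attack = find_attack(init, lambda s: s[1] == "John" and s[2] != 0)
print(attack)  # ['personNew', 'personAddMeetingOwner', 'meetingSetStart']
```

          <p>Restricting the successor computation to the three scenario operations is what keeps the visited state space to a handful of states instead of the full model.</p>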
          <p>Our technique was able to extract another scenario (figure 7) which can be
executed by user Bob from the same initial state, in order to steal the ownership
of m1. In this scenario, Bob first cancels the meeting and then he recreates it
before applying the critical operation.</p>
          <p>Figure 7 scenario: init0 → meetingCancel → meetingNew → meetingSetStart</p>
          <p>The first scenario, performed by user John, is made possible by the full access
permission on class Person, associated with role SystemAdministrator, which includes
the right to modify association ends. This attack affects meeting integrity. One
solution can be to add an SSD constraint between roles SystemAdministrator and
SystemUser. John will then still be able to become owner of the meeting, but
will not be able to log in as SystemUser in order to modify it.</p>
          <p>The second scenario, performed by Bob, was possible due to role Supervisor, which
gives him the right to cancel a meeting; then, as a SystemUser, he can
recreate it in order to become its owner. This scenario does not point out a flaw,
since whenever a meeting is cancelled it is legitimate that a user can
start a new meeting with the same identifier as the cancelled one.
</p>
          <p>
            6 Conclusion
We described in this paper a symbolic search approach that can extract insider
malicious behaviours from a formal Information System model. The
meeting scheduler example was discussed in several articles [
            <xref ref-type="bibr" rid="ref2 ref3">3, 2</xref>
            ]. However, they do
not report the attack scenarios presented in this paper, because the
dynamic evolution of the functional state is not taken into account there.
The contributions of this paper showed how dynamic analysis, assisted by proofs and
model-checking, is useful for finding potential threats. In addition, thanks to our
algorithm, proof and model-checking tools, our method can be fully automated
to extract attack scenarios breaking authorization constraints. In [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ],
a dynamic analysis is done interactively with the help of a Z animator, but it
is tedious and may miss many possible flaws. We also applied our approach to
the case study treated in [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] and were able to find,
automatically, the discussed threat. Currently, we are seeking to apply it to a real
case study, issued from the ANR-Selkis project7, which deals with a medical
information system involving various authorization constraints.
          </p>
          <p>Our approach is automated by exploiting the tools B4MSecure8, GeneSyst9,
AtelierB10 and ProB11. B4MSecure translates functional and security graphical
models into a B specification, from which we automatically produce proof
obligations on reachability properties by taking advantage of the GeneSyst tool.</p>
          <p>
            Then, these proof obligations are discharged automatically using the AtelierB
prover. When a symbolic scenario is found, ProB is used to explore the concrete
state space, focusing on operations issued from the symbolic scenario. The main
limitation of our work is that sometimes, when proof obligations are complex,
AtelierB fails to prove them automatically. Interactive proofs are then required,
but they may be pretty difficult for the analyst. One naive solution is to keep
operations whose proofs do not succeed automatically, to be exploited
further using the model-checker. A more interesting solution is to focus on other
kinds of proof obligations. For example, one can try to prove that an operation
o is never enabled from a state E and/or o never reaches a state F. Applying
7 http://lacl.univ-paris12.fr/selkis
8 http://b4msecure.forge.imag.fr
9 http://perso.citi.insa-lyon.fr/nstouls/?ZoomSur=GeneSyst
10 http://www.atelierb.eu/
11 http://www.stups.uni-duesseldorf.de/ProB
Calculus is inspired by the π-calculus: it is an applied variant of the π-calculus
augmented with the notion of participants and their local states. In [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ], the
authors have established a projection from the global calculus onto the end-point
calculus. So descriptions of participants’ behaviors in the end-point calculus are not
extracted directly from the choreography, but projected from the global calculus.
          </p>
          <p>
            Details of this theory of end-point projection are presented in [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. Since in the
proposed enforcement approach we transform the security policy into a monitor,
we reach this goal immediately by choosing EP C since it is a subset of EP C:
3.1
          </p>
          <p>Syntax of EP C
We show the syntax of EP C, where s denotes session channels and e an expression,
which can be an atomic value (such as a natural number or a boolean value),
a function (such as arithmetic functions and boolean operators), or a variable.
op1, op2, ... range over operations; x is a variable. The first construct is a receiving
action: the invocation of one of the operators op_i and the reception of an
expression which will be evaluated and stored in x_i. The second construct is a sending
action: the invocation of operator op with expression e. Furthermore, the
operator ⊕ represents alternative composition. Finally, recursion is used
for representing unbounded repetition. For representing recursive behaviors, we
use term variables X, Y, ... The operator used for recursion is rec X.P. Each
occurrence of X in P denotes a recurring point; rec X.P behaves as P until an
occurrence of X is found in the execution of P, then it returns to rec X.P.
P ::= s ▷ Σ_i op_i(x_i).P_i   |   s ◁ op⟨e⟩.P   |   P1 ⊕ P2   |   X   |   rec X.P   |   0</p>
          <p>Note that processes do not evolve alone. A process is located in a participant,
which synchronizes with another one to evolve. A participant A with its behavior
P at a local state σ is called a network, denoted by A[P]_σ. The syntax of networks
is given by the following grammar:
N ::= A[P]_σ   |   N | N   |   (νs)N   |   ε
where A[P]_σ is a participant with its behavior, as shown in the first construct.</p>
          <p>Two communicating participants are represented by two networks combined by
parallel composition. When two participants initiate a session channel for
communication, this session channel must be restricted to these two participants;
this is given by (Res-NW). ε denotes the lack of networks. The reduction rules of
networks are given by:</p>
          <p>M ≡ M′    M′ → N′    N′ ≡ N
M → N   (Struct-NW)</p>
          <p>M → M′
M | N → M′ | N</p>
          <p>M → M′
(νs)M → (νs)M′</p>
          <p>The semantics of EP C is then given by:</p>
          <p>σ2 ⊢ e ⇓ v
A[s ◁ op_j⟨e⟩.P]_σ1 | B[s ▷ Σ_i op_i(x_i).Q_i]_σ2 → A[P]_σ1 | B[Q_j]_σ2[x_j ↦ v]</p>
          <p>A[P1]_σ → A[P1′]_σ′
A[P1 ⊕ P2]_σ → A[P1′]_σ′</p>
          <p>A[P[rec X.P / X]]_σ → A[P′]_σ′
A[rec X.P]_σ → A[P′]_σ′</p>
          <p>Security policies are usually specified in logic-based languages. Such languages
employ mainly three operators to compose properties: and, or and not. The
or operator is given here by ⊕. The not operator is defined as follows:
assume we have two participants A and B communicating, and we want to apply
the property “A should not send op1 to participant B”; then we write it using
EP C as follows:</p>
          <p>P = rec X.( s_B ◁ ⊕_{i≠1} op_i⟨e_i⟩.X  ⊕  s_B ▷ Σ_i op_i(x_i).X )
Example 2. Assume we have a client wanting to check his account details with
a bank. The bank should not send him back his account details if he is not
yet authenticated. A client is said to be authenticated if he has received from
the bank an acceptance of his authentication attempt through the operation
accept. The bank answers the client’s account request through the operation
resAccount. The security property will then be written as follows:</p>
          <p>P = rec X.( s ▷ Σ_i op_i(x_i).X                                        (P1)
  ⊕ s ◁ ⊕_{op_i ∉ {accept, resAccount}} op_i⟨e_i⟩.X                      (P2)
  ⊕ s ◁ accept⟨e⟩. rec Y.( s ▷ Σ_i op_i(x_i).Y ⊕ s ◁ ⊕_i op_i⟨e_i⟩.Y ) )  (P3)</p>
          <p>The block P1 expresses the fact that the property allows all receiving actions.</p>
          <p>The block P2 inhibits sending accept or resAccount to the client. The block P3
intercepts the accept sending action, after which no more restrictions are imposed,
since the authentication of the client has been accepted.</p>
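          <p>The three blocks can be mimicked by a small trace monitor (an illustrative sketch under our own encoding of actions as send/receive pairs, not the calculus itself):</p>

```python
# Toy monitor for the bank policy: receives are always allowed (P1);
# before authentication, sending resAccount is blocked (P2) while
# sending accept is intercepted and flips the state (P3).
def monitor():
    authenticated = False
    def allows(action, op):
        nonlocal authenticated
        if action == "recv":                 # block P1: all receiving actions
            return True
        if not authenticated:
            if op == "accept":               # block P3: intercept accept
                authenticated = True
                return True
            return op != "resAccount"        # block P2
        return True                          # after accept: no restriction
    return allows

allows = monitor()
print(allows("send", "resAccount"))  # False: client not yet authenticated
print(allows("send", "accept"))      # True: authentication accepted
print(allows("send", "resAccount"))  # True: now permitted
```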
          <p>So, an intruder who wants to check an account’s details without authentication
cannot achieve his goal, since resAccount is only permitted in the block
following the accept sending action.
The technique used in this paper for describing the composition of web services
is choreography. WS-CDL is based on two engineering principles:
Service Channel Principle corresponds to the repeated availability of service
channels.</p>
          <p>Session Principle is a basic principle in many communication-centred programs:
a sequence of conversations belonging to a protocol should not be
confused with other concurrent runs of this or other protocols by the participants.</p>
          <p>
            As a specification of the choreography description language WS-CDL, we
introduce a modified version of the end-point calculus, a formalism
presented in [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. The end-point calculus describes the behavior of a participant
in a choreography from its end-point view.
The secured end-point calculus EP CS is a variant of EP C (End-Point Calculus)[
            <xref ref-type="bibr" rid="ref10">10</xref>
            ].
          </p>
          <p>EP CS has the particularity of explicitly handling the monitoring concept through
its operator @𝒫, where 𝒫 denotes a security policy: the process @𝒫(P) can execute
only actions that could be executed by both the controller 𝒫 and the process P. We
describe hereafter the formal syntax and dynamic semantics of the secured end-point
calculus.</p>
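          <p>The intersection behaviour of the enforcement operator can be sketched as follows (a toy labelled-transition model of our own; the Proc class and the action names are illustrative assumptions, not the calculus):</p>

```python
# Both the policy and the process are modelled as labelled transition
# systems; @P(P) may fire an action only if both currently enable it.
class Proc:
    def __init__(self, transitions, state):
        self.transitions = transitions   # {state: {action: next_state}}
        self.state = state
    def enabled(self):
        return set(self.transitions.get(self.state, {}))
    def step(self, action):
        self.state = self.transitions[self.state][action]

def enforce(policy, process, actions):
    """Run `actions`; stop, like the enforcement operator, at the first
    action not allowed by both the policy and the process."""
    executed = []
    for a in actions:
        if a in policy.enabled() and a in process.enabled():
            policy.step(a)
            process.step(a)
            executed.append(a)
        else:
            break   # the choreography's evolution stops on a violation
    return executed
```

          <p>For instance, with a policy that only enables resAccount after accept, enforcing the action list ["reqAccount", "resAccount", "accept"] executes just ["reqAccount"] before stopping.</p>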
          <p>Syntax of EP CS. Since EP C is a subset of EP C and EP CS is an extension
of EP C, part of the syntax of EP CS was presented in the previous section. So in
this section we present the overall syntax of EP CS, describing only the
constructs that have not been described before.</p>
          <p>P ::= !ch(s̃).P  |  ch(νs̃).P  |  s ▷ Σ_i op_i(x_i).P_i  |  s ◁ op⟨e⟩.P
    |  x := e.P  |  if e then P else Q  |  P | Q  |  (νs)P  |  X  |  rec X.P  |  0  |  @𝒫(P)
The two first constructs represent session initiation: !ch(s̃).P is used for input
and ch(νs̃).Q for output. !ch(s̃) says that the service channel ch, which is
available to the public, is ready to receive an unbounded number of invocations, offering
communication via its freshly generated session channels s ∈ s̃. ch(νs̃) is an
invocation of a service located at the service channel ch and an initiation of a
communication session which will occur through the session channels s̃. After a
session has been initiated between two participants, and freshly generated session
channels have been shared between them, they can communicate using the
communication constructs. The assignment operator is x := e.P. if e then P else Q
is a choice based on the evaluation of the expression e. Parallel
composition is given by P | Q. The restriction construct (νs)P indicates that the session
channel s is local to P. 0 denotes the lack of actions. Finally, @𝒫(P) is the
enforcement operator, which controls the execution of P by allowing it to evolve only if
the policy 𝒫 is satisfied.</p>
          <p>Semantics of EP CS
Initiation/Communication simulation. Let P be a process stipulating the local
behavior of a participant A in a web service. We observe in the end-point
calculus reduction rules that processes always evolve without an external factor,
except in initiation and communication. In fact, a participant initiating a
communication or communicating with another participant needs to synchronize
with it to realize the interaction. Therefore, for enforcing security policies
on a participant’s behavior, we need a totally local description of its
interactions. So we need to define a simulation relation for the initiation and the
communication of processes, so that we can simulate the evolution of a process
locally, i.e. without synchronizing with another participant. We first define
the normal form of a process, then introduce the simulation relation.</p>
          <p>Definition 1 (Normal form of a process). Every process representing the
local behavior of a participant in a web service can be written as an internal sum
of processes, which we call the normal form of a process:</p>
          <p>∀ P ∈ Proc, P = ⊕_i a_i.P_i
where Proc denotes the set of processes, a_i range over atomic actions and P_i range
over processes in Proc.
Definition 2 (Simulation Relation). We define a simulation relation over
networks, denoted by A[P]_σ –a→ A[P′]_σ′, which says that the process P in A at
state σ is able to execute the action a and reduce to P′ with a new local state σ′.
The simulation relation is defined by the following rule:</p>
          <p>P = ⊕_i a_i.P_i    ∃ i ∈ {1, ..., n} : a = a_i
A[P]_σ –a→ A[P_i]_σ′</p>
          <p>This rule says that when P is written in its normal form and one of the
constituent processes is able to do an action a, then A[P] is also able to do it.
Semantics of EPCS :</p>
          <p>(In the rules below, 𝒫 denotes the enforced security policy.)</p>
          <p>(Init)
A[!ch(s̃).P | P0]_σA | B[ch(νs̃).Q | Q0]_σB → (νs̃)( A[!ch(s̃).P | P | P0]_σA | B[Q | Q0]_σB )</p>
          <p>(Assign)  σA ⊢ e ⇓ v
A[x := e.P | P0]_σA → A[P | P0]_σA[x ↦ v]</p>
          <p>(IfTrue)  σA ⊢ e ⇓ tt
A[if e then P1 else P2 | P0]_σA → A[P1 | P0]_σA</p>
          <p>(Init-In-Sec)
A[@𝒫(!ch(s̃).P)]_σA | B[ch(νs̃).Q | R]_σB → (νs̃)( A[@𝒫(P) | @𝒫(!ch(s̃).P)]_σA | B[Q | R]_σB )</p>
          <p>(Init-Out-Sec)
A[@𝒫(ch(νs̃).P)]_σA | B[!ch(s̃).Q | R]_σB → (νs̃)( A[@𝒫(P)]_σA | B[!ch(s̃).Q | Q | R]_σB )</p>
          <p>(Comm-In-Sec)  A[𝒫]_σA –s▷op(x)→ A[𝒫′]_σA    A[P]_σA –s▷op(x)→ A[P′]_σA    σA ⊢ e ⇓ v
A[@𝒫(P)]_σA | B[s ◁ op⟨e⟩.Q | R]_σB → A[@𝒫′(P′)]_σA[x ↦ v] | B[Q | R]_σB</p>
          <p>(Comm-Out-Sec)  A[𝒫]_σA –s◁op_j⟨e⟩→ A[𝒫′]_σA    A[P]_σA –s◁op_j⟨e⟩→ A[P′]_σA    σB ⊢ e ⇓ v
A[@𝒫(P)]_σA | B[s ▷ Σ_i op_i(x_i).Q_i | R]_σB → A[@𝒫′(P′)]_σA | B[Q_j | R]_σB[x_j ↦ v]</p>
          <p>(Assign-Sec)  σA ⊢ e ⇓ v
A[@𝒫(x := e.P)]_σA → A[@𝒫(P)]_σA[x ↦ v]</p>
          <p>(Par-Sec)  A[@𝒫(P1)]_σ → A[@𝒫′(P1′)]_σ′
A[@𝒫(P1 | P2)]_σ → A[@𝒫′(P1′ | P2)]_σ′</p>
          <p>(IfTrue-Sec)  σ ⊢ e ⇓ tt
A[@𝒫(P | if e then P1 else P2)]_σ → A[@𝒫(P | P1)]_σ</p>
          <p>(IfFalse-Sec)  σ ⊢ e ⇓ ff
A[@𝒫(P | if e then P1 else P2)]_σ → A[@𝒫(P | P2)]_σ</p>
          <p>The Init-rule shows how two participants initiate a session by sharing newly
generated session channels s̃. These session channels are restricted to
participants A and B by the binding operator (ν). Assignment is a local construct.</p>
          <p>The Assign-rule evaluates an expression e and assigns the result of this evaluation
to the variable x in A; then A behaves as P. The Res-rule restricts the use of
session channels s̃ to the process P in A. Init-In-Sec and Init-Out-Sec are the
rules for communication initiation. We do not control session initiations, but we
control communication messages between the participants. The communication rules
say that if the policy 𝒫 and the process P are able to send or receive the same
operation through the same session channel, and become respectively 𝒫′ and P′,
then @𝒫(P) does this action and becomes @𝒫′(P′). The secured assignment rule
says that assignment is not considered by enforcement. The secured parallel
composition rule says that the security operator applied to P1 | P2 can evolve into
@𝒫′(P1′ | P2) if P1 can evolve simultaneously with the security policy 𝒫 into
respectively P1′ and 𝒫′. For the restriction rule, binding session channels does not
affect the enforcement. Finally, the conditional rules say that enforcement is not
affected by conditionals.
5
The goal of this research is to enforce security policies over a choreography.</p>
          <p>One of the important features of the enforcement operator is that it allows us
to enforce the security policies only on the participants concerned. In this in-lined
monitoring framework, we do not modify the original behaviors of participants
in a choreography, but if the security policy is not verified, the evolution of the
choreography stops. In this section, we prove the correctness of our theory,
first defining some notions such as the partial order over processes and the
satisfaction notions.</p>
          <p>Definition 3 (Partial Order over Processes). Let A[P1]_σ and A[P2]_σ be two
networks. We say A[P1]_σ ⊑ A[P2]_σ if the following condition holds:</p>
          <p>A[P1]_σ –a→ A[P1′]_σ′ implies A[P2]_σ –a→ A[P2′]_σ′ and A[P1′]_σ′ ⊑ A[P2′]_σ′.</p>
          <p>Definition 4 (Satisfaction Notions). Let 𝒫 be a security policy and ξ a
trace. The symbols ⊨ and ⊨s are defined as follows:
– We say that ξ satisfies 𝒫, denoted by ξ ⊨ 𝒫, if ξ ∈ ⟦𝒫⟧, where ⟦𝒫⟧ denotes
the set of traces of 𝒫.
– We say that ξ could satisfy 𝒫, denoted by ξ ⊨s 𝒫, if there exists a trace ξ′
such that ξ.ξ′ ⊨ 𝒫.
Theorem 1. Let P be a process and 𝒫 a security policy. The following
properties hold:
1. @𝒫(P) ⊨s 𝒫;
2. A[@𝒫(P)]_σ ⊑ A[P]_σ;
3. ∀ P′ : ((P′ ⊨s 𝒫) and (A[P′]_σ ⊑ A[P]_σ)) ⟹ A[P′]_σ ⊑ A[@𝒫(P)]_σ.
Proof. 1. The process @𝒫(P) is defined so that it can evolve into another</p>
          <p>process only if the security policy 𝒫 is satisfied.
2. As with the first property, the proof is obtained directly from the reduction</p>
          <p>rules of the security operator and the definition of the partial order.
3. Consider a process P′ such that P′ ⊨s 𝒫 and A[P′]_σ ⊑ A[P]_σ. Suppose</p>
          <p>A[P′]_σ –a→ A[P1′]_σ′. Since A[P′]_σ ⊑ A[P]_σ, we conclude directly from the
definition of ⊑ that A[P]_σ –a→ A[P1]_σ′. Since P′ ⊨s 𝒫, we can also conclude,
from the definition of ⊨s, that a ⊨s 𝒫. Finally, as A[P]_σ –a→ A[P1]_σ′ and
a ⊨s 𝒫, we have A[@𝒫(P)]_σ –a→ A[@𝒫′(P1)]_σ′ and thus A[P′]_σ ⊑ A[@𝒫(P)]_σ.
Example 3 (Buyer-Seller Protocol). The buyer initiates a communication with
the seller and requests a quote. The seller sends back the quote. If the buyer
rejects it, then the protocol terminates. Otherwise, the seller sends the buyer
an order confirmation and the buyer confirms his order. In this case, the
seller contacts the shipper and asks for delivery details, which he transfers to the
buyer, and the protocol terminates. The security property that we want to
apply in this protocol is: “the seller should communicate with the shipper only if
the buyer confirms his order”. As a consequence, the seller will not send the
buyer the delivery details if he has not confirmed his order. The protocol is
depicted in Fig. 1. Critical actions that we have to supervise are in dark grey in
Fig. 1.</p>
          <p>The choreography’s description in the global calculus is the following:</p>
          <p>Buyer → Seller : B2SCh(s).
Seller → Buyer : s⟨ackSession⟩.
Buyer → Seller : s⟨reqQuote⟩.
Seller → Buyer : s⟨respQuote, vquote, xquote⟩.
if reasonable(xquote)@Buyer then</p>
          <p>Buyer → Seller : s⟨quoteAccept⟩.
Seller → Buyer : s⟨orderConfirm⟩.
Buyer → Seller : s⟨ackConfirm⟩.
Seller → Shipper : B2SHCh(s′).
Shipper → Seller : s′⟨ackSession⟩.
Seller → Shipper : s′⟨reqDelivDet⟩.
Shipper → Seller : s′⟨delivDet, vdelivDet, xdelivDet⟩.</p>
          <p>Seller → Buyer : s⟨delivDet, xdelivDet, xdelivDet⟩.0
else</p>
          <p>Buyer → Seller : s⟨quoteReject⟩.0
The end-point projection gives the end-point behaviors. The buyer's behavior is
the following :</p>
          <p>Fig. 1. Buyer Seller Protocol (messages: ackSession, reqQuote, respQuote,
quoteAcc, quoteRej, orderConfirm, ackConfirm, reqDelivDet, delivDet)</p>
          <p>Buyer[ B2SCh(νs). s ▷ ackSession. s ◁ reqQuote. s ▷ respQuote(xquote).
if reasonable(vquote) then s ◁ quoteAcc. s ▷ orderConfirm. s ◁ ackConfirm.
s ▷ delivDet(xdelivDet).0 else s ◁ quoteRej.0 ]</p>
          <p>
The seller's behavior is the following :</p>
          <p>Seller[ B2SCh(s). s ◁ ackSession. s ▷ reqQuote. s ◁ respQuote⟨equote⟩.</p>
          <p>( s ▷ quoteAcc. s ◁ orderConfirm. s ▷ ackConfirm.
B2SHCh(νs′). s′ ▷ ackSession. s′ ◁ reqDelivDet⟨ebuyer⟩. s′ ▷ delivDet(xdelivDet)</p>
          <p>. s ◁ delivDet⟨xdelivDet⟩.0 ) + ( s ▷ quoteRej.0 )]
The shipper’s behavior is the following:</p>
          <p>Shipper[ B2SHCh(s′). s′ ◁ ackSession. s′ ▷ reqDelivDet(xbuyer)
. s′ ◁ delivDet⟨edelivDet⟩.0 ]</p>
The security property has the following behavior :</p>
          <p>P = rec X.( s ◁ ⊕_{op≠delivDet} op_i⟨e_i⟩.X
  ⊕ s ▷ Σ_{op≠ackConfirm} op_i(x_i).X
  ⊕ s ▷ ackConfirm. rec Y.( s′ ◁ ⊕_i op_i⟨e_i⟩.Y ⊕ s′ ▷ Σ_i op_i(x_i).Y ⊕ s ◁ ⊕_i op_i⟨e_i⟩.Y ) )</p>
          <p>The security property P is written using the recursion operator. In the recursion
block, there are 3 processes combined with the internal choice. The first process
is s ◁ ⊕_{op≠delivDet} op_i⟨e_i⟩.X, which says that, through the session channel s
(shared between the buyer and the seller), the seller can execute any sending action
except the delivery details, which should be sent only after the seller has received a
confirmation from the buyer. The second process is s ▷ Σ_{op≠ackConfirm} op_i(x_i).X, which</p>
          <p>says that the seller can receive any operation other than ackConfirm. This
action is intercepted in the third process s ▷ ackConfirm. rec Y.( s′ ◁ ⊕_i op_i⟨e_i⟩.Y
⊕ s′ ▷ Σ_i op_i(x_i).Y ⊕ s ◁ ⊕_i op_i⟨e_i⟩.Y ).</p>
          <p>In this process, after the seller has received a
confirmation from the buyer, he can communicate with the shipper through the
session channel s′ without any restriction, and he also has no restriction on his
sending actions to the buyer, so he can send him the delivery details.</p>
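          <p>The property can be illustrated by a small trace check (our own simplification of the policy to a predicate over send/receive traces; the channel and operation encoding is an assumption):</p>

```python
# The seller may send delivDet on session channel s only after having
# received ackConfirm on s; every other action is unrestricted.
def policy_allows(trace):
    confirmed = False
    for direction, chan, op in trace:
        if direction == "recv" and chan == "s" and op == "ackConfirm":
            confirmed = True
        if direction == "send" and chan == "s" and op == "delivDet":
            if not confirmed:
                return False   # delivery details sent before confirmation
    return True

good = [("send", "s", "respQuote"), ("recv", "s", "ackConfirm"),
        ("send", "s'", "reqDelivDet"), ("send", "s", "delivDet")]
bad = [("send", "s", "respQuote"), ("send", "s", "delivDet")]
print(policy_allows(good), policy_allows(bad))  # True False
```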
          <p>6
Web services verification has been a subject of interest for several research
efforts. Some of the relevant contributions in this domain are cited in this section.</p>
          <p>Most formal approaches introduce a monitor which does not stop the
program when a violation is detected. Moreover, these contributions implement the
monitor as a web service in addition to the other web services. The originality of our
work is the introduction of the monitor within the concerned participants’ processes.</p>
          <p>In [11], a run-time event-based approach to the problem of monitoring the
conformance of interaction sequences is presented. When a violation is detected,
the program shows errors in dashboards. In [12], the authors introduce an approach
to verify the conformance of a web service implementation against a behavioral
specification, through the application of testing. Stream X-machines are
used as an intuitive modeling formalism for constructing the behavioral
specification of a stateful web service, with a method for deriving test cases from that
specification in an automated way. The test generation method produces
complete sets of test cases that, under certain assumptions, are guaranteed to reveal
all non-conformance faults in a service implementation under test. However, this
approach only returns non-conformance faults and does not react dynamically
to these errors. In [13], the authors propose a monitoring framework for
a choreographed service which supports the early detection of faults and decides
whether it is still possible to continue the service. The authors of [?] have proposed
service automata as a framework for enforcing security policies in distributed
systems. They encapsulate the program in a service automaton composed of
the monitored program, an interceptor, an enforcer, a coordinator and a local
policy. The interceptor intercepts critical actions and passes them to the
coordinator, which determines whether the action complies with the security policy
and decides upon possible countermeasures; the enforcer then implements these
decisions. However, the authors do not specify how to detect critical actions.
7
In this paper, we have introduced a formal approach allowing us to automatically
enforce a security policy on choreographed services. Indeed, we introduced a
new calculus with an enforcement operator @𝒫. The semantics of the proposed
calculus ensures that choreographed services can evolve only if they do not violate
the enforced security policy. The originality of our work consists in the fact that
we do not add a new web service as a monitor, but rather wrap the security
policy inside the choreographed services.</p>
          <p>Future work will focus on the definition of a complete mapping between
WS-CDL and the global calculus. Moreover, we will seek means to optimize the
enforced choreographed services, so as to reduce as much as possible the overhead
due to the enforcement operator.</p>
          <p>11. Baouab, A., Perrin, O., Godart, C.: An optimized derivation of event queries to
monitor choreography violations. In: ICSOC. (2012) 222–236
12. Dranidis, D., Ramollari, E., Kourtesis, D.: Run-time verification of behavioural
conformance for conversational web services. In: ECOWS. (2009) 139–147
13. Ardissono, L., Furnari, R., Goy, A., Petrone, G., Segnan, M.: Monitoring
choreographed services. In: Innovations and Advanced Techniques in Computer and
Information Sciences and Engineering. (2007) 283–288
14. Fu, X., Bultan, T., Su, J.: Analysis of interacting BPEL web services. In: WWW.
(2004) 621–630</p>
          <p>
A Timestamping Scheme with Eternal Security
in the Bounded Storage Model</p>
          <p>Assia Ben Shil1
Faculty of Sciences of Bizerte</p>
          <p>University of Carthage
Email: essia.benshil@gmail.com</p>
          <p>and
Kaouther Blibech Sinaoui2</p>
          <p>ISSAT of Mateur</p>
          <p>University of Carthage
Email: kaouther.blibech@gmail.com</p>
          <p>1,2 LIP2 Laboratory, Tunis, Tunisia
Abstract. Digital timestamping is a cryptographic technique allowing
a reliable date to be affixed to a digital document. The security of most
existing timestamping systems is based on the security of the cryptographic
techniques used, such as hash functions. However, a hash function has a
limited lifetime. In this context, we provide a non-interactive
timestamping scheme in the bounded storage model (BSM) whose security is not
tied to the lifetime of any cryptographic technique. We prove, in fact, that
our timestamping scheme is eternally secure, even against an adversary
with unlimited computing power.</p>
          <p>Keywords: Timestamping, bounded storage model, computing power, eternal
security.
1
In recent years, digital documents have begun to replace paper documents
in several areas. For this reason, it is important to locate a digital document in
time, to prove its existence and integrity since a given date. This task is achieved
through a timestamping system that operates in two phases: a timestamping
phase, allowing one or more Timestamping Authorities (TSA) to affix a reliable
date to a document and generate the associated timestamp, and a verification
phase, allowing any potential verifier to verify the correctness of the produced
timestamp.</p>
          <p>
            The security of most existing timestamping systems [
            <xref ref-type="bibr" rid="ref10 ref2 ref3 ref4 ref5 ref6 ref7 ref8 ref9">2-10</xref>
            ] is based on the
security of the cryptographic techniques used, such as hash functions. These
techniques are secure under the assumption that the users’ computing power
is limited. However, nowadays, the computing power of computers grows
exponentially. Therefore, the hash functions used are not secure for all time [17].
          </p>
          <p>
            So, the existing timestamping systems cannot be considered secure forever. To
propose timestamping systems producing timestamps with eternal validity, we
place ourselves in the Bounded Storage Model (BSM) [13, 14]. In this model, Maurer
assumes that users’ storage capacity is limited, while their computing power can
be unlimited. In this model, ciphers are eternally secure [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ]. The principle is
that a very long random string is transmitted in every round t (the interval
between time t and time t+1). At the time of the transmission of this string,
no participant has sufficient storage capacity to store it fully. Even if, later, his
storage capacity becomes sufficient to fully store the string transmitted in t, the
user has already lost access to this string. Thus, the performed encryptions
are valid forever. Moran proposed in [15] a non-interactive timestamping scheme
in the bounded storage model. Indeed, in Moran’s scheme, each stamper
or document owner can timestamp his document locally, without
communicating with any third party [15], unlike conventional timestamping systems that
involve at least one TSA. Thus, non-interactive timestamping ensures
total confidentiality and hides even the fact that a timestamping occurred. These
characteristics make Moran’s proposal very interesting, but the
timestamping system he proposed suffers from a lack of precision and practical
details. In this context, we present a non-interactive timestamping scheme in the
BSM. This paper is organized as follows: after introducing the BSM in section
2, we first present Moran’s timestamping system, and then our non-interactive
timestamping scheme in the BSM in section 3. Then, we detail our
timestamping scheme more formally. Finally, we formally prove the eternal security of our
timestamping solution.
2
Generally, in cryptography, the proposed systems and the functions used are
secure under the assumption that there is a limit on the computational power of
any user or adversary. In the bounded storage model, the proposed systems must
be secure even against an adversary with unlimited computational power. The
BSM was proposed by Maurer in 1992 [13] for the development of encryption
keys. It aims to generate, from a short secret key K, a key X with a large size
[14] that can be used as an encryption key. In this model, it is assumed that the
storage capacity is limited, while no assumption about the computing power is
made. Let s be the assumed limit on a user’s storage capacity. Ciphers in the
BSM use a very long string R called the randomizer. The latter may, for example,
be a random sequence of bits transmitted by a satellite.
          </p>
          <p>
            If R is a random string of r bits, the space of R is {0, 1}^r. Notice that r ≫ s is
required to ensure that no user can fully store R. Having a secret key K of size k
in the space {0, 1}^k, we can use a known function f : {0, 1}^r × {0, 1}^k → {0, 1}^x
to generate the derived key X = f(R, K) of size x bits. The function f must use
only a small part of R, so that we do not need to fully read R. Maurer’s system
has been the subject of intensive study. Indeed, many key generation systems
have been proposed in the BSM [11]. The BSM was used for timestamping by
Moran in [15]. In [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ], we proposed an improvement of Moran’s timestamping
system. In the following section, we present Moran’s timestamping system,
and then our proposition.
3
          </p>
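          <p>As an illustration, a locally computable derived key X = f(R, K) that reads only a few blocks of R can be sketched as follows (a toy f of our own for illustration; Maurer’s actual constructions differ):</p>

```python
# Toy derivation: the short key K selects a handful of block indices of
# the huge randomizer R; X is the concatenation of those blocks, so f
# reads only a tiny fraction of R.
BLOCK = 16          # block size in bytes (illustrative choice)

def derive_key(R, K, n_blocks=4):
    total = len(R) // BLOCK
    out = b""
    for i in range(n_blocks):
        idx = K[i % len(K)] * (i + 1) % total   # toy index selection from K
        out += R[idx * BLOCK:(idx + 1) * BLOCK]
    return out

R = bytes(range(256)) * 64      # stands in for the huge broadcast string
K = b"short-secret"
X = derive_key(R, K)
assert len(X) == 4 * BLOCK      # f read only 4 blocks of R
```

          <p>An adversary who could not store R during its transmission cannot later recompute X, even with unbounded computing power, because the selected blocks are gone.</p>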
          <p>Timestamping Solutions in the BSM
In the BSM, we assume that a long random string R of r bits is transmitted
during the round t (between t and t + 1). If s is the maximum storage capacity
of any entity in t, then s ≪ r. Notice that the space of R is {0, 1}^r, where r is the
size of the string R. Similarly, we consider that a document D of size d has a
value in {0, 1}^d. In the timestamping scheme proposed by Moran, to timestamp
a document D, its content is used to select a few blocks from R whose values will
be inserted in the timestamp T. Any verifier must randomly save some blocks
of R (using a function named Sketch(R)) in order to verify, later, the validity of
any timestamp made during the round t. Verifying the validity of a timestamp
associated to a document D is performed at a later date by a verifier, who has
simply to check that there are no conflicts between his sketch and the timestamp
of D. However, in this solution, a timestamp includes some additional values of
R. If the verifier cannot store more than s bits of R during the round t, he may,
at verification time, store s′ bits with s′ &gt; s. Each verification of a timestamp
generated in the round t can lead him to discover new blocks of R. After a
number of verification processes, he can reconstruct R partially, if not entirely,
and backdate any document.</p>
          <p>To remedy this problem, we propose a timestamping protocol that allows
verifying the timestamp of a document D without learning additional values of R.</p>
          <p>To this aim, instead of inserting the blocks of R in the timestamp, these blocks
become the secret of the stamper. The verifier must then prove that he has the
value of a given block in his sketch, and the stamper has to prove that he used
this value to create the timestamp. In the verification process of this scheme,
a verifier may accept or reject a timestamp without discovering any additional
information about the string R transmitted during the round t. The idea is to
timestamp the document D locally, using the secret sharing scheme of Shamir
[18] to divide D into n shares Di. Assuming that the blocks of R are
indexed from 1 to the number of blocks, the values R_Di,
indexed by the shares Di for 1 ≤ i ≤ n, are recovered. The polynomial that
passes through the points Pi(R_Di, Di), for 1 ≤ i ≤ n, represents the timestamp of
D. Denote by Share(D) the set of shares Di. Any user of the system saves a random
subset of R, named Sketch(R), formed by a number of couples (index of
block, value of the block). To verify the validity of the timestamp T of D for a
random string R, the verifier who stored Sketch(R) proceeds as follows: for each
index in both Share(D) and Sketch(R), he recovers the associated block Ri of
R, computes its associated index through the polynomial given in the timestamp, and
verifies that the associated value in Sketch(R) is Ri. In the following sections,
we provide a formal representation of our timestamping system and prove its
eternal security. To this aim, we will show that once the string R has been transmitted,
the probability of forging a fake timestamp for a document D is close to zero.
We first introduce some notations that will be used in the rest of this paper.</p>
          <p>- Randomizer: We call "randomizer", and denote R, the random string
transmitted during a round t. This string is divided into n blocks of size b bits,
indexed from 1 to n and denoted R1, . . . , Rn. Since the length of R is
r bits and the size of a block is b bits, r = n·b. For each subset S ⊆ I(R), where
I(R) is the set of indexes of the blocks Ri (1 ≤ i ≤ n), we denote by R|S the set of
blocks Ri for all i ∈ S.</p>
          <p>- Hamming distance: A vector being a set of blocks, we define the
Hamming distance between two vectors c1 and c2, denoted DH, as the number of
blocks in which the two vectors differ.</p>
          <p>- Threshold secret sharing: Threshold secret sharing is a technique
for dividing a secret S into l shares such that a coalition of at least λ shares is
necessary to reconstruct the secret, while a coalition of at most λ − 1 shares does
not reveal the secret even partially (λ &lt; l). A λ-out-of-l threshold secret sharing
scheme is denoted SSS(λ, l).</p>
          <p>- Shamir secret sharing scheme: The secret sharing scheme based on
Shamir's polynomial interpolation is a SSS(λ, l). We denote it SSSS(λ, l).
The principle is to fix a polynomial P of degree λ − 1 and a set of values
X = (X1, . . . , Xl). The secret sharing function, denoted Share, takes as
input the secret S and generates the set of shares Share(S) = (S1, . . . , Sl) such
that for all i, 1 ≤ i ≤ l, Si = P(Xi). The reconstruction function, denoted Share⁻¹,
takes as input a subset of X denoted XS = [X_S1, . . . , X_Sλ], such that |XS| = λ,
and the associated shares: Share⁻¹(XS, S_XS1, . . . , S_XSλ) = P, with P a
polynomial such that for all Xi ∈ XS and Si ∈ [S_XS1, . . . , S_XSλ], P(Xi) = Si. The
secret S is computed as S = P(0).</p>
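          <p>A minimal Python sketch of SSSS(λ, l) follows. The prime field and the evaluation points X = (1, . . . , l) are toy choices made for this example only.

```python
import random

P = 2**31 - 1   # a prime field; toy parameter choice

def share(secret, lam, l):
    """Split `secret` into l shares; any lam of them reconstruct it
    (fix a random polynomial of degree lam-1 with value `secret` at 0)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(lam - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, l + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from lam shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(1234, lam=3, l=5)
assert reconstruct(shares[:3]) == 1234   # any 3 of the 5 shares suffice
```

Fewer than λ shares leave the polynomial, and hence P(0), undetermined, which is what makes the threshold property hold.
</p>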
          <p>Presentation of our Timestamping Scheme
A stamper is represented by the two following functions:</p>
          <p>- Store(D, R) uses Shamir's secret sharing process to compute
Share(D) for a given document D. Then, it computes the vector R|Share(D) and
stores it. More formally, Store(D, R) consists in computing Share(D) = (D1,
. . . , Dl), where Di is the i-th index specified by D. To the vector Share(D) is
associated the vector (R_D1, . . . , R_Dl), where R_Di is the block of R indexed by Di. We
denote this vector R|Share(D), where the notation R|I means the values of the blocks
of R indexed by I1, . . . , In, with I = I1, . . . , In.
Definition 1. Store(D, R) = R|Share(D).</p>
          <p>- Stamp(D, Store(D, R)) uses Shamir's reconstruction process to find
the unique polynomial passing through the points Pi(x, y), where x is a block of
the vector R|Share(D) and y is the associated share in Share(D). This polynomial
is the timestamp T.</p>
          <p>Definition 2. T = Share⁻¹(R|Share(D), Share(D)).</p>
          <p>Definition 3. Sketch(R) = (H, R|H).</p>
          <p>Definition 4. Verify(Sketch(R), D, T) allows verifying the following equality:
T(R|H∩Share(D)) = H ∩ Share(D).</p>
          <p>In other words:
Definition 5. Verify(Sketch(R), D, T) allows verifying that DH(T(R|H∩Share(D)),
H ∩ Share(D)) = 0 for a timestamp T. If this equality is
verified, the timestamp T is accepted by the verifier and is said to be "valid".</p>
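          <p>Definitions 1 to 5 can be exercised in a small Python sketch. Everything here is illustrative: the field, the block count, and the fixed set Share(D) are assumptions of this toy example (in the real scheme, Share(D) is produced by Shamir sharing of D).

```python
import random

P = 2**31 - 1  # prime field for the interpolation polynomial (toy choice)

def lagrange_poly(points):
    """Return a function evaluating the unique polynomial through `points`."""
    def T(x):
        s = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, -1, P)) % P
        return s
    return T

n = 64
R = random.sample(range(P), n)      # the randomizer R, as n distinct blocks
share_D = [3, 11, 27, 40]           # Share(D): indices selected by D (toy)

# Stamp: the polynomial through the points (R_Di, Di) is the timestamp T
T = lagrange_poly([(R[i], i) for i in share_D])

# Sketch(R): the verifier stored a random subset H of (index, block) pairs
H = [3, 27, 50]
sketch = {i: R[i] for i in H}

# Verify: for every index in both H and Share(D), check T(R_i) == i
ok = all(T(sketch[i]) == i for i in set(H) & set(share_D))
```

Note that the verifier learns nothing new about R: he only evaluates T on blocks he already stored.
</p>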
          <p>The behavior of an adversary
An adversary is represented by the two following functions:</p>
          <p>- Store*(R), which saves a subset of R called C. The difference
between Sketch(R) and Store*(R) is that Sketch(R) is computed "online", while
Store*(R) may not be.</p>
          <p>- Stamp*(D, C), which, given a document D and a string C, tries to produce
a timestamp T* of D.</p>
          <p>Definition 6. Stamp*(D, C) = R*|Share(D).</p>
          <p>Definition 7. T* = Share⁻¹(R*|Share(D), Share(D)).</p>
          <p>Where R*|Share(D) is the vector of blocks associated to Share(D) according
to T*. If an adversary A produces, for a document D and a randomizer R, a
timestamp T* that is equal to the timestamp T produced by Stamp(D, Store(D, R)),
we say that he backdates the document "correctly". More formally:
Definition 8. An adversary backdates a document D with a success probability ε
for a given randomizer R if: Pr[Verify(Sketch(R), D, Stamp*(D, Store*(R)))] ≥ ε.
Definition 9. An adversary backdates correctly a document D for a given
randomizer R if DH(V1, V2) = 0, with V1 = R*|Share(D) and V2 = R|Share(D).</p>
          <p>Definition 10. An adversary backdates correctly a document D for a given
randomizer R with at most err errors if DH(V1, V2) ≤ err, with V1 = R*|Share(D)
and V2 = R|Share(D).</p>
          <p>Security Proofs of our Timestamping Scheme
Given the following parameters:
- s : the storage capacity of the most powerful adversary.
- r : the size of the random string R transmitted during a round t.</p>
          <p>- b: the size of a block of the random string R transmitted during a
round t.</p>
          <p>- l: the number of indexes specified by a given document.</p>
          <p>- n: the number of blocks of the random string R transmitted during a round t.</p>
          <p>Negligible Probability of backdating documents
In our timestamping scheme, the probability that an adversary backdates a
document D for a string R already transmitted, using his stored blocks of R, is
negligible. The proof is established in two steps. In the first step, we show that
if an adversary A wants to successfully backdate a document D for a random
string R, then he must backdate it "correctly" for this string R with a negligible error
err. In the second step, we show that the probability of correctly backdating
a document D for a random string R is negligible.
First step. If the adversary produces a timestamp of D such that the vector of
blocks associated to Share(D) according to this timestamp is far, in Hamming
distance, from the vector of blocks associated to the timestamp produced by
Stamp(D, Store(D, R)), then the verifier will, with high probability, reject the
adversary's timestamp, because the values indexed by Share(D) according
to the timestamp do not match the values indexed by Share(D) according to his
sketch. Given a correct timestamp T and a timestamp T*, produced by an adversary
A for the document D and the random string R, the adversary
backdates the document D for R successfully if he produces a timestamp T*
such that the vector of blocks associated to Share(D) according to T*, denoted
R*|Share(D), is close in Hamming distance to R|Share(D). In this case, we say that
the adversary backdates "correctly" the document D for the random string R
with err errors, where err is an integer very close to zero. More formally, let
A be an adversary. Denote by R_successful(D) the set of strings R
for which A has the necessary storage to try to backdate the document D with
a probability of success greater than ε.</p>
          <p>Lemma 1. If an adversary backdates a document D for a random string R
with a probability of success ε, then he backdates it "correctly" with at most
(n/|H|)·ln(1/ε) errors. [16]
Proof. Let us suppose that the adversary provides a timestamp for the document
D and the string R such that the timestamp is made with err* &gt; err incorrect
indices. Denote by INCORRECT(D, R) the set of incorrect indices for D and R.</p>
          <p>If H ∩ INCORRECT(D, R) ≠ ∅, the verifier will reject the adversary's
timestamp.</p>
          <p>Let i be an index of H. The probability that i is in INCORRECT(D, R)
is: Pr[i ∈ INCORRECT(D, R)] = err*/n.</p>
          <p>The probability that i does not belong to INCORRECT(D, R) is 1 − Pr[i ∈
INCORRECT(D, R)] = 1 − err*/n.</p>
          <p>The probability that no element of H belongs to INCORRECT(D, R) is:
Pr[∀ i ∈ H, i ∉ INCORRECT(D, R)] = (1 − err*/n)^|H| ≤ e^(−err*·|H|/n).</p>
          <p>Hence, if an adversary backdates a document D for a random string R with a
probability of success ε, then ε ≤ e^(−err*·|H|/n), which gives err* ≤ (n/|H|)·ln(1/ε):
he backdates it correctly with at most err ≤ (n/|H|)·ln(1/ε) errors.</p>
          <p>Denote by R_correct(D) the set of strings R for which A can "correctly" backdate
D with at most (n/|H|)·ln(1/ε) errors.</p>
          <p>Theorem 1. If err ≤ (n/|H|)·ln(1/ε), then R_successful(D) is a subset of R_correct(D).
[16]
Proof. According to Lemma 1, if an adversary backdates a document D for
a random string R with a probability of success ε, then he backdates it
correctly with at most err ≤ (n/|H|)·ln(1/ε) errors. So, if a random string R belongs to
R_successful(D), then it belongs to R_correct(D). Thus, R_successful(D) ⊆
R_correct(D). In addition, the closer the probability of success ε gets to 1,
the closer this error gets to 0. So, successfully backdating means correctly
backdating with a "negligible" error. We prove, in the second step, that the
probability that the random string R chosen by the adversary to backdate a document
D is in R_correct(D) is negligible.</p>
          <p>Second step. We now prove that, for an adversary A, a document D and a string
R, Pr[R ∈ R_correct(D)] is negligible.</p>
          <p>Theorem 2. If l ≥ 1 and b ≥ 1, then Pr[R ∈ R_correct(D)] is negligible, with b
the size of a block of R.</p>
          <p>Proof. We proved in the first step that if an adversary backdates a document
"successfully", he backdates it "correctly" with at most a negligible error. We now
prove that the probability that the string R, for which the adversary tries to
backdate the document D, is in R_correct(D) is negligible.</p>
          <p>In fact, knowing that the size of the blocks of a random string R is b and the
number of these blocks is n, the number of possible random strings is (2^b)^n.</p>
          <p>Moreover, to backdate a document D, the adversary has to create a
timestamp T* for D such that at least l − err blocks of R indexed by D are used to
generate T*.</p>
          <p>In other words, he can try to backdate D only for random strings for which
he knows the values of at least l − err of the l blocks indexed by D.</p>
          <p>Thus, since the adversary tries to correctly backdate D with at most err errors,
the number of random strings he can use is at most (2^b)^(n−l).</p>
          <p>So, the probability that a random string R belongs to R_correct(D) is:</p>
          <p>Pr[R ∈ R_correct(D)] ≤ (2^b)^(n−l)/(2^b)^n = 1/2^(l·b). Since l ≥ 1 and b ≥ 1,
this probability is negligible.</p>
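          <p>The counting argument of the proof can be reproduced directly; the sizes below are toy choices for illustration.

```python
from fractions import Fraction

b, l, n = 8, 16, 64   # block size in bits, indices per document, blocks in R (toy)

total = Fraction(2**b) ** n          # (2^b)^n possible random strings
usable = Fraction(2**b) ** (n - l)   # strings the adversary can actually target
p = usable / total                   # Pr[R in R_correct(D)]
assert p == Fraction(1, 2 ** (l * b))   # = 1/2^(l*b): negligible once l, b >= 1
```
</p>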
          <p>The Eternal Security of our Timestamping Scheme
In [16], Moran proves that, in his non-interactive timestamping scheme, an
adversary with storage M can easily backdate k = M/T documents by running
the timestamping process on some k documents and storing the generated
timestamps (each of which has length at most T). However, the probability that an
adversary backdates more than k documents is negligible. We show here that, in
our timestamping scheme, after the transmission of R, it is very difficult to forge
a fake timestamp for a given document. Moreover, we show that an adversary
having a document D and correct timestamps can forge a fake timestamp for
D only with a negligible probability. Thus, we prove the following theorem:
Theorem 3. If 2^b ≥ r/b and r/b &gt; l, then the probability of forging a fake
timestamp for a document D and a string R using correct timestamps related to R
is negligible.
Proof. The inequality 2^b ≥ r/b means that the number of possible values for a block
of size b bits is greater than the number of blocks of the string R transmitted
during a round t.</p>
          <p>Since l is the number of indices specified by a given document, this number must
always be less than the number of blocks of R. So, r/b &gt; l.</p>
          <p>It follows that 2^b ≫ l, which means that the number of possible values
for a block of R is much greater than the number of indices specified by the
document D.</p>
          <p>For each timestamp Tj (1 ≤ j ≤ k), if the adversary evaluates Tj on any value v
among the 2^b possible values of a block of R, he will recover a given index i.</p>
          <p>However, the fact that i = Tj(v) does not mean that Ri = v. It means
that i is the value associated to v by the polynomial Tj, but the couple (i, v)
does not necessarily belong to the string R. In other words, the string R may
not associate the value v to the block indexed by i. Moreover, there may exist i′
such that Ri′ = v, and there is no way to verify whether i = i′. The only points of
Tj that the adversary knows belong to R are the points whose
indices are specified by the document associated to Tj. The probability that the
adversary chooses one of these points is l/2^b ≪ 1.</p>
          <p>The probability of obtaining the k points required to forge a fake timestamp
is negligible, since it is the product of the probabilities of selecting each of the
points belonging to a valid timestamp.</p>
          <p>Thus, the adversary can obtain the k points needed to forge a fake timestamp
for a document D and a random string R only with negligible probability.</p>
          <p>Conclusion
In this paper, we have presented a non-interactive timestamping solution in the
bounded storage model. Our solution is non-interactive and hides even the fact
that a timestamping occurred. It also ensures total confidentiality of the provided
timestamps. In addition, our solution provides eternal security for the provided
timestamps: neither an increase in the storage capacity of an adversary nor the
evolution of his computing power will compromise a provided timestamp.</p>
          <p>Thus, our solution is more secure than existing systems, whose timestamps can
be challenged when the computing power or storage capacity of users increases.</p>
          <p>In this context, we studied the security of our solution and formally proved that
the probability of cheating is negligible. In our future work, we plan to adopt new
secret sharing schemes for timestamping.</p>
          <p>References
1. Y. Aumann, Y. Z. Ding and M. O. Rabin, Everlasting security in the bounded storage model, IEEE Transactions on Information Theory. 2002.
2. D. Bayer, S. Haber and W. S. Stornetta, Improving the efficiency and reliability of digital timestamping, In: Sequences'91: Methods in Communication, Security and Computer Science. 1992.
3. A. Ben Shil, K. Blibech and R. Robbana, A New Timestamping Schema in the Bounded Storage Model, In: Proceedings of the 3rd Conference on Risks and Security of Internet and Systems. CRiSIS. 2008.
4. K. Blibech and A. Gabillon, A New Timestamping Scheme Based on Skip Lists, In: ICCSA (3). pp. 395-405, 2006.
5. K. Blibech and A. Gabillon, A New Totally Ordered Timestamping Scheme, In: 5th Conference on Security and Network Architectures SAR. 2006.
6. K. Blibech and A. Gabillon, CHRONOS: an authenticated dictionary based on skip lists for timestamping systems, In: SWS. pp. 84-90, 2005.
7. K. Blibech, A. Gabillon and A. Bonnecaze, Etude des systèmes d'horodatage, Technique et Science Informatiques 26(3-4). pp. 249-278, 2007.
8. A. Bonnecaze, P. Liardet, A. Gabillon and K. Blibech, A distributed timestamping scheme, In: 4th Conference on Security and Network Architectures SAR. 2005.
9. A. Buldas, P. Laud, H. Lipmaa and J. Willemson, Time-stamping with Binary Linking Schemes, In: CRYPTO. pp. 486-501, 1998.
10. A. Buldas, P. Laud and B. Schoenmakers, Optimally efficient accountable timestamping, In: Public Key Cryptography. 2000.
11. S. Dziembowski and U. Maurer, On Generating the Initial Key in the Bounded Storage Model, In: Advances in Cryptology. EUROCRYPT. pp. 126-137, 2004.
12. S. Haber and W. S. Stornetta, How to Time-Stamp a Digital Document, J. Cryptology 3(2). pp. 99-111, 1991.
13. U. Maurer, Conditionally-perfect secrecy and a provably-secure randomized cipher, Journal of Cryptology. pp. 53-66, 1992.
14. U. Maurer, Secret key agreement by public discussion, IEEE Transactions on Information Theory, 39. pp. 733-742, 1993.
15. T. Moran, R. Shaltiel and A. Ta-Shma, Non-interactive Timestamping in the Bounded Storage Model, In: Advances in Cryptology. CRYPTO'04. LNCS 3152, Springer. pp. 460-476, 2004.
16. T. Moran, R. Shaltiel and A. Ta-Shma, Non-interactive Timestamping in the Bounded Storage Model, In: J. Cryptology. pp. 189-226, 2009.
17. National Institute of Standards and Technology (NIST), Announcement of Weakness in the Secure Hash Standard, Technical report. 1994.
18. A. Shamir, How to share a secret, Commun. ACM 22(11). pp. 612-613, 1979.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Corporation</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Business process execution language for web services bpel-4ws</article-title>
          . http://www.ibm.com/developerworks/library/ws-bpel/ (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>F.</given-names>
            <surname>Leymann</surname>
          </string-name>
          :
          <article-title>Web services ow language (ws) version 1.0</article-title>
          .
          <string-name>
            <surname>Technical</surname>
            <given-names>Report</given-names>
          </string-name>
          , International Business Machines Corporation (IBM) (May
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Kavantzas</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burdett</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ritzinger</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fletcher</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lafon</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Web services choreography description language version 1.0</article-title>
          .
          <string-name>
            <given-names>W3C</given-names>
            <surname>Working Draft</surname>
          </string-name>
          (
          <year>December 2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Langar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mejri</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adi</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Formal enforcement of security policies on concurrent systems</article-title>
          .
          <source>J. Symb. Comput</source>
          .
          <volume>46</volume>
          (
          <issue>9</issue>
          ) (
          <year>2011</year>
          )
          <volume>997</volume>
          {
          <fpage>1016</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Khoury</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tawbi</surname>
          </string-name>
          , N.:
          <article-title>Corrective enforcement: A new paradigm of security policy enforcement by monitors</article-title>
          .
          <source>ACM Trans. Inf. Syst. Secur</source>
          .
          <volume>15</volume>
          (
          <issue>2</issue>
          ) (
          <year>2012</year>
          )
          <fpage>10</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kochut</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>A cp-nets-based design and verication framework for web services composition</article-title>
          .
          <source>In: ICWS</source>
          . (
          <year>2004</year>
          )
          <volume>756</volume>
          {
          <fpage>760</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hamadi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benatallah</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>A petri net-based model for web service composition</article-title>
          .
          <source>In: ADC</source>
          . (
          <year>2003</year>
          )
          <volume>191</volume>
          {
          <fpage>200</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. Salaun, G.,
          <string-name>
            <surname>Bordeaux</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaerf</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Describing and reasoning on web services using process algebra</article-title>
          . In: ICWS. (
          <year>2004</year>
          )
          <volume>43</volume>
          {
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kazhamiakin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pandya</surname>
            ,
            <given-names>P.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pistore</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Timed modelling and analysis in web service compositions</article-title>
          .
          <source>In: ARES</source>
          . (
          <year>2006</year>
          )
          <volume>840</volume>
          {
          <fpage>846</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Carbone</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Honda</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yoshida</surname>
          </string-name>
          , N.:
          <article-title>Structured communication-centred programming for web services</article-title>
          .
          <source>In: ESOP</source>
          . (
          <year>2007</year>
          )
          <volume>2</volume>
          {
          <fpage>17</fpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>