<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Extraction of Insider Attack Scenarios from a Formal Information System Modeling</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amira Radhouani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Akram Idani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yves Ledru</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Narjes Ben Rajeb</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CNRS, LIG</institution>
          ,
          <addr-line>F-38000 Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>FST - Tunis-El Manar University</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>INSAT - Carthage University</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>LIP2-LR99ES18</institution>
          ,
          <addr-line>2092, Tunis</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Univ. of Grenoble Alpes, LIG</institution>
          ,
          <addr-line>F-38000 Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The early detection of potential threats during the modelling phase of a Secure Information System is required because it favours the design of a robust access control policy and the prevention of malicious behaviours during system execution. This paper deals with internal attacks, which can be carried out by people inside the organization. Such attacks are difficult to find because insiders have authorized system access and may also be familiar with system policies and procedures. We are interested in finding attacks which conform to the access control policy but lead to unwanted states. These attacks are favoured by policies involving authorization constraints, which grant or deny access depending on the evolution of the functional Information System state. In this context, we propose to model functional requirements and their Role Based Access Control (RBAC) policies using B machines and then to formally reason on both models. In order to extract insider attack scenarios from these B specifications, our approach first investigates symbolic behaviours. The use of a model-checking tool then makes it possible to exhibit, from a symbolic behaviour, an observable concrete sequence of operations that can be followed by an attacker. In this paper, we show how this combination of symbolic execution and model-checking makes it possible to find such insider attack scenarios.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>Developing secure Information Systems remains an active research area. Most of its challenges address the prevention of external attacks such as intrusion, code injection, denial of service, identity fraud, etc. Insider attacks are less often addressed, even though they may cause much more damage, because an insider is above all a trusted entity: it is intrinsically given the means to violate a security policy, either by using legitimate access or by obtaining unauthorized access. This paper deals specifically with Role Based Access Control (RBAC) concerns, with the aim of exhibiting potential insider threats from a formal model of a secure Information System. We are interested in finding attacks which conform to the access control policy but lead to unwanted states. These attacks are favoured by policies involving authorization constraints, which grant or deny access depending on the evolution of the functional Information System state. This reveals, on the one hand, the need to link the security model to the functional model of the information system and, on the other hand, the need to build tools taking into account the dynamic evolution of the IS state.</p>
      <p>
        Tools such as SecureMova [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and USE [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] are dedicated to validating security policies with respect to a functional model. However, these tools do not take into account the dynamic evolution of the functional state. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], we discussed the shortcomings of existing approaches in this context and showed the advantages of using a formal specification assisted by animation tools. This paper goes a step further than our previous work by taking advantage of model-checking and proof tools in order to automatically find insider attack scenarios composed of a sequence of actions modifying the functional state and breaking the authorization constraint.
      </p>
      <p>This paper is organized as follows: Section 2 gives an overview of our approach and its underlying methodology. In Section 3 we present a simple example that illustrates our contribution. Section 4 defines semantics and technical aspects. In Section 5 we propose a symbolic search that automates the generation of attack scenarios, and we discuss the results of its application on the given example. Finally, we draw conclusions and perspectives.</p>
    </sec>
    <sec id="sec-2">
      <title>Overall Approach</title>
      <p>
        Bridging the gap between formal techniques (e.g. Z, B, VDM, . . .) and graphical languages such as UML has been a challenge for many years. On the one hand, formal techniques allow automatic reasoning assisted by proof and model-checking tools; on the other hand, graphical techniques allow visualization and a better understanding of the system structure. These complementary aspects are useful to ensure a software development process based on notations with a precise syntax and semantics which still allows the system to be structured graphically. Most existing research works [
        <xref ref-type="bibr" rid="ref1 ref13 ref5 ref7">1, 5, 7, 13</xref>
        ] in this context have focused only on the modelling and validation of functional aspects, which are initially described by various kinds of UML diagrams (class, state/transition, sequence, . . .) and then translated into a formal specification. These works have shown the interest of linking the formal and graphical paradigms and also the feasibility of such translations.
      </p>
      <p>
        In our work, we adopt a similar approach in order to graphically model and formally reason on both functional and security models. We developed the B4MSecure platform [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] in order to translate a UML class diagram associated with a SecureUML model into B specifications. The resulting B specifications, illustrated in figure 1, follow the separation of concerns principle in order to be able to validate both models separately and then validate their interactions.
      </p>
      <p>The functional B model on the left-hand side of figure 1 is issued from a conceptual class diagram. It integrates all the basic operations generated automatically (constructors, destructors, setters, getters, . . .) as well as additional user-defined operations, which are integrated into the graphical model and specified using the B syntax. This functional specification can be further improved by adding invariants and carrying out correctness proofs with the help of the AtelierB prover. The B4MSecure platform is available at http://b4msecure.forge.imag.fr/.</p>
      <p>The security model, on the right-hand side of figure 1, is dedicated to controlling access to the functional operations with respect to the access control rules defined in the SecureUML model. In our approach, we do not deal with administrative operations, because we make the simplifying assumption that access control rules do not evolve during system execution. The formal security model makes it possible to validate RBAC well-formedness rules, such as the absence of role hierarchy cycles, and separation of duty (SoD) properties, such as the non-assignment of conflicting roles to users.</p>
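Such well-formedness checks are simple graph and set conditions. The sketch below is our own encoding (independent of the B specifications generated by B4MSecure): the role hierarchy is a relation from a role to its senior roles, and SoD is a set of conflicting role pairs that no user may hold simultaneously.

```python
# Illustrative RBAC well-formedness checks: no role-hierarchy cycles,
# and no user assigned two conflicting roles (static SoD).
def has_hierarchy_cycle(inherits):
    """inherits: dict role -> set of senior roles. DFS cycle detection."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {r: WHITE for r in inherits}

    def visit(r):
        colour[r] = GREY                      # role is on the current path
        for s in inherits.get(r, ()):
            if colour.get(s, WHITE) == GREY:  # back edge: cycle found
                return True
            if colour.get(s, WHITE) == WHITE and visit(s):
                return True
        colour[r] = BLACK                     # fully explored
        return False

    return any(colour[r] == WHITE and visit(r) for r in inherits)

def sod_violations(assignment, conflicts):
    """assignment: user -> set of roles; conflicts: set of frozenset pairs.
    Returns the users holding a conflicting pair of roles."""
    return {u for u, roles in assignment.items()
            if any(pair <= roles for pair in conflicts)}
```

For instance, declaring SystemAdministrator and SystemUser as conflicting (the SSD countermeasure discussed later in the paper) flags any user assigned both roles.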
      <p>
        This paper assumes that the validation of both models in isolation has been done: the operations of the functional model do not violate invariant properties, and the security model is robust. Such validation activities are widely discussed in the literature [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. However, currently available validation approaches do not sufficiently take into account the interactions between both models, which result from the fact that constraints expressed in the security model also refer to information of the functional model. Indeed, security policies often depend on dynamic properties based on the functional system state. For example, a bank customer may transfer funds from his account, but if the amount is greater than some limit the transfer must be approved by his account manager. Access control decisions then depend on the satisfaction of authorization constraints in the current system state. The dynamic evolution of the functional state impacts these constraints and may lead to a security vulnerability if it opens an unexpected access. In this paper we use validation tools (a prover and a model-checker) in order to search for malicious sequences of operations by analysing authorization constraints.
      </p>
    </sec>
    <sec id="sec-3">
      <title>A Simple Example</title>
      <p>
        In this section we use a running example taken from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which consists of a SecureUML model associated with a functional UML class diagram.
      </p>
      <sec id="sec-3-1">
        <title>Functional Model</title>
        <p>The functional UML class diagram (presented in figure 2) describes a meeting scheduler dedicated to managing data about two entities: Persons and Meetings.</p>
        <p>A meeting has one and only one owner (association MeetingOwner), a list of participants (association MeetingParticipants), a duration, and a starting date. A person can be the owner of several meetings and may participate in several meetings. Operations notify and cancel are user-defined; they respectively send messages to the participants of a meeting, and delete a meeting after notifying its participants by e-mail. Constructors, setters and getters are implicitly defined for both classes and both associations.</p>
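To make the functional model concrete, here is a minimal Python sketch of the two entities and their operations. The class and association names follow the diagram; reducing e-mail notification to an in-memory outbox, and the method names themselves, are our own simplifications, not B4MSecure output.

```python
# Sketch of the meeting scheduler's functional state as plain Python data.
class Scheduler:
    def __init__(self):
        self.person = set()
        self.meeting = set()
        self.meeting_owner = {}          # meeting -> owning person
        self.meeting_participants = {}   # meeting -> set of participants
        self.outbox = []                 # stands in for e-mail notification

    def person_new(self, p):
        self.person.add(p)

    def meeting_new(self, m, owner):
        # mirrors the B precondition: m not yet a meeting, owner a known person
        assert m not in self.meeting and owner in self.person
        self.meeting.add(m)
        self.meeting_owner[m] = owner
        self.meeting_participants[m] = set()

    def notify(self, m, text):
        # user-defined operation: send a message to every participant
        for p in self.meeting_participants[m]:
            self.outbox.append((p, text))

    def cancel(self, m):
        # user-defined operation: notify participants, then delete the meeting
        self.notify(m, f"meeting {m} cancelled")
        self.meeting.discard(m)
        self.meeting_owner.pop(m, None)
        self.meeting_participants.pop(m, None)
```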
      </sec>
      <sec id="sec-3-2">
        <title>Access Control Rules</title>
        <p>The access control model is given in Figure 3. It features three different roles:
– SystemUser: defines persons who are registered on the system and thus have permission UserMeetingPerm, which allows them to create and read meetings. Deletion and modification of meetings (including operation cancel) are granted to system users by means of permission OwnerMeetingPerm, featuring an authorization constraint checking that the user who tries to run these actions is the meeting owner.
– Supervisor: defines system users with more privileges, because they can run actions notify and cancel on any meeting even if they are not its owner.
– SystemAdministrator: having full access to entity Person, an administrator manages system users. Full access grants him the right to create a new person, and to remove or modify an existing one. Furthermore, a system administrator has only read access to meetings.</p>
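These rules can be read as a permission table in which only OwnerMeetingPerm carries an authorization constraint referring to the functional state. The sketch below is our own encoding (the action names and the table layout are illustrative, not SecureUML or B4MSecure output).

```python
# Permission table: (role, action) -> authorization constraint, where None
# means the permission is unconditional. Only OwnerMeetingPerm reads the
# functional state (here, the meetingOwner association).
PERMISSIONS = {
    ("SystemUser", "create_meeting"): None,            # UserMeetingPerm
    ("SystemUser", "read_meeting"): None,
    ("SystemUser", "modify_meeting"):                  # OwnerMeetingPerm
        lambda user, meeting, state: state["meetingOwner"].get(meeting) == user,
    ("SystemUser", "delete_meeting"):
        lambda user, meeting, state: state["meetingOwner"].get(meeting) == user,
    ("Supervisor", "cancel_meeting"): None,            # any meeting
    ("Supervisor", "notify_meeting"): None,
    ("SystemAdministrator", "full_access_person"): None,
    ("SystemAdministrator", "read_meeting"): None,
}

def allowed(user, roles, action, meeting, state):
    """Grant access if some activated role holds the permission and its
    authorization constraint (if any) is satisfied in the current state."""
    for role in roles:
        constraint = PERMISSIONS.get((role, action), "absent")
        if constraint == "absent":
            continue
        if constraint is None or constraint(user, meeting, state):
            return True
    return False
```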
      </sec>
      <sec id="sec-3-3">
        <title>Validation</title>
        <p>
          In [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], this example is validated by means of a set of static queries that inspect a given system state in order to extract useful information such as “which user can perform an action on a concrete resource in a given state”.
        </p>
        <p>The authorization constraint associated with OwnerMeetingPerm requires information from the functional model because it refers to the MeetingOwner association. In the rest of this article, we consider three users, John, Alice and Bob, whose user assignments are as defined by figure 4, and a given initial state in which Alice is the owner of meeting m1 and Bob is a participant of m1. In such a state, the above static query establishes that only Alice is allowed to modify or delete m1, because she is the owner of m1.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref10 ref9">10, 9</xref>
          ], a dynamic analysis approach based on the animation of a formal specification showed that validation should not only be based on a given static state, but should search for sequences of actions modifying this state and breaking the authorization constraint. For example, starting from the above state, a static query would only report that John, and also Bob, cannot modify m1 because neither of them satisfies the authorization constraint. A dynamic analysis would ask whether there exists a sequence of operations enabled for John, or Bob, that allows them to modify m1. This paper contributes towards automatically finding these malicious sequences. To perform these analyses, we applied the B4MSecure tool to the UML and SecureUML diagrams and generated a B specification of 946 lines. The tool automatically generates a specification of all basic functional operations, which is enriched manually with the user-defined operations (i.e. cancel and notify).
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Proposed Approach</title>
      <sec id="sec-4-1">
        <title>Trace Semantics for B Specifications</title>
        <p>In order to find malicious behaviours in an operational secure IS model, we rely on the set of finite observable traces of our B specifications. Indeed, B specifications can be given a trace semantics based on an initialization substitution init, a set of operations O and a set of state variables V. We note val a possible state predicate allowed by the invariant, and op an operation from O. A functional behaviour is an observable sequence Q</p>
        <p>Q ≙ init ; op1 ; op2 ; . . . ; opm
such that ∀i.(i ∈ 1..m ⇒ opi ∈ O) and there exists a sequence S of state predicates which does not violate the invariant properties:</p>
        <p>S ≙ val0 ; val1 ; . . . ; valm
in which val0 is an initial state, opi is enabled in state vali−1, and state vali is reached by executing opi from vali−1.</p>
        <p>The security model filters functional behaviours by analysing access control premises, which are triplets (u, R, c) where u is a user, R is a set of possible roles assigned to u, and c is an authorization constraint. An observable secure behaviour is a sequence Q where, for every step i, premise (ui, Ri, ci) is valid (expressed as (ui, Ri, ci) |= true). This means that the roles Ri activated by user ui grant him the right to run operation opi and, if a constraint ci exists, that it is satisfied. The following sequence of premises P must then be valid for Q:</p>
        <p>P ≙ (u1, R1, c1) ; (u2, R2, c2) ; . . . ; (um, Rm, cm)</p>
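The two conditions above (functional enabledness of each opi and validity of each premise) can be sketched as a small interpreter over state dictionaries. Everything here is illustrative and hand-written, not generated from the B machines: operations are (guard, effect) pairs, and each premise is reduced to a boolean function of the state.

```python
# Minimal sketch of an observable secure behaviour: execute a sequence of
# steps, refusing it as soon as an operation is not enabled (guard false)
# or its access-control premise (u_i, R_i, c_i) does not hold.
def run_secure(init, steps):
    """init: initial state dict (val0); steps: list of
    ((guard, effect), premise) pairs. Returns the trace of states
    val0..valm, or None if some step is refused."""
    state = dict(init)
    trace = [dict(state)]
    for (guard, effect), premise in steps:
        if not guard(state) or not premise(state):
            return None          # op_i not enabled, or premise invalid
        effect(state)            # move from val_{i-1} to val_i
        trace.append(dict(state))
    return trace
```

As a toy usage, an operation that increments a counter x while x < 2 yields a three-state trace for two steps and is refused on a third attempt.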
      </sec>
      <sec id="sec-4-2">
        <title>Tools to Exhibit Behaviours from B Specifications</title>
        <p>Model-checking and symbolic proof techniques are of interest in order to exhibit
a relevant behaviour from an operational B specification. Proof techniques deal
with infinite systems and can prove constraint satisfiability, or establish that
some operation can be enabled from an abstract state predicate. Model-checking
is based on model exploration of finite systems, and can be used to find a sequence
of actions leading to a given state or property. In our approach, we combine both
techniques in order to overcome their shortcomings: complexity of proofs for the
first one, and state explosion for the second one. In this sub-section, we illustrate
both tools.</p>
        <p>
          Model checking and animation (the ProB tool). ProB [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] is an animator and model-checker for B specifications that explores the concrete state space of the specification and generates accessibility graphs. Every predicate vali (where i ∈ {0, 1, . . . , m}) of sequence S is then a valuation of the variables of V. For example, starting from an initial state val0 where:
V = {person, meeting, meetingOwner, meetingParticipants} and such that:
val0 ≙ person = ∅
∧ meeting = ∅
∧ meetingOwner = ∅
∧ meetingParticipants = ∅
and having O = {personNew, meetingNew, meetingAddParticipants, . . .}, the scenario of table 1 is successfully animated using the ProB tool. Column “reached states” gives only the B variables modified since the previous step.
        </p>
        <p>
          In step 1, the tool animates operation personNew, which modifies variable person (initially equal to ∅); this action was performed by user John using role SystemAdministrator, without the need for an authorization constraint. In step 4, the tool adds participant Bob to meeting m1 by animating operation meetingAddParticipants, after checking that the authorization constraint is valid for Alice using role SystemUser. Indeed, Alice is the owner of m1.
Symbolic proof (the GeneSyst tool). ProB is useful for animating scenarios identified during requirements analysis, or for exhaustively exploring a finite subset of the state space. As we are interested in finding malicious scenarios that exhibit a potential internal attack, the ProB technique is useful only if it explores the right subset of the state space in the right direction, which is not obvious for infinite systems. Symbolic proof techniques, such as that of GeneSyst [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], are more interesting because they produce symbolic transition systems that represent a potentially infinite set of values. Such tools reason on the reachability of a symbolic state F by some operation op from a symbolic state E. In [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], three reachability properties are defined in terms of the following proof obligations, where E and F are two disjoint state predicates:
(1) possibly reached: E ∧ Pre(op) ⇒ ¬[Action(op)]¬F
(2) not reachable: E ∧ Pre(op) ⇒ [Action(op)]¬F
(3) always reached: E ∧ Pre(op) ⇒ [Action(op)]F
        </p>
        <p>In the generalized substitution theory, formula [S]R means that substitution S always establishes predicate R, and ¬[S]¬R means that substitution S may establish predicate R. Hence, proof (1) means that state F can be reached by the actions of operation op when the operation precondition is true in state E. Proof (2) means that F is never reached by the actions of op from state E. Finally, proof (3) means that F is always reached by op from E. Let us consider, for example, the functional operation meetingNew:</p>
        <p>meetingNew(m, p) ≙
PRE m ∉ meeting ∧ p ∈ person THEN
meeting := meeting ∪ {m}
|| meetingOwner := meetingOwner ∪ {(m ↦ p)}
END</p>
        <p>This operation adds a new meeting m and links it to an owner p. If we define
states E and F such that:</p>
        <p>E ≙ meetingOwner[{m1}] = ∅</p>
        <p>F ≙ meetingOwner[{m1}] ≠ ∅
then the proof obligation produced by GeneSyst for property (1) was successfully proved, showing that operation meetingNew, when enabled from a state where m1 does not exist and at least one person exists in the system, may lead to a state where m1 is created and has an owner.</p>
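Unfolding proof obligation (1) for meetingNew with m = m1 makes this concrete (a hand-made sketch of the substitution calculation, not GeneSyst's actual output):

```latex
\begin{aligned}
E \wedge \mathit{Pre}(\mathit{op}) &\;\equiv\;
   \mathit{meetingOwner}[\{m_1\}] = \emptyset \;\wedge\;
   m_1 \notin \mathit{meeting} \;\wedge\; p \in \mathit{person}\\[2pt]
[\mathit{Action}(\mathit{op})]\,\neg F &\;\equiv\;
   (\mathit{meetingOwner} \cup \{m_1 \mapsto p\})[\{m_1\}] = \emptyset
   \;\equiv\; \mathit{false}
\end{aligned}
```

Since [Action(op)]¬F reduces to false (after the substitution, the image of m1 contains p), the implication E ∧ Pre(op) ⇒ ¬[Action(op)]¬F holds trivially, which is exactly the "possibly reached" obligation.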
        <p>Our work focuses on proof (1), which states the reachability of a target state from an initial one, as illustrated above, because it is sufficient to decide whether an operation is potentially useful for a malicious behaviour. Proofs (2) and (3) can be used if one wants to establish that a state can never be reached, or is always reached, by an operation.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Malicious Behaviour</title>
        <p>Based on the security requirements, several operations are identified as critical. In our example, the security requirements identify the integrity of meeting information as critical; therefore, operations which perform unauthorized modifications of meetings are considered critical.</p>
        <p>A malicious behaviour executed by a user u, with regard to authorization constraints, is an observable secure behaviour Q with m steps such that:
– opm is a critical operation with which an authorization constraint cm is associated;
– user u is malicious and would like to run opm by misusing his roles Ru;
– val0 is an initial state where (u, Ru, cm) |= false;
– for every step i (i ∈ 1..m), premise (u, Ru, ci) |= true.</p>
        <p>In other words, malicious user u is not initially allowed to execute a critical operation, but he is able to run a sequence of operations leading to a state from which he can execute this operation. In our investigation we suppose that user u executes this malicious sequence without collusion; this problem will be tackled in future work.</p>
        <p>Section 3.3 gave an example where, due to the authorization constraint, neither Bob nor John is allowed to run a modification operation from the initial state, such as meetingSetStart which modifies attributes of class Meeting. This initial state is:
val0 ≙ person = {Alice, Bob}
∧ meeting = {m1}
∧ meetingOwner = {(m1 ↦ Alice)}
∧ meetingParticipants = {(m1 ↦ Bob)}</p>
        <p>In the following, we denote as init0 the sequence of operations leading to
val0 such as that presented in table 1. We used the model-checking facility of
ProB in order to explore exhaustively the state space and automatically find a
path starting from val0 and leading to a state where operation meetingSetStart
becomes permitted to John. We asked ProB to find a sequence where John
becomes the owner of m1:</p>
        <p>meetingOwner(m1) = John
After exploring more than 1000 states, ProB found a scenario in which John executes sequentially the operations personNew, personAddMeetingOwner and meetingSetStart. Indeed, this dynamic analysis showed that John, as a system administrator, has full access to entity Person. This permission allows him to create, modify, read and delete any instance of class Person. First, he creates an instance John of class Person that corresponds to himself, by running operation personNew(John). Then he adds meeting m1 to the set of meetings owned by John, by running operation personAddMeetingOwner(John, m1), which is a basic modification operation of class Person. These two actions allowed him to become the owner of m1, and he was then able to modify the meeting of Alice. Like all model-checking techniques, when ProB explores the state space exhaustively, it faces the combinatorial explosion problem, which depends on the number of operations provided to the tool and on the state space size. In order to address this problem, our approach proposes a symbolic search which finds a sequence of potentially useful operations on which the model-checker should be focused.</p>
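The exploration that ProB performs can be mimicked by a small breadth-first search over concrete states. This toy re-implementation (not ProB itself, and restricted for brevity to the two Person operations relevant to John, which is our simplification) finds the same attack sequence:

```python
from collections import deque

def ops_for(user):
    """(name, guard, effect) triples, simplified from the generated B
    machine: the operations the given user may run via his roles."""
    return [
        ("personNew",
         lambda s: user not in s["person"],
         lambda s: {**s, "person": s["person"] | {user}}),
        ("personAddMeetingOwner",
         lambda s: user in s["person"] and "m1" in s["meeting"],
         lambda s: {**s, "owner": {**s["owner"], "m1": user}}),
    ]

def find_attack(val0, user):
    """BFS from val0 for a state satisfying the authorization constraint
    of meetingSetStart (user owns m1); returns the operation sequence."""
    goal = lambda s: s["owner"].get("m1") == user
    freeze = lambda s: (frozenset(s["person"]), frozenset(s["meeting"]),
                        tuple(sorted(s["owner"].items())))
    queue, seen = deque([(val0, [])]), {freeze(val0)}
    while queue:
        state, path = queue.popleft()
        if goal(state):
            return path + ["meetingSetStart"]   # constraint now satisfied
        for name, guard, effect in ops_for(user):
            if guard(state):
                nxt = effect(state)
                if freeze(nxt) not in seen:
                    seen.add(freeze(nxt))
                    queue.append((nxt, path + [name]))
    return None
```

Starting from the paper's val0 (Alice owns m1), the search returns personNew ; personAddMeetingOwner ; meetingSetStart for John, matching the scenario ProB found.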
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Symbolic Search</title>
      <p>The proposed symbolic search is performed by an algorithm that looks for an observable sequence Q ≙ init0 ; op1 ; . . . ; opm executed by a user u, where (u, Ru, cm) is not valid for a critical operation opm in the initial state val0, but becomes valid in state valm−1, from which opm can be enabled. It is a backward search algorithm: it starts from the goal state valm−1 from which the critical operation opm can be enabled (valm−1 ≙ cm ∧ Pre(opm)), and works backwards until the initial state val0 is encountered. The algorithm ends when a sequence Q is found, or when all operations have been examined without encountering the initial state. We consider that val0 is a completely valuated state, such as the one where Alice is the owner of m1 and Bob is a participant of m1. This prevents the initial state from being included in both states valm−1 and valm−2, which would never satisfy the condition of the while loop. Note that each operation occurs at most once in a computed sequence, which ensures the termination of our algorithm.
1. Q ≙ opm;
2. valm−1 ≙ cm ∧ Pre(opm);
3. valm−2 ≙ ¬valm−1;
4. while val0 ⇏ valm−1 do
5.   choose any oi ∈ O where
6.     (u, Ru, ci) |= true ∧
7.     valm−2 ∧ Pre(oi) ⇒ ¬[Action(oi)]¬valm−1
8.   do
9.     Q ≙ oi ; Q ;
10.    valm−1 ≙ valm−2 ∧ Pre(oi);
11.    valm−2 ≙ valm−2 ∧ ¬Pre(oi);
12.  else
13.    raise exception: No sequence found
14.  enddo
15. endwhile
16. Q ≙ init ; Q ;</p>
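The numbered algorithm can be sketched as the following skeleton, in which the "possibly reached" proof (1) and the val0-implication test are abstracted as oracles; in the actual tool chain these are discharged by GeneSyst and the AtelierB prover. The oracle tables used in the usage below are hand-filled for the running example and, for simplicity, represent each symbolic target state by the name of the operation whose Pre it carries, and omit the dead meetingNew branch.

```python
def backward_search(ops, critical, reaches, implied_by_init):
    """Backward construction of Q = init0 ; op_n ; ... ; op_m.
    ops: candidate operation names (each usable at most once);
    reaches(o, t): oracle for proof (1) -- can o possibly reach the
    current target state (represented by operation name t)?
    implied_by_init(t): does val0 imply that target state?"""
    q = [critical]                # Q starts as the critical operation op_m
    remaining = list(ops)
    while not implied_by_init(q[0]):
        for o in remaining:
            if reaches(o, q[0]):  # proof (1) succeeded for o
                q.insert(0, o)    # prepend o; its Pre defines the new target
                remaining.remove(o)
                break
        else:
            raise RuntimeError("no symbolic attack scenario found")
    return ["init0"] + q
```

On the meeting scheduler example, with oracles encoding that personAddMeetingOwner possibly reaches the constraint of meetingSetStart, that personNew possibly enables personAddMeetingOwner, and that val0 implies Pre(personNew), the skeleton reconstructs the symbolic scenario init0 ; personNew ; personAddMeetingOwner ; meetingSetStart.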
      <sec id="sec-5-1">
        <title>Step by Step Illustration</title>
        <p>We take advantage of abstraction and refine the valm−2 symbolic state step by step:
1. At the first step of the algorithm, the state space is represented by two symbolic states: the first one, valm−1, includes all states where the authorization constraint cm is true and where opm is enabled; the second one, valm−2, is the negation of valm−1, i.e. ¬cm ∨ ¬Pre(opm). As they are two disjoint state predicates, we carry out proof (1) in order to find an operation oi belonging to O which possibly reaches the first state valm−1 from the second one valm−2, and such that premise (u, Ru, ci) is valid. If no such oi exists, then no sequence can be found for the expected attack, and we can try proof (2) for each operation, attesting that no operation ever reaches valm−1 from valm−2.
2. At the second step of the algorithm, if proof (1) succeeds for some operation opm−1, then there may exist an observable sequence leading to the critical operation where the access control premise (u, Ru, cm) is valid, and hence a potential symbolic attack scenario can be found. The algorithm looks inside state valm−2 in order to find the previous operations that can be invoked in the attack scenario. State valm−2 is partitioned into two sub-states:
valm−2 ∧ Pre(opm−1) ≡ ¬(cm ∧ Pre(opm)) ∧ Pre(opm−1)
valm−2 ∧ ¬Pre(opm−1) ≡ ¬(cm ∧ Pre(opm)) ∧ ¬Pre(opm−1)
Then, we look for operations that reach the first sub-state from the second one.
3. The algorithm proceeds iteratively by partitioning the second state into two sub-states until it finds a state that includes the initial state. In the best case, our algorithm yields a symbolic attack scenario, which consists of a sequence (init0 ; opn ; opn+1 ; . . . ; opm) invoked by the same user u, where:
valn−1 ≙ ¬(cm ∧ Pre(opm)) ∧ ¬Pre(opm−1) ∧ ¬Pre(opm−2) ∧ . . . ∧ Pre(opn)
and such that val0 ⇒ valn−1 ∧ ∀i.(i ∈ (n..m) ⇒ (u, Ru, ci) |= true)</p>
      </sec>
      <sec id="sec-5-2">
        <title>Application</title>
        <p>We apply our algorithm to the meeting scheduler example starting from the following initial state val0:
val0 ≙ person = {Alice, Bob}
∧ meeting = {m1}
∧ meetingOwner = {(m1 ↦ Alice)}
∧ meetingParticipants = {(m1 ↦ Bob)}</p>
        <p>In this state, user John is not allowed to modify meeting m1 because the authorization constraint allows modification only by the owner of m1. A malicious scenario would lead to a state where John becomes able to execute a modification operation, such as meetingSetStart, on meeting m1. In that state we have to verify:</p>
        <p>Pre(meetingSetStart(m1, start)) ≙ m1 ∈ meeting ∧ start ∈ NAT
and (John, SystemUser, meetingOwner(m1) = John) |= true
1. First iteration: considering the following symbolic states
valm−1 = (meetingOwner(m1) = John) ∧ m1 ∈ meeting ∧ start ∈ NAT
valm−2 = ¬valm−1
we have:
– val0 ⇏ valm−1 because, in state val0, meetingOwner(m1) = Alice, and
– proof (1) succeeds for the operations meetingNew and personAddMeetingOwner.
Then, we may go on to the second iteration of the algorithm for each of these operations.
2. Second iteration: we partition state valm−2 into two sub-states:
valm−2 = ¬valm−1 ∧ Pre(opm−1)
valm−3 = ¬valm−1 ∧ ¬Pre(opm−1)
– case 1: we choose opm−1 = meetingNew, and then we have:
– Pre(meetingNew) ≙ m1 ∉ meeting ∧ John ∈ person, and
– val0 ⇏ valm−2 because John ∉ person
In this case, the algorithm does not find an operation leading to a state where operation meetingNew becomes enabled. Indeed, no operation satisfied proof obligation (1). Our algorithm concludes that there is no attack scenario invoking meetingNew at this step.</p>
        <p>(Figure: candidate scenario init0 ; meetingNew ; meetingSetStart.)
3. Third iteration: we partition state valm−3 into two sub-states:
valm−3 = ¬valm−1 ∧ ¬Pre(personAddMeetingOwner) ∧ Pre(personNew)
valm−4 = ¬valm−1 ∧ ¬Pre(personAddMeetingOwner) ∧ ¬Pre(personNew)
This terminates the algorithm normally because, in this case, val0 ⇒ valm−3. Indeed, Pre(personNew) ≙ John ∉ person, and in the initial state John does not belong to set person. Figure 6 presents the full symbolic scenario that allows John to modify Alice’s meeting: personNew ; personAddMeetingOwner ; meetingSetStart.
Technically, our approach applies the GeneSyst tool in order to produce proof obligations, and then asks the AtelierB prover to discharge them automatically. As the resulting scenarios are symbolic and based on “possibly reached” proofs, the analyst can conclude that attacks may exist, but he cannot attest their feasibility on the concrete system. An interesting contribution of our proof-based symbolic sequences, besides the fact that they draw the analyst’s attention to potential flaws, is that they give useful inputs to the model-checker. Indeed, a model-checking tool can be used to exhibit, from a symbolic behaviour, an observable concrete sequence of operations that can be followed by an attacker. In order to significantly reduce the state space, we can ask ProB to explore only the operations found in the symbolic malicious scenarios. For our example, when trying only the operations personNew, personAddMeetingOwner and meetingSetStart, ProB exhibits a concrete attack scenario after visiting a dozen states, which is a significant speed-up with respect to our initial ProB attempts (involving more than 1000 states).</p>
        <p>Our technique was able to extract another scenario (figure 7), which can be executed by user Bob from the same initial state, in order to steal the ownership of m1. In this scenario, Bob first cancels the meeting and then recreates it before applying the critical operation.</p>
        <p>(Fig. 7: init0 ; meetingCancel ; meetingNew ; meetingSetStart)</p>
        <p>The first scenario, carried out by user John, is made possible by the full access permission on class Person, associated with role SystemAdministrator, which includes the right to modify association ends. This attack affects meeting integrity. One solution would be to add a static separation of duty (SSD) constraint between roles SystemAdministrator and SystemUser. John would then still be able to become the owner of the meeting, but would not be able to log in as SystemUser in order to modify it.</p>
        <p>The second scenario, performed by Bob, was possible due to role Supervisor,
which gives him the right to cancel a meeting; then, as a SystemUser, he can
recreate it in order to become its owner. This scenario does not point out a
flaw: whenever a meeting is cancelled, it is legitimate that a user can start a
new meeting with the same identifier as the cancelled one.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>
        We described in this paper a symbolic search approach that extracts insider
malicious behaviours from a formal Information System model. The meeting
scheduler example was discussed in several articles [
        <xref ref-type="bibr" rid="ref2 ref3">3, 2</xref>
        ]. However, these works do
not report the attack scenarios presented in this paper, because they do not
take into account the dynamic evolution of the functional state. The
contributions of this paper showed how dynamic analysis, assisted by proofs and
model-checking, helps find out potential threats. In addition, thanks to our
algorithm and to proof and model-checking tools, our method can be fully
automated in order to extract attack scenarios breaking authorization
constraints. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
a dynamic analysis is done interactively with the help of a Z animator, but it
is tedious and may miss many possible flaws. We also applied our approach to
the case study treated in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and we were able to find the discussed
threat automatically. We are currently targeting a real case study, issued from
the ANR-Selkis project7, which deals with a medical information system
involving various authorization constraints.
      </p>
      <p>Our approach is automated by exploiting the tools B4MSecure8, GeneSyst9,
AtelierB10 and ProB11. B4MSecure translates functional and security graphical
models into a B specification, from which we automatically produce proof
obligations on reachability properties by taking advantage of the GeneSyst tool.
These proof obligations are then discharged automatically by the AtelierB
prover. When a symbolic scenario is found, ProB is used to explore the concrete
state space, focusing on the operations issued from the symbolic scenario. The
main limitation of our work is that, when proof obligations are complex,
AtelierB sometimes fails to prove them automatically. Interactive proofs are
then required, but they may be quite difficult for the analyst. A naive solution
is to keep the operations whose proofs do not succeed automatically, so that
they can be exploited further by the model-checker. A more interesting solution
is to focus on other kinds of proof obligations: for example, one can try to
prove that an operation o is never enabled from a state E and/or that o never
reaches a state F. Applying these proofs to the meeting scheduler example, we
were able to eliminate half of the operations after proving automatically that
they cannot be involved in the attack scenario.
7 http://lacl.univ-paris12.fr/selkis
8 http://b4msecure.forge.imag.fr
9 http://perso.citi.insa-lyon.fr/nstouls/?ZoomSur=GeneSyst
10 http://www.atelierb.eu/
11 http://www.stups.uni-duesseldorf.de/ProB</p>
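      <p>The two alternative proof obligations mentioned above can be sketched as follows. The notation is illustrative rather than GeneSyst's exact output: grd(o) denotes the guard of operation o, S_o its B substitution, and [S_o] the standard B weakest-precondition operator.</p>

```latex
% Sketch, assuming standard B weakest-precondition notation.
% "o is never enabled from symbolic state E":
E \;\Rightarrow\; \lnot\,\mathrm{grd}(o)
% "o never leads from E into symbolic state F":
E \wedge \mathrm{grd}(o) \;\Rightarrow\; [S_o]\,\lnot F
```

      <p>Either obligation, once discharged, removes the corresponding transition from every candidate symbolic scenario.</p>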
      <p>We believe that reachability properties can also be expressed by means of
LTL formulae. We have started exploring this direction, building on the LTL
formula checking facilities of ProB, and the first results are promising.</p>
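      <p>As a sketch of this direction: the reachability of a state satisfying a predicate attack (an illustrative name, standing e.g. for a violated authorization constraint) is expressible in LTL by the standard identity</p>

```latex
% An attack state is reachable on some path iff F attack holds there,
% i.e. iff the safety property G(\lnot attack) has a counter-example.
\mathbf{F}\,\mathit{attack} \;\equiv\; \lnot\,\mathbf{G}\,\lnot\,\mathit{attack}
```

      <p>so that an LTL model-checker can search for a counter-example to the safety formulation in order to exhibit the attack trace.</p>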
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>K.</given-names>
            <surname>Anastasakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bordbar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Georg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Ray</surname>
          </string-name>
          .
          <article-title>UML2Alloy: A challenging model transformation</article-title>
          .
          <source>Model Driven Engineering Languages and Systems</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>D.</given-names>
            <surname>Basin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Clavel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Doser</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Egea</surname>
          </string-name>
          .
          <article-title>Automated analysis of security-design models</article-title>
          .
          <source>Information and Software Technology</source>
          ,
          <volume>51</volume>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>D.</given-names>
            <surname>Basin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Doser</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Lodderstedt</surname>
          </string-name>
          .
          <article-title>Model driven security: From UML models to access control infrastructures</article-title>
          .
          <source>ACM Trans. Softw. Eng. Methodol.</source>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>D.</given-names>
            <surname>Bert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M-L.</given-names>
            <surname>Potet</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Stouls</surname>
          </string-name>
          .
          <article-title>GeneSyst: a Tool to Reason about Behavioral Aspects of B Event Specifications</article-title>
          .
          <article-title>Application to Security Properties</article-title>
          . In H. Treharne,
          <string-name>
            <given-names>S.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Henson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Schneider</surname>
          </string-name>
          , editors,
          <source>ZB 2005: Formal Specification and Development in Z and B, 4th International Conference of B and Z Users</source>
          , volume
          <volume>3455</volume>
          of LNCS. Springer-Verlag,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>A.</given-names>
            <surname>Idani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Boulanger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Philippe</surname>
          </string-name>
          .
          <article-title>Linking paradigms in safety critical systems</article-title>
          .
          <source>International Journal of Computers and their Applications</source>
          (IJCA),
          <source>Special Issue on the Application of Computer Technology to Public Safety and Law Enforcement</source>
          ,
          <volume>16</volume>
          (
          <issue>2</issue>
          ),
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhlmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sohr</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Gogolla</surname>
          </string-name>
          .
          <article-title>Employing uml and ocl for designing and analysing role-based access control</article-title>
          .
          <source>Mathematical Structures in Computer Science</source>
          ,
          <volume>23</volume>
          (
          <issue>4</issue>
          ),
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>K.</given-names>
            <surname>Lano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Clark</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Androutsopoulos</surname>
          </string-name>
          .
          <article-title>UML to B: Formal Verification of Object-Oriented Models</article-title>
          . In
          <source>Integrated Formal Methods</source>
          , volume
          <volume>2999</volume>
          <source>of LNCS</source>
          . Springer,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ledru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Milhau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Qamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Laleau</surname>
          </string-name>
          ,
          <string-name>
            <surname>J-L. Richier</surname>
          </string-name>
          , and
          <string-name>
            <surname>M-A. Labiadh</surname>
          </string-name>
          .
          <article-title>Taking into account functional models in the validation of IS security policies</article-title>
          .
          In Camille Salinesi and Oscar Pastor, editors,
          <source>CAiSE Workshops</source>
          , volume
          <volume>83</volume>
          <source>of Lecture Notes in Business Information Processing</source>
          . Springer,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ledru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Milhau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Qamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Laleau</surname>
          </string-name>
          ,
          <string-name>
            <surname>J-L. Richier</surname>
          </string-name>
          , and
          <string-name>
            <surname>M-A. Labiadh</surname>
          </string-name>
          .
          <article-title>Validation of IS security policies featuring authorisation constraints</article-title>
          .
          <source>International Journal of Information System Modeling and Design (IJISMD)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ledru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Qamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idani</surname>
          </string-name>
          ,
          <string-name>
            <surname>J-L. Richier</surname>
          </string-name>
          , and
          <string-name>
            <surname>M-A. Labiadh</surname>
          </string-name>
          .
          <article-title>Validation of security policies by the animation of Z specifications</article-title>
          .
          <source>In Proceedings of the 16th ACM Symposium on Access Control Models and Technologies</source>
          , SACMAT '11
          , New York, NY, USA,
          <year>2011</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>M.</given-names>
            <surname>Leuschel</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Butler</surname>
          </string-name>
          .
          <article-title>ProB: A Model Checker for B</article-title>
          .
          <source>In FME 2003: Formal Methods Europe</source>
          , volume
          <volume>2805</volume>
          <source>of LNCS</source>
          . Springer-Verlag,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>N.</given-names>
            <surname>Qamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ledru</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Idani</surname>
          </string-name>
          .
          <article-title>Evaluating RBAC Supported Techniques and their Validation and Verification</article-title>
          .
          <source>In 6th International Conference on Availability, Reliability and Security (ARES</source>
          <year>2011</year>
          ), Vienna, Austria,
          <year>August 2011</year>
          . IEEE Computer Society.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>C.</given-names>
            <surname>Snook</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Butler</surname>
          </string-name>
          .
          <article-title>UML-B: Formal modeling and design aided by UML</article-title>
          .
          <source>ACM Transactions on Software Engineering and Methodology</source>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>