<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Propagation, Transformation and Refinement of Safety Requirements</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dominik Sojer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Buckl</string-name>
          <email>buckl@fortiss.org</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alois Knoll</string-name>
          <email>knollg@in.tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Technische Universität München, Department of Informatics</institution>
          ,
          <addr-line>85748 Garching bei München</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>fortiss GmbH</institution>
          ,
          <addr-line>Cyber-Physical Systems, 80805 München</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Safety requirements are an important artifact in the development of safety-critical systems. They are used by experts as a basis for the appropriate selection and implementation of fault detection mechanisms. Various research groups have worked on their formal modeling with the goal of determining if a system can meet these requirements. In this paper, we propose the application of formal models of safety requirements throughout all constructive development phases of a model-driven development process to automatically generate appropriate fault detection mechanisms. The main contribution of this paper is a rigorous formal specification of safety requirements that allows the automatic propagation, transformation and refinement of safety requirements and the derivation of appropriate fault detection mechanisms. This is an important step to guarantee consistency and completeness in the critical transition from requirements engineering to software design, where many errors can be introduced into a system by using conventional, non-formal techniques.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        During software development, there is usually a logical gap between requirements specification and software design specification. This is typically the step where informal, human-readable requirements have to be transformed into a formal system design. In the development of safety critical systems, this gap in the development chain also exists for safety requirements. Safety requirements are requirements that deal with system safety. Safety of a system is defined as the absence of catastrophic consequences on the users and the environment of the system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The gap in the development process between requirements specification and software design specification is one of the key points where system safety can be violated by the introduction of faults. Therefore we propose a fully automatic approach that uses formally modeled safety requirements to automatically generate appropriate fault detection mechanisms in the system, so that the safety requirements can be fulfilled without human interaction. The main contribution of this paper is a rigorous formal specification of safety requirements that allows an automatic propagation, transformation and refinement of safety requirements and the derivation of appropriate fault detection mechanisms. Definitions for all of these terms will be given in Section 2. This is an important step to guarantee consistency and completeness in the transition from requirements engineering to software design, where many errors can be introduced into a system by using conventional, non-formal techniques.
      </p>
      <p>The approach aims at accompanying traditional safety-enhancing techniques like the selection and implementation of appropriate hardware and software architectures.</p>
      <p>To show the validity of our work, we implemented the approach in FTOS [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], a tool for model-based development of fault-tolerant embedded systems that we developed.</p>
    </sec>
    <sec id="sec-4">
      <title>In Section 2.1, our approach will be described informally to give the reader a basic understanding of the technique.</title>
      <p>Section 2.2 presents how safety requirements and fault detection mechanisms can be described and compared in a formal way. Section 2.3 shows how our work can be integrated into a formal system model and how the propagation, transformation and refinement of safety requirements can be performed formally. Section 3 gives an evaluation of the specific implementation in FTOS and Section 4 compares our approach to the related work. Finally, Section 5 concludes this paper and presents some possible areas for future work.</p>
      <sec id="sec-4-1">
        <title>Approach</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Our approach is based on a formal foundation, but for a better understanding, Section 2.1 will explain it in an informal way. This section will refer to Figure 1, which presents a very small example system where the propagation, transformation and refinement steps of safety requirements are visualized.</title>
      <sec id="sec-5-1">
        <title>Informal Description of the Approach</title>
        <p>
          Safety requirements usually deal with the behavior of the whole system and therefore are specified in natural language. Examples are "an airbag has to activate if there is an emergency" and "an airbag must not activate if there is no emergency". Due to safety requirements being very application specific, specification techniques for them on the system level are very powerful, and therefore only little information can be extracted from them automatically. Thus we propose that requirements have to be refined manually, after they have been identified, to an abstraction layer where they can be handled in an algorithmic way, for example the actor level of actor-based models of computation [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Figure 1 shows an exemplified system consisting of 5 actors (A to E), two safety requirements and one safety assurance. Actor C consists of the two hardware components CPU and RAM on a more specific layer of abstraction. This example will be used throughout the paper.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>On the actor level, safety requirements consist of a link to an actor and a list</title>
      <p>
        of failures whose occurrence has to be detected by this actor. To describe these faults, McDermid [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] defined a comprehensive list of basic failure classes, which we extended to describe the time and value domain of failures in more detail.
      </p>
    </sec>
    <sec id="sec-7">
      <title>These extended failure classes are:</title>
      <p>- Wrong value (with threshold for deviation)
- Wrong timing (with thresholds for too early and too late)
- No result
- Wrong values in subsequent time steps
- Multiple wrong values at the same time</p>
      <p>Safety requirements can be propagated along data flow paths in systems. During this propagation, the specification of a safety requirement may have to be changed automatically. Therefore we introduce the concept of safety assurances.</p>
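      <p>As an illustration only, the extended failure classes and an actor-level safety requirement admit a direct data representation. The following Python sketch is ours; the class and field names are assumptions, not taken from the FTOS code base.</p>

```python
# Illustrative encoding of the extended failure classes and an actor-level
# safety requirement; names are hypothetical, not from FTOS.
from dataclasses import dataclass
from enum import Enum, auto

class FailureClass(Enum):
    WRONG_VALUE = auto()              # with a threshold for deviation
    WRONG_TIMING = auto()             # with thresholds for too early / too late
    NO_RESULT = auto()
    WRONG_SUBSEQUENT_VALUES = auto()  # wrong values in subsequent time steps
    MULTIPLE_WRONG_VALUES = auto()    # multiple wrong values at the same time

@dataclass(frozen=True)
class SafetyRequirement:
    actor: str            # link to the actor the requirement is attached to
    failures: frozenset   # failure classes whose occurrence must be detected

req1 = SafetyRequirement("C", frozenset({FailureClass.WRONG_VALUE,
                                         FailureClass.NO_RESULT}))
```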
    </sec>
    <sec id="sec-8">
      <title>Safety assurances are specified for actors and describe how safety requirements</title>
      <p>are transformed when they pass the specified actor. On the one hand, a safety assurance specifies the further propagation path of a safety requirement by mapping ports for "incoming safety requirements" to ports for "outgoing safety requirements". On the other hand, a safety assurance describes how the failures that are specified by a safety requirement are transformed. Some safety assurances can automatically be extracted from system models, but the majority of them has to be specified manually, similar to safety requirements. After the propagation, safety requirements can be refined from the actor level to the hardware level, on which appropriate fault detection mechanisms can be automatically selected to fulfill the requirements.</p>
      <p>[Figure 1: Example system with actors A to E, safety requirements req1 and req2, and safety assurance assur1, shown (a) before and (b) after propagation. Actor C consists of the hardware components CPU and RAM.]</p>
      <sec id="sec-10-1">
        <title>Step 1: Propagation and Transformation</title>
        <p>Safety requirements and safety assurances have to be specified manually. Afterwards, the safety requirements can automatically be back propagated along the data flow paths. This is necessary because a system's output does not only depend on its output actor, but on all actors that form the data flow chain from the system's inputs to the output.</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>Obviously, this back propagation is an iterative process because the safety requirements have to be propagated not only once but until they reach the input actors of the system.</title>
    </sec>
    <sec id="sec-12">
      <title>In the example in Figure 1, the first iteration of the back propagation results in</title>
      <p>copies of req1 and req2 being instantiated for actor B.</p>
      <p>During propagation, safety requirements may reach an actor which influences them (e.g. the voting component of a triple-modular redundant system). We introduce the concept of safety assurances to describe these influences. A safety assurance may change the failures that a safety requirement prohibits. Moreover, it may also alter the propagation paths, which is useful because it is not always necessary that a safety requirement has to be propagated to all predecessors of an actor. The interaction of safety requirements and safety assurances is described in more detail in Section 2.2.</p>
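      <p>A minimal sketch of this back propagation step, under the assumption that the data flow is given as a predecessor map and that a safety assurance is a function that may rewrite the propagated failures and prune the propagation paths (the encoding is ours, not the paper's formal one):</p>

```python
# Iterative back propagation of safety requirements toward the input actors.
# Encoding is illustrative: requirements map actors to sets of failure names;
# assurances optionally transform (failures, predecessors) at an actor.
def back_propagate(requirements, predecessors, assurances=None):
    assurances = assurances or {}
    result = {a: set(f) for a, f in requirements.items()}
    changed = True
    while changed:  # iterate until the requirements reach the input actors
        changed = False
        for actor in list(result):
            failures, preds = result[actor], predecessors.get(actor, [])
            if actor in assurances:  # an assurance may alter failures/paths
                failures, preds = assurances[actor](failures, preds)
            for p in preds:
                before = set(result.get(p, set()))
                result.setdefault(p, set()).update(failures)
                changed = changed or result[p] != before
    return result

# Hypothetical topology: A feeds B, B feeds C and D, C and D feed E.
preds = {"E": ["C", "D"], "C": ["B"], "D": ["B"], "B": ["A"]}
props = back_propagate({"E": {"no_result"}}, preds)
```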
    </sec>
    <sec id="sec-13">
      <title>In the example, the safety assurance assur1 influences the next iteration of the propagation in a way that req1 is propagated to actor C only and that req2 is automatically fulfilled.</title>
    </sec>
    <sec id="sec-14">
      <title>Step 2: Refinement After the safety requirements have been propagated along</title>
      <p>the actor chains in the system, the safety requirements on each actor can be processed further by refining them to the different hardware components on which the actor is executed. This transforms every safety requirement for actors to safety requirements for hardware components, e.g. CPUs, memories or buses.</p>
    </sec>
    <sec id="sec-15">
      <title>In the example, this results in req1 being refined on actor C to its hardware components CPU and RAM.</title>
    </sec>
    <sec id="sec-16">
      <title>Step 3: Mechanism Selection On the hardware component refinement level</title>
      <p>of safety requirements, they can be fulfilled automatically by selecting fault detection mechanisms. A fault detection mechanism is a software or hardware function that can detect a defined set of faults of specific hardware components.</p>
    </sec>
    <sec id="sec-17">
      <title>Moreover, it is annotated with non-functional parameters, e.g. worst-case execution time (WCET), memory requirements and development costs. The mapping between failures and faults can be derived from safety standards, e.g. IEC 61508 [15].</title>
    </sec>
    <sec id="sec-18">
      <title>It is possible to create a library of fault detection mechanisms L, from where</title>
      <p>they can be selected without further preparation. For each actor a, a subset Sa ⊆ L can be chosen so that each mechanism m ∈ Sa fulfills at least one safety requirement req ∈ Reqa, with Reqa being the set of all safety requirements on actor a. In a second step, the power set P(Sa) has to be calculated, because P(Sa) = Sa+ ∪ Sa−, where Sa+ is the set of all subsets in P(Sa) that fulfill all safety requirements Reqa and Sa− is the set of all subsets in P(Sa) that do not fulfill all safety requirements Reqa.</p>
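      <p>The construction of Sa and of the fulfilling subsets Sa+ can be sketched as a direct, deliberately exponential enumeration; the (name, detectable-failures) encoding of mechanisms is our assumption:</p>

```python
# Enumerate the candidate set Sa and every subset of it that fulfills all
# requirements (Sa+); the mechanism encoding is illustrative.
from itertools import combinations

def fulfilling_subsets(library, requirements):
    # Sa: mechanisms that fulfill at least one safety requirement
    Sa = [(name, det) for name, det in library if det & requirements]
    Sa_plus = []
    for r in range(1, len(Sa) + 1):
        for subset in combinations(Sa, r):
            covered = set().union(*(det for _, det in subset))
            if requirements <= covered:  # subset fulfills all requirements
                Sa_plus.append(subset)
    return Sa_plus

lib = [("cpu_test", {"wrong_value"}),
       ("ram_test", {"no_result"}),
       ("combined", {"wrong_value", "no_result"})]
good = fulfilling_subsets(lib, {"wrong_value", "no_result"})
```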
      <p>
        The approach based on the power set of S is necessary because some fault detection mechanisms may be able to handle multiple faults in multiple hardware components, and therefore it is not sufficient to simply select one fault detection mechanism for each safety requirement. The final step of our approach is the selection of an optimal subset of Sa+. This is obviously a non-trivial multidimensional optimization task, because the importance of the non-functional parameters of fault detection mechanisms may differ tremendously from application to application. For example, in some applications, WCET may be the single determining feature, whereas in others, it may be a combination of cost and memory consumption. In our example in Figure 1, the safety requirements on actor C may be fulfilled by a walking bit CPU test [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and a Galpat RAM test [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-19">
      <title>As multidimensional optimization is not the focus of our research, we propose a very straightforward solution to this problem, which is a score-based approach that can be adjusted to the needs of the actual application. For each subset</title>
      <p>Input: Power set P of fault detection mechanisms
Output: optimal subset of P
1 foreach Subset s ∈ P do
2   WCETs = Σ (m ∈ s) wcetm ;
3   memorys = Σ (m ∈ s) memorym ;
4   costss = Σ (m ∈ s) costsm ;
5   scores = α · WCETs + β · memorys + γ · costss ;
6 end
7 return s ∈ P : ∀ s2 ∈ P \ {s} : scores ≤ scores2 ;
Algorithm 1: Selection of Fault Detection Mechanisms</p>
      <p>s ∈ P(S), a score is calculated. The best scoring set is selected and the according fault detection mechanisms can automatically be generated. Due to the non-functional parameters being comparable numbers, their values WCETs, memorys and costss can be interpreted as scores. The final score can be calculated via</p>
      <p>scores = α · WCETs + β · memorys + γ · costss</p>
      <p>with α, β and γ being weights for customizing the algorithm for different application areas. To make different applications comparable, the sum of α, β and γ has to be normalized: α + β + γ = 1. The set s with the lowest final score scores can automatically be determined and its fault detection mechanisms can be generated. The respective algorithm is listed in Algorithm 1. The runtime of this algorithm is obviously not optimal; it is only used in this paper to illustrate which problem has to be solved. A summary of the whole proposed workflow is shown in Algorithm 2.</p>
      <p>1 Manual identification of system level safety requirements ;
2 Manual refinement of safety requirements to actor level ;
3 Manual determination of safety assurances ;
4 foreach SafetyRequirement req do
5   Propagation of req along the chain of actors from output to input ;
6 end
7 foreach Actor a do
8   foreach SafetyRequirement req on a do
9     Refinement of req to the hardware level ;
10  end
11  Selection of appropriate fault detection mechanisms from library S ⊆ L ;
12  Generation of the power set P(S) ;
13  Evaluation of all subsets s ∈ P(S) according to Algorithm 1 ;
14  Source code generation for the result of Algorithm 1 ;
15 end</p>
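      <p>Algorithm 1 can be rendered in a few lines of Python. The normalized weighted sum follows the text; the concrete weight values and the dictionary encoding of mechanisms are our assumptions:</p>

```python
# Score every fulfilling subset by a weighted sum of WCET, memory and costs,
# and return the subset with the lowest final score (Algorithm 1 as a sketch).
def select_mechanisms(subsets, alpha=0.5, beta=0.3, gamma=0.2):
    assert abs(alpha + beta + gamma - 1.0) < 1e-9  # weights must sum to 1
    def score(s):
        wcet = sum(m["wcet"] for m in s)
        memory = sum(m["memory"] for m in s)
        costs = sum(m["costs"] for m in s)
        return alpha * wcet + beta * memory + gamma * costs
    return min(subsets, key=score)  # lowest final score wins

cpu_test = {"name": "walking_bit", "wcet": 5, "memory": 1, "costs": 2}
ram_test = {"name": "galpat", "wcet": 8, "memory": 2, "costs": 3}
combined = {"name": "combined", "wcet": 20, "memory": 6, "costs": 1}
best = select_mechanisms([[cpu_test, ram_test], [combined]])
```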
    </sec>
    <sec id="sec-20">
      <title>Algorithm 2: Workflow Overview</title>
      <sec id="sec-20-1">
        <title>Comparability of Safety Requirements and Fault Detection Mechanisms</title>
        <p>Section 2.1 showed that it is essential for our approach that safety requirements and fault detection mechanisms can be compared in a formal way. This comparison has to be performed on the attributes of safety requirements and fault detection mechanisms. Safety requirements consist of a list of failure classes and a link to a component. The relationship between failure classes, basic component types and fault detection mechanisms is visualized exemplarily in Figure</p>
      </sec>
    </sec>
    <sec id="sec-21">
      <title>2, where a green slot in the cube implies that the selected failure class on the selected component type is detectable by the selected fault detection mechanism. To achieve this relationship, fault detection mechanisms have to be defined by the following attributes:</title>
    </sec>
    <sec id="sec-22">
      <title>1. Detectable failure classes (DFC)</title>
    </sec>
    <sec id="sec-23">
      <title>2. Basic component types (BCT)</title>
    </sec>
    <sec id="sec-24">
      <title>3. Worst case execution time (WCET)</title>
    </sec>
    <sec id="sec-25">
      <title>4. Memory</title>
    </sec>
    <sec id="sec-26">
      <title>5. Development costs</title>
    </sec>
    <sec id="sec-27">
      <title>The attributes DFC and BCT are required to determine the suitability of the</title>
      <p>fault detection mechanism for a given safety requirement, whereas the features</p>
    </sec>
    <sec id="sec-28">
      <title>WCET, memory and costs can be used to choose the optimal fault detection</title>
      <p>mechanism. As the failure classes of safety requirements and DFC are both subsets of the comprehensive set of failure classes, which was defined in this</p>
    </sec>
    <sec id="sec-29">
      <title>Section, they are comparable. Moreover, the basic component types of safety</title>
      <p>requirements and BCT are also subsets of the same superset.</p>
    </sec>
    <sec id="sec-30">
      <title>Similar to the comparison of safety requirements and fault detection mechanisms, the comparison of multiple fault detection mechanisms can also be performed component-by-component. WCET, memory and costs can be represented as integers and therefore be easily compared.</title>
      <sec id="sec-30-1">
        <title>Formal Foundation</title>
      </sec>
    </sec>
    <sec id="sec-31">
      <title>The theory is based on the formal system model of Buckl et al. [6]. Safety requirements, safety assurances and fault detection mechanisms are added. Propagation, transformation and refinement of safety requirements are added and expressed in the notation of [6].</title>
      <p>Definition 1. A system S = (V, Π) can be defined by a finite set of variables V = {v1, ..., vn} and a finite set of processes Π = {π1, ..., πn}. The domain Di is finite for each variable vi. A state s of system S is the valuation (d1, ..., dn) with di ∈ Di of the program variables V. A transition is a function tr : Vin → Vout that transforms a state s into the result state s′ by changing the values of the variables in the set Vout ⊆ V based on the values of the variables in the set Vin ⊆ V.</p>
      <p>Definition 2. The system is built up from a set of components C. A set of variables Vc ⊆ V is associated with each component c ∈ C. Vc = Vc,internal ∪ Vc,interface ∪ Vc,environment is composed of three disjoint variable sets: the set of internal variables Vc,internal, the set of interface variables Vc,interface and the set of environment variables Vc,environment. Internal variables can only be accessed by exactly one component, and only altered by the set of processes Πc associated with c. Interface variables are used for component interaction and can be accessed by all interacting processes. Environment variables are variables that are shared between the component and the environment of the system. This set can again be divided into the input variables Vc,input that are read from the environment and the output variables Vc,output that are written to the environment.</p>
      <p>Definition 3. A subsystem T = (VT, ΠT) of S is defined by a subset VT ⊆ V of the variables of S and by a subset ΠT ⊆ Π of the processes of S. A subsystem is a system itself, so it has to be self-contained apart from its interface variables VT,interface and environment variables VT,environment, similar to Definition 2.</p>
      <p>Definition 4. Components can be structured in a hierarchical way. A component c ∈ C may consist of several components c1, ..., cn ∈ C. Moreover, c can be a software component, a hardware component or a mixture of both: type(c) ∈ {software, hardware, mixed}. On the most concrete level, hardware components are instances of the hardware component types: HCT = {cpu, bus, rom, ram, sensor, actor, digital hardware, interrupt, clock, communication, mass storage}.</p>
      <p>
        Definition 5. The functional behavior of a component c ∈ C is reflected by the corresponding processes Πc. Let Vinterface = {v | v ∈ Vc′,interface ∧ c′ ∈ C} be the set of all interface variables. Πc is specified as a finite set of operations of the form guard → transition, where guard : Vguard → bool is a boolean expression over a subset Vguard ⊆ Vc ∪ Vinterface ∪ Vc,input, and transition : Vin → Vout is the appendant transition with Vin ⊆ Vc ∪ Vinterface ∪ Vc,input and Vout ⊆ Vc ∪ Vinterface ∪ Vc,output.
      </p>
      <p>
        Definition 6. A fault is a physical defect, an imperfection or a flaw that occurs within some hardware or software component. An error is the manifestation of a fault, and a failure occurs when the component's behavior deviates from its specified behavior [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-32">
      <title>Depending on the level of abstraction where a system is investigated, the</title>
      <p>occurrence of a malicious event may be classified as a fault, error or failure. Therefore we define all malicious events that might occur on a component c as errors Ec.</p>
    </sec>
    <sec id="sec-33">
      <title>Errors can alter the functional behavior of a component, which was defined in Definition 5, in the time or value domain:</title>
      <p>Ec ⊆ {early, late, omission, commission, subtle incorrect, coarse incorrect}</p>
    </sec>
    <sec id="sec-34">
      <title>This alteration can be expressed formally by the addition of new transitions</title>
      <p>s → serr to the functional behavior of the system.</p>
      <p>Definition 7. A state predicate P is a boolean function over a set of variables Vp ⊆ V. The set of state predicates represents the specification of the system and is therefore defined implementation-independently. The set of variables Vp ⊆ ∪c∈C Vc,environment is a subset of all variables that can be observed by the environment of the system.</p>
      <p>
        Definition 8. Fault detection mechanisms are based on the concept of detectors [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. A fault detection mechanism m = (E, C, O) is a state predicate used to check if a specific error has occurred. Its attributes are a set of errors that it is able to detect,
      </p>
      <p>E ⊆ {early, late, omission, commission, subtle incorrect, coarse incorrect},</p>
      <p>a set of component types where it is applicable, C ⊆ HCT ∪ {software}, and a set of optimization criteria that can be used to compare different fault detection mechanisms, O = {cost, runtime, memory}.</p>
      <p>Lemma 1. Following Definition 2, the data flow between components is unambiguously defined by the sets of interface variables of all components Vc,interface.</p>
      <p>Lemma 2. Based on Definitions 6 and 7 and Lemma 1, a Safety Requirement src = (E) of a component c is a state predicate, and its attributes are a set of errors that are not allowed to occur at c:</p>
      <p>E ⊆ {early, late, omission, commission, subtle incorrect, coarse incorrect}.</p>
      <p>A Safety Assurance sa = (EM, P) of a component is also a state predicate, and it describes how a component can influence errors. A safety assurance's attributes are error mappings EM : Ec → E′c, where E′c = Ec ∪ {correct}, for the errors specified by safety requirements, and mappings of the interface variables of the component, which define the paths where the effects of errors propagate inside the system: P : vin → wout with v, w ∈ Vc,interface, for a component c.</p>
      <p>Lemma 3. A fault detection mechanism m fulfills a safety requirement sr (m ∧ sr ⇒ ⊤) if (srE ⊆ mE) ∧ (c ∈ mC). That means that m has to be able to detect at least all errors which sr requires, and that m is applicable to the component where sr has been defined.</p>
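      <p>Lemma 3 is directly executable: a mechanism fulfills a requirement if the required errors are a subset of the detectable ones and the requirement's component type is one the mechanism applies to. The dictionary encoding below is an illustrative assumption:</p>

```python
# Fulfillment check of Lemma 3: (srE is a subset of mE) and (c is in mC).
def fulfills(mechanism, requirement):
    return (requirement["errors"] <= mechanism["errors"]
            and requirement["component_type"] in mechanism["component_types"])

galpat = {"errors": {"subtle incorrect", "coarse incorrect"},
          "component_types": {"ram"}}
sr_ram = {"errors": {"coarse incorrect"}, "component_type": "ram"}
sr_cpu = {"errors": {"coarse incorrect"}, "component_type": "cpu"}
```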
      <p>Definition 9. Back propagation: Safety requirements src of a component c ∈ C can be back propagated to the predecessors c1, ..., cn of c in the data flow: src ⇒ src ∧ src1 ∧ ... ∧ srcn.</p>
    </sec>
    <sec id="sec-35">
      <title>Back propagation of safety requirements is necessary, because isolated components of a system cannot guarantee the safety of the complete system.</title>
      <p>Lemma 4. Transformation: According to Lemma 2, safety assurances change the effects of errors that are propagated inside a system. A transformation is the mapping of a safety requirement sr and a safety assurance sa to a new safety requirement sr′: (sr, sa) ⇒ sr′.</p>
      <p>Safety assurances also influence safety requirements that are propagated inside a system, which was described in Definition 9: a safety assurance sac on a component c may shrink the set of predecessors in the data flow that have to fulfill the safety requirements src on c. Moreover, the set of errors that are not allowed to occur as defined by src may also change for the predecessors of c. The instantiation of a safety requirement src and a safety assurance sac results in an altered safety requirement: src ∧ sac ⇒ sr′c.</p>
      <p>Lemma 5. Refinement: According to Definition 4, a component c ∈ C may consist of several subcomponents c1, ..., cn ∈ C. Safety requirements can be refined along this subcomponent relationship, which is orthogonal to the propagation defined in Definition 9: src ⇒ src1 ∧ ... ∧ srcn with sr ∈ SR (note that src does not exist any more on the right side of the implication).</p>
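      <p>The refinement of Lemma 5 can be sketched as replacing an actor-level requirement by one requirement per hardware subcomponent (the encoding is ours, not the paper's formal one):</p>

```python
# Refinement sketch: the actor-level requirement is replaced by requirements
# on the subcomponents and does not itself survive the refinement.
def refine(requirement, subcomponents):
    return [{"component": sub, "errors": set(requirement["errors"])}
            for sub in subcomponents]

src = {"component": "C", "errors": {"coarse incorrect"}}
refined = refine(src, ["CPU", "RAM"])
```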
    </sec>
    <sec id="sec-36">
      <title>Refinement is necessary, because fault detection mechanisms are usually very</title>
      <p>specific to certain component types where they can be applied. A Galpat test, for example, can only detect errors in RAM. So safety requirements have to be refined to an abstraction level where appropriate fault detection mechanisms are available.</p>
      <p>Definition 10. Mechanism Selection: When all safety requirements SR on a system S have been back propagated and refined, fault detection mechanisms can be selected that guarantee that all safety requirements are fulfilled. However, it is very likely that there are multiple subsets of all available fault detection mechanisms, Mi ⊆ M and Mj ⊆ M with i ≠ j, that are able to fulfill ⋀SR: (Mi ∧ ⋀SR ⇒ ⊤) ∨ (Mj ∧ ⋀SR ⇒ ⊤). Therefore, the optimization criteria of the fault detection mechanisms can be exploited to find an optimal solution.</p>
    </sec>
    <sec id="sec-37">
      <title>As this is obviously a computationally complex multi-dimensional optimization</title>
      <p>problem, techniques like branch-and-bound should be used, because the fulfillment relation is transitive: Mi ⊆ Mj ⊆ M ∧ (Mj ∧ ⋀SR ⇒ ⊥) ⇒ (Mi ∧ ⋀SR ⇒ ⊥). Algorithm 1 is an exemplified solution for this problem.</p>
      <sec id="sec-37-1">
        <title>Evaluation</title>
        <p>
          We implemented our approach in the model-driven development tool FTOS [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] to prove its feasibility. FTOS is a tool for model-driven development of fault-tolerant embedded systems, which we developed. It focuses on the generation of code for non-functional system aspects, e.g. fault tolerance mechanisms and communication schemes. FTOS provides four different metamodels that can be used for hardware modeling, software modeling, fault modeling and modeling of fault tolerance mechanisms. The fault tolerance metamodel is used to model mechanisms to handle faults in the system, e.g. redundancy schemes or test functions. The interdependencies between these models are visualized in Figure 3.
        </p>
      </sec>
    </sec>
    <sec id="sec-38">
      <title>The generative workflow of FTOS starts with a model-to-model transformation</title>
      <p>that combines and extends all application models. Afterwards, a template-based code generation is invoked.</p>
      <p>We implemented safety requirements and safety assurances as new classes in the fault metamodel and the combined metamodel. The fault detection mechanisms were implemented only in the combined metamodel, because they are handled automatically. Moreover, we extended the test functions, which are provided by FTOS, to match our concept of fault detection mechanisms by enriching them with information about detectable failure classes, basic component types where they are applicable, and the non-functional parameters safety integrity level (SIL), WCET, memory consumption and costs. We created a library of 11 fault detection mechanisms in addition to the already existing test functions, which we derived from the safety standard IEC 61508 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. For the description of failure classes, we mapped our extension of McDermid's failure classes to an already existing class Failure in the fault metamodel.</p>
      <p>[Figure 3: Interdependencies between the FTOS models: hardware (network topology), software (components, interaction schedule), faults (expected faults, effects on hardware/software components) and fault tolerance (pro-active operations, error detection, online error treatment, offline error recovery).]</p>
    </sec>
    <sec id="sec-39">
      <title>The workflow that was described in Section 2.3 was implemented in the</title>
      <p>model-to-model transformation right after the combination of the four input models. The rationale for this decision was that the generation of safety-related functions has to deal with all parts of the modeled system (hardware, software, faults and fault tolerance). The propagation, transformation and refinement steps of the workflow were implemented as described in Section 2.1. The selection of appropriate fault detection mechanisms was also implemented similarly to the description in Section 2.1, but for performance reasons we used branch-and-bound for the power set calculation.</p>
    </sec>
    <sec id="sec-40">
      <title>After the implementation, we successfully introduced safety requirements into existing sample applications to assure that the fault detection mechanisms are derived properly from the safety requirements and that the appropriate fault detection mechanisms are generated.</title>
      <sec id="sec-40-1">
        <title>Related Work</title>
        <p>To the best of our knowledge, our approach is original work and there is
no related work dealing with the idea of propagation, transformation
and refinement of safety requirements. However, a lot of work has been
performed in various areas around safety requirements (origin and formalization)
and propagation. An overview of important ideas in these areas is presented in
this section.</p>
        <sec id="sec-40-1-1">
          <title>Origin of Safety Requirements</title>
          <p>
            Safety requirements are a part of the system specification. Hanmer [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ] states
that "a system without a specification cannot fail". According to Leveson [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ],
safety requirements are imposed on a system from its environment in a
socio-technical process. On a more technical layer, safety requirements can be derived
from system states that are dangerous for the system's environment. These
dangerous system states can be identified via safety analysis techniques like hazard
and operability studies (HAZOP) [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ], failure mode and effect analysis (FMEA)3
and functional hazard analysis (FHA) [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ].
          </p>
        </sec>
        <sec id="sec-40-1-2">
          <title>Formalization of Safety Requirements</title>
          <p>
            A lot of work has been performed to formalize safety requirements and to
derive benefits from this formalization. Pap et al. [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] identified 47 general safety criteria for the
specification of software systems with state charts. Due to this huge variety, they
decided to use different formal techniques to describe and check them: the
Object Constraint Language (OCL) of UML [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ], graph
transformations, reachability analysis and special programs. Many other approaches
to the modeling of safety requirements use only one description language from
Pap's portfolio. The two most popular are, on the one hand, description
by UML constraints, as in [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], and, on the other hand, description by
(temporal) logics, as in [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. The modeling of safety requirements via (temporal) logics
is very widely used for the formal verification of systems. Well-known
representatives are the computation tree logic (CTL) [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ] and the linear-time temporal
logic (LTL) [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. (Temporal) logics are a very powerful way of describing safety
requirements, but they differ widely from the typical modeling techniques that
are used for system modeling, which makes them difficult to use.
          </p>
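          <p>As a small illustration (not taken from the cited works), an invariant-style LTL safety property such as "globally, the heater is never on while an overtemperature is present" can at least be evaluated on finite execution traces; real verification would use a CTL/LTL model checker. The state variables below are hypothetical.</p>

```python
# Evaluate the LTL-style invariant G !(heater_on & overtemperature)
# on a finite trace of system states (a sketch, not a model checker).

def globally(pred, trace):
    """G pred: the predicate must hold in every state of the trace."""
    return all(pred(state) for state in trace)

def safe_state(state):
    return not (state["heater_on"] and state["overtemperature"])

trace = [
    {"heater_on": True,  "overtemperature": False},
    {"heater_on": False, "overtemperature": True},   # heater off in time
    {"heater_on": False, "overtemperature": False},
]

print(globally(safe_state, trace))  # prints: True
```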
        </sec>
      </sec>
    </sec>
    <sec id="sec-41">
      <title>Some research groups work on the development of domain-specific languages for the description of safety requirements, like the Requirements State Machine Language (RSML*) [26].</title>
      <p>The research groups that deal with formal modeling of safety requirements are mostly aiming at formal verification by trying to prove that a modeled system complies with the modeled safety requirements. This approach is taken, for example, by [26], [22] and [16].</p>
    </sec>
    <sec id="sec-43">
      <title>Schneider and Trapp [25] use a technique similar to our mapping of safety requirements and fault detection mechanisms in their ConSert approach.</title>
      <p>They assure safety in dynamically reconfigurable systems by matching "inport" and
"outport" safety requirements of plug-and-play services at runtime.</p>
    </sec>
    <sec id="sec-44">
      <title>Other approaches formalize safety requirements in graphs to develop and present safety arguments, e.g. Goal Structuring Notation [17] and Assurance Based Development [10].</title>
      <sec id="sec-44-1">
        <title>Propagation</title>
      </sec>
    </sec>
    <sec id="sec-45">
      <title>The propagation of safety requirements in our approach shows similarities to the research area of failure propagation.</title>
      <p>The relationship between safety requirement propagation and failure propagation
is very similar to the relationship between FMEA and fault tree analysis (FTA) [8]:
FTA is a top-down analysis technique (safety requirement propagation) and FMEA is
a bottom-up analysis technique (failure propagation). The main difference between
FTA/FMEA and safety requirement propagation/failure propagation is the "dimension"
in which they operate: the former operate along a chain of (hazard) refinements and
the latter operate along the data flow in a system.
3 http://www.quality-one.com/services/fmea.php</p>
    </sec>
    <sec id="sec-47">
      <title>Various research groups work on different aspects of failure propagation, like</title>
      <p>
        [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. The general goal is to analyze the propagation paths of failures in
systems to get an understanding of the overall emergent failure behavior. A very
important insight is that failures may change their "appearance" while being
propagated, which was investigated in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. We adopted this
idea in our approach with the concept of safety assurances.
      </p>
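      <p>The combination of propagation and transformation can be sketched as a pass over a component data flow graph, where each component maps incoming failure classes to the classes it emits. The components, failure classes and transformation rules below are purely illustrative.</p>

```python
# Hypothetical data flow: sensor -> filter -> controller. Each component
# transforms the failure classes arriving at its input into the classes
# visible at its output, modeling that failures change their
# "appearance" while being propagated.
DATAFLOW = [("sensor", "filter"), ("filter", "controller")]  # topological order

TRANSFORM = {
    "filter":     {"stuck_at": {"constant_value"}, "noise": {"noise"}},
    "controller": {"constant_value": {"wrong_actuation"},
                   "noise": {"wrong_actuation"}},
}

def propagate(source_failures):
    """Push failure classes along the data flow, transforming per component."""
    at_output = {"sensor": set(source_failures)}
    for src, dst in DATAFLOW:
        incoming = at_output.get(src, set())
        rules = TRANSFORM.get(dst, {})
        # unknown failure classes pass through unchanged
        at_output[dst] = (set().union(*(rules.get(f, {f}) for f in incoming))
                          if incoming else set())
    return at_output

result = propagate({"stuck_at", "noise"})
# result["controller"] == {"wrong_actuation"}
```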
    </sec>
    <sec id="sec-48">
      <title>Apart from failures, the concept of propagation can also be used for the automatic allocation of safety integrity levels [23].</title>
      <sec id="sec-48-1">
        <title>Conclusion and Future Work</title>
        <p>During the development of safety critical systems, bridging the gap between
the requirements specification and the software design specification is a very important
step in assuring that safety requirements are fulfilled in the final system. This
paper presented our approach of automatically deriving fault detection
mechanisms and generating their source code directly from safety requirements. The
main contribution of this paper is a rigorous formal specification of safety
requirements that allows the automatic propagation, transformation and refinement
of safety requirements and the derivation of appropriate fault detection
mechanisms. This is an important step to guarantee consistency and completeness
during the transition from requirements engineering to software design, where a
lot of errors can be introduced into a system by using conventional, non-formal
techniques.</p>
      </sec>
    </sec>
    <sec id="sec-49">
      <title>We implemented our approach in the model-driven development tool FTOS, which we developed, and tested it successfully on various sample applications. A more extensive evaluation will be performed in the future with the help of two demonstrators, which are currently being developed.</title>
      <p>One area of possible future work is the currently missing link to the functional
behavior of components. At the moment, we only consider the data flow between
components, and the user is required to model the connections between functional
behavior and safety requirements by hand via safety assurances. However, if the
functional and temporal behavior of a component is also modeled, e.g. by a
more conventional model-driven development approach like Matlab Simulink4,
then it might be possible to automatically derive safety assurances from these
descriptions. This step would help to guarantee consistency and completeness of
safety assurances, as our approach does for safety requirements.
A second important point for future work is the handling of the generated fault
detection mechanisms at runtime. With the help of our approach, it is possible
to generate the source code of appropriate fault detection mechanisms. However,
the main purpose of a safety critical system is still its functional tasks. So a safe
runtime platform is required that takes care of the scheduling of the functional
tasks, the fault detection mechanisms and the proof tests, which check the
operability of the fault detection mechanisms at large intervals.
4 http://www.mathworks.com/products/simulink/</p>
      <p>Finally, future work could also try to analyze the results of fault detection
mechanisms. Usually, there is a gap in the chain of reasoning between the real world
and the fault detection mechanism: if, for example, a mechanism reports that
a network connection to another component of a distributed system has been
lost, then there can be various reasons for this, like message loss or hardware
failures at both ends of the communication channel. A probabilistic evaluation
of the occurrence of certain errors would allow reasoning about events in the
real world at runtime, which could help to initiate more granular fault handling
techniques.</p>
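      <p>Such a probabilistic evaluation could be sketched with Bayes' rule: given a detector report, rank the possible real-world causes by their posterior probability. The priors and likelihoods below are invented for illustration only.</p>

```python
# Given the report "connection lost", combine hypothetical prior failure
# rates with the probability that each cause produces this report.
PRIOR = {"message_loss": 0.80, "local_hw_failure": 0.05,
         "remote_hw_failure": 0.15}
LIKELIHOOD = {"message_loss": 0.30, "local_hw_failure": 0.95,
              "remote_hw_failure": 0.90}  # P(report | cause)

def posterior(prior, likelihood):
    """Bayes' rule: P(cause | report) = P(report | cause) * P(cause) / P(report)."""
    joint = {cause: prior[cause] * likelihood[cause] for cause in prior}
    evidence = sum(joint.values())          # P(report)
    return {cause: p / evidence for cause, p in joint.items()}

post = posterior(PRIOR, LIKELIHOOD)
most_likely = max(post, key=post.get)       # -> "message_loss"
```

      <p>A runtime platform could use such a ranking to choose between, say, a message retransmission and a fail-over to a redundant node.</p>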
      <sec id="sec-49-1">
        <title>Acknowledgments</title>
      </sec>
    </sec>
    <sec id="sec-50">
      <title>This work was partially funded by the German Federal Ministry of Education and Research (BMBF), grant "SPES2020, 01IS08045T".</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Gul</given-names>
            <surname>Agha</surname>
          </string-name>
          .
          <article-title>Actors: A model of concurrent computation in distributed systems</article-title>
          . MIT Press,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Anish</given-names>
            <surname>Arora</surname>
          </string-name>
          and
          <string-name>
            <given-names>Sandeep S.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          .
          <article-title>Detectors and correctors: A theory of fault-tolerance components</article-title>
          .
          <source>Proceedings of the 18th International Conference on Distributed Computing Systems</source>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Algirdas</given-names>
            <surname>Avizienis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jean-Claude</given-names>
            <surname>Laprie</surname>
          </string-name>
          , Brian Randell, and
          <string-name>
            <given-names>Carl</given-names>
            <surname>Landwehr</surname>
          </string-name>
          .
          <article-title>Basic concepts and taxonomy of dependable and secure computing</article-title>
          .
          <source>IEEE Transactions on Dependable and Secure Computing</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J.F.</given-names>
            <surname>Briones</surname>
          </string-name>
          , M. de Miguel,
          <string-name>
            <given-names>J.P.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Alonso</surname>
          </string-name>
          .
          <article-title>Integration of safety analysis and software development methods</article-title>
          .
          <source>Proceedings of the 1st International Conference on System Safety Engineering</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>C.</given-names>
            <surname>Buckl</surname>
          </string-name>
          .
          <article-title>Model-Based Development of Fault-Tolerant Real-Time Systems</article-title>
          .
          <source>PhD thesis</source>
          , TU Munchen,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Christian</given-names>
            <surname>Buckl</surname>
          </string-name>
          , Alois Knoll, Ina Schieferdecker, and
          <string-name>
            <given-names>Justyna</given-names>
            <surname>Zander</surname>
          </string-name>
          .
          <article-title>Model-Based Engineering of Embedded Real-Time Systems, chapter Model-Based Analysis</article-title>
          and
          <source>Development of Dependable Systems</source>
          . Springer-Verlag,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Edmund M.</given-names>
            <surname>Clarke</surname>
          </string-name>
          and
          <string-name>
            <given-names>Orna</given-names>
            <surname>Grumberg</surname>
          </string-name>
          .
          <source>Model Checking</source>
          . MIT Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Clifton A.</given-names>
            <surname>Ericson</surname>
          </string-name>
          .
          <article-title>Fault tree analysis: A history</article-title>
          .
          <source>Proceedings of the 17th International System Safety Conference</source>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Xiaocheng</given-names>
            <surname>Ge</surname>
          </string-name>
          , Richard F. Paige, and
          <string-name>
            <given-names>John A.</given-names>
            <surname>McDermid</surname>
          </string-name>
          .
          <article-title>Probabilistic failure propagation and transformation analysis</article-title>
          .
          <source>SAFECOMP</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>Patrick J.</given-names>
            <surname>Graydon</surname>
          </string-name>
          , John C. Knight, and
          <string-name>
            <given-names>Elisabeth A.</given-names>
            <surname>Strunk</surname>
          </string-name>
          .
          <article-title>Assurance based development of critical systems</article-title>
          .
          <source>Proceedings of the 37th Annual IEEE International Conference on Dependable Systems and Networks</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>Robert S.</given-names>
            <surname>Hanmer</surname>
          </string-name>
          .
          <article-title>Patterns for Fault Tolerant Software</article-title>
          . John Wiley &amp; Sons,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Constance L.</given-names>
            <surname>Heitmeyer</surname>
          </string-name>
          .
          <article-title>Software cost reduction</article-title>
          .
          <source>Encyclopedia of Software Engineering</source>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>H.</given-names>
            <surname>Holscher</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Rader</surname>
          </string-name>
          .
          <source>Microcomputers in Safety Technique</source>
          . TUV Rheinland,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. International Electrotechnical Commission.
          <article-title>IEC 61882, hazard and operability studies (HAZOP studies) - application guide</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. International Electrotechnical Commission.
          <article-title>IEC 61508, functional safety of electrical/electronic/programmable electronic safety-related systems</article-title>
          ,
          <year>April 2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>Anjali</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Steven P.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Whalen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Mats P.E.</given-names>
            <surname>Heimdahl</surname>
          </string-name>
          .
          <article-title>A proposal for model-based safety analysis</article-title>
          .
          <source>Proceedings of the 24th Digital Avionics Systems Conference</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>Tim</given-names>
            <surname>Kelly</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rob</given-names>
            <surname>Weaver</surname>
          </string-name>
          .
          <article-title>The goal structuring notation a safety argument notation</article-title>
          .
          <source>Proceedings of the Dependable Systems and Networks 2004 Workshop on Assurance Cases</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>Nancy</given-names>
            <surname>Leveson</surname>
          </string-name>
          . Engineering a Safer World.
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>J. A.</given-names>
            <surname>McDermid</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Pumfrey</surname>
          </string-name>
          .
          <article-title>A development of hazard analysis to aid software design</article-title>
          .
          <source>Proceedings of the Ninth Annual Conference on Computer Assurance</source>
          , pages
          <fpage>17</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>Atef</given-names>
            <surname>Mohamed</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mohammad</given-names>
            <surname>Zulkernine</surname>
          </string-name>
          .
          <article-title>On failure propagation in componentbased software systems</article-title>
          .
          <source>Proceedings of the Eighth International Conference on Quality Software</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21. Object Management Group.
          <article-title>Object constraint language</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>Zsigmond</given-names>
            <surname>Pap</surname>
          </string-name>
          , Istvan Majzik, and
          <string-name>
            <given-names>Andras</given-names>
            <surname>Pataricza</surname>
          </string-name>
          .
          <article-title>Checking general safety criteria on UML statecharts</article-title>
          .
          <source>Lecture Notes in Computer Science</source>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Papadopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Walker</surname>
          </string-name>
          , M.-O. Reiser, M. Weber, D. Chen, M. Torngren, David Servat, A. Abele, F. Stappert, H. Lonn, L. Berntsson, Rolf Johansson, F. Tagliabo, S. Torchiaro, and
          <string-name>
            <given-names>Anders</given-names>
            <surname>Sandberg</surname>
          </string-name>
          .
          <article-title>Automatic allocation of safety integrity levels</article-title>
          .
          <source>Proceedings of the 1st Workshop on Critical Automotive applications: Robustness &amp; Safety</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24. SAE International.
          <article-title>ARP 4754, certification considerations for highly-integrated or complex aircraft systems</article-title>
          ,
          <year>November 1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Schneider</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mario</given-names>
            <surname>Trapp</surname>
          </string-name>
          .
          <article-title>Conditional safety certificates in open systems</article-title>
          .
          <source>Proceedings of the 1st Workshop on Critical Automotive applications: Robustness &amp; Safety</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>A.C.</given-names>
            <surname>Tribble</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.P.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>Software intensive systems safety analysis</article-title>
          .
          <source>IEEE Aerospace and Electronic Systems Magazine</source>
          ,
          <volume>19</volume>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>Malcolm</given-names>
            <surname>Wallace</surname>
          </string-name>
          .
          <article-title>Modular architectural representation and analysis of fault propagation and transformation</article-title>
          .
          <source>Proceedings of the Workshop on Formal Foundations of Embedded Systems and Component-based Software Architecture</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>