<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Toulouse, France, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>ACESMB 2008</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Edited by: Stefan Van Baelen (K.U.Leuven ‐ DistriNet, Belgium), Iulian Ober (University of Toulouse ‐ IRIT, France), Susanne Graf (Université Joseph Fourier ‐ CNRS ‐ VERIMAG, France), Mamoun Filali (University of Toulouse ‐ CNRS ‐ IRIT, France), Thomas Weigert (Missouri University of Science and Technology, USA), Sébastien Gérard (CEA ‐ LIST, France)</institution>
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2008</year>
      </pub-date>
      <volume>29</volume>
      <issue>2008</issue>
      <fpage>119</fpage>
      <lpage>160</lpage>
      <abstract>
        <p>Organized in conjunction with MoDELS'08, the 11th International Conference on Model Driven Engineering Languages and Systems</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Table of Contents</title>
      <p>Workshop Proceedings
Table of Contents ..................... 3
Multi-Level Power Consumption Modelling in the AADL Design Flow for DSP, GPP, and FPGA
E. Senn, J. Laurent, and J.-P. Diguet (Université de Bretagne Sud, France)
VTS-based Specification and Verification of Behavioral Properties of AADL Models
D. Monteverde (Universidad Argentina de la Empresa and Universidad de Buenos Aires,
Argentina), A. Olivero (Universidad Argentina de la Empresa, Argentina), S. Yovine (VERIMAG-CNRS, France), and V. Braberman (Universidad de Buenos Aires, Argentina) ..................... 23
Translating AADL into BIP - Application to the Verification of Real-time Systems</p>
      <p>M.Y. Chkouri, A. Robert, M. Bozga, and J. Sifakis (VERIMAG, France) ..................... 39</p>
    </sec>
    <sec id="sec-2">
      <title>Deriving Component Designs from Global Requirements</title>
      <p>G.v. Bochmann (SITE, Canada) ......................................................................................................... 55</p>
    </sec>
    <sec id="sec-3">
      <title>Scalable Models Using Model Transformation</title>
      <p>T.H. Feng and E.A. Lee (University of California, USA) ..................................................................... 71
ISE language: The ADL for Efficient Development of Cross Toolkits</p>
      <p>N. Pakulin and V. Rubanov (Institute for System Programming of the Russian Academy of Sciences,
Russia) ............................................................................................................................................... 87
Towards Model-Based Integration of Tools and Techniques for Embedded Control System Design,
Verification, and Implementation</p>
      <p>J. Porter, G. Karsai, P. Völgyesi, H. Nine, P. Humke, G. Hemingway, R. Thibodeaux, and J.</p>
      <p>Sztipanovits (Vanderbilt University, USA) ........................................................................................ 99
Modeling Radio-Frequency Front-Ends Using SysML: A Case Study of a UMTS Transceiver</p>
      <p>S. Lafi, R. Champagne, A.B. Kouki, and J. Belzile (École de Technologie Supérieure, Canada) ...... 115
From High-Level Modelling of Time in MARTE to Real-Time Scheduling Analysis</p>
      <p>M.-A. Peraldi-Frati and Y. Sorel (I3S, France) ................................................................................ 129
A Reinterpretation of Patterns to Increase the Expressive Power of Model-Driven Engineering
Approaches</p>
      <p>M. Bordin (AdaCore, France), M. Panunzio, C. Santamaria, and T. Vardanega (University of Padua,
Italy) ................................................................................................................................ 145</p>
      <p>Foreword</p>
      <p>The development of embedded systems with real‐time and other types of critical constraints
implies handling very specific architectural choices, as well as various types of critical
non‐functional constraints (related to real‐time deadlines and to platform parameters, such as
energy consumption and memory footprint). The last few years have seen a growing interest
in (1) using precise (preferably formal) domain‐specific models for capturing such dedicated
architectural and non‐functional information, and (2) using model‐driven engineering (MDE)
techniques for combining these models with platform‐independent functional models to
obtain a running system. As such, MDE can be used as a means for developing
analysis‐oriented specifications that represent the design model at the same time.</p>
      <p>The objective of this workshop is to bring together researchers and practitioners interested
in all aspects of model‐based software engineering for real‐time embedded systems. We
target this subject at different levels, from modelling languages and related semantics to
concrete application experiments, from model analysis techniques to model‐based
implementation and deployment. In particular, the workshop focuses on the following:</p>
      <p>• Architecture description languages (ADLs). Architecture models are crucial elements
in system and software development, as they capture the earliest decisions that have
a huge impact on the realisation of the (non‐functional) requirements, the remaining
development of the system or software, its deployment, etc. In particular, we are
interested in examining:
o the position of ADLs in an MDE approach
o the relation between architecture models and other types of models used during requirement engineering (e.g., SysML), design (e.g., UML), etc.
o techniques for deriving architecture models from requirements, and deriving high‐level design models from architecture models
o verification and early validation using architecture models
• Domain‐specific design and implementation languages. To achieve the high
confidence levels required from critical embedded systems through analytical
methods, specific languages with particularly well‐behaved semantics are often used
in practice, such as synchronous languages and models (Lustre/SCADE,
Signal/Polychrony, Esterel), time‐triggered models (TTA, Giotto), scheduling‐oriented
models (HRT‐UML, Ada Ravenscar), etc. We are interested in examining the
model‐oriented counterparts of such languages, together with the related analysis and
development methods.
• Languages for capturing non‐functional constraints (UML‐MARTE, AADL, OMEGA, etc.)</p>
      <p>• Component languages and system description languages (SysML, BIP, FRACTAL, Ptolemy, etc.)</p>
      <p>We received 16 submissions from 8 different countries, of which 10 papers were accepted
for the workshop. We hope that the contributions and the discussions during
the workshop will provide interesting new insights into Model‐Based
Architecting and Construction of Embedded Systems.
 
The ACESMB 2008 organising committee, 
 
Iulian Ober, 
Stefan Van Baelen, 
Susanne Graf, 
Mamoun Filali, 
Thomas Weigert, 
Sébastien Gérard, 
September 2008. 
Acknowledgments
The Organising Committee of ACESMB 2008 would like to thank the workshop Program
Committee for their helpful reviews.
This workshop is organised as an event in the context of</p>
      <p>The IST-004527 ARTIST2 Network of Excellence on Embedded Systems Design
The research project EUREKA-ITEA SPICES (Support of Predictable Integration of
mission Critical Embedded Systems)</p>
      <p>Multi-Level Power Consumption Modelling in the AADL Design Flow for DSP, GPP, and FPGA</p>
      <p>Eric SENN, Johann LAURENT, and Jean-Philippe DIGUET</p>
      <p>Université de Bretagne Sud, Lab-STICC,</p>
      <p>CNRS UMR3192,</p>
      <p>F-56321 LORIENT Cedex, France
Abstract. This paper presents a method to estimate the
power consumption of components in the AADL component assembly
model, once deployed onto components in the AADL target platform
model. This estimation is performed at different levels of the AADL
refinement process. Multi-level power models have been specifically
developed for the different types of possible hardware targets: General
Purpose Processors (GPP), Digital Signal Processors (DSP), and Field
Programmable Gate Arrays (FPGA). Three models are presented, for a
complex DSP (the Texas Instruments C62), a RISC GPP (the PowerPC 405),
and an FPGA from Altera (the Stratix EP1S80). The accuracy of these models
depends on the refinement level. The maximum error introduced ranges
from 70% for the FPGA at the first refinement level (where only the operating
frequency is considered) to 5% for the GPP at the third refinement
level (where the component’s actual source code is considered).</p>
      <sec id="sec-3-1">
        <title>Introduction</title>
        <p>
          Originally coming from the avionic domain, AADL (Architecture Analysis &amp;
Design Language) is now commonly used as an input modelling language for
real-time embedded systems [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ]. It allows the early analysis of the
specification, the verification of functional and non-functional properties of the system,
and even code generation for the targeted hardware platform [
          <xref ref-type="bibr" rid="ref3 ref4 ref5">3–5</xref>
          ]. In the
context of the European project SPICES (Support for Predictable Integration of
mission Critical Embedded Systems) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], our aim is to enrich the AADL
component-based design flow to permit energy and power consumption estimations at
different levels in the refinement process. However, such early verifications are
only possible if power estimations are completed within a reasonable time; only
then is a fast and fruitful exploration of the design space possible.
        </p>
        <p>
          Significant research efforts have been devoted to developing tools for power
consumption estimation at different abstraction levels in embedded system design.
Many of those tools, however, work at the Register Transfer Level (RTL) (this is
the case for tools like SPICE, Diesel [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and Petrol [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]), at the Cycle Accurate
Bit Accurate (CABA) level ([
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ]), and a few tools at the architectural level
(Wattch [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] and Simplepower [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]). Such approaches cannot be used at high
levels because simulation times at such low abstraction levels become enormous
for complete and complex systems, like multiprocessor heterogeneous platforms.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] and [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the authors present a characterization methodology for
generating power models within TLM for peripheral components. The pertinent
activities are identified at several levels and granularities. The characterization
phase of the activities is performed at the gate level and is used to deduce
the power of coarse-grained activities at a higher level. Again, applying such
approaches to complex processors or complete systems is not feasible. Instruction-level
or functional-level approaches have been proposed [
          <xref ref-type="bibr" rid="ref15 ref16 ref17">15–17</xref>
          ]. However, they
only work at the assembly level, and need to be improved to take into account
pipelined architectures, large VLIW instruction sets, and internal data and
instruction caches.
        </p>
        <p>We introduced the Functional Level Power Analysis (FLPA) methodology
which we have applied to the building of high-level power models for
different hardware components, from simple RISC processors to complex superscalar
VLIW DSP [18, 19], and for different FPGA circuits [20]. In this paper we show
how this modelling approach fits into the AADL design flow and how our power
models, being interoperable, are used at different refinement levels. Section 2
presents the AADL component based design flow and the deployment of the
Platform Independent Models (PIM) to obtain the Platform Specific Model (PSM)
of the target. Section 3 presents the methodology for power estimations and
the global power analysis of a complete system described with AADL. Section
4 presents the building of power models and defines the three refinement levels
where they can be used. The power models of the DSP TI C62, the GPP
PowerPC 405, and the FPGA Altera Stratix EP1S80 are presented as examples. The
accuracy of our power estimations is finally evaluated.</p>
      </sec>
      <sec id="sec-3-2">
        <title>AADL design flow</title>
        <p>Figure 1 shows the AADL design flow. The AADL component assembly model and the AADL target platform models (hardware components, services, connectors), both drawing on a library of AADL models (components and interfaces), are composed according to a deployment plan model into an AADL PSM model; model transformation then produces a SystemC model, code generation produces SystemC/C++ code, and HLS/LS/P&amp;R; and compilation/linking map the result onto the FPGA, DSP, and GPP targets.</p>
        <p>The flow relies on a set of different plug-ins included in the tool set. During the deployment, software
components in the component assembly model are bound to hardware
components in the target platform model [22]. According to the deployment plan
model, the OSATE scheduling analysis plug-in uses information embedded in the
software components’ descriptions to propose a binding for the system [23]. Figure
2 shows the typical binding of an application on a multiprocessor architecture.
In this example, process main_process and its data block data_b are bound
to the memory sysRAM. Threads control_thread, ethernet_driver_thread
and post_thread are bound to the first general purpose processor GPP1. Thread
pre_thread is bound to GPP2. Thread hw_thread1 is, like hw_thread2, a
hardware thread. It will be implemented in the reconfigurable FPGA circuit FPGA1.
One connection between pre_thread and post_thread has been declared using
in and out data ports in the threads. This connection is bound to bus sys_bus
since it connects two threads bound to two different components in the
platform. Intra-component connections, like between threads control_thread and
ethernet_driver_thread, do not need to be bound to specific buses. They will
however consume hardware resources while being used.</p>
        <p>In addition to communication buses, dedicated supply buses can also be
declared. A power analysis command in the OSATE resources analysis plug-in
makes it possible to check that the power capacity of a supply bus is not exceeded. To do
that, a power capacity property (SEI::PowerCapacity) is declared for a bus,
and a power budget is declared for every component that requires access to
this bus (property SEI::PowerBudget). The plug-in adds all the power budgets
for a bus and compares the result with its power capacity. This mechanism,
interesting as it is, is extremely limited: power budgets for every component are
only a guess from the user, and are only used to compute the power consumption
of buses in a very simplistic way.</p>
        <p>In this paper, we propose a method to greatly
enhance power analysis in the AADL flow, and to do it in an efficient way not
only for buses, but for every consuming component in the system. Moreover,
we propose to base power analysis on realistic power estimates, by using an
accurate power estimation tool and precise power consumption models for every
component in the targeted hardware platform.</p>
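<p>The supply-bus check described above can be sketched as follows; the SEI property names come from the text, while the component names and budget values are hypothetical:</p>

```python
# Sketch of the OSATE-style supply-bus check: sum the SEI::PowerBudget of
# all components attached to a bus and compare the total with the bus's
# SEI::PowerCapacity. Component names and numbers are hypothetical.

def check_supply_bus(capacity_w, budgets_w):
    """Return (total_budget, ok) for one supply bus."""
    total = sum(budgets_w.values())
    return total, capacity_w >= total

# Budgets guessed by the user, in watts, for three attached components:
budgets = {"GPP1": 2.5, "GPP2": 2.5, "FPGA1": 4.0}
total, ok = check_supply_bus(10.0, budgets)
print(total, ok)
```

<p>This is exactly why the mechanism is weak: the result is only as good as the guessed budgets, which the rest of the paper replaces with model-based estimates.</p>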
      </sec>
      <sec id="sec-3-3">
        <title>High-level power consumption estimations</title>
        <p>In order to complete the power consumption analysis for the whole system, we
first need to compute the power budget for every software component in the AADL
component assembly model. This is the power estimation phase (1) represented
with plain edges on figure 3 in the case of the binding of a thread to a processor.</p>
        <p>Next, the power budgets of software components are combined to get the
power budgets for hardware components. This is the power analysis phase (2)
represented with thick dotted edges on the figure. Using timing information, the
energy analysis will be performed afterwards (thin dotted edges).</p>
        <p>The challenge for our power estimation tool is to provide a realistic power
budget for every component in the application. This tool gathers various pieces of
information from different places in the system’s AADL specification, here from the
process and thread descriptions, and from the processor specification. It also
uses binding information that may come from a preliminary scheduling analysis.
The tool uses the power model of the targeted hardware component (here a
processor) to compute the power budget for the (software) component. In fact, it
determines the input parameters of the model from the set of information it has
gathered. This process is repeated, not only for threads bound to processors, but
also for any possible binding of software components onto hardware components,
and that means: (i-) threads onto processors or FPGA, (ii-) processes and data
onto memories, and (iii-) inter-components connections onto buses.</p>
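<p>The estimation loop just described can be sketched as follows; the component names, power laws, and parameter values are all hypothetical stand-ins for the real power models library:</p>

```python
# Sketch of the estimation loop described above: for each binding of a
# software component onto a hardware component, look up the hardware
# component's power model, feed it the parameters gathered from the AADL
# specification, and record the resulting power budget.
# All names, power laws, and numeric values here are hypothetical.

def estimate_budgets(bindings, models, parameters):
    """bindings maps each software component to a hardware component."""
    budgets = {}
    for sw, hw in bindings.items():
        model = models[hw]          # power model of the hardware target
        params = parameters[sw]     # derived from the AADL specification
        budgets[sw] = model(**params)
    return budgets

# Hypothetical affine power laws, in watts, for two hardware components:
models = {
    "GPP1": lambda f_mhz, load: 0.5 + 0.004 * f_mhz * load,
    "sys_bus": lambda f_mhz, activity: 0.1 + 0.001 * f_mhz * activity,
}
bindings = {"pre_thread": "GPP1", "conn1": "sys_bus"}
parameters = {
    "pre_thread": {"f_mhz": 200.0, "load": 0.8},
    "conn1": {"f_mhz": 100.0, "activity": 0.3},
}
print(estimate_budgets(bindings, models, parameters))
```

<p>The same loop covers all three binding kinds listed above: threads onto processors or FPGA, processes and data onto memories, and connections onto buses.</p>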
        <p>Figure 3 shows a process and a thread in the component assembly model, a processor in the target platform model, and the Power Estimation Tool, which uses the Power Models Library to fill in the PowerBudget and EnergyBudget properties of the software components.</p>
        <p>Once the power budgets have been computed for every component in the
application, the power analysis is performed. The power analysis tool retrieves
all the component power budgets, together with additional information from the
specification, and computes the power budget for every hardware component
in the system. Then it computes the power estimation for the whole system.
The result of the scheduling analysis (which gives the load of processors) is also
taken into account at this level. Indeed, whenever a processor is idle, its power
consumption is at the minimum level. Scheduling analysis is performed using
basic information on the threads properties defined as properties for each thread
implementation in the AADL component assembly model: dispatch protocol
(periodic, aperiodic, sporadic, background), period, deadline, execution time ...</p>
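<p>The way the processor load enters the analysis can be sketched as a simple linear weighting between active and idle power; this weighting, and all the numbers, are illustrative assumptions rather than the tool's actual law:</p>

```python
# Sketch of the power analysis step for a processor: the threads bound to
# it keep it active for a fraction of the time given by the processor load
# from scheduling analysis; whenever it is idle, the processor draws its
# minimum power. The linear weighting and the numbers are assumptions.

def processor_power(active_power_w, idle_power_w, load):
    """load is the processor utilisation in [0, 1] from scheduling analysis."""
    return load * active_power_w + (1.0 - load) * idle_power_w

# A processor drawing 3 W when active and 1 W when idle, 60% loaded:
print(processor_power(3.0, 1.0, 0.6))
```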
        <p>This paper will concentrate on the power estimation phase; no further details
will be given on the power analysis. Energy analysis will finally be performed
using information from the timing analysis tools currently being developed by
some of our partners in the SPICES project.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Multi-level power models</title>
        <p>Our Power Estimation Tool, PET, is an evolution of the former SoftExplorer,
initially dedicated to power and energy consumption estimation for processors
(from simple RISC General Purpose Processors to very complex VLIW Digital
Signal Processors) [24]. This tool comes with a library of power models for every
hardware component on the platform.</p>
        <p>Our objective is to allow power estimation at different levels in the flow.
This involves the use of multi-level power models, which are models that can be
used with more or less information, depending on the refinement level. In fact,
while the specification is being refined, more information is available and power
estimations get more precise.</p>
        <p>Let’s consider a case study platform including one GPP (the PowerPC 405),
one DSP (the Texas Instruments C62), and one FPGA circuit (the Xilinx
Virtex 400E). The description of those components’ power models can be found
respectively in [25], [18], and [20]. Power models are built following our
Functional Level Power Analysis methodology [19]. The component’s architecture is
first analysed and relevant parameters regarding its power consumption are
identified. Then physical measurements are performed to assess the evolution of
the power consumption with the models’ input parameters (using small
benchmarking programs called “scenarios”), and finally power consumption laws are
established.</p>
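<p>The last FLPA step, establishing a consumption law from the scenario measurements, can be sketched as a least-squares fit; the measurement points below are invented, and a real model also depends on the activity parameters:</p>

```python
# Sketch of the last FLPA step: establish a consumption law
# P = p0 + p1 * F from scenario measurements, here by a least-squares
# fit of an affine law. The measurement points are invented, and a real
# model also involves the activity parameters.

def fit_affine(xs, ys):
    n = float(len(xs))
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = slope / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope   # (p0, p1)

freqs = [50.0, 100.0, 150.0, 200.0]   # MHz, scenario operating points
power = [1.0, 2.0, 3.0, 4.0]          # W, measured on the board
p0, p1 = fit_affine(freqs, power)
print(p0, p1)   # this data set is exactly linear, so p0 is about 0 and p1 about 0.02
```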
        <sec id="sec-3-4-1">
          <title>A complex Digital Signal Processor</title>
          <p>The TI C62 processor has a complex architecture. It has a VLIW instruction
set, a deep pipeline (up to 15 stages), fixed-point operators, and parallelism
capabilities (up to 8 operations in parallel). Its internal program memory can be
used like a cache in several modes, and an External Memory Interface (EMIF) is
used to load and store data and program from the external memory [26]. In the
case of the C62, the following 6 parameters are considered. The clock frequency
(F) and the memory mode (MM) are what we call architectural parameters. They
are directly related to the target platform and the hardware component, and can
be changed according to the user’s will. The influence of F is obvious. The C62
maximum frequency is 200MHz (for our version of the chip); the designer
can tweak this parameter to adjust consumption and performance.</p>
          <p>The remaining parameters are called algorithmic parameters; they directly
depend on the application code itself. The parallelism rate α assesses the flow
between the processor’s instruction fetch stages and its internal program memory
controller inside its IMU (Instruction Management Unit). The activity of the
processing units is represented by the processing rate β. This parameter links
the the IMU and the PU (Processing Unit). The activity rate between the IMU
and the MMU (Memory Management Unit) is expressed by the program cache
miss rate γ. The pipeline stall rate (PSR) counts the number of pipeline stalls
during execution. It depends on the mapping of data in memory and on the
memory mode.</p>
          <p>The memory mode MM describes the way the internal program memory is
used. Four modes are available. All the instructions are in the internal memory
in the mapped mode (MMM). They are in the external memory in the bypass
mode (MMB). In the cache mode, the internal memory is used like a direct-mapped
cache (MMC), and likewise in the freeze mode, where no writing in the
cache is allowed (MMF). The internal logic used to fetch instructions
(for instance tag comparison in cache mode) actually depends on the memory
mode, and so does the power consumption.</p>
          <p>A precise description of the C62 power model and its construction may be found
in [18]. The variation of the power consumption with the input parameters, more
precisely the fact that the estimation is not equally sensitive to every parameter,
makes it possible to use the model in three different situations.</p>
          <p>In the first situation, only the operating frequency is known. The tool returns
the average value of the power consumption, which comes from the minimum
and maximum values obtained when all the other parameters are made to
vary. The designer can also ask for the maximum value if an upper bound is
needed for the power consumption.</p>
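<p>This first situation can be sketched as follows, with an invented power law standing in for the real C62 model:</p>

```python
# Sketch of the first situation: only the frequency F is known, so the
# tool sweeps the remaining parameters, then returns the average of the
# minimum and maximum estimates (or the maximum, for an upper bound).
# The power law is an invented stand-in for the real C62 model.
from itertools import product

def c62_power(f_mhz, alpha, beta):    # hypothetical law, in watts
    return 0.5 + f_mhz * (0.004 * alpha + 0.006 * beta)

def level1_estimate(f_mhz, want_max=False):
    rates = [i / 10.0 for i in range(11)]   # sweep alpha, beta over [0, 1]
    values = [c62_power(f_mhz, a, b) for a, b in product(rates, rates)]
    if want_max:
        return max(values)
    return (min(values) + max(values)) / 2.0

print(level1_estimate(200.0))                  # average estimate at 200 MHz
print(level1_estimate(200.0, want_max=True))   # upper bound at 200 MHz
```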
          <p>In the second situation, we suppose that the architectural parameters (here
F and MM) are known. We also assume that the code is not known but that the
designer is able to give some realistic values for every algorithmic parameter. If
not, default values are proposed, from the values that we have observed running
different representative applications on this DSP (see table 1).</p>
          <p>In the third situation, the source code is known. It is then parsed by our
power estimation tools: the value of every algorithmic parameter is computed
and the power consumption is estimated, using the power model and the values
entered by the user for the frequency and memory mode.</p>
          <p>Table 1. Values of α and β observed for representative applications:
LMSBV 1024: α = 1, β = 0.625
MPEG 1: α = 0.687, β = 0.435
MPEG 2 ENC: α = 0.847, β = 0.507
FFT 1024: α = 0.5, β = 0.39
DCT: α = 0.503, β = 0.475
FIR 1024: α = 1, β = 0.875
EFR Vocoder GSM: α = 0.578, β = 0.344
HISTO (image equalisation by histogram): α = 0.506, β = 0.346
SPECTRAL (signal spectral power density estimation): α = 0.541, β = 0.413
TREILLIS (Soft Decision Sequential Decoding): α = 0.55, β = 0.351
LPC (Linear Predictive Coding): α = 0.684, β = 0.468
ADPCM (Adaptive Differential Pulse Code Modulation): α = 0.96, β = 0.489
DCT 2 (image 128x128): α = 0.991, β = 0.709
EDGE DETECTION: α = 0.976, β = 0.838
G721 (Marcus Lee): α = 1, β = 0.682</p>
          <p>The error introduced by our tool obviously differs in these three situations.
To calculate the maximum error, estimations are performed with given values
for the parameters known in the situation, and with all the possible values of
the remaining unknown parameters. The maximum error then comes from the
difference between the average and the maximum estimations. This is repeated
for every valid set of known input parameters. The final maximum error is the
maximum of these maximum errors. Table 2 gives the maximum error in the three
situations above, which correspond to three levels of the specification refinement.
Note that the maximal errors computed at level 2 are quite pessimistic, since we
assume that the designer is completely (100%) wrong in his evaluation of all
the input parameters. If his evaluation of those parameters is only 50% or 25%
wrong, the error introduced by our tool is reduced accordingly.</p>
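<p>The maximum-error computation can be sketched as follows, again with an invented power law:</p>

```python
# Sketch of the maximum-error computation: for each valid value of the
# known parameter, sweep the unknown one, take the gap between the
# maximum and the average estimate, and keep the worst case over all
# known values. The power law is an invented stand-in for a real model.

def power_law(f_mhz, alpha):          # hypothetical law, in watts
    return 1.0 + 0.01 * f_mhz * alpha

def max_error(known_freqs, unknown_alphas):
    worst = 0.0
    for f in known_freqs:
        estimates = [power_law(f, a) for a in unknown_alphas]
        avg = (min(estimates) + max(estimates)) / 2.0
        worst = max(worst, (max(estimates) - avg) / avg)
    return worst

print(max_error([100.0, 200.0], [0.0, 0.5, 1.0]))
```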
        </sec>
        <sec id="sec-3-4-2">
          <title>A simpler General Purpose Processor</title>
          <p>The PowerPC 405 is a light version of the IBM PowerPC, embedded in the
Xilinx VirtexII Pro FPGA. It includes a prefetch instruction unit that helps
to reduce the number of pipeline stalls due to instruction misses, and two caches
(16KB each), one for data and the other for instructions. These two caches
can be enabled separately and use an LRU policy; they are two-way set associative
with a 32-byte line size. The processor also has three TLBs translating addresses
from logical to physical (2 shadow TLBs ‐ one for instructions and one for data ‐
are used and coupled with a unified one).</p>
          <p>Measurements show that among the most important factors in the PowerPC
405 consumption are its frequency and the frequency of the bus to which it is
connected. The processor can be clocked at 100, 150, 200 or 300 MHz, and,
depending on the processor frequency, the bus (OCM or PLB) frequency can
take different values between 25 and 100 MHz. Another important parameter to
consider is the configuration of the memory hierarchy associated with the
processor’s core, that is, which caches are used (data and/or instruction)
and where the primary memory is located (internal/external). Once again, the
component’s power model can be used at three refinement levels.</p>
          <p>At the first refinement level, our model gives a rough estimate of the power
consumption for the software component, only from the knowledge of the
processor and some basic information on its operating conditions. The only information
we need is the processor frequency and the frequency of the internal bus (OCM
or PLB) to which the processor is connected inside the FPGA. These are two
architectural parameters of the PowerPC 405. They will be defined as
properties of the AADL processor implementation of the PowerPC 405 in the AADL
specification. The maximum error we get here is 27%.</p>
          <p>At the second refinement level, we have to add some information about the
memories used. We have to indicate which caches will be used in the PowerPC
405 (data cache, instructions cache, or both), and if its primary memory is
internal (using the FPGA BRAM memory bank) or external (using a SDRAM
accessed through the FPGA I/O). Indeed, while building the power model for the
PowerPC 405, we have observed that it draws quite different power and energy
in those various situations [25]. Table 3 shows the maximal errors we obtain here
for every valid set of known input parameters, the others being unknown. The
maximum error we obtain is 15.3% and the average error is 6.6%. The first line
indicates 0% because in this configuration, there are no remaining unknown
parameters that can change the power consumption of the processor.
At the lowest refinement level, the actual code of the software component
is parsed. In the case of the PowerPC 405, what is important is not exactly
what instruction is executed, but rather the type of instruction being executed.
We have indeed shown that the power consumption changes noticeably from
memory access instructions (load or store in memory), to calculation instructions
(multiplication or addition). As we have seen before, the place where the data is
stored in memory is also important, so the data mapping is also parsed here. The
average error we get at this level is 2%. The maximum error is 5%. Logically,
that corresponds to the max and average errors for the set of consumption laws
for the component.</p>
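<p>The code-level estimate can be sketched as a classification of the parsed instructions into memory-access and calculation classes; the mnemonic list and the per-class power figures are invented for illustration:</p>

```python
# Sketch of the lowest refinement level for the PowerPC 405: the parsed
# code matters through instruction *types* (memory access vs calculation),
# not individual opcodes. The mnemonic list and the per-class power
# figures in watts are invented for illustration.

CLASS_POWER_W = {"mem": 1.2, "calc": 0.9}

def classify(mnemonic):
    return "mem" if mnemonic in ("lwz", "stw", "lbz", "stb") else "calc"

def code_power(instructions):
    """Average power over an instruction trace, weighting each class."""
    draws = [CLASS_POWER_W[classify(m)] for m in instructions]
    return sum(draws) / len(draws)

trace = ["lwz", "add", "mullw", "stw"]   # two memory, two calculation
print(code_power(trace))
```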
        </sec>
        <sec id="sec-3-4-3">
          <title>Field Programmable Gate Arrays</title>
          <p>FPGA (Field Programmable Gate Arrays) are now very common in electronic
systems. They are often used in addition to GPP (General Purpose Processors)
and / or DSP (Digital Signal Processors) to tackle data intensive dedicated
parts of an application. They act as hardware accelerators where and when the
application is very demanding regarding performance, typically for
signal or image processing algorithms. In this case again power estimation can
be performed at different refinement levels.</p>
          <p>At the highest levels, the code of the application is not known yet. The
designer needs however to quickly evaluate the application against power, energy
and / or thermal constraints. A fast estimation is necessary here, and a much
larger error is acceptable. The parameters we can use from the high-level
specifications are the frequency F and the occupation ratio β of the targeted FPGA
implementation, that we consider as architectural parameters, and the activity
rate α. The experienced designer is indeed able to provide, even at this very
high-level, a realistic guess of those parameters’ value. As explained before, to
obtain the model, i.e. the mathematical equation linking its output to the
parameters, we performed a set of different measurements on the targeted FPGA. For
different values of the occupation ratio, and for different values of the frequency,
we made the activity rate vary and measured the power consumption.</p>
          <p>At our first refinement level, only the frequency is known. Our power
estimation tool uses the model to estimate, at the given frequency, the power
consumption with α = β = 0.1 and with α = β = 0.9. Then it returns the
average value between those minimal and maximal values. The maximal errors
we obtain for F = 10MHz and F = 90MHz (upper bound for the Altera Stratix
EP1S80) are given in table 4.</p>
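<p>This first-level FPGA estimate can be sketched directly; the power law below is an invented stand-in for the real Stratix model:</p>

```python
# Sketch of the first FPGA refinement level: at a known frequency F, the
# model is evaluated with alpha = beta = 0.1 and with alpha = beta = 0.9,
# and the average of the two values is returned.
# The power law is an invented stand-in for the real Stratix EP1S80 model.

def fpga_power(f_mhz, alpha, beta):   # hypothetical law, in watts
    return 0.2 + 0.02 * f_mhz * alpha * beta

def level1_fpga_estimate(f_mhz):
    low = fpga_power(f_mhz, 0.1, 0.1)
    high = fpga_power(f_mhz, 0.9, 0.9)
    return (low + high) / 2.0

print(level1_fpga_estimate(90.0))     # upper operating point of the EP1S80
```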
          <p>At the next refinement level, the two architectural parameters F and β are
known to the user. As with the earlier processor models, default
values are proposed for α, and also for β, coming from a set of representative
applications. The maximal error introduced in this case ranges from 6.9% to 44.8%.
To determine this error we compute the maximum and minimum estimations
for the four extreme (F, β) couples, and compare them to the estimations with
the default α value.</p>
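          <p>The error determination just described can be sketched as follows (restating the same kind of hypothetical fitted model so the sketch stands alone; the default activity rate α = 0.4 is the one listed in the FPGA property set later in this section):</p>
          <p>```python
def power_model(f_mhz, beta, alpha):
    # Hypothetical fitted equation (an assumption, not the published model).
    return 0.1 + 0.02 * f_mhz * (0.5 * beta + 0.8 * alpha * beta)

def level2_max_error(f_bounds=(10.0, 90.0), beta_bounds=(0.1, 0.9),
                     alpha_bounds=(0.1, 0.9), alpha_default=0.4):
    # For each extreme (F, beta) couple, compare the estimates at the extreme
    # activity rates with the estimate obtained using the default alpha.
    errors = []
    for f in f_bounds:
        for beta in beta_bounds:
            ref = power_model(f, beta, alpha_default)
            for alpha in alpha_bounds:
                errors.append(abs(power_model(f, beta, alpha) - ref) / ref)
    return max(errors) * 100.0  # worst-case relative error, in percent
```</p>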
          <p>At the lowest refinement level, the source code (a synthesizable hardware
description of the component behaviour, written for instance in VHDL or SystemC)
is used. A High-Level Synthesis tool [27] makes it possible to estimate the amount of
resources necessary to implement the application and, given the targeted circuit,
to obtain its occupation ratio (β) and its activity rate (α). These two parameters
and the frequency are finally used with the model.</p>
          <p>TI C62 property set:
Processor_Frequency : aadlreal applies to (processor);
Processor_Memory_Mode : TIC62::Processor_Memory_Mode_Type applies to (processor);
Processor_Parallelism_Rate : aadlreal applies to (processor);
Processor_Processing_Rate : aadlreal applies to (processor);
Processor_Cache_Miss_Rate : aadlreal applies to (processor);
Processor_Pipeline_Stall_Rate : aadlreal applies to (processor);
Processor_Memory_Mode_Type : type enumeration (CACHE, FREEZE, BYPASS, MAPPED);
Processor_Parallelism_Rate_Default : constant aadlreal =&gt; 0.7549;
Processor_Processing_Rate_Default : constant aadlreal =&gt; 0.5298;
Processor_Cache_Miss_Rate_Default : constant aadlreal =&gt; 0.25;</p>
          <p>Processor_Pipeline_Stall_Rate_Default : constant aadlreal =&gt; 0.2919;</p>
          <p>As described in Section 3, the power estimation tool, when invoked, extracts
the relevant information (a set of parameters) from the AADL specification, then
computes the components’ power consumption, and returns the results to fill in
the power budget properties of the software components. The binding makes it
possible to relate components in the AADL component assembly model
to the power models of the hardware components on the targeted platform.</p>
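          <p>This flow can be sketched as follows. The sketch is purely illustrative: the dictionary-based representation of the specification, the property names and the lambda-based model registry are assumptions for the example, not the tool’s actual API.</p>
          <p>```python
# Illustrative sketch of the estimation flow: for each software component,
# follow its binding to a hardware power model, evaluate the model on the
# extracted parameters, and fill in the Power_Budget property.
def estimate_power(specification, power_models):
    for component, props in specification.items():
        model = power_models[props["bound_to"]]
        props["Power_Budget"] = model(props)
    return specification

spec = {"thread1": {"bound_to": "fpga_EP1S80",
                    "FPGA_Frequency": 50.0,
                    "FPGA_Occupation_Ratio": 0.5,
                    "FPGA_Activity_Rate": 0.4}}
models = {"fpga_EP1S80":
          lambda p: 0.1 + 0.02 * p["FPGA_Frequency"]
          * (0.5 * p["FPGA_Occupation_Ratio"]
             + 0.8 * p["FPGA_Activity_Rate"] * p["FPGA_Occupation_Ratio"])}
print(estimate_power(spec, models)["thread1"]["Power_Budget"])
```</p>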
          <p>As we have just seen, depending on the information refinement, coarse- or
fine-precision power estimations will be performed. Given the refinement level,
the information to be provided to the estimation tool depends on the selected target
(which component). The information is more general if the refinement level is
high; it is more dedicated to the target if the refinement level is low. The
set of properties used by the estimation tool actually depends on the
component itself and, more precisely, on its power model. Even between two
components of the same type, another set of specific properties might be
necessary, since another set of configuration parameters might apply. This is the case
here for the two processor components TI C62 and PowerPC405. The property
set of a processor finally comes as part of its power model and, as such,
remains separate from the general property set associated with the current AADL
working project for the application being designed in the OSATE environment.</p>
          <p>PowerPC 405 property set:
Processor_Frequency : aadlreal applies to (processor);
Processor_Bus_Frequency : aadlreal applies to (processor);
Processor_Primary_Memory : PPC405::Primary_Memory_Type applies to (processor);
Processor_Data_Cache : aadlboolean applies to (processor);
Processor_Instructions_Cache : aadlboolean applies to (processor);
Primary_Memory_Type : type enumeration (BRAM, SDRAM);</p>
          <p>FPGA Altera Stratix EP1S80 property set:
FPGA_Frequency : aadlreal applies to (fpga);
FPGA_Activity_Rate : aadlreal applies to (fpga);
FPGA_Occupation_Ratio : aadlreal applies to (fpga);
FPGA_Activity_Rate_Default : constant aadlreal =&gt; 0.4;
FPGA_Occupation_Ratio_Default : constant aadlreal =&gt; 0.5;</p>
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>Conclusion</title>
        <p>We have presented a method to perform power consumption estimation in the
component-based AADL design flow. The power consumption of components in
the AADL component assembly model is estimated whatever the targeted
hardware resource in the AADL target platform model is: a DSP (Digital Signal
Processor), a GPP (General Purpose Processor), or an FPGA (Field Programmable
Gate Array). A power estimation tool has been developed, with a library of
multilevel power models for those (hardware) components. These models can be used
at different levels of the AADL specification refinement process. We have
currently defined three refinement levels in the AADL flow. At the lowest level, level
3, the (software) component’s actual business code is considered and an accurate
estimation is performed. This code, written in C or C++ for standard threads,
can also be written in VHDL or SystemC for hardware threads. At level 2, the
power consumption is estimated only from the component’s operating frequency
and its architectural parameters (mainly linked to its memory configuration in
the case of processors). At level 1, the highest level, only the operating frequency
of the component is considered.</p>
        <p>Three power models have been presented: for the TI C62 GPP, the
PowerPC405 GPP, and the Altera Stratix EP1S80 FPGA. The maximum errors
introduced by these models, at the three refinement levels, are given in Table 8.</p>
        <p>(signalConditioning). The functions are allocated to the Infineon TC1766. In some
cases, a behavioural part may have several possible allocations onto different
resources. The allocation of a function onto a physical resource implies a temporal
cost (the WCET). This cost is a new piece of information that appears on the
allocation diagram at the bottom part of each action.</p>
        <p>Table 1 is a summary of the different temporal characteristics of the behavioural
parts. The period, offset and deadline values have been extracted from the behavioural
models.</p>
        <p>The period, initially expressed with the time unit °CRK, has been translated into
seconds by applying the time-constraint relation of equation 1.</p>
        <p>The offset and deadline were expressed as relative dates in the behavioural model
with the time unit °CRK. We give in Table 1 the actual values of these parameters,
obtained by calculating the corresponding values in milliseconds. The last row of the
table represents the deadline duration, which depends on the offset date and the
deadline date.</p>
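        <p>The conversion can be sketched as follows (a minimal illustration of the clock-constraint relation; the MaxRPM value of 4500 rev/min is the one used later in the schedulability discussion):</p>
        <p>```python
def crk_to_ms(degrees, max_rpm):
    # One crankshaft revolution is 360 °CRK; at max_rpm revolutions per
    # minute the crank sweeps max_rpm * 360 / 60000 degrees per millisecond.
    degrees_per_ms = max_rpm * 360.0 / 60000.0
    return degrees / degrees_per_ms

# Values of Table 1 at MaxRPM = 4500:
print(crk_to_ms(180, 4500))  # period: about 6.66 ms
print(crk_to_ms(24, 4500))   # offset: about 0.888 ms
print(crk_to_ms(50, 4500))   # deadline: about 1.851 ms
```</p>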
        <p>Task | WCET on TC1766 (ms) | Period (°CRK / ms) | Offset date (°CRK / ms) | Deadline date (°CRK / ms) | Deadline duration (°CRK / ms)
Knock | 0.5 | 180 / 6.66 | 24 / 0.888 | 50 / 1.851 | 26 / 0.962
OverTemp | 0.2 | 180 / 6.66 | 0 / 0 | 50 / 1.851 | 50 / 1.851
Warm-up | 0.2 | 180 / 6.66 | 0 / 0 | 50 / 1.851 | 50 / 1.851</p>
        <p>
The WCETs are obtained by either emulating or profiling the tasks corresponding to
the correction controllers represented in the behavioural model. The time base in this
case is that of the processor on which these tasks will execute, i.e. the
physical clock. The value of the deadline has been extracted from the book [18], which
gives some actual parameter values for an ignition engine controller.</p>
        <p>The next step consists in exploiting these characteristics in the scheduling analysis
phase.</p>
        <p>5 Scheduling analysis</p>
        <p>5.1 Principles</p>
        <p>The scheduling analysis may start as soon as the application model, with its
associated timing model, and the architecture model are available. It mainly consists
in exploiting the timing information, i.e. the temporal characteristics (periods and
WCETs, the latter depending on the allocation) as well as the timing constraints
(deadlines), attached to each temporally characterized function of the application
model. The allocation model allows the designer to determine the actual timing
characteristics, which vary with the computing resources each application element
can be allocated to. Multiple potential allocations can be considered for a function
when several computing resources are able to implement it. In our case the
computing resource is unique, since we address a uniprocessor architecture.
Nevertheless, several versions of this unique processor may be considered; in that
case the schedulability analysis must be iterated over the different processors, which
induce different WCET values.</p>
        <p>The processor of the execution platform is assumed to provide an RTOS, e.g. OSEK
in the case of our example; this RTOS is the standard in the automotive domain. The
schedulability analysis also assumes that the given RTOS supports the scheduling
policy the analysis is based on. This guarantees that the real-time behaviour of the
application, running on the chosen architecture, will satisfy the real-time constraints.
Actually, this assumes that the WCETs were carefully determined and that sufficient
margins were taken to approximate the cost of the RTOS itself, i.e. the cost of the
scheduler, including the cost of preemption if it is allowed by the scheduling policy.</p>
        <p>5.2 Illustration on an ignition controller</p>
        <p>In the ignition control system example, the three behaviours, Knock control
correction, Over temperature correction, and Warm-up correction, are each associated
with a couple of temporal characteristics (deadline, period) and a WCET that was
determined according to the execution platform resource they are allocated to. These
associations lead to a system of three periodic tasks: Knock, OverTemp, and
Warmup. Their periods, initially expressed in °CRK, have been translated into
milliseconds (from the idealClock) by applying the time constraints between the two
clocks. The resulting values are listed in Table 1.</p>
        <p>With these data, it is possible to perform a scheduling analysis based on a fixed-priority
scheduling policy. Every task is periodic, has a WCET, and a deadline. In
addition, preemption is allowed. As the deadline of each task is less than its period,
the system of tasks can be scheduled according to the Deadline Monotonic (DM)
scheduling algorithm [19], i.e. the task with the smallest deadline has the highest
priority, assuming the tasks are independent. If they are not independent, a more
complex schedulability analysis must be performed, but that does not change anything
in the proposed approach. We do not focus on the schedulability analysis itself but on
the way it can be performed from the previous models, manually or possibly
automatically. In this context, the scheduling analysis using the DM algorithm amounts to
verifying the following sufficient condition. As mentioned before, to be consistent, the
RTOS running on the considered uniprocessor must also use this algorithm.</p>
        <p>The system is schedulable if:</p>
        <p>∑i=1..n (WCETi / Deadlinei) ≤ n (2^(1/n) − 1)</p>
        <p>We chose here the DM fixed-priority policy instead of the Earliest Deadline First
(EDF) dynamic-priority policy because the scheduler is simpler, and thus its cost is
easier to approximate. This approximation is a fundamental hypothesis of the
DM schedulability analysis. Industrial designers usually take a margin of up to
30% of the task WCET. With these assumptions, our automotive example with the
three tasks mentioned above is schedulable if:</p>
        <p>WCETKC/deadlineKC + WCETOT/deadlineOT + WCETWU/deadlineWU ≤ 3 (2^(1/3) − 1) ≈ 0.779    (4)</p>
        <p>
Considering the values in Table 1, we can conclude that this system of three tasks is
schedulable under the assumption of a MaxRPM equal to 4500: the left-hand side
of the equation is equal to 0.735 &lt; 0.779. On the other hand, if we consider a
MaxRPM equal to 6000, the left-hand side is equal to 0.980 &gt; 0.779 and the
system of tasks is not schedulable.</p>
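        <p>The sufficient test above can be sketched as follows (an illustrative re-computation; the WCETs and relative deadlines are taken from Table 1, with the °CRK deadlines converted to milliseconds for the chosen MaxRPM):</p>
        <p>```python
def dm_sufficient_test(wcets_ms, deadlines_ms):
    # Deadline Monotonic sufficient condition: the sum of WCETi / Deadlinei
    # must not exceed the bound n * (2**(1/n) - 1).
    n = len(wcets_ms)
    utilization = sum(c / d for c, d in zip(wcets_ms, deadlines_ms))
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization, bound, bound >= utilization

def crk_to_ms(degrees, max_rpm):
    # Crank-angle degrees to milliseconds at the given engine speed.
    return degrees / (max_rpm * 360.0 / 60000.0)

wcets = [0.5, 0.2, 0.2]        # Knock, OverTemp, Warmup (ms)
deadlines_crk = [26, 50, 50]   # relative deadlines (°CRK)
for max_rpm in (4500, 6000):
    deadlines = [crk_to_ms(d, max_rpm) for d in deadlines_crk]
    u, bound, ok = dm_sufficient_test(wcets, deadlines)
    print(f"MaxRPM={max_rpm}: U={u:.3f}, bound={bound:.3f}, schedulable={ok}")
```</p>
        <p>For MaxRPM = 4500 this yields U ≈ 0.735 against the bound ≈ 0.779, and for MaxRPM = 6000 it yields U ≈ 0.980, matching the figures above.</p>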
        <p>Another constraint to be verified is the timedValueSpecification of the activity
diagram (CorrectionAdvanceControl). According to the timedDurationConstraints
expressed in Figure 4, equation 5 must be verified.</p>
        <p>tIDP ≤ WCETKC + WCETOT + WCETWU ≤ tMIxTA    (5)</p>
        <p>This equation is also valid.</p>
        <p>6 Conclusion</p>
        <p>In order to cope with the complexity of real-time embedded systems, the Model-Based
Design approach promotes a separation of concerns between the model of the
application (functions) and the model of the execution platform.</p>
        <p>UML and its profiles are widely used to model both parts. The recent standardization
of the UML MARTE profile extends UML with temporal information and physical
resource modelling capabilities. Applying this profile in a model-based design makes
it possible to enrich the application and execution platform models with precise and
semantically well-founded temporal information. As this information corresponds to
explicit model elements endowed with clear semantics, it can be extracted from
the model. Consequently, MARTE models can be used as a starting point for methods
and tools intended for schedulability analysis, which take into account temporal
information and timing constraints to verify deadline constraints.</p>
        <p>In this paper, we presented the MARTE model elements associated with the time
model package of MARTE, and we illustrated their use on an automotive case study.
Four models were presented: the functional model, the time model, the allocation
model, and the execution platform model. While temporal information (periods) and
constraints (deadlines) are associated with the functional model and are independent
of the execution platform model, other timing information (the WCETs) depends
on the platform, i.e. on the computing resources.</p>
        <p>In this approach, there are two notions of time: the logical time linked to the
functional model, and the physical time related to both the allocation model and the
execution platform model. We showed how to establish the link between logical time
and physical time through the allocation model.</p>
        <p>From these models we extracted the physical timing information and used it to
straightforwardly perform a schedulability analysis. We used an analysis based on the
DM algorithm, but other types of analyses are possible. The analysis concludes that the
illustrative application, characterized with the given temporal characteristics and
constraints, is schedulable onto the given execution platform.
</p>
        <p>1. MARTE OMG Specification. A UML Profile for MARTE, Beta 1. OMG Adopted Specification ptc/07-08-04, August 2007.
2. OMG. UML 2.1 Superstructure Specification, April 2006. OMG document number: ptc/2006-04-02.
3. OMG. UML Profile for Schedulability, Performance, and Time Specification, January 2005. OMG document number: formal/05-01-02 (v1.1).
4. Papyrus: graphical UML2 modelling editor, http://papyrusuml.org
5. Cheddar: http://beru.univ.brest.fr/~singhoff/cheddar
6. SynDEx: http://www-rocq.inria.fr/syndex/
7. T. Schattkowsky, W. Muller. Model-based design of embedded systems. IEEE Symposium on Object-Oriented Real-Time Distributed Computing, pp. 113-128, Vienna, May 2004.
8. ITEA project. EAST-ADL: The EAST-EEA Architecture Description Language, June 2004. ITEA Project Version 1.02.</p>
        <p>A Reinterpretation of Patterns to Increase the Expressive Power of Model-Driven Engineering</p>
        <p>Matteo Bordin1, Marco Panunzio2, Carlo Santamaria2, and Tullio Vardanega2
1 AdaCore, 46 rue d’Amsterdam, 75009 Paris, France, bordin@adacore.com
2 University of Padua, Department of Pure and Applied Mathematics, via Trieste 63, 35121 Padova, Italy, {panunzio,tullio.vardanega}@math.unipd.it</p>
        <p>Abstract. The model-driven engineering (MDE) paradigm wishes to raise the
abstraction level of the user design space, while resting on the automated generation
of all lower-level artifacts. Under the MDE approach, the focus of verification
and validation increasingly converges on models. As a consequence, the expressive
power availed to the user is often considerably restricted to ensure that the models
are amenable to static analysis. Inherent tension thus arises in the very essence
of MDE between the restraints to be placed, for the greater good, on the user-level
expressive power, and the user’s need and expectation to be able to operate in a
modeling space freed of platform dependences and constraints. In this paper
we contend that a new notion of modeling patterns may help resolve the
conflict and increase the expressive power in the user space without jeopardizing the
integrity and effectiveness of the transformation process.</p>
        <p>
1 Introduction and related work
Motivation. Model-Driven Engineering [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] aims to decrease the time and cost of
software production and to increase quality, by leveraging the factorization of best
practices in programming and implementation. The MDE paradigm strives to raise the
abstraction level of the user space and to generate all lower-level artifacts automatically:
source code, analysis models, and documentation alike. MDE wishes to free the
user from the burden of dealing with platform-specific implementation details, letting
them concentrate instead on the (platform-independent) specification of the solution. The
assumption behind this vision is that the implementation may be largely, if not completely,
delegated to the automation capabilities of platform-specific development frameworks.
        </p>
        <p>At present, however, the adoption of MDE in the high-integrity application domain
is still a challenge. Already in the general case, in fact, it may be overly difficult to
provide sufficient assurance that all properties attached to the user model, and validated at
that level of abstraction, are correctly propagated throughout model transformations and
preserved upon deployment and execution. This demands a level of control over, and
proof of, the production process that is difficult to attain for the general user.</p>
        <p>
State of the art. The most prominent efforts in modeling languages for real-time
systems in the industrial landscape to date are AADL and MARTE. AADL [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] focuses
on the modeling of schedulable entities, of information and control flows, and on the
analysis thereof. AADL has also recently been augmented with specific annexes
targeting behavioral aspects and error treatment mechanisms. AADL conveys all user
concerns, such as scheduling, flow and behavioral modeling, into a single modeling view.
The main advantage of this choice is the comparative simplicity and cohesiveness of
the modeling language: the AADL syntax is particularly compact, and each semantic
concept can easily and directly be expressed with a single combination of syntactic
constructs. The single-view modeling of multiple concerns however limits the power of
abstraction considerably and pushes it down to the implementation level. The
abstraction level of AADL models thus gets downcast to that of the underlying implementation
as intended by the target execution platform and the accompanying theories of analysis,
which is quite contrary to the intention of MDE.
        </p>
        <p>
          MARTE [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] is an OMG effort to bridge schedulability-oriented modeling with
system-level aspects such as flow analysis and software/hardware interaction. (MARTE
can in fact be used in conjunction with SysML [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].) As of September 2008 the MARTE
specifications are in official beta status. MARTE suffers from the gigantism typical of
several OMG standards. As in UML, a MARTE model is comprised of several views,
the consistency of which is not assured by the underlying metamodel. Moreover, even
if vastly more expressive than AADL (especially for time-related semantics), numerous
syntactic constructs in MARTE insist on one and the same semantic concept and thus
overload it. These characteristics make MARTE models rather complex to understand.
Ultimately, the semantics expressible with MARTE is close to that assumed in common
scheduling analysis theories.
        </p>
        <p>While both AADL and MARTE provide platform-independent ways of modeling
software systems in manners amenable to static analysis, the abstraction level of their
modeling space is restrained by constraints arising from the execution semantics
intended for the target platform. In fact, in both AADL and MARTE the abstraction level
at PIM is almost equivalent to that at PSM. That closeness eases the preservation of
model attributes and properties across model transformations, but at the cost of
permitting only a shallow distance between PIM and PSM.</p>
        <p>
          The last modeling language we consider is HRT-UML/RCM [
          <xref ref-type="bibr" rid="ref5 ref6">5,6</xref>
          ], the authors’ own
proposal, an MDE infrastructure devised in the ASSERT project [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] to defy this
challenge especially. In ASSERT, HRT-UML/RCM was used for the development of an
industrial-scale real-time embedded system by one of the major prime contractors in
European space industry.
        </p>
        <p>
          HRT-UML/RCM aims to: (i) provide a design environment in which the user solely
operates in the PIM space, with the only exception of the specification of hardware
configuration and application deployment; and (ii) support an MDE methodology
characterized by principles of correctness by construction [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and of property preservation
[
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], across all model transformations including at run time.
        </p>
        <p>The HRT-UML/RCM model space does not allow any semantic variation points:
the run-time semantics expressed in its models is thus always completely defined.</p>
        <p>In HRT-UML/RCM, the user specification of the PIM is declarative, while the
transformation process applied to it corresponds to an implementation designed to be
provably correct by construction. The resulting product consequently does not need to be
verified a posteriori on a per-system basis, but only requires a single per-platform
validation, with important cost savings for the developer.</p>
        <p>HRT-UML/RCM has for now elected to produce a single PSM from the PIM space,
though other PSMs may in principle be generated, for instance to address dependability,
safety and security concerns. In HRT-UML/RCM the PSM is also used as a
Schedulability Analysis Model (SAM). The SAM represents the semantics of the system model
to the extent of allowing static analysis of the feasibility and sensitivity of its timing
behavior. The SAM generated in HRT-UML/RCM is comprised of a set of comparatively
simple building blocks, which, through correct-by-construction composition, may
arrive at encompassing arbitrarily complex execution semantics.</p>
        <p>
          HRT-UML/RCM seamlessly integrates round-trip support for feasibility and
sensitivity analysis and makes it start and end at the PIM [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (see figure 1). While the
analysis is of course made on the SAM, its results are propagated back to the PIM, which is
possible as the entire model transformation logic is deterministic and reversible, hence
it may be easily followed backwards.
        </p>
        <p>Fig. 1. Round-trip timing analysis in HRT-UML/RCM</p>
        <p>
          HRT-UML/RCM also supports automated generation of source code. This leg of the
transformation process starts from the SAM, which eases the provision of constructive
proofs that the system at run time does correspond to what was analyzed and deemed
feasible in the SAM. In the case in instance the complexity of the code generator is
modest since the SAM is very close to the system at run time and the code generation
engine makes extensive use of simple patterns [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>A factor that greatly facilitated our attainment of correct-by-construction
transformations was the decision to hoist the observance of the RCM constraints from the PSM
space up to the PIM. This decision however has the downside that it pushes back onto
the user the need to think in implementation terms (so that the RCM restrictions are not
violated) in contrast with the promise of delivery from those very concerns.</p>
        <p>Contribution. The challenge we wish to defy is to raise the user space to a higher
level of abstraction (closer to the problem domain and more distant from the constraints
of implementation) while guaranteeing preservation and assurance of properties across
deeper model transformations. In this paper we discuss the role that MDE patterns may
play to this end.</p>
        <p>
          Previous work on the use of patterns in real-time systems [
          <xref ref-type="bibr" rid="ref12 ref13">12,13</xref>
          ] predominantly
if not exclusively considered design patterns in the way they were promoted by the
“Gang of Four” [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. We contend that such a view fails to take full advantage of the
emerging MDE paradigm. We show additional classes of potentially useful patterns,
and attempt an initial classification of them from the broader perspective of the
transformation space.
        </p>
        <p>The remainder of the paper is structured as follows: in Section 2 we define what we
mean by “expressive power” in the context of MDE; in Section 3 we draw a tentative
classification of MDE patterns against the hierarchy of transformations implied in the
process; in Section 4 we discuss a few example patterns in some detail, and finally we
draw some conclusions.</p>
        <p>
2 Expressive power in Model-Driven Engineering
We consider the expressive power of a language to relate to its economy of
We consider the expressive power of a language to relate to its economy of
expressions: the more synthetic the language entities and the denser their semantic contents,
the greater the expressive power. By this definition, the keywords of a programming
language are much more expressive than the words in the instruction set of the
target processor. In the context of programming languages the transformation of a more
expressive program text into a lesser one is taken care of by the compiler. It is a
well-known observation that the very existence of a compiler for a given target makes it
possible to implement other programming languages equipped with greater expressive power [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>Expressive power is thus the capability of expressing, synthetically, high-level
concepts with a finite set of language terms with known meaning, and with the guarantee
that they can be correctly translated into a semantically equivalent set of entities that
belong to an underpinning implementation language.</p>
        <p>This very principle lies at the heart of the MDE paradigm too. The PIM space
may exploit a dictionary of technology- and implementation-independent terms, which
model transformation translates to terms that belong to a specific execution platform.
Source code is only one of many possible PSM produced by a model transformation of
a PIM. In general it is possible to generate a set of PSM each of which serves a different
purpose. Each PSM may thus represent the implementation of the PIM at a distinct level
of abstraction, or viewpoint, each focusing only on the concerns of interest to a
selected view-specific stakeholder.</p>
        <p>The expressive power of PSM is often constrained by the level of formalism
required to permit the sound application of domain-specific analysis techniques. For
example, constructs that incur non-determinism or unbounded execution behavior may be
removed from the PSM language in the prevailing interest of facilitating static analysis.</p>
        <p>In general, two opposite approaches can be pursued to increase the expressive power
availed to the PIM space:
– granting the maximum possible (in principle, full) freedom of expression to the user
and then verifying a posteriori whether the user model can be successfully turned
into a semantically equivalent and legal PSM;
– capturing all of the constraints that propagate up from the PSM and striving to
relax all those for which a transformation pattern may be devised that can be
proven correct a priori and whose eventual overhead may be deemed acceptable to
the application.</p>
        <p>In the former approach, the MDE infrastructure grants the designer the largest
expressive power the user can have. In this case, however, there is no guarantee that
a legal PSM can be obtained by automated transformation of the user model. The
verification must therefore be made a posteriori and on a per-model basis.</p>
        <p>With the latter approach, instead, the infrastructure clearly continues to restrain the
expressive power, but for the benefit of a priori guarantees that any user model in the
PIM scope can be automatically transformed into a legal PSM. The metamodel is then
the key element to ensure that the user model space is exclusively populated with legal
entities, attributes and relations.</p>
        <p>
          In HRT-UML/RCM the PIM modeling space is directly restrained by the
bottom-up propagation of semantic constraints from the underlying computational model, cast
onto the RCM metamodel. The RCM originates, in language-neutral terms, from the
Ravenscar Profile of the Ada programming language [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. The Ravenscar Profile: (i)
forbids the use of language constructs that may incur non-determinism or unbounded
execution time; (ii) only allows asynchronous one-way communications mediated by
shared resources equipped with a deterministic synchronization protocol, like the
Priority Ceiling Protocol (direct inter-task communications are thus prohibited). RCM
further requires threads to have a single suspension point and a single (cyclic or sporadic)
source of activation events. All those restrictions are imposed on the PSM to ensure that all
models in that space are statically analyzable. We want to be able to turn every possible
user model in the PIM space into a semantically equivalent PSM which is also correct
by construction against the RCM restrictions.
        </p>
        <p>When the applicable RCM constraints propagate up to directly restrict the user
modeling space, the expressive power of the PIM can only be marginally larger than a simple
bottom-up projection of the expressive power of the PSM (cf. figure 2.a).</p>
        <p>HRT-UML/RCM offers a set of declarative stereotypes to increase the abstraction
level in the PIM space. For example, provided services are decorated with attributes
that express their intended concurrent behavior and timing properties without the user
having to bother with how to implement them. Nonetheless, several restrictions of the
RCM (which thus pertain to the PSM) still directly apply to the PIM. For example, the
RCM forbids the creation of deferred operations with out parameters (i.e., operations
executed by a server-side thread which may return values to the caller) for that would
incur synchronous blocking semantics in violation of the Ravenscar restrictions.</p>
        <p>We believe that a more advanced use of MDE patterns may help us extend the
expressive power of the PIM. We want to attain the maximum possible increase while
maintaining the guarantee that all the user models expressible in the PIM space can
be represented as (arbitrarily complex and yet correct by construction) compositions of
legal PSM entities. A deterministic yet efficient function must exist to transform the
constructs that belong in the extended PIM space into a combination (thus, intuitively,
a composition pattern) of those primitively present in the PSM (see figure 2.b).</p>
        <p>Granted, the additional expressive power can only come at some cost: the larger the
distance between the declarative language of the PIM and the implementation language
of the PSM, the greater the time and space overhead of the model transformation, of its
code products and of its verification effort.</p>
        <p>Classification of patterns
Let us first introduce a tentative classification of patterns in the MDE landscape. With
this classification we maintain that MDE patterns distinguish themselves by the
abstraction level at which they are applied. The abstraction level at which any given pattern is
considered to belong thus becomes the central element to decide their goodness of fit to
serve our objective.</p>
        <p>In our current classification, we recognize:</p>
        <p>
          Determined patterns represent user-perceived solutions to recurrent problems in the
modeling space of the application. The user recognizes a specific problem in the
application requirements and manually augments or adapts the model to host an instantiation
of the desired pattern(s). The design patterns discussed in [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] clearly fall in this
category. It is important to notice that, following our definition, such patterns do
not increase the expressive power of the MDE infrastructure, since they are user-level
constructions that result from the assembly of primitive entities. Consequently, only the
latter actually “exist” in the model. The user-level constructions instead have no direct
representation across model transformation since the MDE infrastructure is unaware of
their existence.
        </p>
        <p>Executive patterns encode traits of the implementation domain of interest. A
computational model, a set of constraints on thread activations and suspensions, archetypes
to factorize common behaviors, are all examples of executive patterns. Executive patterns
distinguish themselves because they must necessarily be encoded in the metamodel and
fixed once and for all upon its creation. Since they are part of the metamodel, which
constrains the legal design space, they determine the PSM expressive power. As the
MDE paradigm wishes to lift the abstraction level of the user space away from
implementation details, the executive patterns need not be directly visible to the user.</p>
        <p>Declarative patterns are themselves solutions to a recurrent problem of the
application domain. More specifically, the designer recognizes a known problem in the system
specification and uses one of these patterns to solve it. Declarative patterns must be
semantically understood by the user and recognized as solutions to specific problems in
the application space. They however require no implementation from the user, who just
needs to call them into existence. It is in fact the process of model transformation that
instantiates the required patterns and embeds them in the user model. For this reason
declarative patterns must pose no obstacle for the products of model transformation to
stay within a legal, correct by construction, design space. The most effective way to
achieve this is for declarative patterns to exist in the metamodel and consequently be
used as constructive elements of the model transformation logic. Following our
definition, declarative patterns augment the expressive power availed to the user, since
they result from constructs (e.g.: attributes) that model transformation resolves in a
predefined and correct by construction composition of primitive entities of the PSM.
Furthermore, the semantics of a declarative pattern is known and well defined, and the
enforcement of legal conditions for its application can be assured by the MDE
infrastructure itself; neither of these characteristics holds in the application of
determined patterns.</p>
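        <p>As a minimal sketch of this resolution step, a model transformation might expand a declaratively decorated port into primitive PSM entities as below. The attribute names (kind, period, miat) and the entity descriptors are invented for illustration; they are not the HRT-UML/RCM metamodel vocabulary.</p>

```python
def expand_declarative_port(port):
    """Sketch of a model-transformation rule: a port decorated with declarative
    attributes is resolved into a predefined, correct-by-construction
    composition of primitive PSM entities (names are illustrative)."""
    if port["kind"] == "cyclic":
        # A cyclic port maps to a dedicated periodically released task.
        return [{"entity": "task", "release": "periodic", "period": port["period"]}]
    if port["kind"] == "sporadic":
        # A sporadic port maps to a protected queue plus a sporadic task,
        # mediating the asynchronous invocation.
        return [
            {"entity": "protected_queue", "mediates": port["name"]},
            {"entity": "task", "release": "sporadic", "miat": port["miat"]},
        ]
    raise ValueError("unknown concurrent semantics: " + port["kind"])
```

Because the expansion is fixed in the transformation logic, the user only ever sees the declarative attributes, while every product of the expansion is a legal PSM entity.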
        <p>One of the goals of this categorization is to let the reader appreciate how
important those patterns may be to the MDE process and how much they may influence the
expressive power availed to the user.</p>
        <p>As we have seen, executive patterns basically determine the expressive power
available at the PSM level, that is the (low-level) abstraction layer of the entities that populate
the system at run time.</p>
        <p>Determined patterns live entirely in the user-level design space; they are composed
of primitive entities that can legally populate the PIM and are therefore of little
importance in the MDE process, since they do not augment the expressive power.</p>
        <p>Declarative patterns instead arguably are the most important category of patterns in
the context of an MDE process: they provide the user with a powerful abstraction that
is automatically instantiated by model transformations and therefore fits perfectly into
the model(s) produced by the transformation.</p>
        <p>It is then clear that declarative patterns are a crucial instrument for us to increase
the expressive power we may avail to the user. Expressed graphically with reference to
figure 2, we are increasing the slope of the lines that join the PSM space to the PIM.</p>
        <p>One additional benefit of this classification is that it makes clear where the
instantiation of each pattern occurs. Figure 3 provides a diagrammatic representation of this
element of information: determined patterns clearly populate the application layer,
since they are composed of legal entities of the user-level design space; executive
patterns map or encode features of the execution environment, whether software (kernel
or middleware or both) or hardware; declarative patterns, finally, reside at PIM level,
but outside the direct projection of the PSM expressive power and thus require model
transformation to come into existence as a correct by construction assembly of legal PSM
entities and constructs.</p>
        <p>Fig. 3. Patterns in model-driven engineering. Determined and declarative patterns are available
in the user modeling space. Determined patterns result from direct projections of the PSM
expressive power. Declarative patterns exist beyond the projection of the PSM expressive power
and require model transformation to be implemented in terms of legal PSM entities. Executive
patterns solely exist in the PSM level.</p>
        <p>Let us now determine whether and to what extent the user should be aware of
patterns, according to their class of belonging.</p>
        <p>Determined patterns are obviously known to the designer, who is the primary and
sole actor in their use. Conversely, executive patterns are intentionally hidden from the
user. The situation is not that clear instead for declarative patterns. Should the designer
be aware that the decoration of some model attributes triggers the activation of a
declarative pattern? Additionally, should the designer explicitly require the activation of a
pattern or else simply rely on the intelligent support of the framework to recognize
the conditions for its activation? Two analogies may help us illustrate the differences
between these two approaches.</p>
        <p>One of the classical optimizations performed by a compiler is to move the invariant
part of a loop outside the loop itself. The average programmer need not be aware that
this optimization is performed. Conversely, let us imagine a programming language that
prescribes that remote operations be flagged with a remote keyword; in this case the
programmer intentionally determines the generation of stubs, skeletons, and all other
elements required to perform a remote invocation. At present we do not have a strong
position on the entire class of declarative patterns, and we are inclined to believe that
the issue should be evaluated on a case by case (i.e., per pattern) basis.</p>
        <p>Pattern catalogue
This section briefly discusses examples of patterns from the categorization we have
just proposed. The examples we have chosen reflect problems typical of the real-time
application domain. We concentrate on declarative and executive patterns, because
determined patterns should be well known to the reader.</p>
        <p>
          Partitions and communication filters
While the notions of partition and communication filters are not new to software
engineering, especially in the high-integrity domain (see for example the ARINC-653
standard [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]), they still have to find their place in model-based development.
        </p>
        <p>The use of logical and physical partitions permits the attribution of software and
hardware components to distinct levels of criticality, so as to guarantee the required level of
isolation in time, space and communication among them.</p>
        <p>The notion of partition must be part of the metamodel itself, because the designer
must be able to consciously allocate multiple executable entities within given partitions.
In that manner, the entities included in a partition inherit the partition criticality and
benefit from the partition-level isolation mechanisms. Neither the causing of criticality
inheritance nor the modeling of isolation mechanisms, however, is to be explicitly
performed by the user: they can instead be easily realized by way of model transformation.
The notion of partition thus reflects a declarative pattern.</p>
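        <p>The transformation step behind the Partition pattern can be sketched as follows, assuming a deliberately simplified representation of partitions and entities as dictionaries; the field names are invented for the example and do not reflect the actual metamodel.</p>

```python
def apply_partition_pattern(partition):
    """Illustrative model-transformation step for the Partition declarative
    pattern: every entity allocated to a partition inherits the partition
    criticality and its isolation scope. The user declares only the
    allocation; this propagation is performed by the transformation."""
    for entity in partition["entities"]:
        entity["criticality"] = partition["criticality"]
        entity["isolation"] = partition["name"]   # partition-level isolation tag
    return partition
```

The key point is that criticality inheritance exists only in the transformed model: the user model carries the allocation alone.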
        <p>Communication filters are tightly related to partitions. When a lower-criticality
partition establishes a communication link with a higher-criticality partition, the
integrity of the exchanged messages should be subject to verification. The execution of
the higher-criticality partition may in fact be affected by the computation required by
a lower-criticality partition. To only permit allowable communications (operation
requests or reports of results), filters are suitably interposed between communicating
partitions to perform all of the necessary verification on the permissibility of the
communication.</p>
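        <p>The role of a filter can be sketched as a guard interposed on the communication link; the sketch below is a hypothetical illustration, not the ARINC-653 or HRT-UML/RCM interface, and the names are invented.</p>

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    criticality: int   # higher value means higher criticality

class CommunicationFilter:
    """Sketch of a communication filter interposed between partitions:
    it forwards only the operation requests that the higher-criticality
    side has declared allowable, so that a lower-criticality partition
    cannot affect the execution of a higher-criticality one."""

    def __init__(self, source: Partition, target: Partition, allowed_ops):
        self.source = source
        self.target = target
        self.allowed_ops = set(allowed_ops)

    def forward(self, operation, payload):
        # Verify permissibility before the request crosses the criticality boundary.
        if operation not in self.allowed_ops:
            raise PermissionError(operation + " not allowed from " + self.source.name)
        return (operation, payload)   # hand over to the target partition
```

Because the filter is instantiated by the tail end of the transformation, the user model contains only the plain communication link.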
        <p>Filters are part of the metamodel. However they are not to be used explicitly by the
user, who need not even be aware of their existence; rather, they are put into place by
the tail end of the model transformation process. The notion of filter thus reflects
an executive pattern.</p>
        <p>
          The Callback pattern
The purpose of the Callback pattern is to extend the PIM expressive power beyond the
projection of the PSM modeling space. The pattern can be categorized as both an
executive and declarative pattern. It does not require any specific action from the designer (and
thus equates to an executive pattern), but its application has consequences which must
be known to the user, as they impact the software architecture and the interpretation of
analysis results.</p>
        <p>Let us use a simple example to illustrate the Callback pattern, which we base
on the classical producer-consumer archetype. Both the Producer and the Consumer
have a single method, respectively produce and consume (cf. figure 4). The
operation of produce consists in (a) producing an item, (b) passing it to the consumer
and (c) adapting its own behavior according to the Consumer’s feedback. The
operation of consume consumes the item and returns its feedback via an out parameter,
Feedback.</p>
        <p>Fig. 4. Callback pattern: class diagram prior to the application of the pattern.</p>
        <p>Following the HRT-UML/RCM modeling semantics for components, we declare
the concurrent semantics on ports of provided services: the port providing produce is
marked ≪cyclic≫, meaning that a dedicated task with a constant periodic release calls
produce. The port providing consume is instead marked ≪sporadic≫, meaning
that its invocation by a caller causes a sporadic task to be released to actually execute
consume (cf. figure 5). Additional non-functional concerns, such as task priority and
period or minimum inter-arrival time, are addressed by decorating the provided port of
each component with specific attributes (which we omit in this discussion).</p>
        <p>It is worth noticing at this point that the semantics expressed in this user model is not
Ravenscar-compliant. The reason is that operation consume requires a synchronous
communication (since its profile includes an out parameter), but the corresponding
port is marked ≪sporadic≫, which makes it deferred in HRT-UML/RCM and thus
necessarily asynchronous. The semantics intended by the user model is that of a
synchronous deferred communication, which is forbidden in RCM. Interestingly however,
a simple stage of model validation can notice the problem and trigger appropriate
actions to resolve it.</p>
        <p>To solve the problem and thus satisfy the user without incurring violations of the
RCM restrictions, we first perform an automated model transformation on the class</p>
        <p>Fig. 5. Callback pattern: system model prior to the application of the pattern.</p>
        <p>The concurrent semantics declared in the component ports remains the same for
those that provide consume and produce. The port providing consume callback
is instead marked ≪sporadic≫ so that a dedicated task may ensure a prompt response
to the invocation of the callback from consume.</p>
        <p>The resulting model is Ravenscar-compliant again as the services provided by the
ports marked as ≪sporadic≫ do not include out parameters (cf. figure 7) anymore.</p>
        <p>At this point an additional model transformation may generate the SAM and assign
the user-specific functional behavior to it. Dedicated tasks are created for produce
(cyclic), consume callback (sporadic) and consume (sporadic) and the
respective methods are allocated to the fully-legal main operation of the respective tasks.
Message queues protected against concurrent access are created for consume and
consume callback as the means to implement asynchronous communication
between the caller and the sporadic task on the side of the callee. A further shared
resource may be automatically generated to safeguard the concurrent access of the tasks
behind produce and consume callback to application data that the split of the
single user-level operation should not duplicate.</p>
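        <p>The task-and-queue structure produced by the transformation can be approximated as below. This is a sketch only: the real artifacts are Ada tasks and protected objects generated by the transformation, and make_item, consume and adapt are invented stand-ins for the user's functional code.</p>

```python
import queue
import threading

# Stand-in functional behavior (purely illustrative).
def make_item(rate): return rate
def consume(item): return item * 2
def adapt(rate, feedback): return feedback

# Message queues protected against concurrent access: the asynchronous,
# Ravenscar-legal replacement for the original synchronous out parameter.
consume_requests = queue.Queue()
callback_events = queue.Queue()

# Shared resource guarding the producer state that both produce and
# consume_callback touch after the split of the original operation.
producer_state_lock = threading.Lock()
producer_state = {"rate": 1}

def produce():
    """Body of the cyclic task: produce an item and hand it over without blocking."""
    with producer_state_lock:
        item = make_item(producer_state["rate"])
    consume_requests.put(item)

def consume_task():
    """Sporadic task behind the consume port."""
    while True:
        item = consume_requests.get()        # single suspension point
        callback_events.put(consume(item))   # feedback travels via the callback queue

def consume_callback_task():
    """Sporadic task behind the consume_callback port: the second half of the
    original produce operation, adapting behavior to the feedback."""
    while True:
        feedback = callback_events.get()
        with producer_state_lock:
            producer_state["rate"] = adapt(producer_state["rate"], feedback)
```

No task ever blocks on another task's reply: feedback reaches the producer side only as a fresh sporadic release, which is what restores Ravenscar compliance.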
        <p>Source code is finally generated from the transformed model and from the SAM; no
code is instead generated from the original user model.</p>
        <p>Discussion
By the introduction of the executive and declarative pattern categories, we achieve two
results which are beyond the reach of classical determined patterns.</p>
        <p>Executive patterns, like the Filter pattern, relieve the designer the need to specify
complex yet recurrent parts of the software behavior, and rather introduce them directly
in the PSM (and possibly in the source code too).</p>
        <p>Declarative patterns instead, serve two distinct purposes. First of all, they reduce the
amount of modeling effort required for the designer to express the semantics needed to
solve an application-level problem: this is for example the case of the Partition pattern.</p>
        <p>Declarative patterns may also help extend the expressive power availed at PIM well
beyond the perimeter permissible to the PSM: this is the case of the Callback pattern.</p>
        <p>In fact, we see declarative patterns as a most promising pattern category for
high-integrity systems in general and real-time systems in particular. They have the potential
to release the user-level modeling process from (some of) the restraints that are
propagated upwards from the underlying analysis theories, and thus extend the expressive
power availed to the user. In practice the restraints that may be lifted are those for which
declarative patterns exist which permit model transformations that provably preserve
the properties of interest down to the final implementation.</p>
        <p>Interestingly enough, declarative and executive patterns are both realized by means
of property preserving model transformations and take the form of correct by
construction aggregates of primitive entities of the metamodel. While those patterns are not
primitive entities themselves, they do exist in the metamodel space because it is their
very existence under the semantic constraints enforced by the metamodel that asserts
their legality. A key implication of this stipulation is that the compositional logic of the
model transformations must be defined as an integral element of the metamodel itself.</p>
        <p>What we currently see as the main limitation to the full application of declarative
patterns is the impact they may have on the functional (and not only architectural)
specification of the system. We have seen a glimpse of this problem in the application of
the Callback pattern, which caused us to break a single functional operation into two
distinct parts.</p>
        <p>To date, mainstream modeling technologies have failed to provide a full and usable
representation of action semantics in the metamodel space. This limitation constitutes
a technological (though not conceptual) hurdle to our endeavor.</p>
        <p>In several, perhaps most cases, declarative patterns cannot be silently applied by
the modeling infrastructure as they may considerably increase the distance between the
user space and the part of the PSM that is the product of automated model
transformation. The larger the distance, the harder it is to understand the end results of the
transformation, in both qualitative and quantitative terms. For example, the application
of the Callback pattern not only modifies the functional specification provided by the
user, but also impacts the concurrent and synchronization properties of the system by
creating an additional task and an additional shared resource. The modeling
infrastructure should thus justify all model transformations and provide evidence of traceability
between levels of abstraction.</p>
        <p>Conclusions
The production of formal analysis models is a complex task which requires verifying
that (i) the designed model does not evade the boundaries of the semantics permitted by
the underlying analysis theories; and (ii) the semantics is preserved at each abstraction
level crossed by transformation.</p>
        <p>The preferred way to satisfy both requirements is to constrain the modeling space
by directly projecting the expressive power of the PSM onto the PIM modeling space
(with the side benefit of easing the automated production of the PSM).</p>
        <p>
          We introduced two new categories of patterns specific of the MDE context to add
to the well-known category of determined patterns that stem from [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Executive and
declarative patterns are meant to address other issues than those targeted by classical
design patterns.
        </p>
        <p>Executive patterns are expected to deliver the user from the need to specify parts of
the architecture and functional behavior of the application by embedding consolidated
solutions directly in the PSM (possibly the source code).</p>
        <p>Declarative patterns instead extend the expressive power of the PIM by relaxing
semantic constraints that hold on the PSM. The implementation of the semantics implied
by declarative patterns is naturally realized by automated model transformation.</p>
        <p>We discussed three specific instances of patterns in the above categories, which
address problems recurrent in the real-time systems domain.</p>
        <p>The introduction of those new categories of patterns promises to increase the
expressive power attainable at user level and to add an interesting new dimension of automation
in the model-driven engineering of real-time systems. The full exploitation of those
patterns requires a highly integrated modeling infrastructure, which includes a
mod</p>
        <p>References</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>D.C.</given-names>
          </string-name>
          :
          <article-title>Model-Driven Engineering</article-title>
          . IEEE Computer
          <volume>39</volume>
          (
          <issue>2</issue>
          ) (
          <year>2006</year>
          )
          <fpage>25</fpage>
          -
          <lpage>31</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. SAE:
          <article-title>Architecture Analysis and Design Language</article-title>
          . http://la.sei.cmu.edu/aadl/currentsite/aadlstd.html.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. OMG:
          <article-title>UML profile for MARTE</article-title>
          . (
          <year>2007</year>
          ) http://www.omg.org/cgi-bin/doc?ptc/2007-08-04.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. OMG:
          <article-title>SysML specification</article-title>
          . (
          <year>2007</year>
          ) http://www.omg.org/cgi-bin/doc?formal/2007-09-01.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bordin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardanega</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Correctness by Construction for High-Integrity Real-Time Systems: a Metamodel-driven Approach</article-title>
          . In: Reliable Software Technologies - Ada-Europe.
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Panunzio</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardanega</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A Metamodel-driven Process Featuring Advanced Modelbased Timing Analysis</article-title>
          .
          <source>In: Reliable Software Technologies - Ada-Europe</source>
          .
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. The ASSERT project: www.assert-project.net (2004-7)
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chapman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Correctness by Construction: a Manifesto for High Integrity Software</article-title>
          . In: ACM International Conference Proceeding Series; Vol.
          <volume>162</volume>
          . (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Vardanega</surname>
          </string-name>
          , T.:
          <article-title>A Property-Preserving Reuse-Geared Approach to Model-Driven Development</article-title>
          .
          <source>In: 12th IEEE Int. Conf. on Embedded and Real-Time Computing Systems and Applications</source>
          . (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Bordin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panunzio</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardanega</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Fitting Schedulability Analysis Theory into ModelDriven Engineering</article-title>
          .
          <source>In: Proc. of the 20th Euromicro Conference on Real-Time Systems</source>
          . (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Pulido</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de la Puente</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hugues</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bordin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardanega</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Ada 2005 Code Patterns for Metamodel-based Code Generation</article-title>
          .
          <source>Ada Letters XXVII(2)</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Zalewski</surname>
          </string-name>
          , J.:
          <article-title>Real-time Software Design Patterns</article-title>
          .
          <source>In: 9th Conference on Real-Time Systems</source>
          . (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Sanz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zalewski</surname>
          </string-name>
          , J.:
          <source>Pattern-based Control Systems Engineering. IEEE Control Systems Magazine</source>
          <volume>23</volume>
          (
          <issue>3</issue>
          ) (
          <year>2003</year>
          )
          <fpage>43</fpage>
          -
          <lpage>60</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Helm</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , Johnson, R.,
          <string-name>
            <surname>Vlissides</surname>
          </string-name>
          , J.:
          <article-title>Design Patterns: Elements of Reusable Object-Oriented Software</article-title>
          . Addison - Wesley (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Pratt</surname>
            ,
            <given-names>T.W.</given-names>
          </string-name>
          :
          <source>Programming Languages: Design and Implementation (Second Edition)</source>
          . Prentice-Hall
          (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Burns</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dobbing</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardanega</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Guide for the Use of the Ada Ravenscar Profile in High Integrity Systems</article-title>
          .
          <source>Technical Report YCS-2003-348</source>
          , University of York (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <source>ARINC: Avionics Application Software Standard Interface: ARINC Specification 653-1</source>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>