<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Automating the Analysis of Integrity Constraints in Multi-level Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Esther Guerra</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan de Lara</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Universidad Autónoma de Madrid</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <fpage>63</fpage>
      <lpage>72</lpage>
      <abstract>
        <p>Multi-level modelling is a technology that promotes an incremental refinement of meta-models in successive meta-levels. This enables a flexible way of modelling, which results in simpler and more intensional models in some scenarios. In this context, integrity constraints can be placed at any meta-level, and need to indicate at which meta-level below they should hold. This requires a very careful design of constraints, as constraints defined at different meta-levels may interact in unexpected ways. Unfortunately, current techniques for the analysis of the satisfiability of constraints only work in two meta-levels. In this paper, we give the first steps towards the automation of mechanisms to check the satisfiability of integrity constraints in a multi-level setting, leveraging on “off-the-shelf” model finders.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Multi-level modelling [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is a promising technology that enables a flexible way
of modelling by allowing the use of an arbitrary number of meta-levels, instead
of just two. This results in simpler models [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], typically in scenarios where the
type-object pattern or some variant of it arises.
      </p>
      <p>
        While multi-level modelling has benefits, it also poses some challenges that
need to be addressed in order to foster a wider adoption of this technology [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
One of these challenges is the definition and analysis of constraints in multi-level
models. In a two-level setting, constraints are placed in the meta-models and
evaluated in the models one meta-level below. This enables the use of
“off-the-shelf” model finders [
        <xref ref-type="bibr" rid="ref1 ref12 ref13 ref16 ref6">1, 6, 12, 13, 16</xref>
        ] to reason about correctness properties, like
satisfiability (is there a valid model that satisfies all constraints?). However,
constraints in multi-level models can be placed at any meta-level and be evaluated
any number of meta-levels below, which may cause unanticipated effects. This
makes the design of constraints, and the reasoning about their validity, more intricate.
      </p>
      <p>
        In this paper, we present the first steps towards a systematic method for the
analysis of a basic quality property in multi-level modelling: the satisfiability of
integrity constraints. We base our approach on the use of “off-the-shelf” model
finders, which are able to perform a bounded search of models conforming to
a given meta-model and satisfying a set of OCL constraints. Since the
state-of-the-art model finders only work in a two-level setting, we need to “flatten”
the multiple levels in a multi-level model to be able to use the finders for our
purposes. This process has two orthogonal dimensions, which account for the
number of meta-levels provided to, and searched by, the finder. Thus, we discuss
alternative flattening algorithms for different analysis scenarios. As a proof of
concept, we illustrate our method through the analysis of MetaDepth
multi-level models [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] using the USE Validator [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] for model finding.
      </p>
      <p>Paper organization. Section 2 introduces multi-level modelling as designed
in the MetaDepth tool. Section 3 presents properties and scenarios in the
analysis of multi-level models. Section 4 discusses strategies for flattening
multi-level models for their analysis with standard model finders. Section 5 describes
the use of a model finder to analyse MetaDepth models. Last, Section 6 reviews
related research and Section 7 draws conclusions and outlines future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Multi-level modelling</title>
      <p>
        We will illustrate our proposal using a running example in the area of
domain-specific process modelling, while the introduced multi-level concepts are those
provided by the MetaDepth tool [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The example is shown in Fig. 1 using
MetaDepth syntax (left) and a graphical representation (right). For the textual
syntax, we only show the two upper meta-levels of the solution.
      </p>
      <p>
        The main elements in a multi-level solution are models, clabjects, fields,
references and constraints. All of them have a potency, indicated with the @ symbol.
The potency is a positive number (or zero) that specifies in how many meta-levels
an element can be instantiated. It is automatically decremented at each deeper
meta-level, and when it reaches zero, the element cannot be instantiated in the
meta-levels below. If an element does not define a potency, it receives the potency
from its enclosing container, and ultimately from the model. Hence, the potency
of a model is similar to the notion of level in other multi-level approaches [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
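      <p>The potency mechanism just described can be sketched in a few lines. The following is a hypothetical Python illustration (not MetaDepth code; the class and field names are our own assumptions): a missing potency is inherited from the enclosing container, potency is decremented on each instantiation, and instantiation is blocked once it reaches zero.</p>

```python
# Hypothetical sketch of potency semantics (not MetaDepth's implementation).
class Element:
    def __init__(self, name, potency=None, container=None):
        self.name = name
        self._potency = potency      # None means: inherit from container
        self.container = container

    @property
    def potency(self):
        # An element without an explicit potency receives it from its
        # enclosing container, and ultimately from the model.
        if self._potency is not None:
            return self._potency
        return self.container.potency if self.container else 0

    def instantiate(self, name):
        # An element with potency 0 cannot be instantiated further.
        if self.potency == 0:
            raise ValueError(f"{self.name} has potency 0 and cannot be instantiated")
        # The instance lives one meta-level below, with potency decremented.
        return Element(name, self.potency - 1)

model = Element("ProcessModel", potency=2)
task = Element("Task", container=model)   # inherits potency 2 from the model
coding = task.instantiate("Coding")       # potency 1, intermediate level
c1 = coding.instantiate("c1")             # potency 0, bottom level
```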
      <p>As an example, the upper model in Fig. 1 contains a clabject Task with
potency 2, thus allowing the creation of types of tasks in the next meta-level (e.g.,
Coding), and their subsequent instantiation into concrete tasks in the bottom
meta-level (e.g., c1). Task defines two fields: name has potency 1 and therefore
it receives values in the intermediate level, while startDay has potency 2 and is
used to set the start day of specific tasks in the lowest meta-level.</p>
      <p>In MetaDepth, references with potency 2 (like next) need to be instantiated
at potency 1 (e.g., nextPhase), to be able to instantiate these latter instances at
level 0. The cardinality of a reference constrains the number of instances at the
meta-level right below. Thus, the cardinality of next controls the instantiations
at level 1, and the cardinality of nextPhase the ones at level 0.</p>
      <p>Constraints can be declared at any meta-level. In this case, the potency
states how many meta-levels below the constraint will be evaluated. Constraint
C1, which ensures uniqueness of task names, has potency 1, and therefore it will
be evaluated one meta-level below. Constraint C2 has potency 2, and hence it
states that two meta-levels below, the start day of a task must be less than the
start day of any task related to it by next references. C2 needs to refer to the
instances of the instances of the reference next, two levels below, but the (direct)
type of the instances two levels below is unknown beforehand because it depends
on the elements created at level 1. In the example, it means that C2 cannot make
use of nextPhase to constrain the models at level 0. Instead, to allow the access
to indirect instances of a given reference, MetaDepth offers two operations:</p>
      <p>[Fig. 1: the running example. The top level declares Task with fields name@1: String[0..1] and startDay: Integer, a reference next with cardinality *, and two constraints: C1@1: Task.allInstances()-&gt;excluding(self)-&gt;forAll( t | t.name &lt;&gt; self.name ) and C2: self.references('next')-&gt;forAll( r | self.value(r)-&gt;forAll( n | self.startDay &lt; n.startDay ) ). The intermediate level declares SoftwareEngineeringTask: Task with field final: boolean=false, reference nextPhase: next with cardinality 1..*, and constraint C3: self.startDay&gt;0; its subtypes are Coding: Task (name='Coding', with C4: self.final=false) and Testing: Task (name='Testing', with C5: self.final implies self.nextPhase-&gt;size()=0 and C6: Coding.allInstances()-&gt;exists(c | c.startDay = self.startDay )). The bottom level contains c1: Coding (final=false, startDay=1), c2: Coding (final=false, startDay=5), t1: Testing (final=false, startDay=1) and t2: Testing (final=true, startDay=5).]</p>
      <p>– references returns the name of the references that instantiate a given one. For
example, self.references('next') evaluated at clabject c1 yields Set{'nextPhase'},
as the type of c1 defines the reference nextPhase as an instance of next.</p>
      <p>– value returns the content of a reference with the given name. For example,
self.value('nextPhase') evaluated in clabject c1 yields Set{c2,t2}.</p>
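      <p>The two operations can be emulated outside MetaDepth by recording statically which references instantiate which. The following Python sketch is a hypothetical rendering of the running example (not the tool's API; names and values are taken from Fig. 1): it reproduces the two evaluations above and uses them to check C2 at level 0.</p>

```python
# Hypothetical emulation of MetaDepth's references and value operations.
# The instantiation map (nextPhase is an instance of next) is collected
# statically from level 1 of the running example.
REF_INSTANCES = {"next": {"nextPhase"}}

class Task:
    def __init__(self, name, start_day, next_phase=None):
        self.name = name
        self.startDay = start_day
        self.nextPhase = set(next_phase or [])

    def references(self, ref):
        # Names of the references that instantiate the given one.
        return REF_INSTANCES.get(ref, set())

    def value(self, ref):
        # Content of the reference with the given name.
        return getattr(self, ref, set())

c2 = Task("c2", 5)
t2 = Task("t2", 5)
c1 = Task("c1", 1, {c2, t2})

def check_C2(task):
    # C2 two levels below: the start day must increase along every
    # instance of the reference next.
    return all(task.startDay < n.startDay
               for r in task.references("next")
               for n in task.value(r))
```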
      <p>In the intermediate meta-level, constraints C3, C4, C5 and C6 have potency 1,
and thus they need to be satisfied by models at the subsequent meta-level. The
purpose of C3 is to enforce positive starting days, where note that the feature
startDay is defined one meta-level above. C4 ensures that Coding tasks are not
final, while C5 requires final Testing tasks to have no subsequent tasks. Finally,
C6 is an attempt to enable some degree of parallelization of tasks, where the
modeller wanted to express that any coding task should have a testing task
starting the same day.</p>
      <p>The lower meta-level in the figure is an unsuccessful attempt to instantiate
the model with potency 1. The modeller started adding the coding tasks c1
and c2 at days 1 and 5. Then, to satisfy the constraint C6, he added two testing
tasks starting at days 1 and 5 as well. Coding tasks must be followed by some
other task so that they are not left untested (controlled by the cardinality 1..* of
nextPhase), and the start day of consecutive tasks must be increasing (controlled
by constraint C2). Thus, the modeller connected the tasks as shown in the figure
to satisfy these constraints. However, he then realised that the coding task c2
needed to be followed by some other task. Connecting c2 with t2 is not a valid
solution because this would violate constraint C2. Therefore, how can he connect
the testing tasks to the coding tasks to satisfy all constraints? The next section
explains how model finders can help in this situation.</p>
    </sec>
    <sec id="sec-3">
      <title>Analysis of multi-level models: properties and scenarios</title>
      <p>
        A meta-model should satisfy some basic properties, like the possibility of creating
a non-empty instance that does not violate the integrity constraints. Several
works [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] rely on model finding techniques to check correctness properties of
meta-models in a standard two meta-level setting, like:
– Strong satisfiability: There is a meta-model instance that contains at least
one instance of every class and association.
– Weak satisfiability: There exists a non-empty instance of the meta-model.
– Liveliness of a class c: There exists some instance of the meta-model that
contains at least one instance of c.
      </p>
      <p>Model finding techniques can also be helpful in a multi-level setting. If we
consider level 0 in Fig. 1, a model finder can help model developers by providing
a suitable model completion, or indicating that no such completion exists. At
level 1, it can help to check the consistency of the integrity constraints at levels
1 and 2 through the analysis of the abovementioned correctness properties. At
level 2, it can provide example instantiations for levels 1 and 0, to ensure that
potencies of the different elements at the top-most level work as expected.</p>
      <p>However, model finders work in a two-level setting (i.e., they receive a
meta-model and produce an instance). To enable their use with several meta-levels,
we need flattening operations that merge several meta-levels into one, which can
then be the input to the finder. The flattening operations must take into account
how many meta-levels are going to be used in the analysis (depth of model), as
well as the number of meta-levels in the generated snapshot (height of snapshot).</p>
      <p>Fig. 2 shows the different scenarios we need to solve for the analysis of
multi-level models. Fig. 2(a) is the most usual case, where only the definition of the
topmost model is available, and we want to check whether this can be instantiated
at each possible meta-level below (2 in the case of having an upper model with
potency 2). Thus, in the figure, the depth of the model to be used in the analysis
is 1, while the height of the searched snapshot is 2. As standard model finders
only provide snapshots of models residing in one meta-level, we will need to
emulate the generation of several meta-levels within one.</p>
      <p>[Fig. 2: analysis scenarios. (a) depth of model: 1, height of snapshot: 2; (b) depth of model: 2, height of snapshot: 1; (c) depth of model: 1, height of snapshot: 1; (d) depth of model: 1, gap: 1, height of snapshot: 1.]</p>
      <p>In Fig. 2(b), the models of several successive meta-levels are given, and the
purpose is checking whether there is an instance at the next meta-level (with
potency 0) satisfying all integrity constraints in the provided models. This situation
arises when there is the need to check the correctness of the constraints
introduced at a meta-level (e.g., @1) with respect to those in the meta-levels above
(e.g., @2). This scenario would have helped in the analysis of the constraints at
levels 2 and 1 in Fig. 1. In Fig. 2(b), the depth of the model to be used in the
analysis is 2. Thus, we will need to flatten these two models into a single one,
which can be fed into a model finder for standard snapshot generation.</p>
      <p>Fig. 2(c) corresponds to the scenario that standard model finders are able to
deal with, where a model is given, and its satisfiability is checked by generating
an instance of it. However, the meta-model to be fed into the solver still needs
to be adjusted, removing constraints with potency greater than 1.</p>
      <p>Finally, in Fig. 2(d), only the top-most model is available, and the designer
is interested just in the analysis of the lowest meta-level. This can be seen as
a particular case of scenario (a), where after the snapshot generation, the
intermediate levels are removed. This scenario is of particular interest to verify
the existence of instances at the bottom level with certain characteristics, like
a given number of objects of a certain type, or to assess whether the designed
potencies for attributes work as expected.
</p>
    </sec>
    <sec id="sec-4">
      <title>Flattening multi-level models for analysis</title>
      <p>In this section, we use the running example to show how to flatten the depth of
models to be used in the search, and how to deal with the height of the searched
snapshot. Scenarios where both the height and depth are bigger than one are
also possible, and are resolved by combining the flattenings we present next.</p>
      <sec id="sec-4-1">
        <title>Depth of analysed model</title>
        <p>To analyse a model that is not at the top-most meta-level (like in Fig. 2(b),
where the goal is analysing level 1), we need to merge it with all its type models
at higher levels. This permits considering all constraints and attributes defined
at such higher levels. For simplicity, we assume the merging of just two
meta-levels. Fig. 3(a) shows this flattening applied to the running example: levels 1
and 2 are merged, as the purpose is creating a regular instance at level 0.</p>
        <p>First, the flattening handles the top level. All its clabjects ( Task in the
example) are set to abstract to disable their instantiation. All references are deleted
(next), as only the references defined at level 1 ( nextPhase) can be instantiated
at level 0. All attributes are kept since, if their potency is greater than or equal to the
depth (2) plus the height (1), they can still receive a default value. Constraints
with potency different from 2 (C1) are deleted as they do not constrain the level
we want to instantiate (level 0). As the figure shows, the notion of potency does
not appear in the flattened model. This can be interpreted as all elements having
potency 1.</p>
        <p>Then, the model at potency 1 is handled. For clabjects, the instantiation
relation is replaced by inheritance. In this way, SoftwareEngineeringTask is set to
inherit from Task instead of being an instance of it. This is not required for Coding
and Testing, as they already inherit from SoftwareEngineeringTask. This flattening
strategy allows clabjects in level 1 to naturally define all attributes that were
assigned potency 2 at level 2 (startDay), and receive a value at level 0. For attributes
that receive a value at level 1, we need to emulate their value using constraints.
Thus, in the example, we substitute the slot name of Coding and Testing by
constraints C7 and C8. The attributes, references, and constraints defined by the
clabjects at level 1 are kept. Finally, we generate two operations, references and
value, to emulate the homonymous MetaDepth built-in operations by collecting
the knowledge about reference instantiation statically. That is, they encode that
nextPhase in Coding and Testing is an instance of next. As a result of this flattening,
we can use a model finder to check whether there is a valid instance at level 0.</p>
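        <p>The flattening steps above can be sketched as follows. This is a hypothetical Python sketch over a simple dictionary representation of models (our assumption, not MetaDepth's data structures): top-level clabjects become abstract and lose their references, only potency-2 constraints survive, instantiation becomes inheritance, and slots become constraints.</p>

```python
# Hypothetical sketch of the depth-2 flattening: merge a model and its
# meta-model into a single meta-model for a standard two-level finder.
def flatten_depth2(top, mid):
    merged = {"classes": {}, "constraints": []}
    # Top level: clabjects become abstract, their references are dropped,
    # attributes are kept, and only constraints with potency 2 survive.
    for name, c in top["classes"].items():
        merged["classes"][name] = {"abstract": True, "supers": [],
                                   "attrs": dict(c["attrs"]), "refs": {}}
    merged["constraints"] += [c for c in top["constraints"] if c["potency"] == 2]
    # Mid level: instantiation becomes inheritance; slots become constraints.
    for name, c in mid["classes"].items():
        merged["classes"][name] = {"abstract": False,
                                   "supers": c.get("supers") or [c["type"]],
                                   "attrs": dict(c.get("attrs", {})),
                                   "refs": dict(c.get("refs", {}))}
        for attr, val in c.get("slots", {}).items():
            merged["constraints"].append(
                {"potency": 1, "ocl": f"self.{attr}='{val}'"})
    merged["constraints"] += mid["constraints"]
    return merged

# A fragment of the running example (C1 has potency 1, C2 potency 2).
top = {"classes": {"Task": {"attrs": {"name": "String", "startDay": "Integer"}}},
       "constraints": [{"potency": 1, "ocl": "C1"}, {"potency": 2, "ocl": "C2"}]}
mid = {"classes": {"Coding": {"type": "Task", "slots": {"name": "Coding"},
                              "refs": {"nextPhase": "Task"}}},
       "constraints": [{"potency": 1, "ocl": "C4"}]}
m = flatten_depth2(top, mid)
```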
      </sec>
      <sec id="sec-4-2">
        <title>Height of snapshot generation</title>
        <p>In this scenario, we need to emulate the search of a set of models spanning
several meta-levels. For simplicity, we assume the scenario in Fig. 2(a), aimed at
generating two models in consecutive levels from a meta-model with potency 2.</p>
        <p>A possible strategy is to split each clabject C with potency 2 into two classes
CType and CInstance holding the attributes, references and constraints with
potency 1 and 2, respectively, and related by a reference type. However, this
solution gets cumbersome if C is related to or inherits from other clabjects, and requires
rewriting the constraints in terms of the newly introduced types and relations.</p>
        <p>Another possibility is to proceed in two steps: first a model of potency 1 is
generated, which is promoted into a meta-model that can be instantiated into a
model of potency 0. However, this solution may require rewriting the constraints
with potency 2 in terms of the types generated at potency 1. Moreover, it does
not consider all constraints at a time, which may result in different attempts
before two valid models at potencies 1 and 0 are obtained.</p>
        <p>Instead, we propose the flattening in Fig. 3(b), which adds an abstract parent
class Clabject that makes explicit typical clabject features, like ontological typing
and potency. All constraints are kept, but they need to be changed slightly to
take their potency into account. Thus, C1 is given the premise self.potency=1
implies... so that it only applies to tasks with potency 1, and similarly for
constraint C2 for tasks at potency 0. To emulate the potency of attributes, they are
set to optional (cardinality [0..1]), and we add constraints ensuring that the
attributes are undefined in the meta-levels where they cannot be instantiated. For
example, name originally has potency 1, and hence it can only receive a value in
tasks of potency 1 (constraint C12). Constraint C13 ensures that references (like
next) do not cross meta-levels and are correctly instantiated at every meta-level.
The latter means that, if two tasks at potency 0 are related by a next reference,
then their types must be related via a next reference as well. While this does
not fully capture the instantiation semantics, as there is no explicit “instance-of”
relation between references with different potencies, it suffices for our purposes.
Finally, constraints C9 to C11 ensure correct potency values for types and instances.</p>
        <p>[Fig. 3(a), scenario with depth=2 and height=1: Task is abstract with fields name: String[0..1] and startDay: Integer, and keeps C2: self.references('next')-&gt;forAll( r | self.value(r)-&gt;forAll( n | self.startDay &lt; n.startDay ) ). SoftwareEngineeringTask (with final: boolean=false and C3: self.startDay&gt;0) inherits from Task; Coding (C4: self.final=false, C7: self.name='Coding') and Testing (C5: self.final implies self.nextPhase-&gt;size()=0, C6: Coding.allInstances()-&gt;exists(c | c.startDay = self.startDay ), C8: self.name='Testing') inherit from SoftwareEngineeringTask, with the reference nextPhase (1..*). The generated operations are: operation references (r:String) : Set(String) = if (r='next') then Set{'nextPhase'} else Set{} endif; operation value (r:String) : Set(Task) = if (r='nextPhase') then self.nextPhase else Set{} endif.]</p>
        <p>[Fig. 3(b), scenario with depth=1 and height=2: an abstract class Clabject declares potency: int and the references type (0..1) and instance (*), with C9: self.potency&gt;=0 and self.potency&lt;=1; C10: not self.type.oclIsUndefined() implies self.potency = self.type.potency - 1; and C11: (self.potency=0 implies self.instance-&gt;size() = 0) and (self.potency&lt;1 implies not self.type.oclIsUndefined()). Task declares name: String[0..1], startDay: Integer[0..1] and next (*), with the guarded constraints C1: self.potency=1 implies Task.allInstances()-&gt;excluding(self)-&gt;forAll( t | t.name&lt;&gt;self.name ); C2: self.potency=0 implies self.references('next')-&gt;forAll( r | self.value(r)-&gt;forAll( n | self.startDay &lt; n.startDay ) ); C12: self.potency &lt; 1 implies self.name.oclIsUndefined(); and C13: self.next-&gt;forAll( n | (self.potency = n.potency) and ((not self.type.oclIsUndefined()) implies self.type.oclAsType(Task).next-&gt;exists(ntype | ntype = n.type))). The operations become: operation references (r:String) : Set(String) = if (r='next') then Set{'next'} else Set{} endif; operation value (r:String) : Set(Task) = if (r='next') then self.next else Set{} endif.]</p>
        <p>As an example, the figure to the right shows a snapshot with
height 2, generated by USE from the definition in Fig. 3(b).</p>
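        <p>The guarding of constraints by potency and the C9 to C11 well-formedness rules can be sketched as follows, in a hypothetical Python rendering (the simple object model and names are our assumptions): a constraint of potency p defined at the potency-2 top level applies to objects of potency 2 - p, and the potencies of type/instance pairs must be consistent.</p>

```python
# Hypothetical sketch of the height-2 flattening: objects carry an explicit
# potency and an optional ontological type, and constraints get a potency guard.
def guard(constraint_ocl, potency):
    # A constraint with potency p, declared at the potency-2 top model,
    # applies to objects of potency 2 - p in the flattened model.
    return f"self.potency={2 - potency} implies ({constraint_ocl})"

class Obj:
    def __init__(self, potency, type_=None):
        self.potency = potency
        self.type = type_
        self.instances = []
        if type_ is not None:
            type_.instances.append(self)

def well_formed(o):
    ok = 0 <= o.potency <= 1                              # C9: legal potency range
    if o.type is not None:
        ok = ok and o.potency == o.type.potency - 1       # C10: potency decreases by 1
    ok = ok and (o.potency != 0 or not o.instances)       # C11: potency 0 has no instances
    ok = ok and (o.potency >= 1 or o.type is not None)    # C11: potency 0 must be typed
    return ok

coding = Obj(potency=1)            # a "type" generated at level 1
c1 = Obj(potency=0, type_=coding)  # its instance at level 0
```

<p>With this encoding, a single snapshot produced by the finder contains both the potency-1 and potency-0 objects, emulating two meta-levels within one.</p>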
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Automating the analysis of constraints</title>
      <p>
        Next, we illustrate our method by checking the satisfiability of MetaDepth
multi-level models with the USE Validator [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] for model finding. The checking
includes the following steps: (1) flattening of the multi-level model according to the
selected scenario; (2) translation of the flattened model into the input format of USE;
(3) snapshot generation with the USE Validator tool; and (4) translation of the
results back to MetaDepth. We will demonstrate these steps for the running
example, considering the scenario in Fig. 2(b), i.e., we start from models at
potency 2 and 1, and check their instantiability at potency 0.
      </p>
      <p>Fig. 3(a) shows the merging of levels 1 and 2 for the running example, while
part of its translation into the input format of USE is listed below. The USE
Validator does not currently support solving with arbitrary strings; instead, string
literals must adhere to the format 'string&lt;number&gt;'. Thus, we translate strings to this
format, where 'next' is substituted by 'string0' (lines 12 and 27), 'nextPhase' by
'string1' (lines 28 and 31), and so on.</p>
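      <p>The translation of string literals can be sketched as a simple first-appearance numbering. This hypothetical Python sketch (our own helper, not part of USE or MetaDepth) reproduces the mapping used above.</p>

```python
# Hypothetical sketch of the string translation for the USE Validator:
# each distinct string literal is mapped to 'string<number>' in order of
# first appearance.
def normalize_strings(literals):
    mapping = {}
    for lit in literals:
        if lit not in mapping:
            mapping[lit] = f"string{len(mapping)}"
    return mapping

# Literals in order of first appearance in the flattened running example.
table = normalize_strings(["next", "nextPhase", "Coding", "Testing"])
```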
      <p>If we try to find a valid instance of this definition, we discover that the only
model satisfying all constraints is the empty model. Thus, the model at level 1 is
neither weakly nor strongly satisfiable. Revising the constraints at level 1, we realise
that C6 does not express what the designer had in mind (that for any coding
task, there should be a testing task starting the same day), but the
converse (i.e., for each testing task, a coding task exists). One solution is moving
constraint C6 from class Testing to Coding, modified to iterate on all instances of
Testing (i.e., Testing.allInstances()...). If we perform this change, the resulting model
becomes satisfiable, and the USE Validator generates the snapshots in Fig. 4.</p>
      <p>Weak satisfiability is checked by finding a valid non-empty model. The USE
Validator allows configuring the minimum and maximum number of objects and
references of each type in the generated model. If we set a lower bound of 1 for
class Testing, we obtain the instance model shown on the left of Fig. 4. Strong
satisfiability is checked by finding a model that contains an instance of every class
and reference. By assigning a lower bound of 1 to all types, the USE Validator finds
the model on the right of Fig. 4.</p>
      <p>[Fig. 5 (left), OCL constraint encoding the seed model: Coding.allInstances()-&gt;exists(c1,c2 | Testing.allInstances()-&gt;exists(t1,t2 | c1.final = false and c2.final = false and t1.final = false and t2.final = true and c1.startDay = 1 and c2.startDay = 5 and t1.startDay = 1 and t2.startDay = 5 and c1.nextPhase-&gt;includes(c2,t2) and t1.nextPhase-&gt;includes(t2) )).]</p>
      <p>If the scenario to solve is completing a partial model, like the one at the
bottom level of Fig. 1, we need to provide a seed model for the search. This can
be emulated by an additional constraint demanding the existence of the starting
model structure. Fig. 5 shows the OCL constraint representing our example
model at level 0 (left), as well as the complete valid model found by USE (right).
</p>
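      <p>Emulating a seed model amounts to serializing the partial instance into one OCL exists-constraint. The following hypothetical Python sketch (our own helper, not a tool feature) builds such a constraint from a list of objects and their feature values.</p>

```python
# Hypothetical sketch: serialize a partial model into an OCL exists-constraint
# that demands the existence of the starting model structure.
def seed_constraint(objs):
    # objs: list of (variable, class, {feature: value}) triples.
    by_class = {}
    for var, cls, _ in objs:
        by_class.setdefault(cls, []).append(var)
    # Conjunction of feature facts for all seed objects.
    facts = " and ".join(f"{var}.{feat}={val}"
                         for var, _, feats in objs
                         for feat, val in feats.items())
    # Wrap the facts in nested exists, one per class.
    expr = facts
    for cls, vars_ in reversed(list(by_class.items())):
        expr = f"{cls}.allInstances()->exists({','.join(vars_)} | {expr})"
    return expr

ocl = seed_constraint([("c1", "Coding", {"startDay": 1}),
                       ("t1", "Testing", {"startDay": 1})])
```

<p>Adding the resulting constraint to the flattened model makes the finder search only for completions of the given partial model.</p>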
    </sec>
    <sec id="sec-6">
      <title>Related work</title>
      <p>
        Some multi-level approaches have an underlying semantics based on constraints,
like Nivel [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which is based on WCRL. This allows some decidable, automated
reasoning procedures on Nivel models, but Nivel lacks support for integrity
constraints beyond multiplicities.
      </p>
      <p>
        There are several tools to validate the satisfiability of integrity constraints in
a two-level setting. We have illustrated our method with the USE Validator [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ],
which translates a UML model and its OCL constraints into relational logic, and
uses a SAT solver to check its satisfiability. UML2Alloy [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] follows a similar
approach. Instead, UMLtoCSP [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and EMFtoCSP [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] transform the model into a
constraint satisfaction problem (CSP) to check its satisfiability, and ocl2smt [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
translates it into a set of operations on bit-vectors which can be solved by SMT
solvers. The approach of Queralt [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] uses resolution and Clavel [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] maps a subset
of OCL into first-order logic and employs SMT solvers to check unsatisfiability.
HOL-OCL [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is a theorem proving environment for OCL. In contrast to the
enumerated tools, it does not rely on bounded model finding, but is able to
prove complex properties of UML/OCL specifications. All these works
consider two meta-levels, and could be used to solve the multi-level scenarios in
Section 3, once they have been translated into a two-level setting. Other works
use constraint solving for model completion [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], also in a two-level setting.
      </p>
      <p>Altogether, the use of model finders to verify properties in models is not
novel. However, to the best of our knowledge, ours is the first work targeting the
analysis of integrity constraints in multi-level models.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusions and future work</title>
      <p>In this paper, we have proposed a method to check the satisfiability of constraints
in multi-level models using “off-the-shelf” model finders. To this aim, the method
proposes flattenings that depend on the number of levels fed to the finder and
the height of the generated snapshot. The method has been illustrated using
MetaDepth and the USE Validator.</p>
      <p>
        Currently, we are working towards a tighter integration of the USE validator
with MetaDepth, providing commands to, e.g., complete a given model. We
also plan to analyse other properties, like independence of constraints [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
Acknowledgements. This work has been funded by the Spanish Ministry of
Economy and Competitiveness through project “Go Lite” (TIN2011-24139).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>K.</given-names>
            <surname>Anastasakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bordbar</surname>
          </string-name>
          , G. Georg, and I. Ray.
          <article-title>UML2Alloy: a challenging model transformation</article-title>
          .
          <source>In MoDELS</source>
          , volume
          <volume>4735</volume>
          <source>of LNCS</source>
          , pages
          <fpage>436</fpage>
          -
          <lpage>450</lpage>
          . Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>T.</given-names>
            <surname>Asikainen</surname>
          </string-name>
          and T. Männistö.
          <article-title>Nivel: a metamodelling language with a formal semantics</article-title>
          .
          <source>Software and Systems Modeling</source>
          ,
          <volume>8</volume>
          (
          <issue>4</issue>
          ):
          <fpage>521</fpage>
          -
          <lpage>549</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>C.</given-names>
            <surname>Atkinson</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kühne</surname>
          </string-name>
          .
          <article-title>The essence of multilevel metamodeling</article-title>
          .
          <source>In UML</source>
          , volume
          <volume>2185</volume>
          <source>of LNCS</source>
          , pages
          <fpage>19</fpage>
          -
          <lpage>33</lpage>
          . Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>C.</given-names>
            <surname>Atkinson</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kühne</surname>
          </string-name>
          .
          <article-title>Reducing accidental complexity in domain models</article-title>
          .
          <source>Software and Systems Modeling</source>
          ,
          <volume>7</volume>
          (
          <issue>3</issue>
          ):
          <fpage>345</fpage>
          -
          <lpage>359</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Brucker</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Wolff</surname>
          </string-name>
          .
          <article-title>HOL-OCL: a formal proof environment for UML/OCL</article-title>
          .
          <source>In FASE</source>
          , volume
          <volume>4961</volume>
          <source>of LNCS</source>
          , pages
          <fpage>97</fpage>
          -
          <lpage>100</lpage>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.</given-names>
            <surname>Cabot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Clarisó</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Riera</surname>
          </string-name>
          .
          <article-title>UMLtoCSP: a tool for the formal verification of UML/OCL models using constraint programming</article-title>
          .
          <source>In ASE</source>
          , pages
          <fpage>547</fpage>
          -
          <lpage>548</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J.</given-names>
            <surname>Cabot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Clarisó</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Riera</surname>
          </string-name>
          .
          <article-title>On the verification of UML/OCL class diagrams using constraint programming</article-title>
          .
          <source>Journal of Systems and Software</source>
          ,
          <volume>93</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>M.</given-names>
            <surname>Clavel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Egea</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. A. G.</given-names>
            <surname>de Dios</surname>
          </string-name>
          .
          <article-title>Checking unsatisfiability for OCL constraints</article-title>
          .
          <source>Electronic Communications of the EASST</source>
          ,
          <volume>24</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>J.</given-names>
            <surname>de Lara</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Guerra</surname>
          </string-name>
          .
          <article-title>Deep meta-modelling with MetaDepth</article-title>
          .
          <source>In TOOLS</source>
          , volume
          <volume>6141</volume>
          <source>of LNCS</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>J.</given-names>
            <surname>de Lara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Guerra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cobos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Moreno-Llorena</surname>
          </string-name>
          .
          <article-title>Extending deep metamodelling for practical model-driven engineering</article-title>
          .
          <source>Comput. J.</source>
          ,
          <volume>57</volume>
          (
          <issue>1</issue>
          ):
          <fpage>36</fpage>
          -
          <lpage>58</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>M.</given-names>
            <surname>Gogolla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hamann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhlmann</surname>
          </string-name>
          .
          <article-title>Proving and visualizing OCL invariant independence by automatically generated test cases</article-title>
          .
          <source>In TAP</source>
          , volume
          <volume>6143</volume>
          <source>of LNCS</source>
          , pages
          <fpage>38</fpage>
          -
          <lpage>54</lpage>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>C. A.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Büttner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Clarisó</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Cabot</surname>
          </string-name>
          .
          <article-title>EMFtoCSP: a tool for the lightweight verification of EMF models</article-title>
          .
          <source>In FormSERA</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhlmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hamann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Gogolla</surname>
          </string-name>
          .
          <article-title>Extensive validation of OCL models by integrating SAT solving into USE</article-title>
          .
          <source>In TOOLS (49)</source>
          , volume
          <volume>6705</volume>
          <source>of LNCS</source>
          , pages
          <fpage>290</fpage>
          -
          <lpage>306</lpage>
          . Springer,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>A.</given-names>
            <surname>Queralt</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Teniente</surname>
          </string-name>
          .
          <article-title>Verification and validation of UML conceptual schemas with OCL constraints</article-title>
          .
          <source>TOSEM</source>
          ,
          <volume>21</volume>
          (
          <issue>2</issue>
          ):
          <fpage>13</fpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>S.</given-names>
            <surname>Sen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Baudry</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Vangheluwe</surname>
          </string-name>
          .
          <article-title>Towards domain-specific model editors with automatic model completion</article-title>
          .
          <source>Simulation</source>
          ,
          <volume>86</volume>
          (
          <issue>2</issue>
          ):
          <fpage>109</fpage>
          -
          <lpage>126</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>M.</given-names>
            <surname>Soeken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhlmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gogolla</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Drechsler</surname>
          </string-name>
          .
          <article-title>Verifying UML/OCL models using boolean satisfiability</article-title>
          .
          <source>In DATE</source>
          , pages
          <fpage>1341</fpage>
          -
          <lpage>1344</lpage>
          . IEEE,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>