<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Decomposition of Test Cases in Model-Based Testing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marcel Ibe</string-name>
          <email>marcel.ibe@tu-clausthal.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Clausthal University of Technology, Clausthal-Zellerfeld</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>40</fpage>
      <lpage>47</lpage>
      <abstract>
<p>For decades, software testing has been a fundamental part of software development, and in recent years model-based testing has become more and more important. Model-based testing approaches enable the automatic generation of test cases from models of the system to be built. But manually derived test cases are still more efficient at finding failures. To reduce the effort while keeping the advantages of manually derived test cases, a decomposition of test cases is introduced. This decomposition has to be adapted to the decomposition of the system model. The objective of my PhD thesis is to analyse these decompositions and develop a method to transfer them to the test cases. This allows the reuse of manually derived test cases at different phases of a software development project.</p>
      </abstract>
      <kwd-group>
        <kwd>model-based testing</kwd>
        <kwd>model decomposition</kwd>
        <kwd>test case decomposition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
During a software development project, testing is one of the most important
activities for ensuring the quality of a software system. About 30 to 60 per cent
of the total effort within a project is spent on testing [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
]. This value
has not changed over the last three decades, even though testing is a key
aspect of research and constantly improved methods and tools are available.
One fundamental problem of testing is that it is not possible to show the
complete absence of errors in a software system [
        <xref ref-type="bibr" rid="ref7">7</xref>
]. Nevertheless, by executing enough test cases a certain level of correctness
can be ensured. The number of test cases must not be too large, however;
otherwise testing the system would no longer be efficient. One of the most
important challenges is therefore to create a good set of test cases: the number
of test cases should be minimal while testing as much of the system's behaviour
as possible.
      </p>
      <p>
Model-based testing is one technique that addresses this problem. A potentially
infinite set of test cases is generated from the test model, an abstract model
of the system to be constructed. Based on a test case specification, a finite set of
these generated test cases can be selected [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. These test cases can be executed
manually or automatically.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref20">20</xref>
] a distinction is made between four levels of testing: acceptance testing,
system testing, integration testing and component or unit testing. The focus
here is on integration testing and unit testing. Furthermore, testing approaches
can be divided by the kind of model from which the test cases are generated [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Tretmans describes an approach to generating test cases from labelled transition
systems [
        <xref ref-type="bibr" rid="ref17">17</xref>
]. He introduces the ioco-testing theory, which makes it possible to define,
for example, when a test case has passed. He also introduces an algorithm for
generating test cases from labelled transition systems. Several tools
implement the ioco-testing theory, for example TorX [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], TestGen [
        <xref ref-type="bibr" rid="ref10">10</xref>
] or the
AGEDIS tools [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
Jaffuel and Legeard presented an approach in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] that generates test cases for
functional testing. The test model is described by the B-notation [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. Different
coverage criteria allow the selection of test cases.
      </p>
      <p>
        Another approach was described in [
        <xref ref-type="bibr" rid="ref13">13</xref>
] by Katara and Kervinen. It is based on
so-called action machines and refinement machines. These are also labelled
transition systems, with keywords as labels. Keyword-based scenarios are defined by
use cases. They are then mapped to the action machines and detailed by the
refinement machines.
      </p>
      <p>
An approach that generates test cases for service-oriented software systems from
activity diagrams is introduced in [
        <xref ref-type="bibr" rid="ref8">8</xref>
]. Test stories are derived from activity
diagrams. These test stories are the basis for generating test code. Several coverage
criteria can be checked by constraints.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref15">15</xref>
] Ogata and Matsuura describe an approach that is also based on activity
diagrams. It allows the creation of test cases for integration testing. Use cases
from use case diagrams are refined by activity diagrams. For every system or
component that is involved in an activity diagram there is a separate partition.
So it is possible to select only those actions from the diagram which define the
interface between the systems or the components. The test cases can then be
generated from these actions.
      </p>
      <p>
        Blech et al. describe an approach in [
        <xref ref-type="bibr" rid="ref4">4</xref>
] which allows reusing test cases at
different levels of abstraction. For that purpose, relations between more abstract
and more concrete models are introduced. They then try to prove that the
more concrete model is in fact a refinement of the more abstract model. That
approach is based on the work of Aichernig [
        <xref ref-type="bibr" rid="ref2">2</xref>
]. He used the refinement calculus
from Back and von Wright [
        <xref ref-type="bibr" rid="ref3">3</xref>
] to create test cases from requirements specifications by abstraction.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref5">5</xref>
] Briand et al. introduce an approach for impact analysis. With it, it is
possible to determine which test cases are affected by changes to the model.
The test cases are divided into different categories and can be handled
accordingly. That approach was developed to determine the reusability of test cases
for regression testing when the specification changes.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Problem</title>
<p>Nowadays there are many approaches that can automatically generate test cases
for different kinds of tests from a model of a system. The advantage
of these approaches is that the effort for test case generation is low.
Furthermore, a test specification can ensure that the test cases meet several
test coverage criteria. What is not considered is the efficiency of the test cases:
a set of generated test cases can more or less test the whole system,
but this may require a huge number of test cases. Manually
derived test cases, in contrast, can be much more efficient. Because of their
experience, test architects are able to derive test cases that exercise the most
error-prone parts of a system, so a smaller set of test cases can cover a large
and important part of the system. But such a manual test case derivation is more
expensive than automatic generation, and this additional effort cannot be offset
by the smaller number of test cases that have to be executed.</p>
<p>During a software development project there are different kinds of tests that test
the system or parts of it, and for every kind of test new
test cases are necessary. From the creation of test cases for system testing
from the requirements to the creation of test cases for integration and unit testing
from the architecture, at every test case creation there is the choice between
manual test case derivation and automatic test case generation, with all
their advantages and disadvantages. The more complex the model of the system
gets, the greater the advantage of automatic generation over manual
derivation, because at some point a model is no longer manageable for a person.
Generally the requirements model is much smaller than the architecture,
because the latter contains much more additional information about the inner
structure and behaviour of the system. Therefore, manual test case derivation
is more reasonable for system testing than for integration or unit testing. But then
the advantages of the manually derived test cases are limited to system testing.
The approach introduced in the following section should automatically
transfer the advantages of manually derived test cases for system testing to test
cases for integration and unit testing. This is done by decomposing the test cases.
In this way the information that was added to the test cases during derivation
can be reused for further test cases, but without the effort of another manual
test case derivation.</p>
<p>The question that should be answered in the doctoral thesis is: Can the
advantages of manually derived test cases over automatically generated ones be
transferred to another level of abstraction by an automatic decomposition of
these test cases?</p>
    </sec>
    <sec id="sec-4">
      <title>Proposed Solution</title>
<p>To use the advantages of manually derived test cases at least once in the
project, a set of test cases has to be derived manually. As stated above, the test
cases for system testing are suitable for this. They can be derived from the
requirements model, which does not contain details about the system such as its
internal structure or behaviour. So the test architect can focus solely on the
functions of the complete system and thus obtain a set of test cases that tests
the system against its requirements. Based on the requirements, an architecture
of the system is then created. This architecture is successively refined and
decomposed. For example, the system itself can be decomposed into several
components, which can in turn be decomposed into subcomponents. The functions can
be decomposed analogously into subfunctions, which are provided by the
components. To test the particular subfunctions and the interaction of the components,
integration and unit tests are executed. For these, test cases are required again.
They could be derived from the architecture, but that would entail much
additional effort. Another option is automatic generation, but that would mean
losing the advantages of manually derived test cases. A third option is to reuse
the manually derived test cases from system testing. To do this, the following
problem has to be solved. In the meantime, additional information has been added
to the architecture, for example about the internal composition or
subfunctions of the system. The test cases also need this information; for instance,
it is not possible to test a function if no test case has information
about the existence of that function. Hence, the refinements and decompositions
that were made to the architecture must also be made to the test cases. That
means the test cases also have to be decomposed. After that, the test cases from
system testing can be used as a basis for test cases for integration and unit testing.
A manual re-derivation of test cases is no longer necessary. Figure 1 shows
this process schematically.</p>
      <p>
To illustrate what such a decomposition of test cases could look like, it is shown
using the Common Component Modelling Example (CoCoME) [
        <xref ref-type="bibr" rid="ref11">11</xref>
]. CoCoME is
the component-based model of the trading system of a supermarket. Here we
focus only on the CashDesk component of the trading system. The requirements
model contains the trading system itself as well as other systems and actors in its
environment. Besides the models that describe the static structure of the system,
the behaviour is described by use cases. One such use case is, for example, the
handling of the express mode of the cash desk. Under certain conditions a cash
desk can switch into express mode. That means a customer can buy a maximum
of eight products at that cash desk and has to pay cash; card payment is no
longer allowed. The cashier can always switch off the express mode at his
cash desk. Figure 2 shows the system environment and an excerpt of the use case
Manage Express Checkout. A test case that tests the management of the express
mode could consist of the following three steps:
1. The cashier presses the button DisableExpressMode at his cash desk.
2. The cash desk ensures that the colour of the light display changes to black.
3. The cash desk ensures that the card reader accepts credit cards again and
card payment is allowed.
      </p>
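<p>For illustration, such a test case can be captured in a small data structure in which each step can later receive sub-steps during decomposition; the class and field names are a hypothetical sketch, only the step texts come from the example above:</p>
<preformat>
```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    """One step of a test case; sub_steps stay empty until decomposition."""
    text: str
    sub_steps: list = field(default_factory=list)

@dataclass
class TestCase:
    name: str
    steps: list

# The system-level test case for managing the express mode.
manage_express_mode = TestCase(
    name="Disable express mode",
    steps=[
        TestStep("The cashier presses the button DisableExpressMode at his cash desk."),
        TestStep("The cash desk ensures that the colour of the light display changes to black."),
        TestStep("The cash desk ensures that the card reader accepts credit cards again "
                 "and card payment is allowed."),
    ],
)

print(len(manage_express_mode.steps))
```
</preformat>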
<p>In the next step the complete system is decomposed into several
components. One of these components is the CashDeskLine, which in turn contains a set of
CashDesk subcomponents. The description of the behaviour, in this case the
management of the express mode, is also decomposed into smaller steps (see Figure
3).</p>
<p>Similarly, the test case that was defined above has to be decomposed to test
the new subfunctions. After the decomposition it would look as follows:
1. The cashier presses the button DisableExpressMode at his cash desk.
(a) The cashier presses the button DisableExpressMode at his cash box.
(b) The cash box sends an ExpressModeDisableEvent to the cash box
controller.
(c) The cash box controller sends an ExpressModeDisableEvent to the cash
desk application.
2. The cash desk ensures that the colour of the light display changes to black.
(a) The cash desk application sends an ExpressModeDisabledEvent to the
light display controller.
3. The cash desk ensures that the card reader accepts credit cards again and
card payment is allowed.
(a) The cash desk application sends an ExpressModeDisabledEvent to the
card reader controller.</p>
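<p>Such a decomposition can be applied mechanically: each top-level step keeps its text and gains sub-steps from a mapping which, in the proposed approach, would be derived automatically from the architecture decomposition. Here the mapping is written out by hand as an illustrative sketch:</p>
<preformat>
```python
# Sub-steps per top-level step, as introduced by the architecture refinement.
refinements = {
    1: ["The cashier presses the button DisableExpressMode at his cash box.",
        "The cash box sends an ExpressModeDisableEvent to the cash box controller.",
        "The cash box controller sends an ExpressModeDisableEvent to the cash desk application."],
    2: ["The cash desk application sends an ExpressModeDisabledEvent to the light display controller."],
    3: ["The cash desk application sends an ExpressModeDisabledEvent to the card reader controller."],
}

original_steps = {
    1: "The cashier presses the button DisableExpressMode at his cash desk.",
    2: "The cash desk ensures that the colour of the light display changes to black.",
    3: "The cash desk ensures that the card reader accepts credit cards again.",
}

def decompose(steps, refinements):
    """Pair every unchanged top-level step with its new sub-steps."""
    return {n: (text, refinements.get(n, [])) for n, text in steps.items()}

for n, (text, subs) in decompose(original_steps, refinements).items():
    print(f"{n}. {text}")
    for i, sub in enumerate(subs):
        print(f"   ({chr(ord('a') + i)}) {sub}")
```
</preformat>
<p>The top-level steps stay unchanged, mirroring that the original system test is preserved, while the sub-steps address the newly introduced components.</p>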
<p>The steps at the top level are identical to those of the original test case.
Because of the information about the additional components and the
communication between them, a few new test steps are necessary at the second level.
Now these new components and their communication can also be tested by this
test case. So the test case for system testing that was created from the
requirements can also be used for integration and unit testing.</p>
<p>To perform that test case decomposition, the following challenges have to be addressed:
- Definition of associations between the requirements or the first architecture and
the manually derived test cases. This is necessary to transfer the
decompositions that are made to the architecture to the test cases.
- Tracing of the elements of the architecture during further development, so
that it can be decided which elements of the requirements or architecture are
decomposed.
- Definition of the decomposition of the test cases. Once it has been
established how the elements of the architecture are decomposed and the
corresponding test cases have been identified, it can be analysed how the test cases
have to be decomposed according to the decomposition of the architecture
elements.
- Automatic transfer of the decomposition steps from the architecture to the
test cases. For this, all possible decomposition steps have to be
analysed and classified. After that they can be detected automatically and the
corresponding test cases can be decomposed accordingly.</p>
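<p>A minimal sketch of the first two challenges, under the assumption of a hand-maintained association table (all identifiers are hypothetical): test steps are associated with architecture elements, and a recorded decomposition trace then reveals which test steps have to be decomposed:</p>
<preformat>
```python
# Which test steps reference which architecture element (first challenge).
associations = {
    "CashDesk": ["step-1", "step-2", "step-3"],
}

# Recorded during detailed design (second challenge): CashDesk was decomposed.
decomposition_trace = [
    ("CashDesk", ["CashBox", "CashBoxController", "CashDeskApplication"]),
]

def affected_steps(trace, associations):
    """For every recorded decomposition, report which test steps must be
    decomposed as well, together with the new subcomponents."""
    result = {}
    for element, parts in trace:
        for step in associations.get(element, []):
            result[step] = parts
    return result

print(affected_steps(decomposition_trace, associations))
```
</preformat>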
    </sec>
    <sec id="sec-5">
      <title>Contribution, Evaluation and Validation</title>
<p>The objective of this PhD thesis is to develop an approach for decomposing test
cases analogously to the decomposition of the corresponding system under test. That
approach is based upon findings about how the decomposition of a system influences
the corresponding test cases. In a further step the approach shall be implemented
as a prototype. After that the prototype can be evaluated and compared to other
implementations of model-based testing approaches.</p>
<p>Within the next year the decomposition steps of a system and their influence on
the corresponding test cases shall be analysed. For this, the changes of individual
model elements during detailed design have to be traced, especially how they are
extended with information about their interior structure and behaviour. Another
important aspect is the relation between model elements and test steps. With this
knowledge it is possible to adapt the test cases after a decomposition of the
system in such a way that the test cases also cover the added information about
structure and behaviour.</p>
<p>In the following six months a first prototype shall be implemented. It is intended
to evaluate this prototype within a student project. To see how efficient the test
cases derived with this approach are, a set of manually derived test
cases is compared with a set of automatically generated ones. After this the
manually derived test cases are decomposed. In the next step the decomposed
test cases are compared with newly generated test cases. In each
case, the average number of failures that are detected by the test cases and how
serious these failures are for the functioning of the system are compared. The
findings from this first evaluation are integrated into the approach and the
prototype during the next six months. After that finalisation the new prototype
should be set up in an industrial project and compared with other model-based
tools in use.</p>
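<p>The planned comparison could, for instance, be based on a severity-weighted average of detected failures per test case; the following sketch uses invented numbers purely to illustrate such a metric, not actual evaluation data:</p>
<preformat>
```python
def score(test_set):
    """Average severity-weighted number of detected failures per test case.
    Each entry is (list of failures found, severity weight)."""
    total = sum(len(failures) * severity for failures, severity in test_set)
    return total / len(test_set)

# Invented example data for two test sets.
manual    = [(["f1", "f2"], 3), (["f3"], 2)]
generated = [(["f1"], 1), ([], 1), (["f2"], 2)]

print(score(manual), score(generated))
```
</preformat>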
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abrial</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
:
<source>The B-Book: Assigning Programs to Meanings</source>
. Cambridge University Press (
          <year>Nov 2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aichernig</surname>
            ,
            <given-names>B.K.</given-names>
          </string-name>
          :
<article-title>Test-design through abstraction - a systematic approach based on the refinement calculus</article-title>
          .
          <source>j-jucs 7(8)</source>
          ,
          <volume>710</volume>
-735 (Aug
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Back</surname>
            ,
            <given-names>R.J.R.</given-names>
          </string-name>
          :
<article-title>Refinement Calculus: A Systematic Introduction</article-title>
          . Springer (Jan
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Blech</surname>
            ,
            <given-names>J.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mou</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ratiu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
<article-title>Reusing test-cases on different levels of abstraction in a model based development tool</article-title>
          .
          <source>arXiv e-print 1202.6119 (Feb</source>
          <year>2012</year>
), http://arxiv.org/abs/1202.6119, EPTCS 80,
          <year>2012</year>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>27</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Briand</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Labiche</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soccar</surname>
          </string-name>
          , G.:
          <article-title>Automating impact analysis and regression test selection based on UML designs</article-title>
          .
          <source>In: International Conference on Software Maintenance</source>
          ,
          <year>2002</year>
          . Proceedings. pp.
          <volume>252</volume>
-
          <issue>261</issue>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Dias</given-names>
            <surname>Neto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.C.</given-names>
            ,
            <surname>Subramanyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Vieira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Travassos</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.H.:</surname>
          </string-name>
<article-title>A survey on model-based testing approaches: a systematic review</article-title>
          .
          <source>In: Proceedings of the 1st ACM international workshop on Empirical assessment of software engineering languages and technologies: held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE</source>
          )
          <year>2007</year>
          . p.
<fpage>31</fpage>-<lpage>36</lpage>
          . WEASELTech '07,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2007</year>
), http://doi.acm.org/10.1145/1353673.1353681
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dijkstra</surname>
            ,
            <given-names>E.W.:</given-names>
          </string-name>
          <article-title>The humble programmer</article-title>
          .
          <source>Commun. ACM</source>
          <volume>15</volume>
          (
          <issue>10</issue>
          ),
<fpage>859</fpage>-<lpage>866</lpage>
          (Oct
          <year>1972</year>
), http://doi.acm.org/10.1145/355604.361591
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Felderer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
<surname>Chimiak-Opoka</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Breu</surname>
          </string-name>
          , R.:
<article-title>Model-driven system testing of service oriented systems</article-title>
          .
          <source>In: Proc. of the 9th International Conference on Quality Software</source>
          (
          <year>2009</year>
          ), http://www.dbs.ifi.lmu.de/~fiedler/publication/FZFCB09.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hartman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagin</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>The AGEDIS tools for model based testing</article-title>
          .
          <source>In: Proceedings of the 2004 ACM SIGSOFT international symposium on Software testing and analysis</source>
          . p.
<fpage>129</fpage>-<lpage>132</lpage>
          . ISSTA '04,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2004</year>
), http://doi.acm.org/10.1145/1007512.1007529
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>He</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turner</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          :
          <article-title>Protocol-inspired hardware testing</article-title>
          . In: Csopaki,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Dibuz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Tarnay</surname>
          </string-name>
          ,
          <string-name>
            <surname>K</surname>
          </string-name>
          . (eds.)
          <source>Testing of Communicating Systems</source>
          , pp.
          <volume>131</volume>
-
          <fpage>147</fpage>
          . No. 21
          <source>in IFIP The International Federation for Information Processing</source>
          , Springer US (
          <year>Jan 1999</year>
), http://link.springer.com/chapter/10.1007/978-0-387-35567-2_9
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Herold</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klus</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welsch</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deiters</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rausch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reussner</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krogmann</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koziolek</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mirandola</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hummel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>CoCoME-the common component modeling example</article-title>
          .
          <source>In: The Common Component Modeling Example</source>
          , p.
          <fpage>1653</fpage>
          . Springer (
          <year>2008</year>
), http://link.springer.com/chapter/10.1007/978-3-540-85289-6_3
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
<surname>Jaffuel</surname>
          </string-name>
          , E.,
          <string-name>
            <surname>Legeard</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
<article-title>LEIRIOS test generator: automated test generation from B models</article-title>
          .
          <source>In: Proceedings of the 7th international conference on Formal Speci cation and Development in B</source>
          . p.
<fpage>277</fpage>-<lpage>280</lpage>
          . B'
          <volume>07</volume>
          , Springer-Verlag, Berlin, Heidelberg (
          <year>2006</year>
), http://dx.doi.org/10.1007/11955757_29
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Katara</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kervinen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Making model-based testing more agile: A use case driven approach</article-title>
          . In: Bin,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Ziv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Ur</surname>
          </string-name>
          , S. (eds.)
<source>Hardware and Software, Verification and Testing</source>
          , pp.
          <volume>219</volume>
-
          <fpage>234</fpage>
          . No. 4383
          <source>in Lecture Notes in Computer Science</source>
          , Springer Berlin Heidelberg (Jan
          <year>2007</year>
), http://link.springer.com/chapter/10.1007/978-3-540-70889-6_17
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Myers</surname>
            ,
            <given-names>G.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Badgett</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thomas</surname>
            ,
            <given-names>T.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sandler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The art of software testing</article-title>
          . John Wiley &amp; Sons, Hoboken, N.J. (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Ogata</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matsuura</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A method of automatic integration test case generation from UML-based scenario</article-title>
          .
          <source>WSEAS Trans. Inf. Sci. Appl.</source>
          <volume>7</volume>
          (
          <issue>4</issue>
          ),
          <fpage>598</fpage>
          -
          <lpage>607</lpage>
          (
          <year>2010</year>
          ), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.175.5822&amp;rep=rep1&amp;type=pdf
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Pretschner</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Philipps</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>10 methodological issues in model-based testing</article-title>
          . In:
          <string-name>
            <surname>Broy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jonsson</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Katoen</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leucker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pretschner</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (eds.)
          <source>Model-Based Testing of Reactive Systems</source>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>291</lpage>
          . No. 3472
          <source>in Lecture Notes in Computer Science</source>
          , Springer Berlin Heidelberg (Jan
          <year>2005</year>
          ), http://link.springer.com/chapter/10.1007/11498490_13
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Tretmans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Model based testing with labelled transition systems</article-title>
          . In:
          <string-name>
            <surname>Hierons</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowen</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (eds.)
          <source>Formal Methods and Testing</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          . No. 4949
          <source>in Lecture Notes in Computer Science</source>
          , Springer Berlin Heidelberg (Jan
          <year>2008</year>
          ), http://link.springer.com/chapter/10.1007/978-3-540-78917-8_1
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Tretmans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brinksma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>TorX: automated model-based testing</article-title>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>43</lpage>
          . Nuremberg, Germany (Dec
          <year>2003</year>
          ), http://doc.utwente.nl/66990/
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Utting</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Legeard</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Practical Model-Based Testing: A Tools Approach</article-title>
          . Morgan Kaufmann (Jul
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>van Veenendaal</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Standard glossary of terms used in software testing</article-title>
          .
          <source>International Software Testing Qualifications Board</source>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>