<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>ANALYSIS OF PROPOSALS FOR THE GENERATION OF SYSTEM TEST CASES FROM SYSTEM REQUIREMENTS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>J. J. Gutiérrez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. J. Escalona</string-name>
          <email>escalona@lsi.us.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. Mejías</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>J. Torres</string-name>
          <email>jtorres@lsi.us.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Departamento de Lenguajes y Sistemas Informáticos, University of Sevilla</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>System test cases allow verifying the functionality of a software system. System testing is a basic technique to guarantee the quality of software systems. This work describes, analyzes and compares five proposals for generating test cases from functional requirements in a systematic way. The generated test cases verify the adequate implementation of those functional requirements. The objective of this analysis is to determine the degree of maturity of those proposals, evaluating whether they can be applied in real projects and identifying which aspects need to be improved.</p>
      </abstract>
      <kwd-group>
        <kwd>Test case</kwd>
        <kwd>system test</kwd>
        <kwd>use cases</kwd>
        <kwd>functional requirements</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION.</title>
<p>The system testing phase begins when the construction of the software system is finished. Its objectives are to test the system in depth and to verify its global functionality and integrity, running the system in an environment as similar as possible to the final production environment. This verification is based on the observation of a controlled set of executions called test cases. The test process can be expressed as a search problem whose main objective is to discover and correct as many bugs as possible, as early as possible [Binder1999].</p>
<p>Nowadays, use cases are the most widely used tool to express functional requirements. Use cases are also a good artefact from which to generate system test cases [Fröhlich2000]. There are currently several proposals that describe how to generate test cases from use cases. This work summarizes a comparative analysis of five of them.</p>
<p>This comparative analysis started in March 2004 and is still active; today it includes 12 proposals. Its objective is to evaluate in depth the state of the art in system test case generation and to build the basis of a new proposal.</p>
<p>Some results from this comparative analysis were included in [Gutierrez2004]. This work covers a different set of proposals and more comparative characteristics, so its conclusions are more precise.</p>
<p>This paper is organized as follows: in section 2 the state of the art is described and the analyzed proposals are introduced; in section 3 results from the analysis are shown; in section 4 conclusions are presented.</p>
    </sec>
    <sec id="sec-2">
      <title>STATE OF THE ART.</title>
      <p>Briefly, the process of generating system tests from functional requirements consists of building a system model from those requirements. From that system model, input events and expected results are generated [Jacobs2004]. This process is carried out differently by each of the proposals analyzed below.</p>
      <p>SCENT [Ryser2003] is a methodological proposal divided into two blocks. In the first block, SCENT describes a process to define usage scenarios. In the second block, SCENT describes how to systematically generate system test cases from the scenarios obtained in block one, through a three-step process. In the first step, each test case is defined, indicating what it is going to test. After that, test cases are generated from the distinct paths that can be traversed in the state diagram. Finally, the test cases obtained are refined and completed with further test cases developed by classical methods, like stress tests, user interface tests, etc.</p>
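      <p>The path-based generation of the second step can be pictured with the following sketch; the state diagram encoding and the path-length bound are our own simplifications, not SCENT notation:</p>
      <preformat>
# Illustrative sketch only: enumerate the distinct paths through a
# state diagram; each path becomes one test case. Loops are cut off
# by a maximum path length so the enumeration stays finite.

DIAGRAM = {
    "Start":  [("fill form", "Filled")],
    "Filled": [("submit ok", "Done"), ("submit bad", "Start")],
    "Done":   [],
}

def enumerate_paths(diagram, state, path=(), max_len=6):
    if len(diagram[state]) == 0 or len(path) == max_len:
        yield path
        return
    for event, next_state in diagram[state]:
        yield from enumerate_paths(diagram, next_state,
                                   path + (event,), max_len)

for number, path in enumerate(enumerate_paths(DIAGRAM, "Start"), start=1):
    print("test case", number, ":", " / ".join(path))
      </preformat>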
      <p>Test Cases from Use Cases (TCUC) [Heumann2002] develops a method to obtain a set of system test cases from use cases in three steps. First, all possible execution paths are derived from every use case. Second, every execution path generates a test case. Finally, test values for every test case are identified. Test values include valid and invalid values and the expected outputs.</p>
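      <p>A hypothetical sketch of this idea follows; the decision points, branch names and test values are invented, since TCUC itself works on textual use cases:</p>
      <preformat>
# Hypothetical sketch of TCUC: expand a use case into all execution
# paths (one per combination of branch choices), then attach valid
# and invalid test values plus the expected output to each path.

from itertools import product

DECISIONS = [                      # each step offers a main/alternative flow
    ("enter credentials", ["valid user", "unknown user"]),
    ("pay order",         ["card accepted", "card rejected"]),
]

TEST_VALUES = {
    "valid user":    {"user": "alice",        "expect": "order page"},
    "unknown user":  {"user": "mallory",      "expect": "login error"},
    "card accepted": {"card": "valid-card",   "expect": "receipt"},
    "card rejected": {"card": "expired-card", "expect": "payment error"},
}

# Every combination of branch choices is one execution path, i.e. one test case.
for combo in product(*[choices for _, choices in DECISIONS]):
    values = [TEST_VALUES[choice] for choice in combo]
    print("path:", " then ".join(combo), "| values:", values)
      </preformat>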
      <p>AGEDIS [Hartman2004] is a research project financed by the European Union and concluded at the beginning of 2004. Its main objective has been the development of a set of tools for the automatic generation and execution of tests to verify systems based on distributed components. Although AGEDIS can be applied to any kind of system, better results are obtained when applying it to control systems, such as communication protocols, than to information transformation systems, like compilers. AGEDIS relies on two products: a system model written in a modelling language called IF, and a set of UML class and state diagrams. These products allow the automatic generation of test suites and of groups of test case objects that link the system model to its implementation. This makes it possible to execute tests against both the system model and the system implementation, and to compare the outputs of the model with the outputs of the implementation.</p>
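      <p>The last point, running the same input events against the model and the implementation and comparing outputs, can be sketched as follows (our own toy stand-ins; AGEDIS actually works on IF models and generated test objects):</p>
      <preformat>
# Illustrative sketch of back-to-back testing in the AGEDIS style:
# feed identical input events to the model and the implementation,
# then compare their outputs. Both systems here are toy counters.

def model_step(state, event):
    # Specification model: "inc" increments, "reset" returns to zero.
    if event == "inc":
        return state + 1, "count=" + str(state + 1)
    return 0, "count=0"

def impl_step(state, event):
    # Implementation under test, with a deliberate bug: reset is ignored.
    if event == "inc":
        return state + 1, "count=" + str(state + 1)
    return state, "count=" + str(state)

m_state = i_state = 0
for event in ["inc", "inc", "reset", "inc"]:
    m_state, m_out = model_step(m_state, event)
    i_state, i_out = impl_step(i_state, event)
    verdict = "ok" if m_out == i_out else "FAIL"
    print(event, "model:", m_out, "impl:", i_out, verdict)
      </preformat>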
      <p>Use Case Path Analysis (UCPA) [Ahlowalia2002] describes a process composed of five steps. The starting point is a textual description of a use case, from which a flow chart is built. Using a path analysis process, a set of test cases is generated.</p>
      <p>Requirements by Contract (RBC) [Nebut2003] is divided into two blocks. In the first block, this proposal shows how to extend UML use case diagrams by adding preconditions, post-conditions and parameters. In the second block, it describes several algorithms to generate test cases from the extended use case diagrams. At the end of this process, a set of test objectives is obtained; a test objective is a sequence of instantiated use cases. A test case generator has to be used in order to produce concrete test cases from those test objectives.</p>
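      <p>The contract-driven chaining can be pictured with the sketch below; encoding contracts as predicate sets is our own simplification of RBC's extended use case diagrams:</p>
      <preformat>
# Illustrative sketch of Requirements by Contract: chain use cases
# into test objectives by checking that each use case's precondition
# holds in the state produced by the previous ones.

USE_CASES = {
    "register": {"pre": set(),              "post": {"account exists"}},
    "login":    {"pre": {"account exists"}, "post": {"logged in"}},
    "order":    {"pre": {"logged in"},      "post": {"order placed"}},
}

def test_objectives(state=frozenset(), sequence=(), max_len=3):
    """Enumerate valid use case sequences (test objectives) up to max_len."""
    if sequence:
        yield sequence
    if len(sequence) == max_len:
        return
    for name, contract in USE_CASES.items():
        if contract["pre"].issubset(state):
            yield from test_objectives(state | contract["post"],
                                       sequence + (name,), max_len)

for objective in test_objectives():
    print(" then ".join(objective))
      </preformat>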
    </sec>
    <sec id="sec-3">
      <title>3. ANALYSIS OF PROPOSALS.</title>
    </sec>
    <sec id="sec-4">
      <title>3.1. Comparative analysis.</title>
      <p>Eleven factors were evaluated for each proposal. Table 1 shows the most relevant factors, and the following paragraphs describe them.</p>
      <p>New notation indicates whether a proposal introduces its own notation or diagrams. SCENT introduces a proprietary usage diagram and UCPA a proprietary notation for flow charts. RBC introduces a proprietary notation to extend use case diagrams. AGEDIS uses the IF language to model the system. TCUC uses natural language only.</p>
      <p>Fully systematized indicates whether a proposal describes how to perform all of the steps it defines. If a proposal is not fully systematized, some steps, like building the system model from requirements, are not systematically detailed. Only AGEDIS is fully systematized and has a complete set of tools to perform the whole generation process.</p>
      <p>Practical cases indicates whether there are reports of real projects in which a proposal has been applied. Only SCENT and AGEDIS include references to real projects.</p>
      <p>Automation level measures the degree to which a proposal can be implemented in a software tool. AGEDIS is the only proposal with a full automation level.</p>
      <p>Use of standards indicates whether a proposal is based on widely used diagrams such as UML diagrams. Only AGEDIS and RBC use UML diagrams.</p>
      <p>Supporting tools indicates whether tools currently exist to support the proposal.</p>
      <p>Difficulty of implantation is based on the quantity and difficulty of the transformations to perform in each step. A low difficulty indicates a proposal that is simple to apply, without specific preparation. A medium difficulty indicates that there is new notation or some process that needs previous preparation. A high difficulty indicates that a proposal cannot be applied without an in-depth study of its elements.</p>
      <p>Application examples indicates whether a proposal includes examples, aside from practical cases. All proposals except RBC include application examples.</p>
      <p>Coverage criterion indicates the method used to generate test cases from the system model. Several means that the proposal offers different coverage criteria.</p>
      <p>Test values indicates whether a proposal describes how to select test values for the generated test cases.</p>
      <p>Test case optimization indicates whether a proposal describes how to select a subset of the generated test cases without losing quality or coverage.</p>
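      <p>The classic realization of this idea is a greedy, coverage-preserving reduction; the sketch below is generic and is not taken from any of the five proposals:</p>
      <preformat>
# Generic greedy test suite reduction: keep a test case only if it
# covers something the already selected cases do not, so overall
# coverage is preserved while the suite shrinks.

def reduce_suite(suite):
    """suite maps a test case name to the set of items it covers."""
    selected, covered = [], set()
    # Consider larger test cases first; fewer cases are then needed.
    for name in sorted(suite, key=lambda n: len(suite[n]), reverse=True):
        if not suite[name].issubset(covered):
            selected.append(name)
            covered |= suite[name]
    return selected

suite = {"tc1": {"a", "b"}, "tc2": {"b"}, "tc3": {"b", "c"}, "tc4": {"c"}}
print(reduce_suite(suite))          # tc2 and tc4 add no new coverage
      </preformat>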
      <p>Multiple use cases indicates whether a proposal can generate test cases that involve more than one use case, or can only generate test cases from one use case in isolation.</p>
      <p>Test case order indicates whether a proposal describes the order in which the generated test cases should be executed.</p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Strong and weak points.</title>
      <p>This section briefly describes the main strong and weak points of each proposal.</p>
      <p>SCENT offers a detailed method to manipulate and organize usage scenarios. It includes two references to real projects where it has been successfully applied. However, it is necessary to carry out a very drawn-out job with the scenarios, 16 steps, before generating test cases.</p>
      <p>TCUC works with use cases written in natural language, instead of formal use cases. This makes it suitable for rapidly obtaining test cases, but makes automating the process with tools difficult.</p>
      <p>UCPA proposes a technique to determine which execution paths are most frequent and critical and which execution paths are useless for testing purposes. This technique allows decreasing the number of tests without affecting quality. Its weak points are that the flow chart notation is simple and hard to apply to complex requirements, and that UCPA does not detail how to build test cases once the execution paths are identified.</p>
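      <p>Its frequency and criticality idea could be realized, for instance, as a simple weighted ranking of execution paths; the paths, weights and threshold below are our own invented example:</p>
      <preformat>
# Illustrative sketch of UCPA-style path selection: rank execution
# paths by frequency times criticality and drop those below a
# threshold; low-scoring paths are considered useless for testing.

PATHS = [                                  # (name, frequency, criticality)
    ("main success path",      0.70, 0.9),
    ("invalid input, retry",   0.20, 0.6),
    ("timeout during payment", 0.05, 1.0),
    ("help screen detour",     0.05, 0.1),
]

THRESHOLD = 0.05
for name, freq, crit in sorted(PATHS, key=lambda p: p[1] * p[2], reverse=True):
    score = freq * crit
    verdict = "test" if score >= THRESHOLD else "skip"
    print(round(score, 3), verdict, name)
      </preformat>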
      <p>AGEDIS is the most complete proposal. It includes both generation and execution of test cases, provides references to five real successful projects, and has a complete tool kit that supports all steps of the process. However, AGEDIS cannot be applied to all kinds of projects, only to projects in which control flow is more important than information transformation. The tool kit is free for educational purposes only, and AGEDIS forces its users to adapt to the AGEDIS tools.</p>
      <p>RBC is supported by a prototype open-source tool called UCTSystem [Generating2003]. RBC generates test cases that involve several use cases and describes several coverage criteria for concrete types of systems. However, RBC extends use cases while ignoring the extension rules proposed by the UML Consortium. RBC also treats use cases as black boxes, which means that it cannot generate test cases to verify the whole set of interactions within a use case in isolation.</p>
    </sec>
    <sec id="sec-6">
      <title>4. CONCLUSIONS.</title>
      <p>The comparative analysis shows that AGEDIS is the most modern and complete proposal but, due to its weak points, it is not the definitive solution. None of the proposals is definitive; each has some advantage over the rest and some weak points. Several key elements are not described in enough detail to be applied in practice, for example the coverage criteria or how to use storage requirements to derive test values. None of the analyzed proposals clearly describes the result of the generation process or its degree of detail. All proposals generate tests, but how are test cases expressed? Can they be directly implemented? The examples included in the proposals describe test cases as tables, but none of the proposals clearly defines how to express a test case. Except for AGEDIS, none of the proposals defines the level of detail of the generated test cases; they do not really generate test cases, only test descriptions that must be refined and implemented.</p>
      <p>We are currently working on answering these questions. Our goal is not to define a new proposal, but to take the elements of existing proposals and fill in their gaps, such as several coverage criteria, a formal specification for test cases, how to generate test code from that specification, etc.</p>
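      <p>As a taste of what such a formal test case specification and its translation into test code could look like, consider the following entirely hypothetical sketch (the specification format and the generated unittest skeleton are our own inventions):</p>
      <preformat>
# Hypothetical sketch: a declarative test case specification and a
# trivial generator that turns it into executable unittest code.
# Nothing here comes from the analyzed proposals.

TEST_SPEC = {
    "name": "login_with_valid_user",
    "steps": [
        {"call": "login('alice', 'secret')", "expect": "True"},
        {"call": "is_logged_in()",           "expect": "True"},
    ],
}

def generate_test_code(spec):
    lines = [
        "import unittest",
        "",
        "class GeneratedTest(unittest.TestCase):",
        "    def test_" + spec["name"] + "(self):",
    ]
    for step in spec["steps"]:
        lines.append("        self.assertEqual(system."
                     + step["call"] + ", " + step["expect"] + ")")
    return "\n".join(lines)

print(generate_test_code(TEST_SPEC))
      </preformat>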
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Ahlowalia2002] Ahlowalia, Naresh.
          <year>2002</year>
          .
          <article-title>Testing From Use Cases Using Path Analysis Technique</article-title>
          .
          <source>International Conference on Software Testing Analysis &amp; Review.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Bertolino2004]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bertolino</surname>
          </string-name>
          , E. Marchetti,
          <string-name>
            <given-names>H.</given-names>
            <surname>Muccini</surname>
          </string-name>
          .
          <year>2004</year>
          .
          <article-title>Introducing a Reasonably Complete and Coherent Approach for Model-based Testing</article-title>
          .
          <source>Electronic Notes in Theoretical Computer Science</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Binder1999]
          <string-name>
            <surname>Binder</surname>
            <given-names>Robert V.</given-names>
          </string-name>
          <year>1999</year>
          .
          <source>Testing Object-Oriented Systems</source>
          . Addison Wesley.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Fröhlich2000]
          <string-name>
            <surname>Fröhlich</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Link</surname>
            <given-names>J.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>Automated Test Case Generation from Dynamic Models</article-title>
          .
          <source>ECOOP'00</source>
          . Sophia Antipolis and Cannes, France.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Generating2003]
          <article-title>Generating tests from requirements tool</article-title>
          . http://www.irisa.fr/triskell/results/ISSRE03/UCTSystem/
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Gutierrez2004]
          <string-name>
            <surname>Gutiérrez</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Escalona</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mejías</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torres</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>Comparative Analysis of Methodological Proposals to Systematic Generation of System Test Cases from System Requirements</article-title>
          .
          <source>Proceedings of the 3rd Workshop on System Testing and Validation</source>
          . pp
          <fpage>151</fpage>
          -
          <lpage>160</lpage>
          . Paris, France.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Hartman2004]
          <string-name>
            <surname>Hartman</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagin</surname>
            <given-names>K.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>The AGEDIS Tools for Model Based Testing</article-title>
          .
          <source>ISSTA '04</source>
          . Boston, Massachusetts.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Heumann2002] Heumann, Jim,
          <year>2002</year>
          .
          <article-title>Generating Test Cases from Use Cases</article-title>
          .
          <source>Journal of Software Testing Professionals.</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Jacobs2004]
          <string-name>
            <surname>Jacobs</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>Automatic generation of test cases from use cases</article-title>
          .
          <source>ICSTEST'04</source>
          . Bilbao, Spain.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Nebut2003]
          <string-name>
            <surname>Nebut</surname>
            ,
            <given-names>C.F.</given-names>
          </string-name>
          , et al.
          <year>2003</year>
          .
          <article-title>Requirements by contract allow automated system testing</article-title>
          .
          <source>Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE'03)</source>
          . Denver, Colorado, USA.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Offutt1999]
          <string-name>
            <surname>Offutt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.
          <year>1999</year>
          .
          <article-title>Criteria for Generating Specification-based Tests</article-title>
          .
          <source>ICECCS '99</source>
          . Las Vegas, Nevada.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Ryser2003]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ryser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Glinz</surname>
          </string-name>
          <year>2003</year>
          .
          <article-title>SCENT: A Method Employing Scenarios to Systematically Derive Test Cases for System Test</article-title>
          .
          <source>Technical Report 2000/03</source>
          , Institut für Informatik, Universität Zürich.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>