<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A TEST CASE GENERATION TECHNIQUE AND PROCESS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nicha Kosindrdecha</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jirapun Daengdej</string-name>
        </contrib>
        <aff id="aff0">
          <institution>Autonomous System Research Laboratory Faculty of Science and Technology Assumption University</institution>
          ,
          <country country="TH">Thailand</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The software testing phase is one of the most critical and important phases in the software development life cycle. In general, it takes around 40-70% of the effort, time and cost. The area has been researched for a long period of time, and while many researchers have found methods of reducing time and cost during the testing process, a number of important related issues still need to be researched. This paper introduces a new high-level test case generation process with a requirement prioritization method to resolve the following research problems: the inability to identify suitable test cases with limited resources, the lack of an ability to identify critical domain requirements during test case generation, and disregard for the number of generated test cases. This paper also proposes a practical test case generation technique derived from use case diagrams.</p>
      </abstract>
      <kwd-group>
        <kwd>test generation</kwd>
        <kwd>testing and quality</kwd>
        <kwd>test case generation</kwd>
        <kwd>test generation technique and generate tests</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        Software testing is known as a key critical phase in the
software development life cycle, which accounts for a
large part of the development effort. A way of
reducing testing effort, while ensuring its
effectiveness, is to generate test cases automatically
from artifacts used in the early phases of software
development. Many test case generation techniques
have been proposed [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ], [
        <xref ref-type="bibr" rid="ref47">47</xref>
        ], [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ], mainly random,
path-oriented, goal-oriented and model-based approaches.
Random techniques determine a set of test cases based
on assumptions concerning fault distribution.
Path-oriented techniques generally use a control flow graph to
identify paths to be covered and generate the
appropriate test cases for those paths. Goal-oriented
techniques identify test cases covering a selected goal
such as a statement or branch, irrespective of the path
taken. There are many researchers and practitioners
who have been working in generating a set of test
cases based on the specifications. Modeling languages
are used to get the specification and generate test
cases. Since Unified Modeling Language (UML) is the
most widely used language, many researchers are
using UML diagrams such as state diagrams, use-case
diagrams and sequence diagrams to generate test cases
and this has led to model-based test case generation
techniques. In this paper, an approach with additional
requirement prioritization step is proposed toward test
cases generation from requirements captured as use
cases [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. A use case is the specification of
interconnected sequences of actions that a system can
perform, interacting with actors of the system. Use
cases have become one of the favorite approaches for
requirements capture. Test cases derived from use
cases can ensure compliance of an application with its
functional requirements. However, one difficulty is
that there are a large number of functional
requirements and use cases. A second research
challenge is to ensure that test cases are able to
preserve and identify critical domain requirements [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Finally, a third problem is to minimize a number of
test cases while preserving an ability to reveal faults.
For example, large software projects contain many
functional requirements. Software test engineers may
not be able to design test cases that cover the
important requirements while also keeping the test
suite small. Test cases derived from a large set of
requirements or use cases are therefore not effective
in practice. This paper presents an
approach with additional requirement prioritization
process for automated generation of abstract
presentation of test purposes called test scenarios. This
paper also introduces a new test case generation
process to support and resolve the above research
challenges. We overcome the problem of large
numbers of requirements and use cases. This allows
software testing engineers to prioritize critical
requirements and design test cases for them. It also
allows us to identify a high percentage of critical
domain coverage for each test case.
      </p>
      <p>The rest of the paper is organized as follow. Section 2
discusses the comprehensive set of test case
generation techniques. Section 3 proposes the
outstanding research challenges that motivated this
study. Section 4 introduces a new test generation
process and technique. Section 5 describes an
experiment, measurement metrics and results. Section
6 provides the conclusion and research directions in
the test case generation field. The last section
lists the references used in this paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. LITERATURE REVIEW</title>
      <p>
        Model-based techniques are popular, and many such
techniques have been proposed. One reason for this
popularity is that misinterpretation of complex
software from a non-formal specification can result in
incorrect implementations, which must then be tested
for conformance to the specification standard [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ]. A
major advantage of model-based V&amp;V is that it can be
easily automated, saving time and resources. Other
advantages are shifting the testing activities to an
earlier part of the software development process and
generating test cases that are independent of any
particular implementation of the design [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Model-based
techniques are methods that generate test cases from
model diagrams such as the UML use case diagram [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], UML Sequence diagram [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
and UML State diagram [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
[
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Many researchers have investigated
generating test cases from these diagrams. The
following paragraphs give examples of model-based
test generation techniques that have been proposed.
      </p>
      <p>
        Heumann [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] presented how using use cases to
generate test cases can help launch the testing process
early in the development lifecycle and also help with
testing methodology. In a software development
project, use cases define system software
requirements. Use case development begins early on,
so real use cases for key product functionality are
available in early iterations. According to the Rational
Unified Process (RUP), a use case “fully describes a
sequence of actions performed by a system to provide
an observable result of value to a person or another
system using the product under development.”
Use cases tell the customer what to expect, the
developer what to code, the technical writer what to
document, and the tester what to test. He proposed a
three-step process to generate test cases from a fully
detailed use case: (a) for each use case, generate a full
set of use-case scenarios (b) for each scenario, identify
at least one test case and the conditions that will make
it execute and (c) for each test case, identify the data
values with which to test. Ryser [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] raised the
practical problems in software testing as follows: (1)
Lack in planning/time and cost pressure, (2) Lacking
test documentation, (3) Lacking tool support, (4)
Formal language/specific testing languages required,
(5) Lacking measures, measurements and data to
quantify testing and evaluate test quality and (6)
Insufficient test quality. They proposed their approach
to resolve these problems by deriving test cases
from scenarios, UML use cases and state diagrams. In
their work, the generation of test
cases is done in three processes: (a) preliminary test
case definition and test preparation during scenario
creation (b) test case generation from Statechart and
from dependency charts and (c) test set refinement by
application dependent strategies.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. RESEARCH CHALLENGES</title>
      <p>
        This section discusses the research issues and
problems related to test case generation techniques
that motivated this study. Every test case generation
technique has weak and strong points, as addressed in
the literature survey. Referring to the literature
review, the following are the major outstanding
research challenges. The first research problem is
that existing test case generation methods lack the
ability to identify domain specific requirements. The
study [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] shows that domain specific requirements,
such as constraint requirements and database specific
requirements, are among the most critical
requirements to capture for implementation and
testing. Existing approaches ignore domain specific
requirements, so software testing engineers may miss
the critical functionality related to them. This paper
therefore introduces an approach that prioritizes
those specific requirements and generates effective
test cases. The second problem is that existing test
case generation techniques aim to generate test cases
that maximize coverage for each scenario. They
sometimes generate a huge number of test cases that
are impossible to execute given limited time and
resources; those unexecuted test cases are useless.
The last problem is the inability to identify suitable
test cases when resources (e.g. time, effort and cost)
are limited. The study reveals that existing
techniques aim to generate all possible test cases,
which can make it impossible to select the necessary
test cases to execute during software testing
activities when resources are limited.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. PROPOSED METHOD</title>
      <p>
        This section presents a new high-level process to
generate a set of test cases, derived from the above
comprehensive literature review and previous
work [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ].
      </p>
      <p>Figure 1 A Proposed Process to Generate Test Cases</p>
      <p>In the above figure, the left-hand side is a
general waterfall process. We propose adding two
processes: (a) requirement prioritization and (b) test
case generation.</p>
      <p>
        The requirement prioritization process aims to
handle a large number of requirements effectively.
The objective of this process is to
prioritize and organize requirements in an appropriate
way in order to effectively design and prepare test
cases [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]. There are two sub-processes:
(a) classify requirements and (b) prioritize
requirements.
      </p>
      <p>
        The requirement classification process first
divides requirements into four groups
[
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]: (a) “Must-Have” (b) “Should-Have” (c)
“Could-Have” and (d) “Wish”. The “Must-Have”
requirements are mandatory requirements that need to
be implemented in the system. The “Should-Have”
requirements are requirements that should be
implemented if there are available resources. The
“Could-Have” requirements are additional
requirements that can be implemented if there
are adequate resources. The “Wish” requirements are
“would like to have in the future” requirements that
may be ignored if there are inadequate resources. This
paper introduces five factors to classify the above
requirements, as follows:
      </p>
      <p>From the above table, the following briefly
describes each factor:
• Time – The requirement must be implemented in
the current version or release of the software.
• Cost – Budget or funds are available to
implement the requirement.
• People – Human resources are available to
develop and test the requirement.
• Scope – The requirement can be removed from
the current version or release of the software.
• Success – The success of system development
relies on the requirement.</p>
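      <p>The paper does not specify an explicit rule mapping the five factors to the four groups; the following is a minimal hypothetical sketch of one such rule, counting how many factors hold for a requirement.</p>

```python
# Hypothetical sketch: classify a requirement by counting how many of the
# five factors (Time, Cost, People, Scope, Success) hold for it. The
# mapping from factor count to group is an assumption, not the paper's rule.

FACTORS = ("time", "cost", "people", "scope", "success")

def classify(factor_flags):
    """Map a dict of factor name to bool onto one of the four groups."""
    score = sum(bool(factor_flags.get(f, False)) for f in FACTORS)
    if score >= 4:
        return "Must-Have"
    if score == 3:
        return "Should-Have"
    if score == 2:
        return "Could-Have"
    return "Wish"

print(classify({"time": True, "cost": True, "people": True, "success": True}))
```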
      <p>In addition, this paper further divides those
requirements into two groups: (a) functional and (b)
non-functional. The functional requirements can be
categorized into two groups: (a) domain specific
requirements and (b) non-domain specific
requirements. Domain specific requirements can be
identified as database specific and constraint
requirements, for example database connection
requirements and requirements for an interface with
other systems. The non-functional requirements can
vary, such as performance, security, operability and
maintainability requirements. The following displays
the requirement classification tree:</p>
      <p>
        From the above figure, we propose a ranking
number for each requirement. This paper assigns
“Must-Have” requirements the top three ranks and
“Wish” requirements the bottom three ranks. The study
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] reveals that domain specific requirements should
have higher priority than both behavioral and
non-functional requirements.
      </p>
      <p>Once a requirement has been classified, the next
process is to prioritize it. In the requirement
prioritization process, this paper proposes a
cost-value approach to weight and prioritize
requirements, using the following formula:
P(Req) = (Cost * CP) (1)
Where:
• P is the prioritization value.
• Req is the requirement to be prioritized.
• Cost is the total estimated cost of coding and
testing the requirement.
• CP is a user-defined customer priority value.</p>
      <p>CP is in the range 1 to 10, where 10 is the
highest priority and 1 the lowest. It allows customers
to indicate how important each requirement is from
their perspective.</p>
      <p>To compute the above cost for coding and testing, this
paper proposes to apply the following formula:
Cost= (ECode*CostCode)+(ETest*CostTest) (2)
Where:
• Cost is a total estimated cost.
• ECode is an estimated effort of coding for each
requirement. The unit is man-hours.
• CostCode is a cost of coding that is charged to
customers. This paper applies the cost-value
approach to identify the cost of coding for each
requirement group (e.g. “Must-Have”,
“Should-Have”, “Could-Have” and “Wish”). The unit is
US dollars.
• ETest is an estimated effort of testing for each
requirement. The unit is man-hours.
• CostTest is a cost of testing that is charged to
customers. The approach to identify this value is
similar to that for CostCode. The unit is US
dollars.</p>
      <p>In this paper, we assume the following ratios in
order to calculate CostCode and CostTest. We also
assume a standard cost of $100 per man-hour for both
activities.
• A value of 1.5 for (“Must-Have”, “Should-Have”)
– “Must-Have” requirements have one and a half
times the cost value of “Should-Have”
requirements.
• A value of 3 for (“Must-Have”, “Could-Have”) –
“Must-Have” requirements have three times the
cost value of “Could-Have” requirements.
• A value of 2 for (“Should-Have”, “Could-Have”)
– “Should-Have” requirements have two times the
cost value of “Could-Have” requirements.
• A value of approximately 3 for (“Could-Have”,
“Wish”) – “Could-Have” requirements have three
times the cost value of “Wish” requirements.</p>
      <p>The requirement prioritization procedure can
be summarized as follows:
1. Provide estimated efforts of coding and testing
for each requirement.
2. Assign a cost value to each requirement group
based on the previous requirement classification
(e.g. “Must-Have”, “Should-Have”,
“Could-Have” and “Wish”).
3. Calculate a total estimated cost for coding and
testing, by using the formula (2).
4. Define a customer priority for each requirement.
5. Compute a priority value for each requirement by
using the formula (1).
6. Prioritize requirements based on the higher
priority value.</p>
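      <p>The procedure above can be sketched as follows. The per-group hourly rates follow the stated ratios; anchoring the $100 per man-hour standard rate to the “Could-Have” group, and the sample requirement data, are assumptions for illustration.</p>

```python
# Sketch of the requirement prioritization procedure using formulas (1) and
# (2). Rates follow the stated ratios (Must:Should = 1.5, Must:Could = 3,
# Should:Could = 2, Could:Wish = 3); anchoring the $100/man-hour standard
# rate at "Could-Have" is an assumption.

RATE = {
    "Must-Have": 300.0,    # 3x the Could-Have rate
    "Should-Have": 200.0,  # 2x the Could-Have rate
    "Could-Have": 100.0,   # assumed standard rate, $100 per man-hour
    "Wish": 100.0 / 3,     # Could-Have is ~3x the Wish rate
}

def total_cost(group, e_code, e_test):
    """Formula (2): Cost = (ECode * CostCode) + (ETest * CostTest)."""
    return e_code * RATE[group] + e_test * RATE[group]

def priority(group, e_code, e_test, cp):
    """Formula (1): P(Req) = Cost * CP, with CP in 1..10."""
    assert cp in range(1, 11)
    return total_cost(group, e_code, e_test) * cp

# Illustrative requirements: (id, group, ECode, ETest, CP).
reqs = [
    ("Req1", "Must-Have", 8, 4, 9),
    ("Req2", "Could-Have", 8, 4, 9),
    ("Req3", "Wish", 8, 4, 2),
]
# Step 6: order requirements by descending priority value.
ranked = sorted(reqs, key=lambda r: priority(*r[1:]), reverse=True)
print([r[0] for r in ranked])
```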
      <p>Once the requirements are prioritized, the next
proposed step is to generate test scenarios and
prepare test cases.</p>
      <p>
        This section presents an automated test scenario
generation derived from UML Use Case diagram. Our
approach is built based on Heumann’s algorithm [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
A limitation of our approach is that all use cases
must be fully dressed. A fully dressed use case
contains comprehensive information, as
follows: use case name, use case number, purpose,
summary, pre-condition, post-condition, actors,
stakeholders, basic events, alternative events, business
rules, notes, version, author and date.
      </p>
      <p>The proposed method contains four steps: (a)
extract the use case diagram (b) generate test
scenarios (c) prepare test data and (d) prepare other
test elements. These steps can be briefly described as
follows:</p>
      <p>The first step is to extract the following
information from fully dressed use cases: (a)
use case number (b) purpose (c) summary (d)
pre-condition (e) post-condition (f) basic
event and (g) alternative events. This
information is called use case scenario in this
paper. Example fully dressed use cases for the
ATM withdrawal functionality are shown
below:</p>
      <p>Table 2 Example Fully Dressed Use Cases</p>
      <p>UC-001 Withdraw. Summary: To allow bank
customers to withdraw money from ATM machines
anywhere in Thailand. Basic Events: 1. Insert Card
2. Input PIN 3. Select Withdraw 4. Select A/C Type
5. Input Balance 6. Get Money 7. Get Card.
Alternative Events: 1. Select Inquiry 2. Select A/C
Type 3. Check Balance. Business Rules: (a) Input
amount &lt;= outstanding balance (b) Fee charged if
using a different ATM machine.</p>
      <p>UC-002 Transfer. Summary: To allow users to
transfer money to other banks in Thailand from all
ATM machines. Basic Events: 1. Insert Card 2. Input
PIN 3. Select Transfer 4. Select bank 5. Select “To”
account 6. Select A/C Type 7. Input Amount 8. Get
Receipt 9. Get Card. Alternative Events: 1. Select
Inquiry 2. Select A/C Type 3. Check Balance.
Business Rules: Amount &lt;= 50,000 baht.</p>
      <p>The above use cases can be extracted into
use case scenarios such as TS-001, TS-002 and
TS-003.</p>
      <p>
        The second step is to automatically generate
test scenarios from the previous use case
scenarios [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. From the above table, we
automatically generate the following test
scenarios:
      </p>
      <p>Test scenario for UC-002: to allow users to
transfer money to other banks in Thailand from all
ATM machines. Steps: 1. Insert Card 2. Input PIN
3. Select Inquiry 4. Select A/C Type 5. Check
Balance 6. Select Transfer 7. Select bank 8. Select
“To” account 9. Select A/C Type 10. Input Amount
11. Get Receipt 12. Get Card.</p>
      <p>The third step is to prepare test data. This step
allows an input data set to be prepared manually for
each scenario.</p>
      <p>The last step is to prepare other test elements, such
as expected output, actual output and pass / fail status.</p>
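      <p>The scenario generation step can be sketched as follows. The field names and the one-scenario-per-alternative-flow expansion are assumptions in the spirit of Heumann’s approach; in the UC-002 example above the alternative (inquiry) flow is interleaved with the basic flow rather than simply concatenated, so this is a simplification.</p>

```python
# Minimal sketch of test scenario generation from a use case scenario.
# Assumes each use case record carries a basic event flow and a list of
# alternative flows (field names are illustrative). One scenario covers the
# basic flow; one additional scenario is produced per alternative flow.

def generate_test_scenarios(use_case):
    scenarios = [{
        "id": use_case["id"] + "-S1",
        "steps": list(use_case["basic_events"]),  # basic flow only
    }]
    for i, alt in enumerate(use_case["alternative_events"], start=2):
        scenarios.append({
            "id": use_case["id"] + "-S" + str(i),
            # Simplification: run the alternative flow before the basic flow.
            "steps": list(alt) + list(use_case["basic_events"]),
        })
    return scenarios

uc_002 = {
    "id": "UC-002",
    "basic_events": ["Insert Card", "Input PIN", "Select Transfer",
                     "Select bank", "Select 'To' account", "Select A/C Type",
                     "Input Amount", "Get Receipt", "Get Card"],
    "alternative_events": [["Select Inquiry", "Select A/C Type",
                            "Check Balance"]],
}
for s in generate_test_scenarios(uc_002):
    print(s["id"], len(s["steps"]))
```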
    </sec>
    <sec id="sec-5">
      <title>5. EVALUATION</title>
      <p>This section describes the experiment design,
measurement metrics and results.</p>
    </sec>
    <sec id="sec-6">
      <title>5.1. Experiments Design</title>
      <p>A comparative evaluation method is used in this
experiment. The high-level overview of the
experiment design is as follows:
1. Prepare Experiment Data. Before evaluating
the proposed methods and other methods,
preparing experiment data is required. In this
step, 50 requirements and 50 use case scenarios
are randomly generated.</p>
    </sec>
    <sec id="sec-7">
      <title>2. Generate Test Scenario and Test Case</title>
      <p>
        A comparative evaluation has been made
among Heumann’s technique [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], Ryser’s method
[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], Nilawar’s algorithm [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] and the proposed
method presented in the previous section.
3. Evaluate Results. In this step, the comparative
generation methods are executed by using 50
requirements and 50 use case scenarios. These
methods are executed 10 times in order
to find the average percentage of critical
domain requirement coverage, the number of test
cases and the total generation time. In total, 500
requirements and 500 use case scenarios are executed
in this experiment.
      </p>
      <p>The following tables present how to randomly
generate data for requirements and use case scenarios
respectively.</p>
      <p>Table 5 Generate Random Requirements</p>
      <p>Requirement ID: randomly generated from the
combination Req + sequence number. For example,
Req1, Req2, Req3, …, ReqN.</p>
      <p>Type of Requirement: randomly selected from the
values Functional and Non-Functional.</p>
      <p>Requirement Group: randomly selected from the
values Must Have (M), Should Have (S), Could Have
(C) and Won’t Have (W).</p>
      <p>Is it a critical requirement (Y/N)?: randomly
selected from the values True (Y) and False (N).</p>
      <p>Table 6 Generate Random Use Case Scenario
Attribute Approach
Use case ID Randomly generated from the
following combination: uCase +
Sequence Number. For example,
uCase1, uCase2, …, uCasen.</p>
      <p>Purpose Randomly generated from the
following combination: Pur +
Sequence Number, same as the use case
ID. For example, Pur1, Pur2, …, Purn.</p>
      <p>Basic Scenario Randomly generated from the
following combination: basic +
Sequence Number. For example,
basic1, basic2, …, basicn.</p>
    </sec>
    <sec id="sec-8">
      <title>5.2. Measurement Metrics</title>
      <p>This section lists the measurement metrics used in
the experiment. This paper proposes three metrics:
(a) size of test cases (b) total time and (c)
percentage of critical domain requirement coverage.
The following describes each metric in detail.</p>
    </sec>
    <sec id="sec-9">
      <title>1. A Number of Test Cases: This is the total</title>
      <p>number of generated test cases, expressed as a
percentage, as follows:
% Size = (# Size / # of Total Size)*100 (3)
Where:
• % Size is a percentage of the number of test
cases.
• # of Size is a number of test cases.
• # of Total Size is the maximum number of test
cases in the experiment, which is assigned 1,000.</p>
    </sec>
    <sec id="sec-10">
      <title>2. A Domain Specific Requirement Coverage:</title>
      <p>
        This is an indicator to identify the number of
requirements covered in the system, particularly
critical requirements, and critical domain
requirements [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Because one of the
goals of software testing is to verify and validate the
requirements covered by the system, this metric is
essential. A high percentage of critical requirement
coverage is desirable.
      </p>
      <p>It can be calculated using the following formula:
% CRC = (# of Critical / # of Total)*100 (4)
Where:
• % CRC is the percentage of critical requirement
coverage.
• # of Critical is the number of critical
requirements covered.
• # of Total is the total number of requirements.
3. Total Time: This is the total time consumed by
running the generation methods in the experiment.
This metric is related to the time used during the
test development phase (e.g. designing test
scenarios and producing test cases). Less
time is desirable.</p>
      <p>It can be calculated using the following formula:</p>
      <p>Total = PTime + CTime + RTime (5)
Where:
• Total is the total amount of time consumed by
running generation methods.
• PTime is the total amount of time consumed by
preparation before generating test cases.
• CTime is the time to compile source code / binary
code in order to execute the program.
• RTime is the total time to run the program under
this experiment.</p>
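      <p>The three metrics can be computed directly from formulas (3), (4) and (5); the sample values below are illustrative only, not the experiment’s measurements.</p>

```python
# Sketch of the three evaluation metrics, formulas (3), (4) and (5).

def pct_size(num_cases, total_size=1000):
    """Formula (3): %Size = (# of Size / # of Total Size) * 100."""
    return num_cases / total_size * 100

def pct_crc(num_critical_covered, num_total_reqs):
    """Formula (4): %CRC = (# of Critical / # of Total) * 100."""
    return num_critical_covered / num_total_reqs * 100

def total_time(p_time, c_time, r_time):
    """Formula (5): Total = PTime + CTime + RTime (e.g. in seconds)."""
    return p_time + c_time + r_time

# Illustrative values only.
print(pct_size(500))          # 50.0
print(pct_crc(25, 50))        # 50.0
print(total_time(10, 2, 18))  # 30
```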
    </sec>
    <sec id="sec-11">
      <title>5.3. Results and Discussion</title>
      <p>This section discusses the evaluation results of the
above experiment. It presents a graph that compares
the proposed method to three other existing test case
generation techniques, based on the following
measurements: (a) size of test cases (b) critical
domain coverage and (c) total time. The three
techniques are: (a) Heumann’s method (b) Ryser’s
work and (c) Nilawar’s approach. The graph has two
dimensions: the horizontal axis represents the three
measurements, whereas the vertical axis represents
the percentage value.
The graph shows that the proposed method
generates the smallest set of test cases, calculated as
80.80%, whereas the other techniques are computed
at over 97%. Those techniques generated a bigger set
of test cases than the set generated by the proposed
method; the literature review reveals that a smaller
set of test cases is desirable. The graph also shows
that the proposed method consumes the least total
time during the generation process compared to the
other techniques: it used only 30.20%, slightly less
than the others. Finally, the graph shows that the
proposed method is the best technique for covering
critical domains. Its percentage is much greater than
the other techniques’ percentages, by over 30%.</p>
    </sec>
    <sec id="sec-12">
      <title>6. CONCLUSION</title>
      <p>This paper concentrates on resolving the following
research problems: (a) inefficient test case generation
with limited resources (b) a lack of ability to identify
and cover critical domain requirements and (c)
disregard for the number of generated test cases.
This paper proposes an effective test case generation
process that adds a requirement prioritization
process. The new process aims to improve the ability
to: (a) generate test cases with limited resources (b)
include more critical domain specific requirements
and (c) minimize the number of test cases. This paper
also introduces an automated test scenario generation
technique to address critical domain specific
requirements. The proposed method is compared
with three other test case generation techniques:
Heumann’s work, Ryser’s method and Nilawar’s
technique. This study found that the proposed
method is the most recommended method: it
generates the smallest set of test cases with the
maximum critical domain specific requirement
coverage and the least time consumed in the test case
generation process.</p>
    </sec>
    <sec id="sec-13">
      <title>7. REFERENCES</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ahl</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <article-title>“An Experimental Comparison of Five Prioritization Methods”</article-title>
          ,
          <source>Master's Thesis</source>
          , School of Engineering, Blekinge Institute of Technology, Ronneby, Sweden,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Alessandra</given-names>
            <surname>Cavarra</surname>
          </string-name>
          , Charles Crichton, Jim Davies, Alan Hartman,
          <article-title>Thierry Jeron and Laurent Mounier, “Using UML for Automatic Test Generation”</article-title>
          , Oxford University Computing Laboratory,
          <article-title>Tools and Algorithms for the Construction and Analysis of Systems</article-title>
          , TACAS'
          <year>2000</year>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Amaral</surname>
            ,
            <given-names>A.S.M.S.</given-names>
          </string-name>
          ,
          <article-title>“Test case generation of systems specified in Statecharts”</article-title>
          , M.S. thesis - Laboratory of Computing and Applied Mathematics,
          <string-name>
            <surname>INPE</surname>
          </string-name>
          , Brazil,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Annelises</surname>
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Andrews</surname>
          </string-name>
          , Jeff Offutt and Roger T. Alexander, “Testing Web Applications”,
          <source>Software and Systems Modeling</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Avik</given-names>
            <surname>Sinha</surname>
          </string-name>
          and Carol S. Smidts, “
          <article-title>Domain Specific Test Case Generation Using Higher Ordered Typed Languages for Specification”</article-title>
          ,
          <source>Ph.D. Dissertation</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bertolino</surname>
          </string-name>
          , “
          <article-title>Software Testing Research and Practice”</article-title>
          ,
          <source>10th International Workshop on Abstract State Machines (ASM 2003)</source>
          , Taormina, Italy,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.Z.</given-names>
            <surname>Javed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.A.</given-names>
            <surname>Strooper</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.N.</given-names>
            <surname>Watson</surname>
          </string-name>
          , “
          <article-title>Automated Generation of Test Cases Using Model-Driven Architecture”</article-title>
          ,
          <source>Second International Workshop on Automation of Software Test (AST'07)</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Beck</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Andres</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , “
          <article-title>Extreme Programming Explained: Embrace Change”</article-title>
          , 2nd ed. Boston, MA: Addison-Wesley,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Boehm</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Ross</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          “
          <article-title>Theory-W Software Project Management: Principles and Examples”</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          <volume>15</volume>
          ,
          <issue>4</issue>
          :
          <fpage>902</fpage>
          -
          <lpage>916</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.M.</given-names>
            <surname>Subraya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Subrahmanya</surname>
          </string-name>
          , “
          <article-title>Object driven performance testing in Web applications”</article-title>
          ,
          <source>in: Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00)</source>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>26</lpage>
          ,
          Hong Kong, China,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Chien-Hung</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>David C.</given-names>
            <surname>Kung</surname>
          </string-name>
          , Pei Hsia and Chih-Tung Hsu, “
          <article-title>Object-Based Data Flow Testing of Web Applications”</article-title>
          ,
          <source>Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00)</source>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>16</lpage>
          ,
          Hong Kong, China,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.C.</given-names>
            <surname>Kung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hsia</surname>
          </string-name>
          , C.T. Hsu, “
          <article-title>Structural testing of Web applications”</article-title>
          ,
          <source>in: Proceedings of 11th International Symposium on Software Reliability Engineering (ISSRE</source>
          <year>2000</year>
          ), pp.
          <fpage>84</fpage>
          -
          <lpage>96</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , “
          <article-title>The Art of Requirements Triage”</article-title>
          ,
          <source>IEEE Computer</source>
          , vol.
          <volume>36</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>42</fpage>
          -
          <lpage>49</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <source>“Just Enough Requirements Management: Where Software Development Meets Marketing”</source>
          , New York: Dorset House (ISBN 0-932633-64-1)
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>David C.</given-names>
            <surname>Kung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Chien-Hung</given-names>
            <surname>Liu</surname>
          </string-name>
          and Pei Hsia, “
          <article-title>An Object-Oriented Web Test Model for Testing Web Applications”</article-title>
          ,
          <source>In Proceedings of the First Asia-Pacific Conference on Quality Software (APAQS'00)</source>
          , page 111, Los Alamitos, CA,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Donald</given-names>
            <surname>Firesmith</surname>
          </string-name>
          , “Prioritizing Requirements”,
          <source>Journal of Object Technology</source>
          , Vol.
          <volume>3</volume>
          , No.
          <issue>8</issue>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          , “
          <article-title>On visual formalisms”</article-title>
          ,
          <source>Communications of the ACM</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>514</fpage>
          -
          <lpage>530</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          , “
          <article-title>Statecharts: A Visual Formalism for Complex Systems”</article-title>
          ,
          <source>Sci.Comput. Program</source>
          .
          <volume>8</volume>
          (
          <issue>3</issue>
          ):
          <fpage>231</fpage>
          -
          <lpage>274</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Filippo</given-names>
            <surname>Ricca</surname>
          </string-name>
          and Paolo Tonella, “
          <article-title>Analysis and Testing of Web Applications”</article-title>
          ,
          <source>Proc. of the 23rd International Conference on Software Engineering</source>
          , Toronto, Ontario, Canada. pp.
          <fpage>25</fpage>
          -
          <lpage>34</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Harel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , “
          <article-title>Statecharts: a visual formalism for complex systems”</article-title>
          ,
          <source>Science of Computer Programming</source>
          , v.
          <volume>8</volume>
          , p.
          <fpage>231</fpage>
          -
          <lpage>274</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Hassan</surname>
            <given-names>Reza</given-names>
          </string-name>
          ,
          Kirk Ogaard and Amarnath Malge, “
          <article-title>A Model Based Testing Technique to Test Web Applications Using Statecharts”</article-title>
          ,
          <source>Fifth International Conference on Information Technology</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Ibrahim K.</given-names>
            <surname>El-Far</surname>
          </string-name>
          and James A. Whittaker, “
          <article-title>Model-based Software Testing”</article-title>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Jim</given-names>
            <surname>Heumann</surname>
          </string-name>
          , “Generating Test Cases From Use Cases”, Rational Software,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Johannes</given-names>
            <surname>Ryser</surname>
          </string-name>
          and Martin Glinz, “
          <article-title>SCENT: A Method Employing Scenarios to Systematically Derive Test Cases for System Test”</article-title>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Karl E.</given-names>
            <surname>Wiegers</surname>
          </string-name>
          , “First Things First: Prioritizing Requirements”,
          <source>Software Development</source>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Karlsson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          “
          <article-title>Software Requirements Prioritizing”</article-title>
          ,
          <source>Proceedings of the Second International Conference on Requirements Engineering (ICRE'96)</source>
          , Colorado Springs, CO, April 15-18. Los Alamitos, CA: IEEE Computer Society, pp.
          <fpage>110</fpage>
          -
          <lpage>116</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Karlsson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , “
          <article-title>Towards a Strategy for Software Requirements Selection”</article-title>
          ,
          <source>Licentiate Thesis</source>
          <volume>513</volume>
          , Linköping University,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Karlsson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Ryan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <article-title>“A Cost-Value Approach for Prioritizing Requirements”</article-title>
          ,
          <source>IEEE Software</source>
          , September/October, pp.
          <fpage>67</fpage>
          -
          <lpage>75</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Leffingwell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Widrig</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , “
          <article-title>Managing Software Requirements: A Use Case Approach”</article-title>
          , 2nd ed. Boston, MA: Addison-Wesley,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Leslie M.</given-names>
            <surname>Tierstein</surname>
          </string-name>
          , “
          <article-title>Managing a Designer/2000 Project”</article-title>
          ,
          <source>NYOUG Fall'97 Conference</source>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>L.</given-names>
            <surname>Brim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cerna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Varekova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Zimmerova</surname>
          </string-name>
          , “
          <article-title>Component-interaction automata as a verification oriented component-based system specification”</article-title>
          ,
          <source>In: Proceedings (SAVCBS'05)</source>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          , Lisbon, Portugal,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Mahnaz</surname>
            <given-names>Shams</given-names>
          </string-name>
          ,
          Diwakar Krishnamurthy and Behrouz Far, “
          <article-title>A Model-Based Approach for Testing the Performance of Web Applications”</article-title>
          ,
          <source>Proceedings of the Third International Workshop on Software Quality Assurance (SOQUA'06)</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>Manish</given-names>
            <surname>Nilawar</surname>
          </string-name>
          and Dr. Sergiu Dascalu, “
          <article-title>A UML-Based Approach for Testing Web Applications”</article-title>
          , M.S. thesis in Computer Science, University of Nevada, Reno,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <surname>Moisiadis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , “Prioritising Scenario Evolution”,
          <source>International Conference on Requirements Engineering (ICRE 2000)</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Moisiadis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , “A Requirements Prioritisation Tool”,
          <source>6th Australian Workshop on Requirements Engineering (AWRE 2001)</source>
          , Sydney, Australia,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>M.</given-names>
            <surname>Prasanna</surname>
          </string-name>
          , S.N. Sivanandam, R. Venkatesan and R. Sundarrajan
          , “
          <article-title>A Survey on Automatic Test Case Generation”</article-title>
          ,
          <source>Academic Open Internet Journal</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>Nancy R.</given-names>
            <surname>Mead</surname>
          </string-name>
          , “Requirements Prioritization Introduction”, Software Engineering Institute, Carnegie Mellon University,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Port</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; &amp;
          <string-name>
            <surname>Boehm</surname>
            <given-names>B.</given-names>
          </string-name>
          , “
          <article-title>Supporting Distributed Collaborative Prioritization for Win-Win Requirements Capture and Negotiation”</article-title>
          , pp. 578-584,
          <source>Proceedings of the Third World Multiconference on Systemics, Cybernetics and Informatics (SCI'99)</source>
          , Vol.
          <volume>2</volume>
          , Orlando, FL: International Institute of Informatics and Systemics (IIIS), July 31-August 4,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <surname>Rajib</surname>
          </string-name>
          , “Software Test Metric”, QCON,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <surname>Robert</surname>
            <given-names>Nilsson</given-names>
          </string-name>
          ,
          Jeff Offutt and Jonas Mellin, “
          <article-title>Test Case Generation for Mutation-based Testing of Timeliness”</article-title>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Saaty</surname>
            ,
            <given-names>T. L.</given-names>
          </string-name>
          , “The Analytic Hierarchy Process”, New York, NY: McGraw-Hill,
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>Shengbo</given-names>
            <surname>Chen</surname>
          </string-name>
          , Huaikou Miao, Zhongsheng Qian, “
          <article-title>Automatic Generating Test Cases for Testing Web Applications”</article-title>
          ,
          <source>International Conference on Computational Intelligence and Security Workshops</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>Valdivino</given-names>
            <surname>Santiago</surname>
          </string-name>
          , Ana Silvia Martins do Amaral,
          <string-name>
            <given-names>N.L.</given-names>
            <surname>Vijaykumar</surname>
          </string-name>
          , Maria de Fatima Mattiello-Francisco, Eliane Martins and Odnei Cuesta Lopes, “
          <article-title>A Practical Approach for Automated Test Case Generation using Statecharts”</article-title>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <surname>Vijaykumar</surname>
            ,
            <given-names>N. L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Carvalho</surname>
            ,
            <given-names>S. V.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Abdurahiman</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , “
          <article-title>On proposing Statecharts to specify performance models”</article-title>
          ,
          <source>International Transactions in Operational Research</source>
          ,
          <volume>9</volume>
          ,
          <fpage>321</fpage>
          -
          <lpage>336</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <surname>Wiegers</surname>
            ,
            <given-names>K. E.</given-names>
          </string-name>
          , “Software Requirements”, 2nd ed. Redmond, WA: Microsoft Press,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>Xiaoping</given-names>
            <surname>Jia</surname>
          </string-name>
          , Hongming Liu and Lizhang Qin, “
          <article-title>Formal Structured Specification for Web Application Testing”</article-title>
          .
          <source>Proc. of the 2003 Midwest Software Engineering Conference (MSEC'03)</source>
          . Chicago, IL, USA. pp.
          <fpage>88</fpage>
          -
          <lpage>97</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>F.J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Chu</surname>
            ,
            <given-names>W.C.</given-names>
          </string-name>
          , “
          <article-title>Constructing an object-oriented architecture for Web application testing”</article-title>
          ,
          <source>Journal of Information Science and Engineering</source>
          <volume>18</volume>
          ,
          <fpage>59</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>Ye</given-names>
            <surname>Wu</surname>
          </string-name>
          and Jeff Offutt, “Modeling and Testing Web-based Applications”,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>Ye</given-names>
            <surname>Wu</surname>
          </string-name>
          , Jeff Offutt and Xiaochen Du, “
          <article-title>Modeling and Testing of Dynamic Aspects of Web Applications”</article-title>
          , submitted for publication,
          <source>Technical Report ISE-TR-04-01</source>
          , www.ise.gmu.edu/techreps/,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>May</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>“Software Unit Test Coverage and Adequacy”</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>29</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>366</fpage>
          -
          <lpage>427</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>