<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>MaRTS: A Model-Based Regression Test Selection Approach</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Mohammed Al-Refai Computer Science Department Colorado State University Fort Collins</institution>
          ,
          <addr-line>CO</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Models can be used to plan the evolution and runtime adaptation of a software system. Regression testing of the evolved and adapted models is important to ensure that previously tested functionality is not broken. Regression testing is performed under limited time and resource constraints; thus, regression test selection (RTS) techniques are needed to reduce the cost of regression testing. Existing model-based RTS approaches cannot detect all types of fine-grained changes that can be made at a low level of abstraction, and they do not consider the impact of inheritance hierarchy changes on the selection of test cases. We propose a model-based RTS approach called MaRTS that classifies test cases based on changes performed to UML class and activity diagrams. It supports both fine-grained and inheritance hierarchy changes. We compared MaRTS with two code-based RTS approaches using four applications. MaRTS achieved results comparable to a dynamic code-based RTS approach (DejaVu), and outperformed a static code-based RTS approach (ChEOPSJ). The fault detection ability of the selected test cases was equal to that of the baseline test cases. Index Terms: inheritance hierarchy, model-based adaptation, model-based regression test selection, UML activity diagram, UML class diagram</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Regression testing is one of the most expensive activities
performed during the lifecycle of a software system [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Regression test selection (RTS) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is an approach that
improves regression testing efficiency and reduces regression
testing time by selecting a subset of the original test set for
regression testing [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        RTS approaches can be based on the analysis of code or
model-level changes of a software system. Model-based RTS
has some advantages over code-based RTS. First, it enables
early estimation of the effort required for regression testing [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Second, it can scale up better than code-based RTS approaches
for large scale software systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Third, model-based RTS
techniques can be more convenient for approaches that already
apply evolution/adaptation at the model level because both
the evolution/adaptation and test selection processes can be
performed at the same level of abstraction [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Existing model-based RTS approaches suffer from the
following limitations. First, they cannot detect all types of
fine-grained changes from UML class, sequence, and state machine
diagrams used in these approaches [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. An example of
such a change is a modification to an operation implementation
that does not affect the operation’s signature and contract [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Fine-grained changes are those that can be made at a low
level of abstraction, such as changes to a statement inside
a method implementation. Second, they do not support the
identification of changes to inherited and overridden operations
along the inheritance hierarchy [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which leads to
situations where relevant test cases that traverse such inherited
and overridden methods are not selected for regression testing.
      </p>
      <p>We propose a model-based RTS approach called MaRTS
to be used for regression testing of unanticipated fine-grained
adaptations performed at the model level. MaRTS uses UML
design class and activity diagrams to represent behaviors of
a software system and its test cases. MaRTS is based on
(1) static analysis of the UML class diagram to identify the
changes in the inheritance hierarchy, (2) fine-grained model
comparison to identify changes performed to UML class and
activity diagrams, and (3) dynamic analysis of the test case
execution at the model level to determine the coverage for
each test case.</p>
      <p>We evaluated MaRTS on four applications, and compared
it with two code-based RTS approaches. We also evaluated
the fault detection ability of the reduced test sets achieved by
MaRTS.</p>
    </sec>
    <sec id="sec-2">
      <title>II. APPROACH</title>
      <p>
        MaRTS classifies the test cases as obsolete, retestable
or reusable. Obsolete test cases are invalid and cannot be
executed on the modified version of the software system.
Retestable test cases exercise the modified parts of the
software system, and need to be selected for regression testing.
Reusable test cases only exercise unmodified parts of the
system, and they do not need to be re-executed for safe
regression testing [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. A safe RTS technique must select all
modification-traversing test cases for regression testing [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
A test case is considered to be modification-traversing for a
program P and its modified version P′ if it executes changed
code in P′, or if it formerly executed code that had been
deleted in P′ [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        In a prior work [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we applied MaRTS within the context
of a Fine Grained Adaptation (FiGA) framework [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]
that uses UML diagrams to support unanticipated and
fine-grained adaptations on running Java software systems. FiGA
uses Reverse R [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to extract UML class and activity diagrams
from Java source code, and JavAdaptor [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] to update a
running Java program without stopping it. In FiGA, each
individual method is represented as an activity diagram. The UML
activity diagram elements that are supported are initial and
final nodes, action nodes, call behavior nodes, and decision and
merge nodes. An activity diagram generated using Reverse R
is executable, where each action node in the activity diagram
has a code snippet associated with it, and Java statements are
contained inside the code snippet. When the model execution
flow reaches an action node, then the code snippet associated
with the action node is executed. Additionally, Reverse R maps
a code-level method invocation statement to a call to the
corresponding activity diagram. When the model execution flow
reaches such a call, the called activity diagram is executed [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
<p>In MaRTS, each method of the software system is
represented as a UML activity diagram; the same applies to
each test case. These activity diagrams are executable. We
use the IBM Rational Software Architect (RSA) simulation
toolkit 9.01 to execute test cases at the model level.</p>
      <p>MaRTS consists of the following five steps:
1) Extract operations-table from the original class diagram.
2) Calculate the traceability matrix.
3) Identify model changes.
4) Extract operations-table from the adapted class diagram.
5) Classify test cases.</p>
      <p>MaRTS can scale up to large programs because all of its
steps are automated. MaRTS requires the UML models used
with it to be detailed and executable in order to obtain the
coverage of test cases at the model level. Therefore, MaRTS
is not applicable to model-driven development approaches
that use models at a high level of abstraction and lack
traceability links between the code-level test cases and the
models representing the software system.</p>
      <sec id="sec-2-1">
        <title>A. Extraction of the Operations-Table from the Original Class Diagram</title>
        <p>This step is performed before developers adapt the models.
An operations-table is extracted from the class diagram. This
table stores for each class the operations that are declared and
inherited by the class. For each operation, the operations-table
stores the operation’s declaring class, name, formal parameter
types, and return type. For each class in the table, the name
of its superclass is also stored.</p>
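<p>The operations-table described above can be sketched as a small lookup structure. The sketch below is illustrative (the class and method names are ours, not the MaRTS implementation); it shows how a declared or inherited operation resolves by walking the superclass chain.</p>

```java
import java.util.*;

// Illustrative sketch of the operations-table: for each class, the
// operations it declares plus its superclass, so that inherited
// operations can be resolved along the hierarchy.
public class OperationsTable {
    // One row: declaring class, name, formal parameter types, return type.
    public record OpEntry(String declaringClass, String name,
                          List<String> paramTypes, String returnType) {}

    private final Map<String, String> superclassOf = new HashMap<>();
    private final Map<String, List<OpEntry>> declaredOps = new HashMap<>();

    public void addClass(String cls, String superclass) {
        superclassOf.put(cls, superclass);
        declaredOps.putIfAbsent(cls, new ArrayList<>());
    }

    public void addOperation(String cls, OpEntry op) {
        declaredOps.computeIfAbsent(cls, k -> new ArrayList<>()).add(op);
    }

    // Resolve an operation for cls: the nearest declaration found while
    // walking up the superclass chain (an override shadows the ancestor's).
    public OpEntry lookup(String cls, String opName) {
        for (String c = cls; c != null; c = superclassOf.get(c)) {
            for (OpEntry op : declaredOps.getOrDefault(c, List.of())) {
                if (op.name().equals(opName)) return op;
            }
        }
        return null;
    }
}
```

<p>Comparing the tables extracted before and after an adaptation then amounts to comparing, per class, the result of this lookup for each operation name.</p>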
      </sec>
      <sec id="sec-2-3">
        <title>B. Traceability Matrix Calculation</title>
        <p>This step is performed before developers adapt the models.
The activity diagrams representing the test cases are executed
with the activity diagrams representing the program methods
in order to obtain the coverage of test cases at the model level.</p>
<p>During model execution, four types of coverage information
are collected for each test case: (1) which activity diagrams
are executed by the test case, (2) which activity diagrams are
directly called by the test case, (3) the receiver object type
for each executed activity diagram, and (4) which flows in each
activity diagram are executed. This information is used to obtain
the activity-level and flow-level traceability matrices that
relate each test case to the activity diagrams and flows that
were traversed by the test case.
1http://www-03.ibm.com/software/products/en/ratisoftarchsimutool</p>
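<p>As a rough sketch (the record and method names are our own, not the RSA toolkit's API), the two matrices can be derived from a flat log of coverage records collected during model execution:</p>

```java
import java.util.*;

// Illustrative derivation of the activity-level and flow-level
// traceability matrices from per-test coverage records.
public class Traceability {
    // One record per flow traversed during a test's model execution.
    public record Coverage(String testCase, String activityDiagram, String flowId) {}

    // Activity-level matrix: test case -> activity diagrams it traversed.
    public static Map<String, Set<String>> activityMatrix(List<Coverage> log) {
        Map<String, Set<String>> m = new HashMap<>();
        for (Coverage c : log)
            m.computeIfAbsent(c.testCase(), k -> new TreeSet<>())
             .add(c.activityDiagram());
        return m;
    }

    // Flow-level matrix: test case -> (diagram, flow) pairs it traversed.
    public static Map<String, Set<String>> flowMatrix(List<Coverage> log) {
        Map<String, Set<String>> m = new HashMap<>();
        for (Coverage c : log)
            m.computeIfAbsent(c.testCase(), k -> new TreeSet<>())
             .add(c.activityDiagram() + "#" + c.flowId());
        return m;
    }
}
```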
      </sec>
      <sec id="sec-2-4">
        <title>C. Model Change Identification</title>
        <p>MaRTS uses RSA model comparison to identify the model
changes after developers adapt the class and activity
diagrams. The class diagram changes that can be identified
are addition/deletion/modification of interfaces, classes, class
attributes, operations, and generalization and realization
relations. The activity diagram changes that can be identified are
addition/deletion/modification of nodes, transition flows, code
stored in a code snippet associated with an action node, and
the boolean expression associated with a transition flow.</p>
      </sec>
      <sec id="sec-2-5">
        <title>D. Extraction of the Operations-Table from the Adapted Class Diagram</title>
        <p>When developers adapt the class diagram, the declared and
inherited operations in each class might change. Therefore, an
operations-table is extracted from the adapted class diagram.
The information stored in the operations-tables that are
extracted from the original and adapted class diagrams are used
to determine changes to inherited or overridden operations in
each class.</p>
      </sec>
      <sec id="sec-2-7">
        <title>E. Test Case Classification</title>
<p>We propose a classification algorithm that takes the
following inputs: (1) the operations-tables extracted from the
original and adapted class diagrams, (2) the identified model
differences, (3) the flow-level and activity-level traceability
matrices, (4) the set of UML activity diagrams representing
the methods of the software system, and (5) the set of activity
diagrams representing the baseline test cases. The algorithm
classifies the test cases as obsolete, retestable, or reusable.</p>
        <p>Initially, all the test cases are assumed to be reusable. The
algorithm compares the operations-tables to identify which
operations were changed along the inheritance hierarchy. The
activity-level traceability matrix is used to determine each test
case that is affected by those changes. The following rules are
applied:
1) If an operation op is initially declared or inherited by a
class C, and is now neither declared nor inherited by C,
then, find each test case that traverses op on a receiver of
type C. If a found test case directly calls op on a receiver
of type C, then flag the test case as obsolete. Otherwise,
flag the test case as retestable.
2) If an operation op is
a) initially inherited by a class C from an ancestor class
B, and is now overridden by C, or is inherited by C
from one of its ancestors other than B, or
b) initially declared by a class C, and is now inherited
by C from one of its ancestors,
then flag any reusable test case that traverses op on a
receiver of type C as retestable.</p>
        <p>Once the algorithm completes iterating over all entries of
the operations-tables, the test cases that are still flagged as
reusable are classified based on the identified model
differences. If such a test case traverses deleted or modified
transition flows and/or nodes, then the test case is flagged as
retestable.</p>
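<p>The classification rules above can be condensed into a small decision procedure. This is a hedged sketch of the rules as stated, not the actual MaRTS algorithm; the boolean inputs stand for queries against the traceability matrices and operations-tables.</p>

```java
// Sketch of the test case classification rules described above.
public class Classification {
    public enum Verdict { REUSABLE, RETESTABLE, OBSOLETE }

    // Rule 1: operation op is no longer declared or inherited by class C.
    // traversesOpOnC / directlyCallsOpOnC: does the test traverse (or
    // directly call) op on a receiver of type C?
    public static Verdict rule1(boolean traversesOpOnC, boolean directlyCallsOpOnC) {
        if (!traversesOpOnC) return Verdict.REUSABLE;
        return directlyCallsOpOnC ? Verdict.OBSOLETE : Verdict.RETESTABLE;
    }

    // Rule 2: op's resolution along C's inheritance hierarchy changed
    // (newly overridden, inherited from a different ancestor, or moved
    // from declared to inherited).
    public static Verdict rule2(boolean traversesOpOnC) {
        return traversesOpOnC ? Verdict.RETESTABLE : Verdict.REUSABLE;
    }

    // Final pass: a still-reusable test that traverses a deleted or
    // modified node or transition flow becomes retestable.
    public static Verdict finalPass(Verdict v, boolean traversesChangedElement) {
        return (v == Verdict.REUSABLE && traversesChangedElement)
                ? Verdict.RETESTABLE : v;
    }
}
```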
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. CASE STUDY</title>
      <p>
        The goals of the evaluation were to (1) compare the
inclusiveness and precision of MaRTS with those of two
code-based RTS approaches that support changes to the
inheritance hierarchy, and (2) compare the fault detection ability
of the retestable test set with that of the original test set.
Inclusiveness measures the extent to which a regression test
selection technique selects modification-traversing test cases
for regression testing, and precision measures the extent to
which a regression test selection approach excludes test cases
that are non-modification-traversing [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
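<p>Concretely, the two metrics from [10] reduce to simple ratios; the helper below is our own illustration of those definitions, not part of any of the evaluated tools.</p>

```java
// Inclusiveness and precision as defined by Rothermel and Harrold [10].
public class RtsMetrics {
    // Fraction of modification-traversing tests that were selected.
    // A safe technique has inclusiveness 1.0.
    public static double inclusiveness(int selectedModTraversing, int totalModTraversing) {
        return totalModTraversing == 0 ? 1.0
                : (double) selectedModTraversing / totalModTraversing;
    }

    // Fraction of non-modification-traversing tests that were excluded.
    public static double precision(int omittedNonModTraversing, int totalNonModTraversing) {
        return totalNonModTraversing == 0 ? 1.0
                : (double) omittedNonModTraversing / totalNonModTraversing;
    }
}
```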
      <p>
        We compared MaRTS with DejaVu [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and ChEOPSJ [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
DejaVu detects fine-grained changes at the statement level, and
ChEOPSJ detects fine-grained changes to method invocations.
Both tools support the identification of changes to the
inheritance hierarchy, and support RTS for Java software systems.
We did not compare MaRTS with the existing model-based
RTS approaches because they lack tool support (or tools are
unavailable).
      </p>
      <sec id="sec-3-1">
        <title>A. Subject Programs and their Adaptations</title>
        <p>We used four subject programs: (1) graph package of the
Java Universal Network/Graph Framework (JUNG)2, (2)
Siena3, (3) XML-security4, and (4) a chess program, which is a
classroom project that only supports the functionality to create
a chessboard and move chess pieces. These programs were
implemented using Java 6 and 7. None of them uses generic types
or multithreaded programming. Table I summarizes the data
for the original versions of each subject.</p>
        <p>
          We used EvoSuite [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] to generate JUnit test cases for
each of these versions. For JUNG, 188 test cases that achieve
81% statement coverage were generated. For Siena, 107 test
cases that achieve 89% statement coverage were generated.
For chess, 130 test cases that achieve 96% statement coverage
were generated. The XML-security package has a JUnit test
suite that comes with it, which achieves 31% statement coverage.
The generated test cases for XML-security did not improve
the coverage of the existing test suite. Therefore, we excluded
the generated test cases for XML-security from this study,
and only considered the existing test cases that come with the
application.
2http://jung.sourceforge.net/download.html
3http://sir.unl.edu/portal/bios/siena.php
4http://sir.unl.edu/portal/bios/xml-security.php
        </p>
        <p>We extracted class and activity diagrams from the original
version of each subject program and its test cases. Then, we
adapted the class and activity diagrams from one version to
the following version in a systematic way. First, we identified
the code-level differences between the two versions. Second,
we manually applied these differences at the model level. The
changes at the model level involved additions and deletions of
classes, interfaces, operations, generalization and realization
relations, and modifications to method implementations by
modifying the activity diagrams representing these methods.
Table II summarizes the changes performed on models.</p>
        <p>After the model-level adaptation process was completed, we
applied MaRTS to classify test cases at the model level, and
applied DejaVu and ChEOPSJ at the code level.</p>
      </sec>
      <sec id="sec-3-2">
        <title>B. Inclusiveness and Precision Results</title>
        <p>Table III shows the results of running the three RTS
approaches. For example, MaRTS and DejaVu classified all
the 188 test cases of JUNG as retestable, and ChEOPSJ
classified 178 out of the 188 test cases as retestable. For
the XML-security subject, MaRTS classified 10 out of 94
test cases as obsolete, and classified the remaining 84 test
cases as retestable. We found that the 10 obsolete test cases
contain calls to deleted operations. DejaVu and ChEOPSJ do
not address the identification of obsolete test cases. DejaVu
classified all the 94 test cases as retestable. Therefore, we
excluded the 10 obsolete test cases from the calculations of
the inclusiveness, precision, false positives, and false negatives
for the three RTS tools.</p>
        <p>We did not get RTS results for ChEOPSJ when we ran it
on the XML-security subject because of a bug in ChEOPSJ.
It did not detect code changes that it was supposed to detect,
and did not produce results. Table III and Table IV do not
show results for ChEOPSJ with respect to the XML-security
subject.</p>
        <p>Table IV shows the number of false positives and false
negatives for each of the studied RTS approaches. DejaVu
is a safe tool and classifies all modification-traversing test
cases as retestable, and therefore, its inclusiveness was 100%
for all the subject programs. The same set of test cases that
was classified as retestable by DejaVu was also classified as
retestable by MaRTS for all the subject programs (excluding
the 10 obsolete test cases for XML-security). Therefore, the
inclusiveness of MaRTS was also 100%. ChEOPSJ missed
some modification-traversing test cases, and its inclusiveness
was 94% for JUNG, 96% for Chess, 92% for Siena version
1.12, and 88% for version 1.14. The reason is that ChEOPSJ
only records changes to method invocations, but not to other
types of statements in method bodies.</p>
        <p>The precision was 100% for MaRTS and DejaVu because
neither classified any non-modification-traversing test case as
retestable for each subject program. The precision of ChEOPSJ
was 100% for JUNG and Chess, 62% for Siena version 1.12,
and 60% for version 1.14. The reason is that ChEOPSJ is based
on static analysis of dependencies between modified code
and test cases, which leads to classifying non-modification-traversing
test cases as retestable.</p>
      </sec>
      <sec id="sec-3-3">
        <title>C. Fault Detection Ability Results</title>
        <p>The results for MaRTS showed a reduction in the number of
selected test cases only for the Siena subject for the adaptation
from version 1.8 to 1.12, and from 1.8 to 1.14. We used
mutation testing to evaluate the fault detection ability of these
reduced test sets. We excluded the XML-security subject from
the fault detection ability evaluation because all of its test cases
were selected by MaRTS (excluding the 10 test cases that were
classified as obsolete by MaRTS).</p>
        <p>There are no tools (to the best of our knowledge) that
support systematic generation of mutations at the model level.
Therefore, we used a code-level mutation testing tool. In
particular, we used PIT5 to apply first-order method-level
mutation operators to the code-level versions 1.12 and 1.14.
The applied mutation operators6 were (1) Conditionals
Boundary Mutator, (2) Increments Mutator, (3) Invert Negatives
Mutator, (4) Math Mutator, (5) Negate Conditionals Mutator,
and (6) Void Method Calls Mutator. We configured PIT to
only mutate the adapted methods. We ran PIT with both the
original and retestable test sets on both versions.
5http://pitest.org
6http://pitest.org/quickstart/mutators/</p>
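<p>PIT is configured at the code level. A pom.xml fragment along these lines restricts mutation to the six operators named above; the plugin version and the targetClasses value are illustrative assumptions, and limiting mutation to only the adapted methods would additionally require PIT's class and method filtering options.</p>

```xml
<!-- Illustrative pitest-maven configuration (version and targets assumed) -->
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.15.0</version>
  <configuration>
    <targetClasses>
      <param>siena.*</param> <!-- hypothetical: only the adapted classes -->
    </targetClasses>
    <mutators>
      <mutator>CONDITIONALS_BOUNDARY</mutator>
      <mutator>INCREMENTS</mutator>
      <mutator>INVERT_NEGS</mutator>
      <mutator>MATH</mutator>
      <mutator>NEGATE_CONDITIONALS</mutator>
      <mutator>VOID_METHOD_CALLS</mutator>
    </mutators>
  </configuration>
</plugin>
```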
<p>We identify several threats to the validity of the results of our
case study.</p>
        <p>External validity. It is difficult to generalize from a study
of only four subject programs. However, we selected program
versions that incorporate various types of modifications, such
as changes to classes, methods, inheritance hierarchy, and class
attributes.</p>
<p>Internal validity. Factors that might affect
the outcome of the analyses include possible errors in our
algorithm implementation and the fact that the test cases were
generated using only one test case generation tool. To control the first
factor, we tested the implementation of MaRTS on different
change scenarios. We also compared the results achieved by
MaRTS for the case studies with those of DejaVu.</p>
        <p>We used EvoSuite to generate JUnit test cases for the subject
programs. The results could change if other test generation
tools were used or test sets with different coverage numbers
were used. Additionally, the test cases generated for the Siena
subject achieved low mutation scores. The fault detection
ability results could change if other test sets that achieve
different mutation scores were used. We plan to evaluate the
proposed approach on additional test suites generated by other
test case generation tools.</p>
        <p>Another threat is that the same person selected the subject
programs, generated the test cases, reverse engineered the
models, performed the model-level adaptations, and executed
the RTS tools. There is a potential for getting different results
if different people worked on these steps. The test generation
process and RTS approaches were automated, and thus, having
other people perform those steps would not make a difference
if they used the same tool configurations. The adaptations are
manual, which can lead to different modifications. However,
since we started from a particular version of code and finished
at a well-defined version of code, the differences are not likely
to be significant.</p>
        <p>Construct validity. We used inclusiveness and precision
to evaluate MaRTS. However, there are other metrics that can
be used to evaluate an RTS approach, such as its efficiency in
terms of reducing regression testing time. We plan to evaluate
the efficiency of MaRTS in the future.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. RELATED WORK</title>
      <p>
        The RTS problem has been studied for over three
decades [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Most of the existing approaches are
code-based [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], and little work exists in
the literature on model-based RTS. We summarize the existing
model-based RTS approaches and compare them with MaRTS.
      </p>
      <p>
        Chen et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] use UML activity diagrams to perform
specification-based black-box RTS. In their approach, an
activity diagram represents the requirements of a system. In
contrast, MaRTS uses activity diagrams to represent fine-grained
behaviors of a software system. Korel et al. [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] use control
and data dependencies in an extended finite state machine to
identify the impact of model changes and perform RTS. This
approach does not support changes to the inheritance hierarchy
because it does not use UML class diagrams.
      </p>
      <p>
        Farooq et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] use UML class and state machine models
for RTS. This approach does not support the identification of
(1) the addition and deletion of the generalization relations,
and (2) the overridden and inherited operations along the
inheritance hierarchy.
      </p>
      <p>
        Briand et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] present an RTS approach based on UML
use case models, class models, and sequence models. Zech
et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] present a generic model-based RTS platform, which
is based on the model versioning tool, MoVE. The approach
consists of three phases that are controlled by OCL queries,
namely, change identification, impact analysis, and test case
selection. The approaches of Briand et al. and Zech et al. can
identify the addition and deletion of generalization relations
between classes. However, they do not identify the impact of
such changes to the inherited and overridden operations along
the inheritance hierarchy, which can result in missing some
retestable test cases.
      </p>
<p>In contrast to the above-mentioned model-based RTS
approaches, MaRTS can identify changes along the inheritance
hierarchy and classify test cases accordingly.</p>
    </sec>
    <sec id="sec-5">
      <title>V. CONCLUSIONS AND FUTURE WORK</title>
      <p>In this work, we presented a model-based RTS approach that
supports fine-grained changes in method implementation and
changes to the inheritance hierarchy, and takes into account
the impact of such changes on the selection of test cases.
MaRTS was evaluated on four subjects and compared with two
code-based RTS approaches, DejaVu and ChEOPSJ, which
consider changes to the inheritance hierarchy and support
Java software. MaRTS outperformed ChEOPSJ and achieved
comparable results to DejaVu in terms of inclusiveness and
precision. MaRTS was able to identify a certain type of
obsolete test cases. DejaVu and ChEOPSJ do not address the
identification of obsolete test cases. The retestable test sets
obtained by MaRTS achieved the same fault detection ability
that was achieved by the full test sets.</p>
      <p>We will evaluate the inclusiveness and precision of MaRTS
on additional subject programs, and evaluate its efficiency in
terms of reducing regression testing time.</p>
      <p>This material is based upon work supported by the National
Science Foundation under Grant No. CNS 1305381.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] G. Rothermel and M. J. Harrold, “A Safe, Efficient Regression Test Selection Technique,” ACM Transactions on Software Engineering and Methodology, vol. 6, no. 2, pp. 173-210, Apr. 1997.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. J. Harrold, J. A. Jones, T. Li, D. Liang, A. Orso, M. Pennings, S. Sinha, S. A. Spoon, and A. Gujarathi, “Regression Test Selection for Java Software,” in Proceedings of the 16th Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'01), Tampa, FL, USA: ACM, Oct. 2001, pp. 312-326.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. J. Harrold, “Testing Evolving Software,” Journal of Systems and Software, vol. 47, no. 2-3, pp. 173-181, Jul. 1999.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] L. C. Briand, Y. Labiche, and S. He, “Automating Regression Test Selection Based on UML Designs,” Information and Software Technology, vol. 51, no. 1, pp. 16-30, Jan. 2009.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yoo</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          , “
          <article-title>Regression Testing Minimization, Selection and Prioritization: A Survey</article-title>
          ,”
          <source>Software Testing, Verification and Reliability</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>120</lpage>
          , Mar.
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Refai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          , and W. Cazzola, “
          <article-title>Model-based Regression Test Selection for Validating Runtime Adaptation of Software Systems</article-title>
          ,”
          <source>in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16)</source>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Briand</surname>
          </string-name>
          and S. Khurshid, Eds. Chicago, IL, USA: IEEE, Apr.
          <year>2016</year>
          , pp.
          <fpage>288</fpage>
          -
          <lpage>298</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Q.-u.-a.</given-names>
            <surname>Farooq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Z. Z.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. I.</given-names>
            <surname>Malik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Riebisch</surname>
          </string-name>
          , “
          <article-title>A Model-Based Regression Testing Approach for Evolving Software Systems with Flexible Tool Support</article-title>
          ,”
          <source>in Proceedings of the 17th IEEE International Conference and Workshops on Engineering of Computer-Based Systems (ECBS'10)</source>
          . Oxford, UK: IEEE, Mar.
          <year>2010</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zech</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kalb</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Breu</surname>
          </string-name>
          , “
          <article-title>A Generic Platform for Model-Based Regression Testing</article-title>
          ,”
          <source>in Proceedings of the 5th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation (ISoLA'12)</source>
          , ser. Lecture Notes in Computer Science 7609,
          <string-name>
            <given-names>T.</given-names>
            <surname>Margaria</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Steffen</surname>
          </string-name>
          , Eds. Heraklion, Crete: Springer, Oct.
          <year>2012</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H. K. N.</given-names>
            <surname>Leung</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. J.</given-names>
            <surname>White</surname>
          </string-name>
          , “
          <article-title>Insights into Regression Testing</article-title>
          ,”
          <source>in Proceedings of the Conference on Software Maintenance</source>
          . Miami, FL, USA: IEEE, Oct.
          <year>1989</year>
          , pp.
          <fpage>60</fpage>
          -
          <lpage>69</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Rothermel</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Harrold</surname>
          </string-name>
          , “
          <article-title>Analyzing Regression Test Selection Techniques</article-title>
          ,”
          <source>IEEE Transactions on Software Engineering</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>529</fpage>
          -
          <lpage>551</lpage>
          , Aug.
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Rossini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Refai</surname>
          </string-name>
          , and R. B. France, “
          <article-title>Fine-Grained Software Evolution using UML Activity and Class Models</article-title>
          ,”
          <source>in Proceedings of the 16th International Conference on Model Driven Engineering Languages and Systems (MoDELS'13)</source>
          , ser. Lecture Notes in Computer Science 8107,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moreira</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Schätz</surname>
          </string-name>
          , Eds. Miami, FL, USA: Springer, Sep.
          <year>2013</year>
          , pp.
          <fpage>271</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Rossini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Pradeep</given-names>
            <surname>Mandalaparty</surname>
          </string-name>
          , and R. B. France, “
          <article-title>Fine-Grained Semi-Automated Runtime Evolution</article-title>
          ,”
          <source>in MoDELS@Run-Time</source>
          , ser. Lecture Notes in Computer Science 8378,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bencomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chang</surname>
          </string-name>
          , R. B. France, and U. Aßmann, Eds. Springer, Aug.
          <year>2014</year>
          , pp.
          <fpage>237</fpage>
          -
          <lpage>258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghoneim</surname>
          </string-name>
          , and G. Saake, “
          <article-title>Co-Evolving Application Code and Design Models by Exploiting Meta-Data</article-title>
          ,”
          <source>in Proceedings of the 22nd Annual ACM Symposium on Applied Computing (SAC'07)</source>
          . Seoul, South Korea: ACM Press, Mar.
          <year>2007</year>
          , pp.
          <fpage>1275</fpage>
          -
          <lpage>1279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pukall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Grebhahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schröter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kästner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Götz</surname>
          </string-name>
          , “
          <article-title>JavAdaptor: Unrestricted Dynamic Software Updates for Java</article-title>
          ,”
          <source>in Proceedings of the 33rd International Conference on Software Engineering (ICSE'11)</source>
          . Waikiki, Honolulu, Hawaii: IEEE, May
          <year>2011</year>
          , pp.
          <fpage>989</fpage>
          -
          <lpage>991</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pukall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kästner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Götz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Grebhahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schröter</surname>
          </string-name>
          , and G. Saake, “
          <article-title>JavAdaptor - Flexible Runtime Updates of Java Applications</article-title>
          ,”
          <source>Software: Practice and Experience</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>153</fpage>
          -
          <lpage>185</lpage>
          , Feb.
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Refai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          , and R. France, “
          <article-title>Using Models to Validate Unanticipated, Fine-Grained Adaptations at Runtime</article-title>
          ,”
          <source>in Proceedings of the 17th IEEE International Symposium on High Assurance Systems Engineering (HASE'16)</source>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Waeselynck</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Babiceanu</surname>
          </string-name>
          , Eds. Orlando, FL, USA: IEEE, Jan.
          <year>2016</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Q. D.</given-names>
            <surname>Soetens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Demeyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaidman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Pérez</surname>
          </string-name>
          , “
          <article-title>Change-Based Test Selection: An Empirical Evaluation</article-title>
          ,”
          <source>Empirical Software Engineering</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          , Nov.
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Arcuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Campos</surname>
          </string-name>
          , and G. Fraser, “
          <article-title>Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins</article-title>
          ,”
          <source>in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16)</source>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Briand</surname>
          </string-name>
          and S. Khurshid, Eds. Chicago, IL, USA: IEEE, Apr.
          <year>2016</year>
          , pp.
          <fpage>401</fpage>
          -
          <lpage>408</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>E.</given-names>
            <surname>Engström</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Skoglund</surname>
          </string-name>
          , “
          <article-title>A Systematic Review on Regression Test Selection Techniques</article-title>
          ,”
          <source>Information and Software Technology</source>
          , vol.
          <volume>52</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>30</lpage>
          , Jan.
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L. J.</given-names>
            <surname>White</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Abdullah</surname>
          </string-name>
          , “
          <article-title>A Firewall Approach for Regression Testing of Object-Oriented Software</article-title>
          ,”
          <source>in Proceedings of the 10th International Software Quality Week (QW'97)</source>
          , San Francisco, CA, USA, May
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Kung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hsia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Toyoshima</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          , “
          <article-title>On Regression Testing of Object-Oriented Programs</article-title>
          ,”
          <source>Journal of Systems and Software</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>40</lpage>
          , Jan.
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Skoglund</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          , “
          <article-title>Improving Class Firewall Regression Test Selection by Removing the Class Firewall</article-title>
          ,”
          <source>International Journal of Software Engineering and Knowledge Engineering</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>359</fpage>
          -
          <lpage>378</lpage>
          , Jun.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Probert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Sims</surname>
          </string-name>
          , “
          <article-title>Specification-Based Regression Test Selection with Risk Analysis</article-title>
          ,”
          <source>in Proceedings of the Conference of the Centre for Advanced Studies on Collaborative Research (CASCON'02)</source>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Stewart</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , Eds. IBM Press, Sep.
          <year>2002</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>B.</given-names>
            <surname>Korel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Tahat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Vaysburg</surname>
          </string-name>
          , “
          <article-title>Model Based Regression Test Reduction Using Dependence Analysis</article-title>
          ,”
          <source>in Proceedings of the International Conference on Software Maintenance (ICSM'02)</source>
          , G. Antoniol and
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Baxter</surname>
          </string-name>
          , Eds. Montréal, Quebec, Canada: IEEE, Oct.
          <year>2002</year>
          , pp.
          <fpage>214</fpage>
          -
          <lpage>223</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>