<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Optimization of Test Parallelization with Constraints</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Masoumeh Parsa</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adnan Ashraf</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dragos Truscan</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Porres</string-name>
        </contrib>
      </contrib-group>
      <fpage>164</fpage>
      <lpage>171</lpage>
      <abstract>
        <p>Traditionally, test cases are executed sequentially due to dependencies in test data. For large system-level test suites, where a test session can take hours or even days, sequential execution no longer satisfies the industrial demands for short lead times and fast feedback cycles. Parallel test execution has emerged as an appealing option to cut down the test execution time. However, running tests in parallel is not a trivial task due to dependencies and constraints between test cases. Therefore, we propose an approach to parallelize test execution based on the available resources, the constraints between tests, and the duration of tests, which creates parallel test execution schedules that can be run in multiple testing environments simultaneously. We formulate and solve the problem with Ant Colony System in order to find a near-optimal solution.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The number of test cases required to ensure the quality of a software system grows
hand-in-hand with its complexity, and consequently, the total test execution time increases
proportionally. For large software systems, test execution time becomes increasingly critical
in automated regression testing, where a large suite of tests is executed frequently on
continuous integration servers.</p>
      <p>Different approaches like test selection, test prioritization or test case reduction are
typically used to speed up test execution, especially in the context of regression testing [YH12].
However, these improvements can be limited for large test suites.</p>
      <p>Test suites are trivially parallelizable if tests are independent, that is, if one test does not
rely on the system state established by a previous test and it is free of interference from
other tests. An example of interference is when two or more tests require exclusive
access to a shared resource such as a database. However, one cannot always assume that
these conditions hold and tests that pass when executed in a certain sequence may fail
under trivial parallelization. A third, perhaps less obvious but even more important issue
is what we refer to as state incompatibility. In this case, one test may leave the system
in a state from which another test cannot proceed. One concrete example of this is a test
that deletes data from a database that is expected by other tests. While such cases can
often be handled by resetting the system under test to a known initial state, resets can be
time-consuming and an efficient test execution approach should strive to minimize or even
remove the need for system resets.</p>
      <p>Although test designers may strive to create independent test cases, recent studies show
that in practice tests are often not completely independent [ZJW+14, BMDK15], while
other researchers consider test dependencies useful and exploit them [HM13]. As a
consequence, there is a need to plan and schedule the execution of complex test suites in order
to avoid undesired interactions between tests and time-consuming system resets. Although
it is possible to create a test schedule manually, this is not viable in practice if the number
of tests is large. To address this issue, we propose an approach to build test schedules
automatically by analyzing known dependencies among tests and their execution times. The
dependencies between the tests can be derived automatically [ZJW+14] or defined
manually [dep]. The scheduler is run before each integration testing session while preparing the system
under test. We aim to find the best possible groups, and order of tests in each group, to be
distributed between a number of agents, which may or may not share the same resources,
with the objective of reducing the total time required for all the tests to be
executed. The other objective is to decrease the number of failed tests caused by
the dependencies between the tests. We define the relations between the tests as a set of
constraints in scheduling problems. The first objective of the work is to reduce the tests
failing due to their state incompatibilities, interferences or dependencies while executing
in parallel. The second objective is to minimize the test execution time of the entire test
suite, by searching for the best possible ordering of tests between different agents.</p>
      <p>Copyright © 2016 for the individual papers by the papers' authors. Copying permitted for private and
academic purposes. This volume is published and copyrighted by its editors.</p>
    </sec>
    <sec id="sec-2">
      <title>Background and Related Work</title>
      <p>Recent work exploits test case dependencies as a means for prioritization of test suites
[HM13], online reduction of test suites [AMPW15], or for protocol conformance testing
in the context of distributed systems [MGM15]. Other works investigate how test
dependencies can be detected in order to improve the independence of test cases [GSHM15] or
to shorten the test execution time by minimizing the number of database resets [HKK05].
However, none of these works investigated approaches for executing tests in parallel.
Haftmann et al. propose an approach for running test cases in parallel [HKL05]. The
approach involves partitioning the test sequences between test executors and ordering the
execution sequence on each executor, as an extension of their work in [HKK05], in which
three scheduling strategies were proposed for resolving the test incompatibilities on
database applications systems with minimum number of resets.</p>
      <p>In the previously mentioned papers, the constraints taken into consideration are the
incompatibility and interference constraints between the tests. However, we also cover the
dependency constraint. Furthermore, the goal of reordering the tests in previous works is
to reduce the number of system resets in order to reduce the test execution time, whereas we aim at
generating a near-optimal schedule which satisfies the constraints and reduces the total test
execution time.</p>
      <p>Since test scheduling optimization is a complex combinatorial optimization problem, it is
computationally expensive to find an exact solution for a very large number of test cases.
Therefore, we formulate it as a search problem and apply a highly adaptive online
optimization [HLS+13] approach called Ant Colony Optimization (ACO) [DG97] to find a
near-optimal solution in polynomial time. There are a number of ant algorithms, such as Ant
System (AS), Max-Min AS (MMAS), and Ant Colony System (ACS) [DDCG99, DG97].
ACS [DDCG99] was introduced to improve the performance of AS and is currently one
of the best performing ant algorithms. Therefore, in this paper, we apply ACS to the test
scheduling optimization problem.</p>
    </sec>
    <sec id="sec-3">
      <title>The Test Scheduling Problem</title>
      <p>In this section, we present the problem of test scheduling by defining the dependencies,
interferences and state incompatibilities between the tests. We assume that we have at our
disposal one or more test environments (TE) that can execute test cases. Each TE has a
number of agents (AG). Each AG can run one test case at a time, in parallel with the other
AGs. The AGs in a given TE share a common system under test and its state (memory, file
system, database), while each TE is isolated from the other TEs; TEs cannot collaborate
or interfere with each other.</p>
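      <p>The execution model above can be sketched as plain data structures; the class and helper names are ours, not from the paper:</p>

```python
from dataclasses import dataclass, field

# Sketch of the execution model: each test environment (TE) owns a set of
# agents (AG); agents within one TE share state, while TEs are isolated.
@dataclass
class Agent:
    name: str
    schedule: list = field(default_factory=list)  # ordered test cases

@dataclass
class TestEnvironment:
    name: str
    agents: list

te1 = TestEnvironment("TE1", [Agent("AG1"), Agent("AG2")])
te2 = TestEnvironment("TE2", [Agent("AG3"), Agent("AG4"), Agent("AG5")])

# Agents in the same TE can interfere; agents in different TEs cannot.
def same_environment(te_a, te_b):
    return te_a.name == te_b.name
```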
      <p>In Figure 2 we present an abstraction of the problem with two TEs. The first
test environment TE1 includes two AGs, while the second test environment TE2
contains three AGs. There are 13 test cases to be executed, and the sequence of
tests scheduled for each AG is represented in front of the agent, respecting
the order. In this example, test3 depends on test1 and test2, while test9 depends on
test1 and test8; test8 has interference with test1 and test2. Furthermore, test10 has
a state incompatibility with test6 and interference with test9. The maximum execution time
is the time when all tests have completed on the AGs. The example
constraints between the tests are represented in Figure 1. The notations used in Figure
1 are explained later in this section.</p>
      <p>The schedules of two different tests running on the same agent cannot overlap.
Furthermore, in the case of test interference, we cannot schedule two interfering tests
simultaneously in the same TE. We represent the test interference as a relation Ginf, where
Ginf = (TC, Einf), TC is the set of tests, and Einf ⊆ TC × TC. It is assumed
that tests share resources if they are executed on the same test environment.
The example constraints of Figure 1 are:
(t1, t3) ∈ Edep; (t2, t3) ∈ Edep;
(t1, t9) ∈ Edep; (t8, t9) ∈ Edep;
(t1, t9) ∈ Einc; (t10, t9) ∈ Einf;
(t1, t8) ∈ Einf; (t2, t8) ∈ Einf.</p>
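      <p>The dependency, interference, and incompatibility relations can be encoded directly as sets of pairs. The following sketch (helper names are ours) encodes the example constraints of Figure 1:</p>

```python
# The example constraints from Figure 1, encoded as relations over test ids.
E_dep = {(1, 3), (2, 3), (1, 9), (8, 9)}   # (a, b): a must run before b
E_inf = {(10, 9), (1, 8), (2, 8)}          # interference: not simultaneous in one TE
E_inc = {(1, 9)}                           # state incompatibility

def depends_on(test, e_dep):
    """All tests that must be executed before `test`."""
    return {a for (a, b) in e_dep if b == test}

def interferes(a, b, e_inf):
    # Interference is symmetric: check both orientations.
    return (a, b) in e_inf or (b, a) in e_inf
```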
      <p>[Fig. 2: Example schedule with two test environments, TE1 (agents AG1–AG2) and TE2 (agents AG3–AG5); tests test1–test13 are placed on the agents along a time axis from 0 to the maximum test execution time, with annotations τ(test2) and δ(test3).]</p>
      <p>In a state dependency, some tests are preconditions for other tests, which requires the former
to be executed in order for the latter to succeed. This dependency can be represented as a relation
(t1, t2) ∈ Edep, which implies that test t1 should be executed before test t2.
A third, perhaps less obvious but relevant constraint is what we refer to as state
incompatibility, which occurs when one test leaves a TE in a state from which another test cannot
proceed. We can represent the incompatibility relation as Ginc, where Ginc = (TC, Einc)
and Einc ⊆ TC × TC.</p>
      <p>To optimise the test execution time, we need to minimise the maximum execution time
of tests over the agents. Given the defined constraints, the goal is to minimise the finishing
time of each test, which yields the minimum ending time of all the tests on each agent.
This value is referred to as TET, the overall test execution time. By minimising TET, we
minimise our objective, which is the maximum execution time of tests.</p>
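      <p>The TET of a schedule is the maximum, over all agents, of the summed durations of the tests placed on that agent. A minimal sketch, with invented durations for illustration:</p>

```python
# TET of a schedule: each agent runs its tests sequentially, agents run in
# parallel, so the overall time is the maximum per-agent sum of durations.
def tet(schedule, duration):
    """schedule: {agent: [test, ...]}, duration: {test: seconds}."""
    return max(sum(duration[t] for t in tests) for tests in schedule.values())

duration = {"t1": 5, "t2": 3, "t3": 4, "t4": 6}
schedule = {"AG1": ["t1", "t2"], "AG2": ["t3", "t4"]}
print(tet(schedule, duration))  # AG1: 8, AG2: 10 -> prints 10
```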
      <p>ACS-Based Test Scheduling Optimization Algorithm</p>
      <p>In this section, we present our ACS-based Test Scheduling Optimization algorithm
(ACS-TSO). ACO is a multi-agent approach to difficult combinatorial optimization problems,
such as the traveling salesman problem and network routing [DDCG99]. It is inspired by the
foraging behavior of real ant colonies. While moving from their nest to the food source and
back, ants deposit a chemical substance on their path called pheromone. Other ants can
smell pheromone and they tend to prefer paths with a higher pheromone concentration.
Thus, ants behave as agents who use a simple form of indirect communication called
stigmergy to find better paths between their nest and the food source. It has been shown
experimentally that this simple pheromone trail following behavior of ants can give rise to
the emergence of the shortest paths [DDCG99]. It is important to note here that although
each ant is capable of finding a complete solution, high quality solutions emerge only from
the global cooperation among the members of the colony who concurrently build different
solutions.</p>
      <p>In the context of a test case schedule, each test case is allocated to an agent. Therefore,
ACS-TSO makes a set of tuples T, where each tuple t ∈ T consists of two elements: a test case
tc and an agent ag:</p>
      <p>t := (tc, ag) (1)</p>
      <p>The output of the ACS-TSO algorithm is a test case schedule plan Θ which, when
enforced, results in a reduced overall test execution time. Thus, the objective function for
the proposed ACS-TSO algorithm is</p>
      <p>minimize f(Θ) := max_{ag ∈ AG} {TET_ag} (2)</p>
      <p>where Θ is the test case schedule plan and TET_ag is the test execution time on agent ag. Since
the main objective of the test case schedule is to minimize the overall test execution time, the
objective function is primarily defined in terms of the test execution time TET_ag on each ag.
Each of the nA ants uses a state transition rule to choose the next tuple to traverse.
According to the following rule, an ant k chooses a tuple s to traverse next by applying</p>
      <p>s := arg max_{u ∈ Tk} {[τ_u] · [η_u]^β} (3)</p>
      <p>where τ denotes the amount of pheromone and η represents the heuristic value associated
with a particular tuple. β is a parameter that determines the relative importance of the heuristic
value with respect to the pheromone value. The expression arg max returns the tuple for
which [τ] · [η]^β attains its maximum value. Tk ⊆ T is the set of tuples that remain to be
traversed by ant k. The heuristic value η_s of a tuple s is defined as</p>
      <p>η_s := (TET_ag)^(-1) · ψ_tc, if |D_tc| = 0; (|D_tc|/|D_all|) · ψ_tc, if TET_ag = 0; (|D_tc|/|D_all|) · (TET_ag)^(-1) · ψ_tc, otherwise (4)</p>
      <p>ψ_tc := TET_tc / Σ_{tc ∈ TC} TET_tc (5)</p>
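      <p>The state transition rule (3) together with the heuristic of (4) and (5) can be sketched as follows. The function and variable names are ours (tau for pheromone, eta for the heuristic, beta for its weight), and the handling of the overlapping cases of (4) is our own assumption:</p>

```python
# Sketch of the state transition rule (3) with the heuristic of (4)-(5).
# Assumption: when both cases of (4) apply (no dependencies AND an idle
# agent), we fall back to the execution-time share alone.
def psi(tc, duration):
    # Execution-time share of test case tc among all tests, cf. eq. (5).
    return duration[tc] / sum(duration.values())

def eta(n_deps, n_deps_all, agent_tet, tc, duration):
    # Heuristic value of a tuple (tc, ag), cf. eq. (4).
    if n_deps == 0:
        return psi(tc, duration) / agent_tet if agent_tet > 0 else psi(tc, duration)
    if agent_tet == 0:
        return (n_deps / n_deps_all) * psi(tc, duration)
    return (n_deps / n_deps_all) * psi(tc, duration) / agent_tet

def choose_tuple(candidates, tau, heuristic, beta):
    # s := arg max over u of tau[u] * heuristic[u] ** beta  (exploitation)
    return max(candidates, key=lambda u: tau[u] * heuristic[u] ** beta)

duration = {"t1": 4.0, "t2": 6.0}
tuples = [("t1", "AG1"), ("t2", "AG1")]
tau = {u: 1.0 for u in tuples}
h = {("t1", "AG1"): eta(1, 2, 2.0, "t1", duration),  # one dependency
     ("t2", "AG1"): eta(0, 2, 2.0, "t2", duration)}  # independent test
best = choose_tuple(tuples, tau, h, beta=2.0)
```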
      <p>where D_tc is the set of dependencies of test case tc in tuple s, |D_all| is the total number of
dependencies over all test cases, and TET_ag is the current
test execution time of ag in tuple s. The heuristic value is based on the product of the
number of dependencies of the test case and the multiplicative inverse of the current test
execution time of the agent in tuple s. Therefore, the tuples in which the test case has
a higher number of dependencies and the agent has a shorter current test execution time
receive the highest heuristic value. Moreover, in the calculation of the heuristic value, we
consider the execution time of each test in proportion to the execution time of all the tests,
so that a test with a higher execution time gets a higher heuristic value. The heuristic
value favors dependent test cases over independent test cases. Therefore, a test case with a
higher number of dependencies receives a higher heuristic value than a test case with fewer
dependencies. Similarly, the main reason for favoring agents with a shorter current test
execution time is to minimize the overall test execution time.</p>
      <p>Pheromone Distribution</p>
      <p>The stochastic state transition rule in (3) prefers tuples with a higher pheromone
concentration, which leads to a reduced overall test execution time. Rule (3) is called exploitation [DG97]:
it chooses the best tuple, the one that attains the maximum value of [τ] · [η]^β. In addition to the
stochastic state transition rule, ACS also uses a global and a local pheromone trail evaporation
rule. The global pheromone trail evaporation rule is applied towards the end of an iteration,
after all ants complete their test suite schedule plans. It is defined as</p>
      <p>τ_s := (1 - ρ) · τ_s + ρ · Δτ_s^+, if s ∉ Violations; τ_s := (1 - 3ρ) · τ_s + ρ · Δτ_s^+, if s ∈ Violations (6)</p>
      <p>where Δτ_s^+ is the additional pheromone amount that traditionally is given only to those
tuples that belong to the global best test schedule plan Θ+ in order to reward them. However,
we only add the additional pheromone to the subset of tuples that contribute to the global
best test schedule plan and do not violate the constraints. Moreover, we define another
global updating rule for the violated tuples, which applies a higher pheromone decay compared
to the tuples that do not appear in the solution. The additional pheromone is defined as</p>
      <p>Δτ_s^+ := (f(Θ+))^(-1), if s ∈ Θ+ ∧ s ∉ Violations; 0, otherwise (7)</p>
      <p>where ρ ∈ (0, 1] is the pheromone decay parameter and Θ+ is the global best test schedule
plan from the beginning of the trial.</p>
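      <p>A sketch of the global update, under our reading of equations (6) and (7): tuples of the global best plan are rewarded, everything decays, and violating tuples decay faster (the (1 - 3ρ) factor and the inverse-cost reward are our interpretation of the damaged original):</p>

```python
# Sketch of the global pheromone update, cf. eqs. (6)-(7): reward tuples of
# the global best plan, decay all tuples, decay violating tuples faster.
def global_update(tau, best_plan, best_cost, violations, rho):
    for s in tau:
        # eq. (7): extra pheromone only for non-violating tuples of the best plan
        delta = (1.0 / best_cost) if (s in best_plan and s not in violations) else 0.0
        if s in violations:
            tau[s] = (1 - 3 * rho) * tau[s] + rho * delta   # stronger decay (assumed)
        else:
            tau[s] = (1 - rho) * tau[s] + rho * delta
    return tau

tau = {"a": 1.0, "b": 1.0, "c": 1.0}
tau = global_update(tau, best_plan={"a"}, best_cost=10.0, violations={"c"}, rho=0.1)
```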
      <p>The local pheromone trail update rule is applied on a tuple when an ant traverses the tuple
while making its test schedule plan. It is defined as</p>
      <p>τ_s := (1 - φ) · τ_s + φ · τ_0 (8)</p>
      <p>where φ ∈ (0, 1] is similar to ρ and τ_0 is the initial pheromone level, which is computed
as the multiplicative inverse of the total execution time of all the test cases:</p>
      <p>τ_0 := (Σ_{tc ∈ TC} TET_tc)^(-1) (9)</p>
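      <p>The local update (8) and the initial pheromone level (9) in code form (variable names are ours; `phi_decay` stands for the local decay parameter φ):</p>

```python
# Local pheromone update, cf. eq. (8), applied when an ant traverses a
# tuple, and the initial pheromone level tau_0 of eq. (9).
def initial_pheromone(durations):
    # tau_0 := (sum of all test execution times)^(-1)
    return 1.0 / sum(durations.values())

def local_update(tau, s, phi_decay, tau0):
    tau[s] = (1 - phi_decay) * tau[s] + phi_decay * tau0
    return tau

durations = {"t1": 2.0, "t2": 3.0}
tau0 = initial_pheromone(durations)                      # 1 / 5 = 0.2
tau = local_update({"s": 1.0}, "s", phi_decay=0.1, tau0=tau0)
```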
      <p>Fig. 3: Summary of concepts and their notations:
D_tc, the set of dependencies of test case tc;
Int_tc, the set of interferences of test case tc;
Inc_tc, the set of incompatibilities of test case tc;
P, the set of test schedule plans;
T, the set of tuples;
Tk, the set of tuples not yet traversed by ant k;
t, a tuple;
tc, the test case in a tuple;
ag, the agent in a tuple;
TET_ag, the current test execution time of agent ag;
Θ, a test schedule plan;
Θ+, the global best test schedule plan;
Θk, the ant-specific test schedule plan of ant k;
η, the heuristic value;
τ, the amount of pheromone;
τ_0, the initial pheromone level;
Δτ_s^+, the additional pheromone amount given to the tuples in Θ+;
ρ, the pheromone decay parameter in the global updating rule;
β, a parameter to determine the relative importance of the heuristic value;
φ, the pheromone decay parameter in the local updating rule;
nA, the number of ants that concurrently build their test schedule plans;
nI, the number of iterations of the for loop that creates a new generation of ants;
f(Θ), the objective function that minimizes the overall test execution time.</p>
      <p>Fig. 4: Pseudocode of the ACS-TSO algorithm:
1: Θ+ := ∅, P := ∅
2: ∀t ∈ T | τ_t := τ_0
3: for i ∈ [1, nI] do
4:   for k ∈ [1, nA] do
5:     Θk := ∅
6:     while not all tc ∈ TC are allocated do
7:       choose a tuple t ∈ T to traverse by using (3)
8:       apply local update rule in (8) on t
9:       if ant k has not already allocated tc in t then
10:        if D_tc of tc in t is not empty then
11:          if a test in D_tc is not already allocated to the same test environment as t then
12:            if allocating the test is not creating an interference or incompatibility in Θk then
13:              add tuple (test, ag) to Θk, where test is in D_tc and ag is the same agent as in t
14:              update TET_ag of ag in t
15:            end if
16:          end if
17:        end if
18:        if allocating t is not creating an interference or incompatibility in Θk then
19:          add t to Θk
20:          update TET_ag of agent ag in t
21:        end if
22:      end if
23:    end while
24:    if Θk is complete then
25:      add Θk to P
26:    end if
27:  end for
28:  Θ+ := arg max_{Θk ∈ P} {f(Θk)}
29:  apply global update rule in (6) on all s ∈ T
30: end for
31: return Θ+</p>
      <p>The pseudocode of the proposed ACS-TSO algorithm is given in Figure 4. The algorithm
makes a set of tuples T using (1) and sets the pheromone value of each tuple to the initial
pheromone level τ_0 by using (9) (line 2). Then, it iterates over nI iterations (line 3), where
each iteration i ∈ nI creates a new generation of nA ants that concurrently build their
test schedule plans (lines 4–24). Each ant k ∈ nA iterates its loop until all test cases in
TC are allocated (lines 6–23).</p>
      <p>Afterwards, based on the state transition rule in (3), each ant chooses a tuple t to traverse
next (line 7). The local pheromone trail update rule in (8) and (9) is applied on t (line 8).
If ant k has not already allocated tc in t and tc is an independent test case, t is added to
the ant-specific test schedule plan Θk if it causes no incompatibility and no interference in
the solution, and the test execution time of agent ag in t, TET_ag, is updated to reflect the
impact of the test case schedule (lines 18–21). However, if tc in t is a dependent test case
(line 10), it is essential to allocate all test cases on which tc depends to the same test
environment before allocating tc to ag in t. Therefore, the algorithm uses the set of test
cases on which tc in t depends, denoted as D_tc from (4) (lines 11–16).
However, to prevent multiple scheduling of the same test cases on the same test
environment, the algorithm removes any test cases from D_tc which are already allocated to ag in
t (line 12). Then, it adds all tuples to the ant-specific test schedule plan Θk where tc is in
D_tc and ag is the same as in t (line 13). At this point, it is possible that a test case in D_tc
may be allocated to more than one server. Such a situation may arise when more than one
dependent test case depends on the same test case(s) (lines 18–21).
Afterwards, when all ants complete their test schedule plans, all ant-specific test schedule
plans are added to the set of test schedule plans P (lines 24–26), each test schedule plan
Θk ∈ P is evaluated by applying the objective function in (2), the thus-far global best test
schedule plan Θ+ is selected (line 28), and the global pheromone trail update rule in (6)
and (7) is applied on all tuples (line 29). Finally, when all iterations i ∈ nI complete, the
algorithm returns the global best test schedule plan Θ+ (line 31).</p>
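      <p>The overall loop of Figure 4 can be condensed into a toy end-to-end sketch. This is a simplified assumption-laden illustration, not the paper's exact algorithm: it is exploitation-only, ignores the dependency/interference/incompatibility constraints, and uses an invented idle-agent heuristic in place of equation (4):</p>

```python
# Toy end-to-end sketch of the ACS-TSO loop of Figure 4 (exploitation-only,
# no constraint handling): each iteration, nA ants build complete plans
# tuple-by-tuple, and the global best (lowest TET) plan is kept.
def build_plan(tests, agents, duration, tau, beta):
    plan, tet = [], {ag: 0.0 for ag in agents}
    remaining = list(tests)
    while remaining:
        candidates = [(tc, ag) for tc in remaining for ag in agents]

        # pick the tuple maximizing tau * eta^beta; the heuristic eta here
        # simply prefers currently idle agents (a stand-in for eq. (4))
        def score(u):
            tc, ag = u
            eta = 1.0 / (1.0 + tet[ag])
            return tau.get(u, 1.0) * eta ** beta

        tc, ag = max(candidates, key=score)
        plan.append((tc, ag))
        tet[ag] += duration[tc]
        remaining.remove(tc)
    return plan, max(tet.values())

def acs_tso(tests, agents, duration, n_iter=10, n_ants=3, rho=0.1, beta=2.0):
    tau, best_plan, best_cost = {}, None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            plan, cost = build_plan(tests, agents, duration, tau, beta)
            if best_cost > cost:
                best_plan, best_cost = plan, cost
        # global update: reward the tuples of the best plan (cf. eqs. (6)-(7))
        for u in best_plan:
            tau[u] = (1 - rho) * tau.get(u, 1.0) + rho / best_cost
    return best_plan, best_cost

duration = {"t1": 4.0, "t2": 3.0, "t3": 2.0, "t4": 1.0}
plan, cost = acs_tso(["t1", "t2", "t3", "t4"], ["AG1", "AG2"], duration)
```

On this example the total work is 10 time units on 2 agents, so a TET of 5.0 is a perfectly balanced schedule.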
    </sec>
    <sec id="sec-4">
      <title>Conclusions and Future Work</title>
      <p>In this paper, we presented a novel approach for scheduling tests in parallel. The aim is
to optimize the execution time of the tests in a test suite while satisfying the incompatibility,
dependency and interference constraints between the tests. In order to decrease the
execution time, we look for the best possible partitioning of tests into a number of groups
and reordering of tests in each group to reduce false positive and false negative test
execution results. Since computing the exact solution is impractical for a large number of
tests, we proposed a metaheuristics-based approach to obtain an approximate solution in
polynomial time. The time complexity of our proposed algorithm is not linear; however,
the algorithm can be parallelised to reduce its running time. Currently, we require
the algorithm to be run before each integration to take possible modifications of tests
into consideration. We implemented the proposed algorithm in a Java-based framework.
There is a difference between scheduling tests and traditional resource-constrained
scheduling that required us to propose a more general scheduling strategy rather than using
one of the known algorithms: in resource-constrained scheduling there is no notion of state
in performing the tasks. If a task has been performed once, it does not need to be
performed again. In test scheduling, however, the state achieved on a test environment is
important for the other tests; it is not enough that a test has been executed before
on another environment.</p>
      <p>In the proposed approach, we defined the interference and incompatibility constraints as hard
constraints, which must be satisfied while the solution is constructed, and the dependency
constraint as a soft constraint, whose violations we aim to reduce in subsequent cycles. Moreover,
we defined the objective function to reduce the execution time of tests. For future work,
we may focus on incorporating SMT (Satisfiability Modulo Theories) solvers to tackle the
problem and achieve an optimal schedule.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work was supported by the Need for Speed (N4S) Program (http://www.n4s.fi).</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[dep] DepUnit. https://code.google.com/p/depunit/. [Online].</p>
      <p>[DG97] M. Dorigo and L.M. Gambardella. Ant colony system: a cooperative learning approach
to the traveling salesman problem. IEEE Transactions on Evolutionary Computation,
1(1):53–66, 1997.</p>
      <p>[HKK05] Florian Haftmann, Donald Kossmann and Alexander Kreutz. Efficient regression tests for
database applications. In Conference on Innovative Data Systems Research (CIDR),
pages 95–106, 2005.</p>
      <p>[HKL05] Florian Haftmann, Donald Kossmann and Eric Lo. Parallel Execution of Test Runs for
Database Application Systems. In Klemens Böhm, Christian S. Jensen, Laura M. Haas,
Martin L. Kersten, Per-Åke Larson and Beng Chin Ooi, eds., VLDB, pages 589–600.
ACM, 2005.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [AMPW15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Arlt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Morciniec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Podelski</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Wagner</surname>
          </string-name>
          .
          <article-title>If A Fails, Can B Still Succeed? Inferring Dependencies between Test Results in Automotive System Testing</article-title>
          .
          <source>In Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, pages</source>
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          ,
          <year>April 2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [BMDK15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bell</surname>
          </string-name>
          , E. Melski, M. Dattatreya and
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          .
          <article-title>Vroom: Faster Build Processes for Java</article-title>
          .
          <source>Software, IEEE</source>
          ,
          <volume>32</volume>
          (
          <issue>2</issue>
          ):
          <fpage>97</fpage>
          -
          <lpage>104</lpage>
          ,
          <year>Mar 2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [DDCG99]
          <article-title>Marco Dorigo, Gianni Di Caro and Luca M. Gambardella. Ant algorithms for discrete optimization</article-title>
          .
          <source>Artif. Life</source>
          ,
          <volume>5</volume>
          (
          <issue>2</issue>
          ):
          <fpage>137</fpage>
          -
          <lpage>172</lpage>
          ,
          <year>April 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [GSHM15]
          <article-title>Alex Gyori, August Shi, Farah Hariri and Darko Marinov. Reliable Testing: Detecting State-polluting Tests to Prevent Test Dependency</article-title>
          .
          <source>In Proceedings of the 2015 International Symposium on Software Testing and Analysis</source>
          ,
          <source>ISSTA 2015, Seiten 223-233</source>
          , New York, NY, USA,
          <year>2015</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [HLS+13]
          <string-name>
            <given-names>Mark</given-names>
            <surname>Harman</surname>
          </string-name>
          , Kiran Lakhotia, Jeremy Singer, David R. White and Shin Yoo.
          <article-title>Cloud engineering is Search Based Software Engineering too</article-title>
          .
          <source>Journal of Systems and Software</source>
          ,
          <volume>86</volume>
          (
          <issue>9</issue>
          ):
          <fpage>2225</fpage>
          -
          <lpage>2241</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [HM13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Haidry</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>Using Dependency Structures for Prioritization of Functional Test Suites</article-title>
          .
          <source>Software Engineering</source>
          , IEEE Transactions on,
          <volume>39</volume>
          (
          <issue>2</issue>
          ):
          <fpage>258</fpage>
          -
          <lpage>275</lpage>
          ,
          <year>Feb 2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [MGM15]
          <string-name>
            <given-names>Alberto</given-names>
            <surname>Marroquin</surname>
          </string-name>
          ,
          <article-title>Douglas Gonzalez and Stephane Maag. A Novel Distributed Testing Approach Based on Test Cases Dependencies for Communication Protocols</article-title>
          .
          <source>In Proceedings of the 2015 Conference on Research in Adaptive and Convergent Systems, RACS, pages</source>
          <fpage>497</fpage>
          -
          <lpage>504</lpage>
          , New York, NY, USA,
          <year>2015</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [YH12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yoo</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          .
          <article-title>Regression Testing Minimization, Selection and Prioritization: A Survey</article-title>
          .
          <source>Softw. Test. Verif. Reliab.</source>
          ,
          <volume>22</volume>
          (
          <issue>2</issue>
          ):
          <fpage>67</fpage>
          -
          <lpage>120</lpage>
          ,
          <year>March 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [ZJW+14]
          <article-title>Empirically Revisiting the Test Independence Assumption</article-title>
          .
          <source>In Proceedings of the 2014 International Symposium on Software Testing and Analysis</source>
          ,
          <source>ISSTA 2014, Seiten 385-396</source>
          , New York, NY, USA,
          <year>2014</year>
          . ACM.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>