<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Systematic Literature Review on Test Case Minimization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ritwik Dalmia</string-name>
          <email>ritwik.d2020bds@srisriuniversity.edu.in</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Soumili Chandra</string-name>
          <email>soumili.c2020bcs@srisriuniversity.edu.in</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sudhir Kumar Mohapatra</string-name>
          <email>sudhir.mohapatra@srisriuniversity.edu.in</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Srinivas Prasad</string-name>
          <email>sprasad@gitam.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mesfin</string-name>
          <email>mesfinabha@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abebe Haile</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>GITAM University</institution>
          ,
          <addr-line>Vishakhapatnam</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <fpage>237</fpage>
      <lpage>247</lpage>
      <abstract>
<p>Regression testing is an important testing technique that is performed after the software has been modified. Test case minimization (TCM), also called test case reduction (TCR), is used in regression testing to reduce the number of test cases. The main purpose of TCM is to find a minimal set of test cases without compromising fault-detection potential. Although the underlying problem is NP-hard, minimization saves both cost and time. The article presents qualitative findings for the articles that were surveyed and consolidates their results with significant evidence from the fundamental sources of information.</p>
        <p>Keywords: SLR, test case minimization, test case reduction, regression testing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        A systematic literature review (SLR) evaluates and analyses all potentially relevant work connected to our research questions. This review follows the SLR guidelines of Kitchenham and Charters [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The SLR has been used to achieve the following goals: to identify the chief practices involving particular technologies, procedures, tools, or processes by accumulating information from related studies, and to summarize, identify, evaluate, and interpret all available research gaps concerning test suite minimization. In this study, the systematic literature review is used as the review methodology.
      </p>
      <p>
        There are several reasons for selecting a systematic literature review as the review methodology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>An SLR defines search strategies, which ensures the completeness of the primary studies to be assessed in the review. It has well-defined inclusion and exclusion criteria for primary studies, which allows filtering out the most relevant documents for the review. Since every step of the review is documented, it is easy for readers to follow. Its well-defined methodology makes the result of the review less likely to be biased and ensures that the reporting supports the review question.</p>
    </sec>
    <sec id="sec-3">
      <title>2. The Need for Conducting a Systematic Literature Review</title>
      <p>The purpose of this systematic literature review is to consolidate the existing information about test suite minimization.</p>
      <p>There are different reasons to conduct an SLR; among them are the following:
to extract information about test suite minimization;
to extract information about the coverage information used for minimization of the test suite;
to learn how the previous studies minimized the test suite;
to apply different inclusion and exclusion criteria;
to apply different search strategies to obtain sufficient information.</p>
      <p>2020 Copyright for this paper by its authors.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Review Question</title>
      <p>RQ1: What are the existing techniques for minimization of the test suite?
RQ2: How do the existing techniques minimize the test suites?
RQ3: What coverage information is frequently used for test suite minimization?
RQ4: What metrics are used by researchers to measure the experiments in test minimization?</p>
    </sec>
    <sec id="sec-5">
      <title>4. Conducting Review</title>
      <p>This section contains the following main parts:</p>
      <sec id="sec-5-1">
        <title>4.1 Generating Search Strategies</title>
        <p>The systematic literature review uses properly defined search keywords to explore the dominant subjects in the selected search sources.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2 Search Keyword</title>
        <p>The coverage of this systematic literature review includes studies related to test suite minimization in software testing. To find the primary studies related to test suite minimization, the following keywords were used:
{((‘regression’ AND ‘test’) AND (‘suite’ OR ‘case’) AND (minimization OR reduction) OR
(algorithm OR technique OR approach OR heuristic)
OR (empirical study OR experiment OR experimental study)) &lt;in title, abstract, and keywords&gt;}</p>
      </sec>
      <sec id="sec-5-3">
        <title>4.3 Search Results</title>
        <p>By using the above keywords, 120 papers related to test suite minimization, published between 2017 and 2021, were downloaded. The papers were then filtered and selected based on the defined inclusion and exclusion criteria. The primary studies for this systematic review were searched in the following databases: IEEE Xplore, ACM Digital Library, Science Direct, Springer Library, Elsevier Online Library, Google Scholar, and the references found in the primary studies.</p>
      </sec>
      <sec id="sec-5-4">
        <title>4.4 Documenting the search process</title>
        <sec id="sec-5-4-1">
          <title>Science Direct and Springer Library</title>
          <p>Search strategy for the databases: using the search keywords. Date of search: June 13, 2021. Years covered: 2017–2021.</p>
        </sec>
        <sec id="sec-5-4-2">
          <title>Elsevier Online Library</title>
          <p>Search strategy for the database: using the search keywords. Date of search: June 14, 2021. Years covered: 2017–2021.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Study Selection Criteria</title>
      <p>The inclusion and exclusion criteria for the primary studies of related works are described in this section.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Primary Study Selection</title>
      <p>…set of test cases capable of both identifying and locating problems.</p>
      <p>PS3 [<xref ref-type="bibr" rid="ref4">4</xref>] (2019): The work focuses on fault coverage-based test suite minimization utilizing a greedy "learning from mistakes" strategy, which shortens execution time and decreases test suite size.</p>
      <p>PS4 [<xref ref-type="bibr" rid="ref5">5</xref>] (2020): The study focuses on the "REDUNET" network-based optimization method and integer linear programming, designing an integrated framework for the automatic development of an effective and efficient test suite that combines test suite generation, code coverage analysis, and test suite reduction, in order to decrease a test suite while preserving the same code coverage.</p>
      <p>PS5 [<xref ref-type="bibr" rid="ref6">6</xref>] (2018): The study focuses on creating and minimizing test cases based on maximal path coverage using an artificial bee colony algorithm or the cuckoo search method.</p>
      <p>PS6 [<xref ref-type="bibr" rid="ref7">7</xref>] (2017): The study focuses on test case minimization using a flower pollination technique, in an effort to shorten the time needed for a single run and the size of the final test suite.</p>
      <p>PS7 [<xref ref-type="bibr" rid="ref8">8</xref>] (2019): The study focuses on combining the test requirements set, extracting test cases from each cluster, and locating as many related test cases as feasible using an enhanced K-means algorithm, as well as using the degree-of-membership function to build a fuzzy clustering approach.</p>
      <p>PS8 [<xref ref-type="bibr" rid="ref9">9</xref>] (2018): The study focuses on eliminating redundancies at the test-statement level using fine-grained test case minimization while maintaining test assertions and suite coverage.</p>
      <p>PS9 [<xref ref-type="bibr" rid="ref10">10</xref>] (2020): The study focuses on fault coverage-based test suite optimization utilizing the butterfly optimization algorithm, with fault detection as a performance measure.</p>
      <p>PS10 [<xref ref-type="bibr" rid="ref11">11</xref>] (2017): Based on reductions in test suite size and in execution and validation costs, the study suggests the diversity dragonfly approach for cost-aware test suite reduction.</p>
    </sec>
    <sec id="sec-8">
      <title>7. Reporting the Systematic Literature Review Result</title>
      <p>This section answers the review questions listed in Section 3.</p>
      <p>
        RQ1: What are the existing techniques for minimization of the test suite?
Neha Gupta et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] presented a test suite reduction method based on the NSGA-II algorithm. The proposed method sought to develop a small test suite that could discover and localize software flaws. They stated that their approach provided a basic set of test suites with a 78% reduction in size and 95.16% fault detection using test suites from the Defects4j repository.
Arun Prakash et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] developed a fault coverage-based test suite optimization (FCBTSO) method that draws on the Harrold–Gupta–Soffa (HGS) and learning-from-mistakes approaches. Their goal was to maximize fault coverage while keeping the test suite to a minimum. They stated that their technique was more effective than the Greedy method, HGS, Additional Greedy, and Enhanced HGS while using fewer test cases.
      </p>
      <p>
        Misael Mongiovì et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed REDUNET, a test suite reduction method that combines integer linear programming with network-based optimization. With the objective of reducing the test suite while ensuring equivalent code coverage, they developed a comprehensive framework for the automatic creation of an effective and efficient test suite using test suite generation, code coverage analysis, and test case reduction. They claimed that their method resulted in a 50% decrease in the test suite's size.
      </p>
      <p>
        Manju Khari et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] described the creation of an automated testing solution that involves the generation and reduction of test suites. To reduce the test suite, they applied the artificial bee colony algorithm or the cuckoo search approach. Their main goal was to reduce the test suite while increasing path coverage.
      </p>
      <p>
        Mohapatra et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] devised an ant colony optimization approach for test suite reduction. Their objective was to construct a realistic collection of test cases that covered all of the requirements while keeping the original test suite's fault detection capability and executing in the shortest possible time.
      </p>
      <p>
        AbdulRahman et al [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] presented a Test Generator Flower Pollination Strategy, based on the Flower Pollination Algorithm, for the test suite reduction problem. By removing redundant test cases, they aimed to reduce the cost of software testing. They claimed that their approach outperformed existing algorithms and that it is also simple to implement, has fewer parameters, and is more flexible.
      </p>
      <p>
        Bao-Sheng et al [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] developed an improved K-Means approach for test suite reduction. They stated that their technique minimizes the number of redundant test cases while simultaneously providing the largest coverage, and is more effective and efficient.
      </p>
      <p>
        Arash et al [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed a fine-grained test suite minimization method. Their method infers a model of the test cases to analyze data, allowing for automated test re-organization inside a test suite. Their technique eliminates redundancy at the test statement level while maintaining test assertions and test suite coverage. The researchers claim that the method was able to eliminate 43% of duplicated test cases and 20% of execution time on average.
      </p>
      <p>
        Abhishek et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed an efficient self-adaptive butterfly optimization algorithm-based
approach for test suite minimization. When compared to the bat search method, they found that their
approach performed better in terms of fault detection.
      </p>
      <p>
        Shounak et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] developed a diversity dragonfly-based method for cost-conscious test suite reduction. The test suite's quality and cost were their main concerns. They claimed that the DDF's reduction capability is superior to previous approaches and that its cost is likewise low, ensuring a high-quality test suite reduction.
      </p>
      <p>
        RQ2: How do the existing techniques minimize the test suites?
Neha Gupta et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] used the NSGA-II algorithm. The first step was to set the NSGA-II parameters. They fed the algorithm a test suite in the form of a bit string as input. The chromosome size is the same as the test suite size. The suite's number of solutions was proportional to the population size. They employed the ramped half-and-half initialization strategy to populate the population. They limited the number of generations based on the amount of input. A half-uniform crossover was utilized for the crossover function, and a bit-flip with probability Pm = 1/n was employed for the mutation function. Finally, they established two stopping criteria: the first was the number of generations, and the second was the threshold value for each input.
      </p>
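The bit-string encoding and the two variation operators described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the population size, suite size, and seed are arbitrary assumptions.

```python
import random

def init_population(pop_size, n_tests, seed=0):
    """Each individual is a bit string: bit i == 1 keeps test case i."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_tests)] for _ in range(pop_size)]

def half_uniform_crossover(p1, p2, rng):
    """Swap half of the positions where the two parents differ."""
    c1, c2 = p1[:], p2[:]
    diff = [i for i in range(len(p1)) if p1[i] != p2[i]]
    for i in rng.sample(diff, len(diff) // 2):
        c1[i], c2[i] = p2[i], p1[i]
    return c1, c2

def bit_flip_mutation(ind, rng):
    """Flip each bit with probability Pm = 1/n, as stated above."""
    pm = 1.0 / len(ind)
    return [1 - b if rng.random() < pm else b for b in ind]

rng = random.Random(42)
pop = init_population(pop_size=10, n_tests=8, seed=42)
c1, c2 = half_uniform_crossover(pop[0], pop[1], rng)
child = bit_flip_mutation(c1, rng)
```

A fitness function and non-dominated sorting would sit on top of these operators in a full NSGA-II loop.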
      <p>
        Arun Prakash et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] created a fault coverage-based test suite optimization technique. Their algorithm's initial step is to determine the fault weight (FW) for each fault fi, which counts the total number of tests that cover the ith fault. In step 2, their approach chooses, from the related cardinality test suite, the test case that covers the most faults (or undetected faults); if two test cases address the same defects, the selection method can choose one at random. In step 3, the total number of faults fi covered by each test case and the test case weight (TCW) were determined. In step 4, a loop was executed until all coverable faults (FS) were covered or all test cases in the suite had been processed.
      </p>
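The greedy selection loop of steps 2–4 can be sketched as follows. The fault-weight bookkeeping is simplified to "pick the test covering the most undetected faults", and the example suite is hypothetical.

```python
def greedy_fault_coverage(fault_cover):
    """fault_cover maps each test case id to the set of faults it detects.
    Repeatedly select the test covering the most still-undetected faults."""
    uncovered = set().union(*fault_cover.values())
    remaining = dict(fault_cover)
    selected = []
    while uncovered and remaining:
        # choose the test with maximum coverage of undetected faults
        best = max(remaining, key=lambda t: len(fault_cover[t] & uncovered))
        if not fault_cover[best] & uncovered:
            break                      # no remaining test adds coverage
        selected.append(best)
        uncovered -= fault_cover[best]
        del remaining[best]
    return selected

suite = {"t1": {"f1", "f2"}, "t2": {"f2", "f3"}, "t3": {"f3"}}
reduced = greedy_fault_coverage(suite)
```

Here `reduced` keeps `t1` and `t2`, which together cover all three faults, and drops the redundant `t3`.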
      <p>
        Misael Mongiovì et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] cast test suite minimization as a minimal set coverage problem, using integer linear programming and network-based optimization to exploit the advantages of the control flow graph. The following steps obtain the reduced suite. Step 1: for the program under test, the Randoop tool was used to construct a test suite. Step 2: the produced test suite was put into the Maven project as Java unit tests. Step 3: the test cases are run with a service tracing code coverage. Step 4: the method coverage was calculated while executing the test cases. Finally, in Step 5, the test execution traces and control flow graph are provided for test suite reduction, resulting in a smaller test suite with the same code coverage and a faster execution time.
      </p>
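The minimal set cover formulation in step 5 can be illustrated with the classic greedy approximation (the paper itself solves it with integer linear programming); the trace data below is hypothetical.

```python
def reduce_suite(coverage):
    """coverage: test id -> set of covered methods (from execution traces).
    Greedy set-cover approximation: keep tests until the reduced suite
    covers everything the full suite covers."""
    target = set().union(*coverage.values())
    covered, reduced = set(), []
    while covered != target:
        # pick the test that adds the most not-yet-covered methods
        test = max(coverage, key=lambda t: len(coverage[t] - covered))
        reduced.append(test)
        covered |= coverage[test]
    return reduced

traces = {"t1": {"m1"}, "t2": {"m1", "m2"}, "t3": {"m2", "m3"}}
small = reduce_suite(traces)
```

`small` retains `t2` and `t3`, preserving the full method coverage with one fewer test.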
      <p>
        Manju Khari et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] provided a strategy for generating test data using black-box testing techniques, then optimizing the data using a cuckoo search or an artificial bee colony algorithm. The steps in the artificial bee colony algorithm are as follows. Step 1: set up the ABC's control parameters. Step 2: create the initial population using black-box testing methodologies. Step 3: analyze the population and add to the result set the test cases that follow new independent paths. Step 4: set the cycle counter to 1. Step 5, based on the employed-bee and onlooker-bee phases, performs the following tasks: locating a neighbor test case a'i different from ai, assessing a'i as a food source, and replacing ai with a'i if it is superior to ai; the trial counter is increased if the solution cannot be improved.
      </p>
      <p>The scout bee phase then finds the test cases whose trial counters exceed the limit value and replaces them with new, random test cases. The cycle counter is incremented, and Step 6 repeats all of the previous stages while the stopping criterion has not been reached.</p>
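A toy sketch of the ABC loop above: fitness is reduced to "exercises a new independent path", `path_of` is a hypothetical model of which path each test follows, and the scout phase merely resets stagnant sources instead of generating random replacements.

```python
def abc_reduce(tests, path_of, limit=2, max_cycles=10):
    """Toy artificial-bee-colony loop: keep one test per independent path."""
    result_paths = {}                 # independent path -> test kept for it
    trial = {t: 0 for t in tests}
    for _ in range(max_cycles):
        for t in tests:               # employed + onlooker phases (merged)
            if path_of[t] not in result_paths:
                result_paths[path_of[t]] = t   # test follows a new path
                trial[t] = 0
            else:
                trial[t] += 1         # could not improve this source
        for t in tests:               # scout phase: reset stagnant sources
            if trial[t] > limit:
                trial[t] = 0          # stands in for random replacement
    return sorted(result_paths.values())

tests = ["t1", "t2", "t3"]
path_of = {"t1": "p1", "t2": "p1", "t3": "p2"}
result = abc_reduce(tests, path_of)
```

With the hypothetical mapping above, `t2` is dropped because `t1` already covers path `p1`.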
      <p>The steps in the cuckoo search algorithm are as follows. Step 1: set up the CSA's control parameters and stopping criteria. Step 2: initialize the population using any black-box approach. Step 3: analyze the population and add to the result set the test cases that take fresh, independent paths. Step 4: set the cycle counter to 1. Step 5 (cuckoo phase): obtain a random cuckoo with an egg c'i, and check the independent path the test case follows by going to a random nest.</p>
      <p>If a test case for that independent path already exists in the result set, the egg replace/drop phase replaces the old egg (test case) ci in the nest with the new egg c'i when c'i is superior; otherwise, a new nest is found. The cycle counter is then incremented. Finally, Step 6 repeats these stages until the stopping criterion is met.</p>
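The egg replace/drop phase can be sketched as follows; `path_of` and the cost map standing in for "superior" are illustrative assumptions.

```python
import random

def cuckoo_reduce(tests, path_of, cost, cycles=60, seed=3):
    """Toy egg replace/drop loop: a random 'cuckoo' test case replaces the
    nest's current test for its independent path when its cost is lower."""
    rng = random.Random(seed)
    nests = {}                        # independent path -> best test so far
    for _ in range(cycles):
        c = rng.choice(tests)         # random cuckoo with an egg
        p = path_of[c]
        if p not in nests or cost[c] < cost[nests[p]]:
            nests[p] = c              # new egg replaces the inferior one
    return sorted(nests.values())

tests = ["t1", "t2", "t3"]
path_of = {"t1": "p1", "t2": "p1", "t3": "p2"}
cost = {"t1": 5, "t2": 2, "t3": 4}
reduced = cuckoo_reduce(tests, path_of, cost)
```

At most one test per independent path survives, preferring the cheaper one among the candidates drawn.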
      <p>
        Mohapatra et al.[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] presented an ant colony-based approach for test suite reduction. The test cases
are represented as nodes in a full graph in their method. The execution duration of each test case and
linked test criteria are stored in a matrix. In a full network, a node is a location where ants begin their
search. The ant uses the matrix to help choose neighbor nodes based on reducing execution time and
maximizing needs. The test suite is used as the foundation for creating a comprehensive graph,
according to their core concept. The test cases represent each vertex in the graph. There are at least
as many ants as there are test cases. The ant in the whole vertex will be the starting point for the
solution. Each ant adds additional edges to its existing path in order to find the optimum path. The
find next() method is used to create a new route. This function looks for nearby edges with the most
pheromone deposits. If there is a tie between edges, it chooses one at random. When there are no
more edges remaining for the path, the process of adding edges comes to an end.
AbdulRahman et al [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] described the following procedure. The initial stage specifies the input parameters, such as the parameter number p and the set of values V = [v0..vj] for each feature. The final test case list is a collection of candidate tests, and all potential interaction elements are constructed in an interaction elements list (IEL) based on p and V, together with a random pollen population. While the IEL is not empty and the maximum generation has not been reached: if rand is smaller than pa, create a step vector L that complies with the Levy distribution and compute global pollination by xti+1 = xti + γL(λ)(g* − xti); otherwise, draw a random value from a uniform distribution in the range [0, 1] and carry out local pollination using the equation xti+1 = xti + ε(xtj − xtk). The population is updated with the new solutions if they are superior. Then g*, the best solution found so far, is added to the final test case list as the best test case, and the covered interaction items are removed from the IEL.
      </p>
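The two pollination updates above (global with a Levy step toward g*, local with two random population members) can be sketched as follows; the Mantegna-style approximation of the Levy step and all parameter values are assumptions, not the paper's settings.

```python
import math
import random

def levy_step(lam=1.5, rng=random):
    """Mantegna-style approximation of a Levy-distributed step L(lambda)."""
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / lam)

def pollinate(x, best, pop, p=0.8, gamma=0.1, rng=None):
    """One position update: global pollination toward the best solution g*
    with a Levy step, otherwise local pollination with two random members."""
    rng = rng or random.Random(0)
    if rng.random() < p:                        # global pollination
        step = levy_step(rng=rng)
        return [xi + gamma * step * (g - xi) for xi, g in zip(x, best)]
    j, k = rng.sample(range(len(pop)), 2)       # local pollination
    eps = rng.random()
    return [xi + eps * (a - b) for xi, a, b in zip(x, pop[j], pop[k])]

pop = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.2]]
new_pos = pollinate(pop[0], pop[1], pop)
```

In the full algorithm, a candidate position is mapped back to a test case and kept only if it improves coverage of the remaining interaction elements.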
      <p>
        Bao-Sheng et al [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] developed an improved K-Means approach for test suite reduction. They refined the K-Means methodology and created a fuzzy clustering approach by integrating the test criteria, selecting test cases from each cluster, and locating as many related test cases as possible using the degree-of-membership function. Assume that in this method the data set consists of K categories, that the software test suite is T = {t1, t2, …, tn}, and that mi is the cluster center of cluster i, i ∈ [1, K]. The fuzzy K-Means membership functions are derived to achieve the optimal solution of the fuzzy K-Means algorithm.
      </p>
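The standard fuzzy K-Means membership function and a per-cluster representative selection can be sketched as follows (fuzzifier m = 2); the feature vectors standing in for test cases are hypothetical.

```python
def memberships(t, centers, m=2.0):
    """Degree of membership of test case t (a feature vector) in each
    cluster center, per the standard fuzzy K-Means membership function."""
    d = [sum((a - b) ** 2 for a, b in zip(t, c)) ** 0.5 for c in centers]
    if any(di == 0 for di in d):                # t coincides with a center
        return [1.0 if di == 0 else 0.0 for di in d]
    return [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(d)))
            for i in range(len(d))]

def representative(tests, centers):
    """Keep, per cluster, the test case with the highest membership --
    a simplified stand-in for the paper's per-cluster selection."""
    best = {}
    for t in tests:
        u = memberships(t, centers)
        k = max(range(len(u)), key=u.__getitem__)
        if k not in best or u[k] > memberships(best[k], centers)[k]:
            best[k] = t
    return best

centers = [(0.0, 0.0), (1.0, 1.0)]
tests_fv = [(0.1, 0.0), (0.9, 1.0), (0.2, 0.1)]
reps = representative(tests_fv, centers)
```

The memberships for each test case sum to 1, and one representative test case is retained per cluster.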
      <p>
        Arash et al [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] presented a fine-grained test suite minimization approach. Instead of depending on code coverage, their methodology captures the true behavior of test cases by documenting calls to production methods and their inputs, keeping track of coverage and defect detection, and keeping track of all test assertions in a test suite. Instrumenting code and building models are the two fundamental phases. They used code instrumentation to store and study the nature and value of every referential variable, so that the desired test state at each test statement can be established; variables are also used and defined at this stage. The second, model-generation stage includes determining equivalent test assertions and compatible states. Two minimization algorithms are then put into practice: test composition and test suite reorganization. By rearranging the test suite, they determine, starting from the initial state, the quickest route to the closest test statement that is not yet covered. They repeat this technique from that node until there is no longer any way to extend the route to cover additional equivalent test statements or assertions, which occurs when all of the model's relevant test statements and assertions are covered.
      </p>
      <p>Otherwise, if any equivalent test statements or assertions remain uncovered, the method restarts from the beginning. They employed a variation of the best-fit search technique that keeps running in an effort to find the quickest path.</p>
      <p>
        The Composing Minimized Test Cases method employs a bidirectional map from variable names and types to variable values in order to store the state. As it moves through the test statements in the rearranged test suite path, it checks whether the state contains the value for each variable in each test statement. If such a value exists but is known by a different name in the state, the variable inside the test expression is renamed. If the target type and the variable type differ, the value is cast to the target type. Any duplicated names are renamed. Finally, the method updates the bi-directional state map with the test statement's modified values, signaling that the call with the given inputs has been executed.
Abhishek et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] presented a butterfly optimization-based strategy for test suite reduction. First, the objective function f(s) must be defined, where s = (s1, s2, …, sdim) and dim is the number of dimensions. Second, the initial population of n butterflies si (i = 1, 2, …, n) is generated. The input parameters a, p, and c are then added.
      </p>
      <p>Third, while the stopping conditions are not satisfied, the following activities are performed for each butterfly bfy: compute the fragrance using bfri = c(I)a, where bfri is the magnitude of the fragrance, c is the sensory modality, I is the stimulus intensity, and a is a modality-dependent power exponent. Then pick the best bfy. For each bfy in the population, generate a random probability s ranging from 0 to 1. The fourth step checks whether s &lt; p; if so, move toward the best bfy in a global search using sit+1 = sit + (r2 × bg* − sit) × bfri, where sit is the solution vector si of the ith butterfly in iteration t, bg* is the best-known solution among all solutions in the current stage, bfri is the ith butterfly's fragrance, and r is a random value between 0 and 1. If s &gt; p, move at random and execute a local search sit+1 = sit + (r2 × sjt − skt) × bfri, where sjt and skt are the jth and kth butterflies chosen at random from the solution space. After that, the newly created solution is evaluated, and the better solution is kept. Finally, the value of the parameter a is altered, and the best solution found is returned.</p>
      <p>
        Shounak et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] proposed a diversity dragonfly algorithm-based approach for minimization of the test suite. The first step initializes the input parameters and the population. The second step updates the principal parameters of the dragonfly algorithm. The third step computes the objective function. Then the position is updated. The final step injects the diversity factor.
RQ3: What coverage information is frequently used for test suite minimization?
Many forms of coverage information are utilized in the literature for test suite reduction. Neha Gupta et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] employed diversity conscious mutation adequacy criteria fault finding,
statement coverage, and branch coverage. Arun Prakash et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] used fault-coverage. Misael
Mongiovì et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] used statement coverage (code and method coverage). Manju Khari et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
used path coverage. Mohapatra et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] used statement coverage. AbdulRahman et al [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] used
test requirement. Bao-Sheng et al [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] used test requirement set. Arash et al [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] used statement and
path coverage. Abhishek et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] used fault coverage. Shounak et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] used branch coverage.
RQ4: What metrics are used by researchers to measure the experiments in test minimization?
In the literature different metrics are used to measure the experiments in test minimization.
Neha Gupta et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] employed the percentage of test suite size reduction, the percentage of faults discovered, and the fault localization score.
      </p>
      <p>
        Arun Prakash et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] made use of the optimized test suite's size, execution speed, and fault coverage.
Misael Mongiovì et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] utilized the percentage of improvement in suite size and test execution time.
Manju Khari et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] used path coverage and fitness values.
      </p>
      <p>
        Mohapatra et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] used the sizes of representative sets, and scalability.
      </p>
      <p>
        AbdulRahman et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] used the size of the final test suite and time for one execution. Bao-Sheng
et al [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] employed the rate of simplification, the efficacy of fault detection, and the rate of fault-detection loss.
      </p>
      <p>
        Arash et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] used execution time, code coverage, and fault detection capability. Abhishek et
al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] used fault detection capability.
      </p>
      <p>
        Shounak et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] utilized the size and expense of the test suites needed for execution and
validation.
      </p>
    </sec>
    <sec id="sec-9">
      <title>8. Related Work</title>
      <p>This section covers related work on test suite minimization techniques. The related results are described as follows.</p>
      <p>
        Qing et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] devised a novel test-suite reduction methodology based on the subtraction operation (TSRSO) for wearable embedded software. The Quine–McCluskey (Q-M) algorithm, created by Quine and McCluskey, is employed to simplify a Boolean function algebraically. In their suggested model, they performed a matrix column-transformation procedure to remove redundant criteria based on interrelationships among testing requirements, and a row transformation of the matrix to reduce the test suite.
      </p>
      <p>They compared their model's reduction capabilities to the outcomes of GRE, GE, and H with the aim
of providing broad guidance to testers in choosing the best strategy for test suite building. According
to their study, the minimal reduced test sets produced by TSRSO are smaller than those created by
GE, GRE, or H, and the performance of H, GE, and GRE is quite similar.</p>
      <p>
        Reena et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] used class partitioning and a genetic algorithm to produce a model for test case reduction in a component-based system. In their model, fitness values were maximized for the construction of the test suite. To boost the performance of the genetic algorithm, they enlarged the search space and included fitness scaling. By contrasting the standard genetic algorithm with the improved genetic algorithm, they determined which produced the best fitness values at constant mutation and crossover rates. They observed that the modified genetic algorithm produced superior results to the standard GA.
Shaima et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] created a method called Deterministic Test Suite Reduction (DTSR) using hypergraph
minimal transversal mining. They chose candidate test cases on the basis of hypergraph structural
information, and the requirement data refined the selected test cases by retaining a deterministic set.
To this end, DTSR treated the test suite as a hypergraph whose hyperedges were analogous to
requirements and whose nodes were equivalent to tests. By selecting the fewest possible test cases
that satisfy the criteria, a subset of the hypergraph's minimal transversals was retrieved. They
compared their approach with search-based algorithms and reported that their algorithm was
superior, with a reduction rate ranging from 50% to 65% of the initial set size.
      </p>
      <p>
        Shounak et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] created a cost-aware test suite reduction method using a Greedy Search Algorithm with TAP
measurement (GTAP). The TAP measure was specially developed to determine the importance of test
cases. Their approach added two further elements: the number of test cases that might fulfil improved
test requirements, and the determination of the test suite's most crucial test cases. It also generated a
realistic collection of test scenarios at the lowest possible cost. They examined their algorithms on
eleven subject programs from the SIR repository, using reduction capability and relative capability to
evaluate the effectiveness of their algorithm against existing ones. They stated that DIV-GA achieved
90.27%, below their algorithm's average of 93.07% across all the programs; as a result, they achieved
better outcomes.
      </p>
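      <p>The cost-aware greedy idea can be sketched as follows. The actual TAP measure is defined in [15];
the coverage-per-cost ratio used here is an assumed stand-in for it, picking at each step the test with
the most newly covered requirements per unit of execution cost.</p>

```python
# Illustrative cost-aware greedy reduction; the gain ratio (newly covered
# requirements per unit cost) is a hypothetical stand-in, not the TAP measure.

def cost_aware_reduce(tests, costs, requirements):
    """tests: dict test -> set of requirements it covers.
    costs: dict test -> execution cost.
    requirements: set of requirements the reduced suite must satisfy."""
    uncovered = set(requirements)
    selected = []
    while uncovered:
        def gain(t):
            return len(uncovered.intersection(tests[t])) / costs[t]
        best = max(tests, key=gain)
        if gain(best) == 0:
            break  # no remaining test covers anything new
        selected.append(best)
        uncovered -= tests[best]
    return selected

suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r2", "r3"}}
cost = {"t1": 1.0, "t2": 1.0, "t3": 4.0}
print(cost_aware_reduce(suite, cost, {"r1", "r2", "r3"}))  # -> ['t1', 't2']
```

      <p>Although t3 covers every requirement by itself, its high cost makes the two cheap tests the better
choice under this ratio, which is the kind of trade-off a cost-aware reduction is meant to capture.</p>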
      <p>
        Shilpi et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] developed a similarity-based greedy approach for producing an effective number
of test cases. Their approach combined two regression testing activities: minimization and
prioritization. Their main strategy was to first analyze the test cases to determine the difference and
similarity values of test case pairs, and then to optimize the test cases using a greedy and clustering
technique. They employed two algorithms, Test Case Coverage Analyzer (TCCA) and
Similarity-Based Greedy Algorithm (SBGA), for optimizing the test suite. They compared their
experimental results with Harrold, Gupta, and Soffa's (HGS) prominent heuristic, considering the
minimal test suite size and fault coverage testing requirements. They claimed their method was fairly
successful in terms of defect detection, producing a reduced test suite without a significant impact on
the percentage of suite size reduction.
      </p>
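      <p>A minimal sketch of similarity-driven selection in this spirit follows. Jaccard similarity over
coverage sets is an assumed stand-in for the paper's pairwise difference and similarity values, and the
stopping rule is simplified to halting once coverage is complete.</p>

```python
# Illustrative similarity-based greedy selection; the metric and stopping
# rule are simplified stand-ins, not the published SBGA algorithm.

def jaccard(a, b):
    union = len(a.union(b))
    return len(a.intersection(b)) / union if union else 0.0

def similarity_greedy(tests):
    """tests: dict test -> set of covered items (e.g. statements or faults)."""
    all_items = set().union(*tests.values())
    remaining = dict(tests)
    # Seed with the widest-covering test.
    first = max(remaining, key=lambda t: len(remaining[t]))
    selected = [first]
    covered = set(remaining.pop(first))
    while covered != all_items and remaining:
        # Among candidates that still add coverage, keep the one least
        # similar to anything already selected (maximizing diversity).
        candidates = [t for t in remaining if not remaining[t].issubset(covered)]
        pick = min(candidates,
                   key=lambda t: max(jaccard(remaining[t], tests[s])
                                     for s in selected))
        selected.append(pick)
        covered.update(remaining.pop(pick))
    return selected

suite = {"t1": {"f1", "f2"}, "t2": {"f2", "f3"},
         "t3": {"f1", "f2", "f3"}, "t4": {"f4"}}
print(similarity_greedy(suite))  # -> ['t3', 't4']
```

      <p>Here t1 and t2 are discarded because their coverage is subsumed by t3, while the dissimilar t4 is
retained, mirroring the intuition that redundant, highly similar test cases contribute little additional
fault detection.</p>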
    </sec>
    <sec id="sec-10">
      <title>9. Conclusion</title>
      <p>Our literature review showed that different researchers use different techniques to find a
minimized test suite, and different minimization techniques yield different outputs; the output may
depend on the input data supplied to the algorithm. Many researchers employ different metaheuristic
optimization algorithms for test suite minimization. According to our review, most of the developed
tools have not yet been adopted by the software testing industry.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Kitchenham</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Charters</surname>
          </string-name>
          , “
          <article-title>Guidelines for performing Systematic Literature Reviews in Software Engineering</article-title>
          ,” vol.
          <volume>2</volume>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohanty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Mohapatra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. F.</given-names>
            <surname>Meko</surname>
          </string-name>
          , “
          <article-title>Ant colony Optimization (ACO-Min) algorithm for test suite minimization</article-title>
          ,
          <source>” Advances in Intelligent Systems and Computing</source>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>63</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Pachariya</surname>
          </string-name>
          ,
          <article-title>“Multi-objective test suite optimization for detection and localization of software faults</article-title>
          ,
          ”
          <source>Journal of King Saud University - Computer and Information Sciences</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kaur</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Pandey</surname>
          </string-name>
          , “
          <article-title>Fault coverage-based test suite optimization method for regression testing: Learning from mistakes-based approach</article-title>
          ,
          <source>” Neural Computing and Applications</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>7769</fpage>
          -
          <lpage>7784</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mongiovì</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fornaia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Tramontana</surname>
          </string-name>
          , “Redunet:
          <article-title>Reducing test suites by integrating set cover and network-based optimization</article-title>
          ,” Applied Network Science, vol.
          <volume>5</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Khari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Burgos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Crespo</surname>
          </string-name>
          , “
          <article-title>Optimized test suites for automated testing using different optimization techniques</article-title>
          ,
          <source>” Soft Computing</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>24</issue>
          , pp.
          <fpage>8341</fpage>
          -
          <lpage>8352</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Alsewari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. C.</given-names>
            <surname>Har</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Homaid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Nasser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Z.</given-names>
            <surname>Zamli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Tairan</surname>
          </string-name>
          , “
          <article-title>Test cases minimization strategy based on flower pollination algorithm</article-title>
          ,
          <source>” Recent Trends in Information and Communication Technology</source>
          , pp.
          <fpage>505</fpage>
          -
          <lpage>512</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tang</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>"An Improved K-means Algorithm for Test Case Optimization,"</article-title>
          <source>2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>169</fpage>
          -
          <lpage>172</lpage>
          , doi: 10.1109/CCOMS.2019.8821687.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , “
          <article-title>Cost-effective testing based fault localization with distance based test-suite reduction</article-title>
          ,”
          <source>Science China Information Sciences</source>
          , vol.
          <volume>60</volume>
          , no.
          <issue>9</issue>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <article-title>"</article-title>
          <source>Test Case Optimization using Butterfly Optimization Algorithm," 2020 10th International Conference on Cloud Computing, Data Science &amp; Engineering (Confluence)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>704</fpage>
          -
          <lpage>709</lpage>
          , doi: 10.1109/Confluence47617.2020.9058334.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Sugave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Patil</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. E.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <article-title>"DDF: Diversity Dragonfly Algorithm for cost-aware test suite minimization approach for software testing,"</article-title>
          <source>2017 International Conference on Intelligent Computing and Control Systems (ICICCS)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>701</fpage>
          -
          <lpage>707</lpage>
          , doi: 10.1109/ICCONS.2017.8250554.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lou</surname>
          </string-name>
          , “
          <article-title>A new test suite reduction method for wearable embedded software</article-title>
          ,
          <source>” Computers &amp; Electrical Engineering</source>
          , vol.
          <volume>61</volume>
          , pp.
          <fpage>116</fpage>
          -
          <lpage>125</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Reena</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Bhatia</surname>
          </string-name>
          , “
          <article-title>Test case minimization in cots methodology using genetic algorithm: A modified approach</article-title>
          ,
          <source>” Proceedings of ICETIT</source>
          <year>2019</year>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>228</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Trabelsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Bennani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Yahia</surname>
          </string-name>
          , “
          <article-title>A new test suite reduction approach based on hypergraph minimal transversal mining,” Future Data</article-title>
          and Security Engineering, pp.
          <fpage>15</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Sugave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Patil</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. E.</given-names>
            <surname>Reddy</surname>
          </string-name>
          , “
          <article-title>A cost-aware test suite minimization approach using TAP measure and greedy search algorithm</article-title>
          ,”
          <source>International Journal of Intelligent Engineering and Systems</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>60</fpage>
          -
          <lpage>69</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Shree</surname>
          </string-name>
          , “
          <article-title>A new similarity-based greedy approach for generating effective test suite</article-title>
          ,”
          <source>International Journal of Intelligent Engineering and Systems</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <article-title>"</article-title>
          <source>Test Case Optimization using Butterfly Optimization Algorithm," 2020 10th International Conference on Cloud Computing, Data Science &amp; Engineering (Confluence)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>704</fpage>
          -
          <lpage>709</lpage>
          , doi: 10.1109/Confluence47617.2020.9058334.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Sugave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Patil</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. E.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <article-title>"DDF: Diversity Dragonfly Algorithm for cost-aware test suite minimization approach for software testing,"</article-title>
          <source>2017 International Conference on Intelligent Computing and Control Systems (ICICCS)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>701</fpage>
          -
          <lpage>707</lpage>
          , doi: 10.1109/ICCONS.2017.8250554.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>