<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Data Stochastic Preprocessing for Sorting Algorithms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Viktor V. Shynkarenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anatoliy Y. Doroshenko</string-name>
          <xref ref-type="aff" rid="aff0">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena A. Yatsenko</string-name>
          <xref ref-type="aff" rid="aff0">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentyn V. Raznosilin</string-name>
          <email>valentin.raznosilin@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kostiantyn K. Halanin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>2</label>
          <institution>Institute of Software Systems of National Academy of Sciences of Ukraine</institution>
          ,
          <addr-line>Glushkov prosp. 40, Kyiv, 03187</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ukrainian State University of Science and Technologies</institution>
          ,
          <addr-line>Lazaryana str. 2, Dnipro, 49010, Ukraine</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The possibilities of improving the time parameters of sorting through preprocessing by stochastic sorting were investigated. The hypothesis that preprocessing by stochastic sorting can significantly improve time efficiency was experimentally confirmed. Classical sorting algorithms of different computational complexity were considered: shaker sort with computational complexity O(n^2), insertion sort O(n^2), Shell sort O(n·(log n)^2)…O(n^(3/2)), and quicksort with optimization of the ending sequences O(n·log n). The greatest effect is obtained when the number of comparisons performed by stochastic sorting is about 160 percent of the array's size. Indicators of the efficiency of exchanging two elements and of a series of comparisons with exchanges are proposed; they made it possible to establish that data preprocessing by stochastic sorting is most efficient when one element of each compared pair is selected from the first half of the array and the other from the second. For algorithms with a computational complexity of O(n^2), the improvement in time efficiency reached 70-80 percent. However, for Shell sort and quicksort, the stochastic presort has no positive effect and instead increases the total sorting time, which is apparently due to the initially high efficiency of these sorting methods. The hypothesis that the time efficiency of quicksort combined with insertion sort on the final sections can be increased by preliminary stochastic processing of such sections was not confirmed. However, the experiment established the recommended array size at which the modified quicksort should switch to insertion sort: the optimal length of the ending sequences is between 60 and 80 elements. Given that algorithm time efficiency is affected by computer architecture, operating system, software development and execution environment, data types, data sizes, and their values, time efficiency indicators should be specified in each specific case.</p>
      </abstract>
      <kwd-group>
        <kwd>sorting</kwd>
        <kwd>stochastic sorting</kwd>
        <kwd>algorithm</kwd>
        <kwd>time efficiency</kwd>
        <kwd>experiment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <sec id="sec-2-1">
        <sec id="sec-2-1-1">
          <title>1. Introduction</title>
          <p>The possibility of improving the time parameters of sorting by preprocessing with stochastic sorting was investigated.</p>
          <p>
            Stochastic algorithms are very useful in solving complex optimization tasks, for example, in finding the roots of complex equations [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ]. It was interesting to see how useful they can be in solving nontraditional problems such as data sorting.
          </p>
          <p>Stochastic sorting consists in a cyclic comparison of two randomly selected elements of the array and, if necessary, their permutation. The stochastic sorting method, like any method, admits modifications related to the selection of elements for comparison, the distribution law of the random numbers, etc.</p>
          <p>© 2022 Copyright for this paper by its authors.</p>
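          <p>As a minimal illustration (a Python sketch; the experiments in this paper were implemented in Delphi, and the name stochastic_presort is ours, not the authors'), one stochastic presort pass can be written as follows:</p>

```python
import random

def stochastic_presort(a, num_comparisons, rng=random):
    # Stochastic presort pass: repeatedly compare two randomly chosen
    # elements and swap them when they are out of order.
    n = len(a)
    for _ in range(num_comparisons):
        # Order the two random indices so that i comes before j.
        i, j = sorted((rng.randrange(n), rng.randrange(n)))
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a
```

          <p>Note that a swap is performed only when the earlier element is larger, so a pass can reduce the array's disorder but never increase it.</p>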
          <p>
            Combination or hybridization of sorting algorithms is a well-known approach [
            <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2-5</xref>
            ] for increasing their time efficiency. An example is combining quicksort with insertion sort on small final sections (to avoid frequent, inefficient recursive calls on small arrays).
          </p>
          <p>In most cases, sorting algorithms are used with some regularity in relatively stable conditions:
software and hardware, array size and filling, etc. So there is an opportunity to select the most
effective algorithms or their combinations.</p>
          <p>Of course, stochastic sorting cannot be applied independently, due to the lack of a termination condition and the theoretical possibility of looping forever. However, a hypothesis was put forward about its effectiveness as a preliminary preparation of the array for further sorting by one of the classic methods. Experiments were carried out in order to confirm or refute the proposed hypothesis.</p>
          <p>
            Sorts of different computational complexity [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ] are accepted as the classical sortings: shaker sort, with computational complexity O(n^2); insertion sort (according to [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], the fastest stable sorting of the class O(n^2)); Shell sort, O(n·(log n)^2)…O(n^(3/2)); and quicksort with optimization of the final sections, O(n·log n) [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-1-2">
          <title>2. Related works</title>
          <p>
            For a long time [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ], many different sorting algorithms have been known and used to solve various
programming tasks. In terms of theoretical analysis and algorithm design, sorting is a fundamental
problem. Sorting algorithms are an ongoing topic of study in both scientific and educational circles.
Both theoretical (for example, [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]) and experimental methods for studying algorithms [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ] are used
in this case.
          </p>
          <p>Since the time characteristics of algorithms are largely affected by the hardware and software environment for their implementation and execution (multitasking, the command execution pipeline, pre-pipeline optimization of command execution, branch prediction in the pipeline, caching of commands and data, and a number of others), the execution time of commands depends significantly and non-linearly on these environments. Comparative studies of sorting efficiency in various software and hardware environments are not the goal of this work. We performed an experimental study on a laptop under the Windows 10 operating system using the Delphi development environment.</p>
          <p>
            Sorting algorithms are quite universal, but they have their own effective application sphere. In this
regard, the search for new sorting algorithms and improvements to existing ones continues. The
"2 mm" algorithm [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ], for example, is a modification of the Min-Max Bidirectional Parallel Selection
Sort (MMBPSS) algorithm [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ], which modifies the bidirectional parallel selection sort. The use of a stack is proposed in the MMBPSS algorithm to reduce the number of comparisons, and in [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] – the use of a memory of size O(n). New opportunities to improve well-known algorithms are being sought. For example, the bidirectional insertion sort algorithm proposed in [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] improves the time characteristics of the classical algorithm.
          </p>
          <p>
            Separately, we note algorithm hybridization in order to improve time efficiency. An experiment
was carried out in [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] to develop a hybrid sorting program based on the method of selecting
algorithms, a machine learning system, and an algebraic-algorithmic approach to program design [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ].
The hybrid algorithm chose between insertion sort and quick sort based on the length and degree of
sorting of the array. The algorithm is based on a decision tree derived from the analysis of statistical
data from array sorting. The work [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] considers hybrid parallel sorting, in which sorting is done by merging or insertion depending on the block size of the numeric data array. The system of automatic program tuning selects the optimal value of the block size.
          </p>
          <p>
            As is known, “efficient implementations generally use a hybrid algorithm, combining an
asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom
of a recursion” [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
          </p>
          <p>
            There is also ongoing research into the effectiveness of sorting algorithms. Additional quality
indicators are offered. Thus, in [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ] the range of asymptotic behavior of known algorithms was
determined.
          </p>
          <p>
            Work is being done to automate the development of sorting algorithms, such as those based on
logic and combinatorics [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-1-3">
          <title>3. The experimental basis</title>
          <p>Software. The experiments were carried out on the Windows 10 operating system. In the Delphi
10.3 environment, a 32-bit window program was created that implements sorting algorithms with
various variants of random permutations.</p>
          <p>The following measures were used to stabilize the results and significantly reduce the random
factors that affect the execution time of the algorithms:
• a separate thread was used to run a series of experiments;
• the program was compiled in code optimization (release) mode;
• the program was executed outside of the development environment;
• no other windowed programs running;
• during the experiments, the network was turned off;
• the cache was cleared prior to the time measurements.</p>
          <p>Hardware. The experiments were performed on a laptop with the technical characteristics listed in
the Table 1 and Table 2.</p>
          <p>
            Data structures. Sorting was accomplished by preparing arrays of 4-byte integers. The sizes of the arrays depended on the sorting method – smaller for less efficient sortings. Memory for the arrays was allocated directly by the Windows operating system using VirtualAlloc [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ]. The array to be sorted was filled with numbers x_i ranging from 0 to N−1, where N is the array element count. This was followed by shuffling, which was accomplished by repeatedly swapping two randomly selected elements. The number of such exchanges (5N) was determined to be adequate for high-quality mixing of the original array and obtaining a random distribution of array elements.
          </p>
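          <p>The data preparation described above can be sketched as follows (a Python sketch of the described procedure, not the authors' Delphi code):</p>

```python
import random

def prepare_array(n, rng=random):
    # Fill the array with 0..n-1, then shuffle it by 5n random
    # transpositions, as described for the experimental data.
    a = list(range(n))
    for _ in range(5 * n):
        i = rng.randrange(n)
        j = rng.randrange(n)
        a[i], a[j] = a[j], a[i]
    return a
```

          <p>With this preparation, each element's value is also its index in the sorted array, a property the exchange-efficiency indicator below relies on.</p>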
          <p>The experiment's organization. The execution time of sorting the same array will be random
with some variance due to the volatility of Windows operating system threads and hardware state.</p>
          <p>The sorting time was estimated as the mean and the median of the time spent sorting the same data array M times. The address of the array in memory did not change; the data was copied from the original unsorted array before each measurement.</p>
          <p>The CPU cache was cleared before performing each separate sorting (parallel measurement). The following techniques were used to accomplish this:
• at least ten passes of sequential access to cells in a previously allocated memory area;
• a sleep function call for a 500 ms delay.</p>
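          <p>A hypothetical measurement harness of this kind (a Python sketch; the paper's measurements were taken in Delphi, and measure_sort is our name) restores the unsorted data before each run and reports both the mean and the median:</p>

```python
import statistics
import time

def measure_sort(sort_fn, original, m=100):
    # Repeat the measurement m times on a fresh copy of the data,
    # then report the mean and the median run time in seconds.
    times = []
    for _ in range(m):
        data = original[:]          # restore unsorted data each run
        t0 = time.perf_counter()
        sort_fn(data)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.median(times)
```

          <p>The median is less sensitive than the mean to occasional outliers caused by operating system activity, which is why both statistics are tracked.</p>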
        </sec>
        <sec id="sec-2-1-4">
          <title>4. Preliminary experiments</title>
          <p>To estimate the required number of parallel measurements (previously denoted as M) that would
give a stable mean and median sorting time, a preliminary experiment was performed. In order to do
this, quicksort was used on the same dataset 500 times.</p>
          <p>The results of this experiment (Figure 1 and Figure 2) demonstrated that approximately 100 parallel measurements for the same data set are adequate for estimating the mean and median sorting time. At the same time, the sorting time will be overestimated rather than underestimated, with a maximum deviation of 0.5% from the value for 500 parallel measurements. All of the data presented in this paper were obtained for 100 parallel measurements (for each type of sorting, amount of data, etc.).</p>
        </sec>
        <sec id="sec-2-1-5">
          <title>5. An investigation into stochastic sorting</title>
          <p>Before carrying out the main part of the experiments, the effectiveness of swapping elements to improve the array ordering was investigated. The time indicators of the series of exchanges were not determined at this stage.</p>
          <p>Two methods of selecting random elements for comparison (with possible subsequent exchange)
were investigated.</p>
          <p>In the first method, two uniformly distributed numbers ranging from 0 to N−1 are selected. A pair of elements to be compared may be located anywhere in the array.</p>
          <p>In the second method, the index of the first element of the pair is chosen using the uniform distribution law from 0 to N/2−1, and that of the second from N/2 to N−1. As a result, the pair's elements always lie in different halves of the array.</p>
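          <p>The two index-selection methods can be sketched as follows (a Python sketch; function names are ours):</p>

```python
import random

def pick_method1(n, rng=random):
    # Method 1: both indices uniform over the whole array;
    # any relative order of i and j is possible.
    return rng.randrange(n), rng.randrange(n)

def pick_method2(n, rng=random):
    # Method 2: first index from the first half, second index from
    # the second half, so i is always strictly smaller than j.
    i = rng.randrange(n // 2)
    j = rng.randrange(n // 2, n)
    return i, j
```

          <p>With the second method no extra comparison of the indices is needed, which is the overhead advantage noted later in this section.</p>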
          <p>For the experiment, an array of 10^5 randomly mixed integers ranging from 0 to 99999 was prepared. Elements for comparison were chosen in series of 1000 pairs per test, and 200 such tests were performed sequentially – 2·10^5, or 2N, comparisons in total. The following indicators were recorded for each series of comparisons:
• the number of exchanges in the series;
• the average efficiency of exchanges in the series;
• the disorder of the array after the series of comparisons ended.</p>
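          <p>The paper does not specify which disorder metric it used; one common choice, given here as an illustrative assumption, is the inversion count – the number of element pairs standing in the wrong relative order:</p>

```python
def inversion_count(a):
    # Inversion count via merge sort, O(n log n).
    # Note: sorts the list in place as a side effect.
    if len(a) > 1:
        mid = len(a) // 2
        left, right = a[:mid], a[mid:]
        inv = inversion_count(left) + inversion_count(right)
        i = j = k = 0
        while i != len(left) and j != len(right):
            if right[j] >= left[i]:
                a[k] = left[i]
                i += 1
            else:
                # right[j] is smaller than every remaining left element,
                # so it forms len(left) - i inversions with them.
                a[k] = right[j]
                j += 1
                inv += len(left) - i
            k += 1
        a[k:] = left[i:] + right[j:]
        return inv
    return 0
```

          <p>A fully sorted array has zero inversions; a reversed array of n elements has n·(n−1)/2.</p>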
          <p>The number of permutations for the first and second methods is shown in Figure 3. The trend line
(based on the exponential law) is also depicted.</p>
          <p>In a series of 1000 comparison attempts, the initial number of exchanges is around 500 (Figure 3),
after which it begins to decrease. Over a range of 200,000 comparisons, the first method reduces the
number of exchanges to 280 and the second to 100. The number of exchanges performed decreases in
both methods of element selection according to laws that are close to linear. With a large enough
number of attempts, the number of completed exchanges tends to zero. In practice, this means
determining not only the most efficient way to select elements for comparison, but also a reasonable
number of comparisons with at least 100 exchanges for every 1000 attempts.</p>
          <p>Note that in the first method, the exchange of elements is performed more often. However, in this case, an additional comparison is required to determine the order of the selected indices i and j (i &lt; j, i &gt; j, or i = j). For the second method of index selection, it is guaranteed that i &lt; j.</p>
          <p>The relative change in the position of the elements before and after the permutation toward their final position (in the sorted array) is defined as the efficiency of the permutation of a pair of elements:</p>
          <p>e(i, j, N) = (|x_i − i| − |x_i − j|) / max(x_i, N − x_i),</p>
          <p>where i, j are the indices of elements in the array x and N is the number of elements in the array. We take into account that the array element's value is also its index in the ordered array (due to the method of forming an unordered array proposed above).</p>
          <p>The indicator e(i, j, N) determines how close the element at index i will become to its final position in the ordered array x after it is moved to the position at index j.</p>
          <p>It should be noted that this function gives the same range of values regardless of the number of elements N in the array x. The worst possible value is e(0, N, N) = −1 at x_i = 0, and the best possible value is e(0, N, N) = 1 at x_i = N.</p>
          <p>The exchange efficiency indicator E(i, j, N) averages the values e(i, j, N) and e(j, i, N) for a pair of swapped elements:</p>
          <p>E(i, j, N) = (e(i, j, N) + e(j, i, N)) / 2 · 100%.</p>
          <p>The exchange efficiency for each element of the pair can be both positive and negative. The worst-case scenario is that both elements' positions worsened as a result of the exchange; the best-case scenario is that both elements' positions improved.</p>
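          <p>Under the assumption stated above (each element's value x_i equals its index in the sorted array), the two indicators can be computed directly from their definitions (a Python sketch; function names are ours):</p>

```python
def e(x, i, j, n):
    # Relative improvement of element x[i]'s position if it is moved
    # from index i to index j; the target index of x[i] is x[i] itself.
    return (abs(x[i] - i) - abs(x[i] - j)) / max(x[i], n - x[i])

def exchange_efficiency(x, i, j, n):
    # E(i, j, N): mean of both swapped elements' individual
    # efficiencies, expressed as a percentage.
    return 100.0 * (e(x, i, j, n) + e(x, j, i, n)) / 2.0
```

          <p>For example, with N = 10 and x_0 = N, moving the element from index 0 to index N gives e = 1, the best possible value; with x_0 = 0 the same move gives e = −1, the worst.</p>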
          <p>Figure 4 depicts the average exchange efficiency for the first and second methods of selecting
elements for comparison. The trend line (according to the exponential law) is also shown.
The average disorder of array elements for both methods is shown in Figure 5.</p>
          <p>The second method is better in terms of exchange efficiency if the number of previous comparisons exceeds 160% of the total number of array elements.</p>
          <p>The results of experiments using the second method of exchange, in which the index values are chosen from the first and second halves of the sorted array, are shown below. This method is the most useful in practice because, even with a large number of comparisons (2N…3N), it has lower overhead due to a smaller number of conditional jump operations.</p>
        </sec>
        <sec id="sec-2-1-6">
          <title>6. Classical sorting with preprocessing by stochastic sorting</title>
          <p>A series of experiments were carried out to test the hypothesis regarding the efficacy of stochastic
sorting as array preprocessing prior to the application of the classic sorting algorithm. First, sorting
with computational complexity O(n2 ) was considered: insertion sort and shaker sort.</p>
          <p>The size of the sorted array was 10^3, 10^4, and 10^5 elements alternately. The sorting time was averaged over 100 parallel measurements. In stochastic sorting, the number of comparisons varied from 0.5N to 5N with a step of 0.5N.</p>
          <p>The results of the experiments are shown in Figure 6 and Figure 7. Efficiency was calculated as</p>
          <p>Ef = (t_class / (t_stoch + t_class2) − 1) · 100%,</p>
          <p>where t_class is the time of classical sorting without preprocessing; t_stoch is the stochastic sorting time; t_class2 is the time of the same classical sorting, but performed after the stochastic one.</p>
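          <p>The efficiency indicator is a one-line computation; a positive value means the presorted pipeline beat the plain classical sort (a Python sketch; the function name is ours):</p>

```python
def efficiency(t_class, t_stoch, t_class2):
    # Ef = (t_class / (t_stoch + t_class2) - 1) * 100 percent:
    # positive when stochastic presorting plus the classical sort
    # together run faster than the classical sort alone.
    return (t_class / (t_stoch + t_class2) - 1.0) * 100.0
```

          <p>For instance, if the plain sort takes 100 ms while the presort and the subsequent sort take 10 ms and 40 ms, the efficiency is 100%.</p>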
          <p>If the volume of the array is greater than 10^4 elements, the number of pairs for stochastic ordering should be in the range 3N…4N. The time spent sorting can be reduced by 70-80% using the methods considered.</p>
          <p>The maximum efficiency for small arrays (fewer than 10^4 elements) is approximately 25-40% and is achieved after stochastic ordering of approximately 2N…2.5N pairs. The efficiency then reaches a plateau (shaker sort) or starts to decline (insertion sort).</p>
          <p>A similar series of experiments was conducted for Shell sort, with a computational complexity of O(n·(log n)^2)…O(n^(3/2)), and quicksort, O(n·log n). However, for these sortings the preordering of randomly selected pairs of elements no longer has a positive effect. On the contrary, it increases the total sorting time, which is evidently explained by the initially high efficiency of these sorting methods.</p>
        </sec>
        <sec id="sec-2-1-7">
          <title>7. Bicomponent sorting: quicksort and insertion</title>
          <p>
            In subsequent experiments, the quicksort implementation assumes that end sections of a certain length will be sorted by insertion sort, which outperforms quicksort on small amounts of data [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ]. We did not investigate other modifications of the quicksort algorithm, such as those with two pivots [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ], SMS algorithms [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ], hybrid algorithms [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ], and others [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ].
          </p>
          <p>The efficiency of applying preprocessing by stochastic sorting of the final parts of the array before
using insertion sorting is of interest.</p>
          <p>A series of experiments was carried out to determine the optimal value at which the transition from recursive calls to insertion sorting should occur. Quicksort was used to sort an array of 2·10^6 elements, both without end-section optimization and with the end-section length varying from 16 to 160 elements with a step of two. In each case, 100 parallel measurements were taken, and the results were averaged. The average sorting times were compared. Without optimization, the quicksort time was 212.3 ms. Figure 8 depicts the sorting time with optimization at various sizes of the final section. The optimal length of the final section is between 60 and 80 elements. This quicksort parameter was assumed to be 70 in subsequent experiments. The two-component quicksort with insertions reduces the quicksort time by about 20%.</p>
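          <p>The two-component scheme can be sketched as follows (a Python sketch, not the authors' Delphi implementation; the Hoare-style middle-pivot partition is our assumption, since the paper does not specify the partition scheme; the cutoff of 70 is the value chosen above):</p>

```python
def insertion_sort(a, lo, hi):
    # Straight insertion sort on the subarray a[lo..hi], inclusive.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None, cutoff=70):
    # Two-component sort: quicksort recursion, but subarrays of at
    # most `cutoff` elements are finished by insertion sort.
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 > cutoff:
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while j >= i:
            while pivot > a[i]:
                i += 1
            while a[j] > pivot:
                j -= 1
            if j >= i:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        hybrid_quicksort(a, lo, j, cutoff)
        hybrid_quicksort(a, i, hi, cutoff)
    elif hi > lo:
        insertion_sort(a, lo, hi)
    return a
```

          <p>Switching to insertion sort on short final sections avoids the frequent, inefficient recursive calls on small arrays mentioned in the introduction.</p>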
          <p>In subsequent experiments, an attempt was made to improve the efficiency of quicksort by performing stochastic sorting before insertion sorting on the end sections. To accomplish this, an assessment of the distribution of the lengths of the final sections was performed beforehand; 70 elements were chosen as the maximum length of the final section. Figure 9 depicts the distribution of the sizes of the arrays sorted at the final sections. Approximately 80% are arrays with 2 to 30 elements, and the remaining 20% are arrays with 31 to 70 elements.</p>
          <p>Experiments have shown that using random permutations to presort the final sections has no positive effect. The very short lengths of the end sections (2…30 elements in 80% of cases) explain this. As previously demonstrated, the effectiveness of stochastic sorting is most significant on large arrays with 10^4 elements or more.</p>
        </sec>
        <sec id="sec-2-1-8">
          <title>8. Conclusions</title>
          <p>The experiments confirmed the hypothesis that there is a significant improvement in the time efficiency of classical sortings of computational complexity O(n^2) when stochastic preprocessing is used, compared to the same classical sortings without preprocessing.</p>
          <p>The hypothesis about the possibility of increasing the time efficiency of quicksort by preprocessing small final parts of the array with stochastic sorting before switching to insertion sort was not confirmed.</p>
          <p>However, during the experiment, the recommended subarray size was determined at which the two-component quick/insertion sort must switch to its second component, insertion sort.</p>
          <p>Any sorting algorithm has advantages and disadvantages, and the programmer must choose based on the task requirements. In some cases, using class O(n·log n) sorts is less efficient than using class O(n^2) ones. This could be due to hardware limitations, array filling specifics, the distribution law of the array's values, data types, and their representation in RAM. Preprocessing with a stochastic algorithm in these cases may improve their time efficiency.</p>
          <p>Because algorithm time efficiency is affected by computer architecture, operating system, software development and execution environment, data types, data volumes, and their values, time efficiency indicators should be determined on a case-by-case basis.</p>
        </sec>
        <sec id="sec-2-1-9">
          <title>9. References</title>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Shynkarenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ilchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zabula</surname>
          </string-name>
          ,
          <article-title>Tools of investigation of time and functional efficiency of bionic algorithms for function optimization problems</article-title>
          ,
          <source>in: Proceedings of the 11th International Conference of Programming (UkrPROG 2018)</source>
          , Kyiv,
          <year>2018</year>
          , CEUR-WS,
          <volume>2139</volume>
          ,
          <fpage>270</fpage>
          -
          <lpage>280</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mubarak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Naeem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <article-title>2 mm: a new technique for sorting data</article-title>
          ,
          <source>Theoretical Computer Science</source>
          <volume>910</volume>
          ,
          <year>2022</year>
          ,
          <fpage>68</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Yatsenko</surname>
          </string-name>
          ,
          <article-title>On application of machine learning for development of adaptive sorting programs in algebra of algorithms</article-title>
          ,
          <source>in: Proc. Int. Workshop “Concurrency: Specification and Programming” (CS&amp;P'2011)</source>
          ,
          <year>2011</year>
          ,
          <fpage>577</fpage>
          -
          <lpage>588</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Doroshenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Yatsenko</surname>
          </string-name>
          ,
          <article-title>Formal and adaptive methods for automation of parallel programs construction: emerging research and opportunities</article-title>
          ,
          <source>Hershey: IGI Global</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Doroshenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ivanenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Yatsenko</surname>
          </string-name>
          ,
          <article-title>A mixed method of parallel software auto-tuning using statistical modeling and machine learning</article-title>
          ,
          <source>Communications in Computer and Information Science. Information and Communication Technologies in Education, Research, and Industrial Applications</source>
          <volume>1007</volume>
          ,
          <year>2019</year>
          ,
          <fpage>102</fpage>
          -
          <lpage>123</lpage>
          . doi:10.1007/978-3-030-13929-2_6.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Cormen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Leiserson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Rivest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Introduction to algorithms</article-title>
          (3rd ed.). Cambridge, MA: The MIT Press,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Yadav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yadav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <article-title>Comparative study of various stable and unstable sorting algorithms</article-title>
          ,
          <source>in: Artificial Intelligence and Speech Technology. CRC Press</source>
          ,
          <year>2021</year>
          ,
          <fpage>463</fpage>
          -
          <lpage>477</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <article-title>Sorting algorithm</article-title>
          . URL: https://en.wikipedia.org/wiki/Sorting_algorithm.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Knuth</surname>
          </string-name>
          ,
          <article-title>The Art of Computer Programming</article-title>
          , Vol.
          <volume>3</volume>
          : Sorting and searching (3rd ed.), Addison Wesley Longman Publishing Co., Inc.,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Aung</surname>
          </string-name>
          <article-title>Analysis and Comparative of Sorting Algorithms</article-title>
          ,
          <source>International Journal of Trend in Scientific Research and Development</source>
          <volume>3</volume>
          (
          <issue>5</issue>
          ),
          <year>2019</year>
          ,
          <fpage>1049</fpage>
          -
          <lpage>1053</lpage>
          . doi: 10.31142/ijtsrd26575
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Choudhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dutta</surname>
          </string-name>
          <article-title>Establishing pertinence between Sorting Algorithms prevailing in n log (n) time</article-title>
          ,
          <source>J Robot Auto Res</source>
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <year>2022</year>
          ,
          <fpage>220</fpage>
          -
          <lpage>226</lpage>
          . doi: 10.21203/rs.3.rs-1754555/v1
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Thabit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bawazir</surname>
          </string-name>
          <article-title>A novel approach of selection sort algorithm with parallel computing and dynamic programing concepts</article-title>
          .
          <source>JKAU: Comp. IT</source>
          <volume>2</volume>
          ,
          <year>2013</year>
          ,
          <fpage>27</fpage>
          -
          <lpage>44</lpage>
          . doi: 10.4197/Comp.2-2
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Mohammed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ş. E.</given-names>
            <surname>Amrahov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. V.</given-names>
            <surname>Çelebi</surname>
          </string-name>
          <article-title>Bidirectional Conditional Insertion Sort algorithm; An efficient progress on the classical insertion sort</article-title>
          .
          <source>Future Generation Computer Systems</source>
          ,
          <volume>71</volume>
          ,
          <year>2017</year>
          ,
          <fpage>102</fpage>
          -
          <lpage>112</lpage>
          . doi: 10.1016/j.future.2017.01.034
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O. K.</given-names>
            <surname>Durrani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shreelakshmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shetty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Vinutha</surname>
          </string-name>
          <article-title>Analysis and determination of asymptotic behavior range for popular sorting algorithms</article-title>
          . In Special Issue of
          <source>International Journal of Computer Science &amp; Informatics</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <year>2018</year>
          ,
          <fpage>149</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>I.</given-names>
            <surname>Drămnesc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Jebelean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stratulat</surname>
          </string-name>
          <article-title>Mechanical synthesis of sorting algorithms for binary trees by logic and combinatorial techniques</article-title>
          .
          <source>Journal of Symbolic Computation</source>
          <volume>90</volume>
          ,
          <year>2019</year>
          ,
          <fpage>3</fpage>
          -
          <lpage>41</lpage>
          . doi: 10.1016/j.jsc.2018.04.002
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <article-title>VirtualAlloc function</article-title>
          . URL: https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-virtualalloc.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Frak</surname>
          </string-name>
          et al.
          <article-title>Comparison study of sorting techniques in static data structure</article-title>
          ,
          <source>International Journal of Integrated Engineering: Special Issue Data Information Engineering</source>
          <volume>10</volume>
          (
          <issue>6</issue>
          ),
          <year>2018</year>
          ,
          <fpage>106</fpage>
          -
          <lpage>112</lpage>
          . doi: 10.30880/ijie.2018.10.06.014
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kushagra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>López-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Qiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. I.</given-names>
            <surname>Munro</surname>
          </string-name>
          <article-title>Multi-pivot quicksort: theory and experiments</article-title>
          .
          <source>Proc. Sixteenth Workshop on Algorithm Engineering and Experiments (ALENEX)</source>
          ,
          <year>2014</year>
          ,
          <fpage>47</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mansi</surname>
          </string-name>
          <article-title>Enhanced quicksort algorithm</article-title>
          .
          <source>Int. Arab J. Inf. Technol</source>
          <volume>7</volume>
          (
          <issue>2</issue>
          ),
          <year>2010</year>
          ,
          <fpage>161</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aftab</surname>
          </string-name>
          et al.
          <article-title>Review on performance of quick sort algorithm</article-title>
          ,
          <source>International Journal of Computer Science and Information Security</source>
          <volume>19</volume>
          (
          <issue>2</issue>
          ),
          <year>2021</year>
          ,
          <fpage>114</fpage>
          -
          <lpage>120</lpage>
          . doi: 10.5281/zenodo.4602255
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          et al.
          <article-title>Preprocessing large data sets by the use of quick sort algorithm</article-title>
          ,
          <source>Springer: Knowledge, Information and Creativity Support Systems: Recent Trends, Advances and Solutions</source>
          ,
          <year>2016</year>
          ,
          <fpage>111</fpage>
          -
          <lpage>121</lpage>
          . doi: 10.1007/978-3-319-19090-7_9
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>