<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Alternative Approach to the Efficiency of Recursive Merge Sort</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tibor Ásványi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eötvös Loránd University, Faculty of Informatics, Budapest</institution>
          ,
          <country country="HU">Hungary</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>29</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>The time complexity (also called asymptotic running time or operational complexity) Θ(n lg n) of merge sort is usually calculated by solving the recurrence</p>
      </abstract>
      <kwd-group>
        <kwd>algorithm</kwd>
        <kwd>merge sort</kwd>
        <kwd>recursion</kwd>
        <kwd>operational complexity</kwd>
        <kwd>asymptotic running time</kwd>
        <kwd>efficiency analysis</kwd>
        <kwd>education</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <p>
        T(n) = Θ(1), if n = 1;  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + Θ(n), if n &gt; 1,
      </p>
      <p>
        where n is the length of the sequence of keys to be sorted, T(n) is either the
best-case or the worst-case asymptotic running time of the algorithm, Θ(n)
is that of division + sorted merge, and Θ(1) is that of the base case [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
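      <p>
        For context, the standard solution of this recurrence can be sketched as follows (assume, for simplicity, that n is a power of 2): unrolling it over the lg n halving levels, each level contributes Θ(n), so
        <disp-formula>
          <tex-math notation="LaTeX">T(n) = \sum_{i=0}^{\lg n - 1} 2^{i}\,\Theta\!\left(n/2^{i}\right) + n\,\Theta(1) = \Theta(n \lg n).</tex-math>
        </disp-formula>
        The present paper replaces this style of argument with a direct analysis of the tree of recursive calls.
      </p>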
      <p>
        In this paper we introduce an alternative approach: we analyze the structure
of the tree of recursive calls, consider its depth and estimate the number of
steps [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] of computation at the different levels of that tree. Compared to
the equation above we use a stricter notation [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ] and argue about
its scientific and didactic advantages in the efficiency analysis of algorithms in
general.
      </p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        In the following sections of this paper first we introduce our time complexity
measure (Section 2), which is quite traditional (see [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]), but following [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] we try to
avoid abuse of notation. Next we make clear some notational conventions (Section
3) and describe merge sort (Section 4). Then
we present our calculation of the time complexity of this merge sort (Section 5).
Finally we argue about the educational and scientific advantages of our method of
calculation compared to solving a recurrence of the above style (Section 6).
      </p>
    </sec>
    <sec id="sec-2-0">
      <title>2. Operational complexity of programs</title>
      <p>Let MT(n) and mT(n) be the maximum and minimum running time of some
program, where n is the size of its input. Thus MT, mT : N → P where
P = {x ∈ R : x &gt; 0}. The problem is that we cannot speak of the running time
of a program in itself, because we do not know the computing environment. We
count the number of steps of the algorithm instead.</p>
      <sec id="sec-2-1">
        <p>
          In this paper we define the steps of an algorithm as its subprogram
invocations (surely we can omit nonrecursive calls; this omission is optional) and
its loop iterations. Counting these steps we get the appropriate
information about the running time of the program while omitting constant factors,
which are unknown because we do not know the programming environment [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
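        <p>As a concrete illustration of this step notion, a minimal Python sketch (the
function linear_search and its counter are our own example, not part of the paper)
counts one step for the invocation itself and one step per loop iteration:</p>
        <preformat>
# Python sketch: steps = subprogram invocations + loop iterations.
def linear_search(a, key):
    steps = 1                    # the invocation of linear_search is one step
    for x in a:                  # each loop iteration is one step
        steps += 1
        if x == key:
            break
    return steps

# For an input of length n: the minimum is 2 steps (key found at the front),
# the maximum is n + 1 steps (key missing), i.e. mS(n) = 2 and MS(n) = n + 1.
print(linear_search([3, 1, 4, 1, 5], 3))   # 2 steps
print(linear_search([3, 1, 4, 1, 5], 9))   # 6 steps
        </preformat>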
        <p>We will be interested in the maximum number of steps, denoted as MS(n)
(maximum time complexity), and in the minimum number of steps, denoted as
mS(n), where n is the size of the input data structure; in case of a sorting algorithm,
n is the length of the input array or list. Thus MS, mS : N → N.</p>
        <p>While we omit constant factors, it is enough to give good estimations of the
functions MS(n) and mS(n). These estimations may be functions of type N → R,
and they may be negative for small sizes of the input. Consequently the following
sets of functions are introduced traditionally.</p>
        <p>Definition 2.1. f : N → R is
- asymptotically nonnegative, if ∃N ∈ N, ∀n ≥ N : f(n) ≥ 0;
- asymptotically positive, if ∃N ∈ N, ∀n ≥ N : f(n) &gt; 0.</p>
        <sec id="sec-2-1-1">
          <title>Definition 2.2.</title>
          <p>Assume that f, g and h are asymptotically nonnegative functions.
O(g) = {f : ∃c &gt; 0, ∃N ∈ N, ∀n ≥ N : 0 ≤ f(n) ≤ c · g(n)}
Ω(g) = {f : ∃c &gt; 0, ∃N ∈ N, ∀n ≥ N : 0 ≤ c · g(n) ≤ f(n)}
Θ(g) = O(g) ∩ Ω(g)
h + Θ(g) = {f : ∃t ∈ Θ(g) : f = h + t}</p>
          <p>We can say, f ∈ O(g) means that function g is an asymptotic upper bound of
function f, f ∈ Ω(g) means that function g is an asymptotic lower bound of
function f, and f ∈ Θ(g) means that functions f and g are asymptotically
equivalent (see corollary 2.3.b). f ∈ Θ(g) also means that the asymptotic order of
f is g. Using these notions we can give appropriate estimations of the functions
MS(n) and mS(n), as it is illustrated in Section 5.</p>
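          <p>For example, the constants required by definition 2.2 can be exhibited
explicitly for f(n) = 2n + 3 and g(n) = n:
            <disp-formula>
              <tex-math notation="LaTeX">0 \le n \le 2n + 3 \quad (\forall n \ge 0), \qquad 2n + 3 \le 3n \quad (\forall n \ge 3),</tex-math>
            </disp-formula>
            so 2n + 3 ∈ Ω(n) with c = 1, N = 0, and 2n + 3 ∈ O(n) with c = 3, N = 3;
hence 2n + 3 ∈ Θ(n).</p>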
          <p>From the definitions above we can easily derive the following useful
consequences:</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>Corollary 2.3.</title>
          <p>Assume that f, g, h : N → R are asymptotically nonnegative.
(a) f ∈ Θ(g) ∧ g ∈ Θ(h) =⇒ f ∈ Θ(h)
(b) f ∈ Θ(g) ⇐⇒ g ∈ Θ(f)</p>
          <p>Corollary 2.4. Assume that f : N → R is asymptotically nonnegative
and g : N → R is asymptotically positive.
f ∈ O(g) ⇐⇒ (∃e : N → R), ∃c &gt; 0, ∃N ∈ N, ∀n ≥ N :
c · g(n) + e(n) ≥ f(n) ∧ lim_{n→∞} e(n)/g(n) = 0;
f ∈ Ω(g) ⇐⇒ (∃e : N → R), ∃c &gt; 0, ∃N ∈ N, ∀n ≥ N :
c · g(n) + e(n) ≤ f(n) ∧ lim_{n→∞} e(n)/g(n) = 0.</p>
          <p>Theorem 2.5. MT ∈ Θ(MS) and mT ∈ Θ(mS).</p>
          <p>Let a subprogram invocation as a step consist of the call and the run of the
body of the subprogram, including the process of entering directly
embedded loops, but excluding the iterations of those loops and excluding
the run of the directly embedded subprogram invocations.</p>
          <p>Let a loop iteration as a step consist of the evaluation of the condition of
the loop – when this condition is true – followed by performing the statement
part of the loop, including the process of exiting from directly embedded loops,
but excluding the iterations of those loops and excluding the run of the directly
embedded subprogram invocations.</p>
          <p>Thus we covered the text of the whole program with a finite number of disjoint
stages as steps. (The whole program is considered a special subprogram here.) No
step contains another loop iteration or recursion, although these may be embedded
into the step. Consequently each step has a maximal and a minimal running
time.</p>
          <p>Let c_max be the maximum of the maximums and let c_min be the minimum
of the minimums. Therefore c_min · MS(n) ≤ MT(n) ≤ c_max · MS(n). Thus
MT(n) ∈ Ω(MS(n)) ∩ O(MS(n)) = Θ(MS(n)). Similarly c_min · mS(n) ≤ mT(n) ≤
c_max · mS(n). Thus mT(n) ∈ Ω(mS(n)) ∩ O(mS(n)) = Θ(mS(n)).</p>
          <p>Thus it is enough to calculate the asymptotic order of MS(n) and mS(n). Notice
that the notion of a step could be modified slightly so that theorem 2.5 remains
true. For example, we can omit (some of the) nonrecursive
calls, if it is more convenient for us.</p>
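          <p>For example, corollary 2.4 lets us drop a lower-order term: for f(n) = n² + 5n
and g(n) = n², choosing e(n) = 5n and c = 1 gives
            <disp-formula>
              <tex-math notation="LaTeX">c \cdot g(n) + e(n) = n^{2} + 5n \ge f(n) \quad (\forall n \ge 0) \qquad \text{and} \qquad \lim_{n \to \infty} \frac{5n}{n^{2}} = 0,</tex-math>
            </disp-formula>
            so f ∈ O(n²). The same device is used in subsection 5.4 with e(n) = 4n and
g(n) = n · lg n.</p>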
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Notations</title>
      <p>We suppose that an array consists of a pointer and a so-called array object, where
the pointer refers to the object. An array object contains the length of the array
object and its elements.</p>
      <p>A : T[n] means that A is a pointer referring to an array object with element
type T and length n. If we write A : T[n] on a formal parameter list, it specifies
pointer A of type T[], and n is just a short notation for the length of the (actual
parameter) array; n can be omitted here, if it is not needed. A : T[n] can also be
a declaration statement. Then it declares pointer A of type T[], creates the array
object of n elements and assigns its address to A.</p>
      <sec id="sec-3-1">
        <p>We suppose that this array object is automatically deleted when the block
containing it is finished. Arrays are indexed from 0. A[u..v) represents the
sequence ⟨A[u], . . . , A[v−1]⟩.</p>
        <p>The size of a binary tree t is its number of nodes |t|; the empty tree is ∅.
The number of internal nodes of t is i(t), the number of its leaves is l(t), where
|t| = i(t) + l(t). The height of t is h(t), where h(∅) = −1. If t ≠ ∅, then t.left
and t.right are its left and right subtrees.</p>
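        <p>These notions translate directly into code; a minimal Python sketch (the names
size, leaves, internal and height are ours):</p>
        <preformat>
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:                       # a binary tree node; None stands for the empty tree
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def size(t):                      # |t|, the number of nodes
    return 0 if t is None else 1 + size(t.left) + size(t.right)

def leaves(t):                    # l(t)
    if t is None:
        return 0
    if t.left is None and t.right is None:
        return 1
    return leaves(t.left) + leaves(t.right)

def internal(t):                  # i(t) = |t| - l(t)
    return size(t) - leaves(t)

def height(t):                    # h(t), with h(empty) = -1
    return -1 if t is None else 1 + max(height(t.left), height(t.right))
        </preformat>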
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Merge sort</title>
      <p>
        Merge sort was invented by John von Neumann in 1945 [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] (see Figure 1). It
uses the divide and conquer approach. Given a sequence of keys to be sorted, in
this algorithm we have two cases:
      </p>
      <p>The empty sequences and those consisting of a single item are already sorted;
but we halve the longer sequences, sort the half-sequences with the same method
and merge the sorted parts in a sorted way. (See Figure 1.)</p>
      <sec id="sec-4-1">
        <p>The interface procedure of merge sort is given in Figure 2, and its recursive
subroutine can be found in Figure 3.</p>
        <preformat>
procedure mergeSort(A : T[n])   ◁ sort the whole array A,
  mSort(A, 0, n)                ◁ which means sorting A[0..n)
end procedure                   ◁ MS(n), mS(n) ∈ Θ(n lg n)
        </preformat>
        <preformat>
procedure mSort(A : T[]; u, v : N)   ◁ sort A[u..v)
  if u &lt; v − 1 then
    m := ⌊(u + v)/2⌋
    mSort(A, u, m)
    mSort(A, m, v)
    merge(A, u, m, v)
  end if
end procedure
        </preformat>
        <p>With the choice of m := ⌊(u + v)/2⌋, the subarrays A[u..m) and A[m..v)
have the same length, provided that the length of A[u..v) is an even number; and
A[u..m) is shorter by one than A[m..v), if the length of A[u..v) is an odd number,
because m − u = ⌊(v − u)/2⌋ and v − m = ⌈(v − u)/2⌉.</p>
        <p>It is not needed to copy both A[u..m) and A[m..v) to temporal arrays before
the merge: it is enough to copy A[u..m) to B[0..k), where k = m − u, and this is
done by the first loop of procedure merge.</p>
        <preformat>
procedure merge(A : T[]; u, m, v : N)   ◁ sorted merge of A[u..m) and A[m..v) into A[u..v)
  k := m − u
  B : T[k]
  for i := u to m − 1 do                ◁ copy A[u..m) into B[0..k)
    B[i − u] := A[i]
  end for
  s := 0 ; i := u ; j := m              ◁ sorted merge of B[s..k) and A[j..v) into A[i..v)
  while s &lt; k ∧ j &lt; v do                ◁ second loop: copy into A[i]
    if B[s] ≤ A[j] then                 ◁ from B[s] or A[j]
      A[i++] := B[s++]
    else
      A[i++] := A[j++]
    end if
  end while
  while s &lt; k do                        ◁ third loop: copy the rest of B into A[i..v)
    A[i++] := B[s++]
  end while
end procedure
        </preformat>
        <p>The actual merge is done by the second and third loops of the procedure: each
time we write into A[i], we read from (B[s..k) and) A[j..v), so it is enough to prove
that i &lt; j inside the second loop: indeed, i − u = s + (j − m) and there s &lt; k = m − u.</p>
        <p>Now we explain why the actual merge is divided into the second and third loops.
The second loop runs while both of B[s..k) and A[j..v) contain some item(s) to
be copied to A[i..v).</p>
procedure mSort( : T[]; ,</p>
        <p>− 1 then
:= ⌊
 +
−  , we have copied all the content of
form  [..
) to  [.. ). Altogether we have copied  −  items there. Thus
 −  = (</p>
        <p>−  ) + ( −  )
 −  =  −</p>
        <p>= 
as possible.
the second loop.)
This means that the elements of  [.. ) are already in place and the third loop
does nothing as needed.</p>
        <p>If the second loop stops with  =  , we have copied all the content of  [..
)
to  [..</p>
        <p>) and the remainder of  [0.. ) is copied to the end of  [.. ) as eficiently
(Notice that the stability of merge sort is ensured by condition  [ ] ≤  [ ] of</p>
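        <p>The pseudocode above transcribes directly into Python; a sketch (the snake_case
names mirror the procedures of figures 2 and 3):</p>
        <preformat>
def merge_sort(A):                 # interface procedure (Figure 2)
    m_sort(A, 0, len(A))           # sort A[0..n)

def m_sort(A, u, v):               # recursive subroutine (Figure 3): sort A[u..v)
    if u &lt; v - 1:
        m = (u + v) // 2
        m_sort(A, u, m)
        m_sort(A, m, v)
        merge(A, u, m, v)

def merge(A, u, m, v):             # sorted merge of A[u..m) and A[m..v) into A[u..v)
    B = A[u:m]                     # first loop: copy A[u..m) into B[0..k)
    k = m - u
    s, i, j = 0, u, m
    while s &lt; k and j &lt; v:         # second loop
        if B[s] &lt;= A[j]:           # '&lt;=' (not '&lt;') keeps the sort stable
            A[i] = B[s]; s += 1
        else:
            A[i] = A[j]; j += 1
        i += 1
    while s &lt; k:                   # third loop: copy the rest of B into A[i..v)
        A[i] = B[s]; s += 1; i += 1

A = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(A)
print(A)                           # [1, 2, 2, 3, 4, 5, 6, 7]
        </preformat>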
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. The time complexity of merge sort</title>
      <p>
        The time complexity analysis of sorting algorithms is often based on counting key
comparisons. Our method, which counts steps [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], is more general. It is useful even
if we have nothing to do with key comparisons.
      </p>
      <p>Let MS(n) and mS(n) be the maximal and minimal number of steps of
algorithm mergeSort(A : T[n]). Considering figures 2 and 3, MS(0) = mS(0) = 2 is
trivial. Thus in the rest of this section we can suppose that n ≥ 1. Consequently
no subarray determined by the algorithm will be empty.</p>
      <p>In this section first we estimate the number of steps of subroutine merge. Then
we analyze the structure of the call-tree of recursive procedure mSort. Thus we
can determine the number of subroutine invocations excluding merge calls, and give
a good estimate of the number of steps in all of the merge calls. Adding these two,
we will have a lower bound of mS(n) and an upper bound of MS(n) of merge sort,
and with corollary 2.4 we will establish the asymptotic order of its running time.</p>
      <sec id="sec-5-1">
        <title>5.1. The number of steps of procedure merge</title>
        <p>In order to calculate the operational complexity of procedure merge(A, u, m, v),
first we introduce the notation d = v − u. Procedure merge is called, if u &lt; v − 1,
i.e. d ≥ 2. Let mS_merge(d) and MS_merge(d) be the minimum and maximum
number of steps of performing procedure merge(A, u, m, v), respectively.</p>
        <p>Remember that a step is a subroutine call or an iteration of a loop. We have a
single procedure call now, plus the ⌊d/2⌋ iterations of the first loop, plus the
iterations of the second and third loops: minimum the ⌊d/2⌋ items in array B must
be copied back to A, which means ⌊d/2⌋ iterations of the second loop and no
iteration of the third loop. If the second and third loops copy all the items, then it
is the maximal d iterations of these loops together. Consequently
mS_merge(d) ≥ 1 + ⌊d/2⌋ + ⌊d/2⌋ ≥ ⌈d/2⌉ + ⌊d/2⌋ = d and
MS_merge(d) ≤ 1 + ⌊d/2⌋ + d ≤ 2d, thus
d ≤ mS_merge(d) ≤ MS_merge(d) ≤ 2d.
(Therefore mS_merge(d), MS_merge(d) ∈ Θ(d).)</p>
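        <p>The bounds d ≤ steps ≤ 2d can also be checked mechanically; a self-contained
Python sketch (our instrumentation of the merge pseudocode):</p>
        <preformat>
import random

def merge_steps(A, u, m, v):       # steps of merge: 1 call + all loop iterations
    steps = 1
    B = A[u:m]; k = m - u
    steps += k                     # first loop: k = floor(d/2) iterations
    s, i, j = 0, u, m
    while s &lt; k and j &lt; v:         # second loop
        steps += 1
        if B[s] &lt;= A[j]:
            A[i] = B[s]; s += 1
        else:
            A[i] = A[j]; j += 1
        i += 1
    while s &lt; k:                   # third loop
        steps += 1
        A[i] = B[s]; s += 1; i += 1
    return steps

for d in range(2, 200):            # merge is only called with d = v - u &gt;= 2
    m = d // 2                     # m := floor((u + v)/2) with u = 0, v = d
    A = random.sample(range(1000), d)
    A[:m] = sorted(A[:m]); A[m:] = sorted(A[m:])   # both halves must be sorted
    assert d &lt;= merge_steps(A, 0, m, d) &lt;= 2 * d
        </preformat>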
      </sec>
      <sec id="sec-5-2">
        <title>5.2. The call-tree of recursive procedure mSort</title>
        <p>Let us consider figure 3. The number of mSort calls is clearly equal to the size |t|
of the call-tree t of mSort. The leaves of t correspond to the case u = v − 1. Thus
l(t) = n. (n ≥ 1 is the length of the array A which is being sorted.) And t is a
strictly binary tree according to the next definition.</p>
        <p>Definition 5.1. t is a strictly binary tree, if each internal node of t has two children.</p>
        <p>The next consequence comes by mathematical induction on i(t).</p>
        <p>Corollary 5.2. If t ≠ ∅ is a strictly binary tree, then i(t) = l(t) − 1. (Thus
|t| = 2 · l(t) − 1.)</p>
        <p>Thus mergeSort is invoked first, and |t| = 2n − 1 is the number of mSort calls.</p>
        <p>Corollary 5.3. The number of subroutine invocations excluding merge calls is 2n.</p>
        <p>Now we are going to prove that t is nearly complete, so the depth of t is Θ(lg n).
First we give the necessary definitions.</p>
        <p>Definition 5.4. A binary tree t is
- leaf-balanced, if for each internal node of t, the numbers of leaves of its two
subtrees differ by at most 1;
- complete, if it is strictly binary and each of its leaves is at the same level;
- nearly complete, if t is empty, or removing its lowest level we receive a complete
tree.</p>
        <p>Corollary 5.5. Tree t is leaf-balanced (because mSort divides the actual subarray
in a balanced way and the number of leaves in both parts is equal to the length of
the part).</p>
        <p>If t ≠ ∅ is a complete binary tree with height h(t), then at its zeroth (root) level
there is 2⁰ = 1 node, at its first level 2¹ nodes and so on; on its (last) level h(t) it
has 2^h(t) nodes; altogether |t| = 2^(h(t)+1) − 1.</p>
        <p>Corollary 5.6. If t is a nearly complete nonempty binary tree with height h(t),
then 2^h(t) ≤ |t| ≤ 2^(h(t)+1) − 1. Thus h(t) = ⌊lg |t|⌋.</p>
        <p>Theorem 5.7. If t is a leaf-balanced strictly binary tree, then t is also nearly
complete.</p>
        <p>Proof. We can suppose t ≠ ∅. Use mathematical induction on h(t). If h(t) = 0, then
t consists of a single node, and t is nearly complete. If the statement is true for
heights ≤ h, consider the case h(t) = h + 1. Then t.left and t.right are leaf-balanced
strictly binary trees with heights ≤ h, so they are nearly complete. We have two
cases. (1) l(t.left) = l(t.right) ⇒ |t.left| = |t.right| ⇒ h(t.left) = h(t.right) ⇒
t is also nearly complete. (2) We can suppose that l(t.left) + 1 = l(t.right) ⇒
|t.left| + 2 = |t.right| ⇒ h(t.left) = h(t.right) ∨ h(t.left) + 1 = h(t.right). Case
h(t.left) = h(t.right) is trivial. In case h(t.left) + 1 = h(t.right) there are two
leaves at the lowest level of the strictly binary, nearly complete t.right, and t.left
is complete. Thus t is also nearly complete.</p>
        <p>Corollary 5.8. The call-tree t is a nearly complete strictly binary tree. Thus its
internal nodes (the merge calls) can appear only at levels 0..⌊lg n⌋ of t, and all the
nodes at levels 0..(⌊lg n⌋ − 2) are internal nodes.</p>
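        <p>The structural claims of this subsection can be verified mechanically; a Python
sketch that rebuilds the call-tree of mSort for every n (our own illustration):</p>
        <preformat>
import math

def call_tree(u, v):               # the mSort call-tree for A[u..v); a leaf is ()
    if u &lt; v - 1:
        m = (u + v) // 2
        return (call_tree(u, m), call_tree(m, v))
    return ()

def size(t):   return 1 if t == () else 1 + size(t[0]) + size(t[1])
def leaves(t): return 1 if t == () else leaves(t[0]) + leaves(t[1])
def height(t): return 0 if t == () else 1 + max(height(t[0]), height(t[1]))

for n in range(1, 300):
    t = call_tree(0, n)
    assert leaves(t) == n                                  # l(t) = n
    assert size(t) == 2 * n - 1                            # corollary 5.2
    assert height(t) == math.floor(math.log2(2 * n - 1))   # corollary 5.6
        </preformat>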
      </sec>
      <sec id="sec-5-3">
        <title>5.3. The number of steps of all the merge invocations</title>
        <p>In this section we give upper and lower estimates of the number of steps of all
the merge invocations. First notice that the merge-calls correspond to the
internal nodes of the call-tree t defined at the beginning of subsection 5.2. Let
MS_merges(n) and mS_merges(n) be the maximal and minimal number of steps of
all the merge invocations, where n is the length of the input array of mergeSort.</p>
        <p>First we give an upper estimate of MS_merges(n). Based on corollary 5.8, the
internal nodes (i.e. merge-calls) of t appear at levels 0..⌊lg n⌋ of t at most. Let
MS^(i)(n) be the maximal number of steps of all the merge invocations at
some level i of t. Clearly MS^(i)(n) ≤ the sum of
the MS_merge(d) values of all the merge calls at level i (see subsection 5.1). And the
subarrays A[u..v) of these merge calls are disjoint. We also know from subsection
5.1 that MS_merge(d) ≤ 2d where d = v − u. Thus with the distributive rule of
addition and multiplication of numbers we have MS^(i)(n) ≤ 2n. Therefore
MS_merges(n) ≤ (⌊lg n⌋ + 1) · 2n.</p>
        <p>Now we give a lower estimate of mS_merges(n). Because t is a nearly complete
tree, excluding its lowest two levels all the nodes of t are internal nodes. Based
on corollary 5.8, all the nodes of t at levels 0..(⌊lg n⌋ − 2) are internal nodes (with
merge-calls). Considering such a level i, procedure merge is
called in each node of this level and the whole array A is covered by the disjoint
A[u..v) subarrays of the merge-calls. Let mS^(i)(n) be the minimal number
of steps of all the merge invocations at this level i. Clearly mS^(i)(n) ≥ the sum of
the mS_merge(d) values of all the merge calls at level i (see subsection 5.1). We
also know from subsection 5.1 that mS_merge(d) ≥ d where d = v − u. Thus with
the distributive rule of addition and multiplication of numbers we have
mS^(i)(n) ≥ n. Therefore
mS_merges(n) ≥ (⌊lg n⌋ − 1) · n.</p>
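        <p>Again this can be cross-checked programmatically; a Python sketch recording
d = v − u for every merge call by level (our own illustration):</p>
        <preformat>
from collections import defaultdict
from math import floor, log2

def record(u, v, level, acc):      # store d = v - u of each merge call per level
    if u &lt; v - 1:
        m = (u + v) // 2
        acc[level].append(v - u)
        record(u, m, level + 1, acc)
        record(m, v, level + 1, acc)

for n in [10, 100, 1000, 12345]:
    acc = defaultdict(list)
    record(0, n, 0, acc)
    lg = floor(log2(n))
    assert max(acc) &lt;= lg          # merge calls appear at levels 0..floor(lg n) only
    for level, ds in acc.items():
        assert sum(ds) &lt;= n        # hence the sum of the MS_merge(d) is &lt;= 2n per level
        if level &lt;= lg - 2:
            assert sum(ds) == n    # full levels cover the whole array
        </preformat>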
      </sec>
      <sec id="sec-5-5">
        <title>Summarizing our results we have</title>
        <p>5.4. The number of steps of procedure mergeSort
We finish our calculations on the eficiency of
corollary 5.3 we have  
( ) = 2 + 
mergeSort in this section. Based on
( ) ≤ 2 +(⌊lg  ⌋+1)*2 . Consequently  
( ) ≤ 2 *lg  +4 . Similarly
≥ 2 + (lg  − 2) *  . Therefore 
( ) ≥  * lg  .
 * lg  ≤ 
( ) ≤  
( ) ≤ 2 * lg  + 4.</p>
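        <p>A quick numeric check of this chain of inequalities (a Python sketch of ours; it
counts 2n invocation steps plus the maximal merge steps at every node, which is
an upper bound for MS(n) and still obeys both bounds):</p>
        <preformat>
from math import log2

def steps_upper(n):                # an upper bound for MS(n), n &gt;= 1
    count = [1]                    # the initial mergeSort call
    def m_sort(u, v):
        count[0] += 1              # the mSort call
        if u &lt; v - 1:
            m = (u + v) // 2
            m_sort(u, m); m_sort(m, v)
            count[0] += 1          # the merge call
            count[0] += m - u      # first loop: floor(d/2) iterations
            count[0] += v - u      # second + third loops: at most d iterations
    m_sort(0, n)
    return count[0]

for n in [2, 16, 100, 10**5]:
    assert n * log2(n) &lt;= steps_upper(n) &lt;= 2 * n * log2(n) + 4 * n
        </preformat>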
        <p>
          The first two inequalities and definition 2.2 of
Ω(g) imply mS(n), MS(n) ∈ Ω(n · lg n). We have lim_{n→∞} 4n/(n · lg n) =
lim_{n→∞} 4/(lg n) = 0. Using corollary 2.4 we
receive mS(n), MS(n) ∈ O(n · lg n). With definition 2.2 of Θ(g) we can conclude
mS(n), MS(n) ∈ Θ(n · lg n).
        </p>
      </sec>
    </sec>
    <sec id="sec-5-5">
      <title>6. Critical note on the notation of the recurrence on T(n)</title>
      <p>
        As we mentioned in the Abstract of this paper, the runtime complexity of merge
sort is traditionally calculated by solving the recurrence [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]
      </p>
      <p>T(n) = Θ(1), if n = 1;  T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + Θ(n), if n &gt; 1.</p>
      <p>Our problem is that this notation is not well defined. It uses a notation like
f = h + Θ(g) or f = O(g) where f, g, h : N → R are asymptotically non-negative.
And in this notation “=” sometimes means “∈”, other times “⊆”, and it may even
mean equality. For example, 2n + 3 = Θ(n) means 2n + 3 ∈ Θ(n); O(n) = O(n · lg n)
means O(n) ⊆ O(n · lg n); and Θ(2n + 3) = Θ(n) means that the two sets are equal.
Each time it is used – maybe in many steps of a long proof – one has to decide
intuitively about its exact meaning.</p>
      <p>This notation is often used because it makes proofs shorter, but it also makes
proofs unclear, so we believe that it does not serve the purposes of teaching. A
student must be careful not to derive false consequences like the following one.</p>
      <p>1 = O(1) ∧ 2 = O(1) ⇒ 1 = 2</p>
      <p>This is a grotesque case of abuse of notation.</p>
      <p>We invented an alternative method instead. We analyzed the call-tree of the
recursive program, and performed elementary, but mathematically exact
calculations.</p>
    </sec>
    <sec id="sec-6">
      <title>7. Summary</title>
      <p>The calculation of the computational complexity of some recursive algorithms is
traditionally based on recurrences like the one above. Their meaning may be
intuitively clear, but it is mathematically unclear. Thus we proposed a calculation
of the efficiency of merge sort which is based strictly on the notions introduced above
and on the analysis of the call-tree of the recursive part of the algorithm.</p>
      <p>
        Further work may go in this direction or in the direction of well-defined recursive
formulas like in [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>Acknowledgements. Thanks to my campus (Eötvös Loránd University, Faculty
of Informatics) for financial support. And thanks to my student, Kristóf Umann
for making Figure 1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Cormen</surname>
            ,
            <given-names>T.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leiserson</surname>
            ,
            <given-names>C.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rivest</surname>
            ,
            <given-names>R.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , Introduction to Algorithms (Third Edition), The MIT Press (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Cormen</surname>
            ,
            <given-names>Thomas H.</given-names>
          </string-name>
          ,
          <source>Algorithms Unlocked</source>
          , The MIT Press (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Goldstine</surname>
            ,
            <given-names>H.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>von Neumann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Planning and coding of problems for an electronic computing instrument</article-title>
          , Part II
          , Volume
          <volume>2</volume>
          , reprinted in
          <source>John von Neumann Collected Works</source>
          , Volume V:
          <article-title>Design of Computers, Theory of Automata and Numerical Analysis</article-title>
          , Pergamon Press, Oxford, England, pp.
          <fpage>152</fpage>
          -
          <lpage>214</lpage>
          . (
          <year>1963</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Knuth</surname>
            ,
            <given-names>Donald</given-names>
          </string-name>
          , “
          <article-title>Section 5.2.4: Sorting by Merging”</article-title>
          .
          <source>Sorting and Searching. The Art of Computer Programming</source>
          .
          <volume>3</volume>
          (2nd ed.).
          Addison-Wesley
          . pp.
          <fpage>158</fpage>
          -
          <lpage>168</lpage>
          . ISBN 0-201-89685-0. (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Neapolitan</surname>
            ,
            <given-names>Richard E.</given-names>
          </string-name>
          ,
          <source>Foundations of Algorithms (Fifth Edition)</source>
          , Jones &amp; Bartlett Learning
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Shaffer</surname>
            ,
            <given-names>Clifford A.</given-names>
          </string-name>
          ,
          <source>A Practical Introduction to Data Structures and Algorithm Analysis</source>
          , Edition 3.1 (C++ Version), Virginia Tech, Blacksburg
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Tarján</surname>
            ,
            <given-names>Róbert Endre</given-names>
          </string-name>
          ,
          <source>Data Structures and Network Algorithms</source>
          , Bell Laboratories (
          <year>1983</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>Mark Allen</given-names>
          </string-name>
          ,
          <source>Data Structures and Algorithm Analysis</source>
          , Addison-Wesley (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>