<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Singular value decomposition calculation comparison in video fragment processing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergii Mashtalir</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Lendel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>14, Nauky Ave., Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Uzhhorod National University</institution>
          ,
          <addr-line>3, Narodna Square, Uzhhorod</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>This study considers different approaches to calculating the first singular value of the Singular Value Decomposition (SVD) transform. The SVD is closely associated with several common matrix norms and offers an efficient method for their computation. The sum of the first k singular values is called the Ky Fan k-norm. In our approach, the Ky Fan norm serves as a fragment descriptor. There is no need to perform a complete SVD transformation to obtain the norm value; it is enough to obtain the matrix of singular values. In video fragment analysis, the number of fragments and their size significantly affect the calculation speed. The SVD method is robust but does not necessarily scale well to larger matrices. Thus, to use SVD in a practical sense with large datasets, we need a faster algorithm that finds the same dominant patterns as regular SVD at only a fraction of the computational cost. We compare the effectiveness of alternative approaches depending on the size of the fragments and their number.</p>
      </abstract>
      <kwd-group>
        <kwd>Video stream fragmentation</kwd>
        <kwd>Fragment processing</kwd>
        <kwd>Ky Fan norm</kwd>
        <kwd>Singular value decomposition</kwd>
        <kwd>UTV</kwd>
        <kwd>ULV</kwd>
        <kwd>Lanczos SVD</kwd>
        <kwd>Randomized SVD</kwd>
        <kwd>Power Iteration</kwd>
        <kwd>Data Analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There has been significant interest in the Singular Value Decomposition (SVD) algorithm over the
last few years because of its wide applicability in multiple fields of science and engineering, both
standalone and as part of other computing methods. The singular value decomposition is the most
common and valuable decomposition in computer vision [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Computer vision aims to reconstruct
the three-dimensional world from two-dimensional images. In real-world scenarios, these images
often give rise to square and non-square singular matrices and transformations. Reversing
transformations from two to three dimensions cannot be entirely accurate, but it can be effectively
estimated using singular value decomposition. Singular value decomposition also allows us to
establish a sense of order in objects and is therefore useful whenever we attempt to compare them.
Image denoising [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], image re-scaling [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], image compression [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], motion detection [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and video
fragment processing are far from a complete list of SVD applications.
      </p>
      <p>
        We focused on video fragment processing, and in our approach, we consider fragments to be
geometric parts of video frames, represented as matrices with arbitrary dimensions. The research
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposes a singular value decomposition of the matrix and the Ky Fan norm for scene change
analysis. In the context of motion detection, this approach was expanded [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Dividing the frame
into a 5x5 or 10x10 grid allowed us to identify the fragments in which motion occurred (Figure 1).
      </p>
      <p>
        In the study [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], increasing the grid to 100x100 allowed us to find the contours
of a moving object (Figure 2).
      </p>
      <p>The SVD transformation is applied to each fragment, and the first singular value is chosen as
the fragment descriptor. If we divide the frame into a 5x5 grid, we need to calculate 25 matrices of a
certain size. Increasing the number of fragments decreases the size of the input matrices, but the
number of SVD applications increases. Given the number of transformations, optimization of the
calculation process comes to the fore.</p>
      <p>International Workshop on Computational Intelligence, co-located with the IV International Scientific Symposium
“Intelligent Solutions” (IntSol-2025), May 01-05, 2025, Kyiv-Uzhhorod, Ukraine.
sergii.mashtalir@nure.ua (S. Mashtalir); dmytro.lendel@uzhnu.edu.ua (D. Lendel).
ORCID: 0000-0002-0917-6622 (S. Mashtalir); 0000-0003-3971-1945 (D. Lendel).
© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
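      <p>As a minimal sketch of the per-fragment descriptor described above (the grid split and the helper name are our own illustration, not the authors' code), the first singular value of each fragment can be computed with NumPy without ever forming U and V:</p>

```python
import numpy as np

def fragment_descriptors(frame, rows, cols):
    # Split a grayscale frame into a rows x cols grid and return the
    # first singular value (the Ky Fan 1-norm) of every fragment.
    h, w = frame.shape
    fh, fw = h // rows, w // cols
    desc = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            frag = frame[i * fh:(i + 1) * fh, j * fw:(j + 1) * fw]
            # compute_uv=False returns singular values only.
            desc[i, j] = np.linalg.svd(frag, compute_uv=False)[0]
    return desc

frame = np.random.default_rng(0).random((720, 1280))
d = fragment_descriptors(frame, 5, 5)
print(d.shape)  # (5, 5)
```

      <p>Each entry of d describes one fragment; comparing these arrays between consecutive frames is what flags motion.</p>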
    </sec>
    <sec id="sec-2">
      <title>2. Singular value decomposition</title>
      <sec id="sec-2-1">
        <title>2.1. Singular value decomposition step by step</title>
        <p>
          The process of Singular Value Decomposition (SVD) [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] involves breaking down a matrix A into
the form:
        </p>
        <p>A = U Σ V^T,</p>
        <p>where U is an m×m complex unitary matrix, Σ is an m×n diagonal matrix with non-negative
real numbers on the diagonal, and V is an n×n complex unitary matrix. If A is real, U and V are
guaranteed to be real orthogonal matrices. The singular values σi describe the "energy" or
importance of each corresponding dimension in the matrix. This computation allows us to retain
the important singular values that the image requires while discarding the values that are not as
necessary for retaining the quality of the image. The singular values of an m×n matrix A are the
square roots of the eigenvalues of the n×n matrix A^T A, typically organized in decreasing order of
magnitude. Before applying the SVD to image processing, we first demonstrate the method using a
small (2×3) matrix A:</p>
        <p>A = [4 5 7; 2 3 6]</p>
        <p>and then follow a step-by-step process to rewrite A in the separated form U Σ V^T. To find the
eigenvalues of A^T A, we need to compute the determinant of the matrix A^T A − λI. In general, the
determinant of a 3×3 matrix is computed by cofactor expansion along the first column:</p>
        <p>det [a b c; d e f; g h i] = a(ei − fh) − d(bi − ch) + g(bf − ce)</p>
        <p>We could extend this computation to an n×n matrix as needed. For our example, we compute
the determinant of A^T A − λI and solve the characteristic equation for λ, which gives
λ = 0, 4.3444, 134.6556. We reorder the eigenvalues in decreasing magnitude, so that λ1 = 134.6556,
λ2 = 4.3444, λ3 = 0. The singular values of A are defined as the square roots of the eigenvalues:</p>
        <p>σ1 = √134.6556 ≈ 11.6041, σ2 = √4.3444 ≈ 2.0843, σ3 = √0 = 0</p>
        <p>To determine the matrix Σ, we list the singular values σi = √λi in decreasing magnitude down
the main diagonal of Σ, and then add any additional rows and columns of zeros as needed to retain
the original dimensions of A. In our example, we have three singular values, 11.6041, 2.0843 and 0,
and only the non-zero values are placed on the diagonal.</p>
        <p>Next, we find the eigenvectors that form the columns of U. They are determined from the
equation</p>
        <p>(A A^T − λI) u = 0,</p>
        <p>solved for λ1 = 134.6556; after normalization this yields u1 ≈ (0.7311, 0.6823). Similarly for
λ2 = 4.3444 we obtain the second normalized eigenvector u2.</p>
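        <p>The eigenvalue route above can be checked numerically. A small sketch (assuming the worked matrix is read as [[4, 5, 7], [2, 3, 6]]; NumPy as used elsewhere in the paper) confirms that the square roots of the eigenvalues of A^T A coincide with the singular values returned by a library SVD:</p>

```python
import numpy as np

# Assumed reading of the 2x3 example matrix.
A = np.array([[4.0, 5.0, 7.0],
              [2.0, 3.0, 6.0]])

# Eigenvalues of the symmetric matrix A^T A, in ascending order.
eigvals = np.linalg.eigvalsh(A.T @ A)
# Singular values are their square roots, in decreasing order;
# clip guards against tiny negative rounding errors.
sigmas = np.sqrt(np.clip(eigvals, 0.0, None))[::-1]

ref = np.linalg.svd(A, compute_uv=False)
print(np.allclose(sigmas[:2], ref))  # True
```

        <p>Because A has rank 2, the third eigenvalue (and hence the third singular value) is zero up to rounding.</p>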
        <p>From these eigenvectors we assemble the matrix U. Next, we need to find the matrix V. Its
eigenvectors v1, v2, v3 are determined from the equation (A^T A − λI) v = 0; solving the equation
for each eigenvalue, we obtain the normalized eigenvectors that form the columns of V.</p>
        <p>Now we understand the complexity of the SVD calculation. Since we are only interested in the
first singular value, we can explore different calculation approaches. The simplest solution would be
to stop the calculation as soon as the first singular value is found. Moreover, we do not need the
matrices U and V; this yields an incomplete SVD. The NumPy library exposes the parameter
full_matrices in the linalg.svd function: we can set full_matrices=False and hope to obtain the
singular value without the full calculation. However, if the dimension of the input matrix is large,
this approach may not give the desired result.</p>
        <sec id="sec-2-1-1">
          <title>2.2. UTV, ULV</title>
          <p>
            The SVD method is robust but does not necessarily scale well to larger matrices. Thus, to use
SVD in a practical sense with large datasets, we need a faster algorithm that finds the same dominant
patterns as regular SVD, but at only a fraction of the computational cost. In search of an alternative
approach, it is logical to consider the UTV and ULV decompositions. UTV decomposition [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ] is an alternative to SVD that factorizes a matrix A as:
          </p>
          <p>A = U T V^T (15)</p>
          <p>where U and V are orthogonal matrices (as in SVD) and T is an upper triangular matrix, unlike
the diagonal Σ in SVD. This method is often used as a more efficient alternative to SVD in
applications where exact singular values are not required, but a good approximation is sufficient.
UTV factorization begins by applying a series of Householder reflections or randomized projections
to transform the given matrix A into an upper triangular or upper trapezoidal matrix T while
preserving its dominant numerical properties. This transformation ensures that most of the
essential information in A is retained while simplifying its structure. The process involves
computing orthogonal matrices U and V that capture the column and row spaces of A, respectively,
mapping it into the triangular form. Once the factorization is complete, the result is a
decomposition A = U T V^T, where T serves as a computationally efficient approximation of the
singular structure of A, similar to Σ in SVD but with a triangular shape.</p>
          <p>
            ULV factorization [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] transforms the given matrix A into a lower triangular or block lower
triangular matrix L while preserving its essential numerical properties. This is achieved through a
series of Householder reflections or Givens rotations, which iteratively reduce A into its structured
form while maintaining orthogonality. The decomposition also produces orthogonal matrices U
and V that encode the column and row transformations, respectively, mapping A into its triangular
form. The result is the factorization:
          </p>
          <p>A = U L V^T (16)</p>
          <p>which serves as a computationally efficient alternative to SVD, particularly in cases where
structured rank-revealing decompositions are beneficial.</p>
        </sec>
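        <p>Neither NumPy nor SciPy exposes a UTV/ULV routine directly. As a loose stand-in for a rank-revealing triangular factorization (our own illustration, assuming SciPy is available; this is not the decomposition used in the experiments), QR with column pivoting produces a triangular factor whose leading diagonal entry brackets the first singular value:</p>

```python
import numpy as np
from scipy.linalg import qr

A = np.random.default_rng(0).standard_normal((144, 256))

# Column-pivoted QR: columns are permuted so the diagonal of R is
# non-increasing in magnitude, revealing the dominant structure.
Q, R, piv = qr(A, mode='economic', pivoting=True)

r11 = abs(R[0, 0])  # largest column norm of A
s1 = np.linalg.svd(A, compute_uv=False)[0]

# |R11| is a lower bound on sigma_1, and sigma_1 is at most
# sqrt(n) * |R11|, so the triangular factor brackets sigma_1.
print(s1 >= r11 and np.sqrt(A.shape[1]) * r11 >= s1)  # True
```

        <p>This is only an analogy: like UTV/ULV, it trades exact singular values for a cheap triangular factor that preserves the dominant numerical structure.</p>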
      </sec>
      <sec id="sec-2-2">
        <title>2.3. Lanczos SVD</title>
        <p>
          ULV and UTV provide a full decomposition, while Lanczos SVD [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] is not strictly an
optimization of either ULV or UTV, but rather an iterative alternative to SVD that shares
similarities with both approaches. UTV factorization transforms a matrix A into an upper
triangular matrix T using unitary transformations, maintaining full orthogonality in U and V.
Lanczos SVD, on the other hand, iteratively reduces A to a bidiagonal form instead of a strictly
triangular one. Both methods use orthogonal transformations, but Lanczos focuses on extracting
dominant singular values efficiently, whereas UTV is more general in preserving an upper
triangular structure. Lanczos SVD uses an iterative Krylov subspace approach to build a bidiagonal
matrix rather than a strict lower triangular matrix. Given that full SVD is computationally
expensive, we now turn to Power SVD and Randomized SVD, which efficiently approximate
dominant singular values without performing a complete decomposition.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.4. Power iteration</title>
        <p>
          Power iteration [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] starts with b0, which might be a random vector. At every iteration this vector
is updated using the following rule:
        </p>
        <p>b_{k+1} = A b_k / ‖A b_k‖ (17)</p>
        <p>First, starting from b0, we compute the matrix-vector product A b_k and divide the result by its
norm ‖A b_k‖. We continue until the result has converged, in other words, when the difference
between iterations is below a defined threshold. The power method makes a few assumptions:
b0 has a non-zero component in the direction of an eigenvector associated with the dominant
eigenvalue (initializing b0 randomly minimizes the possibility that this assumption is not fulfilled),
and the matrix A has a dominant eigenvalue that is greater in magnitude than the other
eigenvalues. These assumptions guarantee that the algorithm converges to a reasonable result. The
smaller the difference between the dominant eigenvalue and the second eigenvalue, the longer
convergence might take.</p>
      </sec>
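      <p>The power method described above can be sketched in a few lines of NumPy (our own sketch, iterating on A^T A so that the estimate converges to the first singular value; the helper name and tolerances are illustrative):</p>

```python
import numpy as np

def first_singular_value(A, tol=1e-10, max_iter=5000):
    # Power iteration on A^T A: b converges to the leading right
    # singular vector and ||A b|| to the first singular value.
    rng = np.random.default_rng(0)
    b = rng.standard_normal(A.shape[1])
    b /= np.linalg.norm(b)
    sigma = 0.0
    for _ in range(max_iter):
        z = A.T @ (A @ b)          # one multiplication by A^T A
        b = z / np.linalg.norm(z)  # normalize, as in the update rule
        sigma_new = np.linalg.norm(A @ b)
        done = tol >= abs(sigma_new - sigma)
        sigma = sigma_new
        if done:
            break
    return sigma

A = np.random.default_rng(1).standard_normal((144, 256))
print(np.isclose(first_singular_value(A),
                 np.linalg.svd(A, compute_uv=False)[0]))
```

      <p>Only matrix-vector products are needed, which is why this route scales well when just the first singular value of each fragment is required.</p>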
    </sec>
    <sec id="sec-3">
      <title>2.5. Randomized SVD</title>
      <p>
        A promising approach for efficient singular value decomposition is Randomized SVD [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], which
uses a random projection to approximate the column space of a given matrix, reducing it to a target
rank k before applying SVD. This method retains most of the important information while
significantly reducing the computational cost. Given an m×n matrix A, we choose a target rank k&lt;
m, which determines the dimensionality of the subspace for which we will compute the SVD. We
first initialize a random matrix P of size n×k and then transform our original matrix A by
computing the matrix product:
      </p>
      <p>Z = A P (18)</p>
      <p>This reduces the column space of A while preserving its dominant features, decreasing the
dimensionality from n to k (matching the column dimension of P). With high probability, Z still
retains the most significant column-space features of A. QR factorization of Z provides an
orthonormal basis Q, which captures the dominant column space of A:</p>
      <p>Q, R ← QR Factorization (Z) (19)</p>
      <p>Next, we project A onto the subspace defined by Q:</p>
      <p>Y = Q^T A (20)</p>
      <p>Now Y is a k×n matrix, and we compute the SVD on a matrix with k rows rather than m, which
costs much less if we choose a small k:</p>
      <p>U_Y Σ_Y V_Y^T ← SVD (Y) (21)</p>
      <p>Finally, we keep Σ_Y and V_Y from our SVD of Y, and obtain our final U matrix by:</p>
      <p>U = Q U_Y (22)</p>
      <p>This step extends U_Y back to the original column dimension of A. We are now ready to compare
the approaches, evaluating their accuracy in calculating the first singular value and their speed in
the fragment-analysis context.</p>
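      <p>The scheme above (project with a random matrix, orthonormalize, then run SVD on the small projected matrix) can be sketched as follows; the matrix sizes, the rank k, and the function name are our own illustrative choices:</p>

```python
import numpy as np

def randomized_first_singular_value(A, k=20, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((n, k))   # random projection matrix, n x k
    Z = A @ P                         # sample the column space of A
    Q, _ = np.linalg.qr(Z)            # orthonormal basis, m x k
    Y = Q.T @ A                       # project A onto the subspace: k x n
    # SVD of the small k x n matrix; the full U would be Q @ U_Y.
    return np.linalg.svd(Y, compute_uv=False)[0]

# Low-rank signal plus small noise, where the sketch is accurate.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
A += 0.01 * rng.standard_normal((200, 300))

approx = randomized_first_singular_value(A)
exact = np.linalg.svd(A, compute_uv=False)[0]
print(abs(approx - exact) / exact)  # small relative error
```

      <p>The projection step guarantees the estimate never exceeds the true first singular value, and for matrices with decaying spectra the gap is tiny.</p>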
    </sec>
    <sec id="sec-4">
      <title>3. Accuracy of finding the first singular value</title>
      <p>In this section, we consider the results produced by the developed application. Our experiment
used a surveillance camera source (Figure 3). The codec is H.264 and the frame size is 1280x720. To
visualize the results of utilizing the Ky Fan norm for video analysis, a Python 3.10.11 application
was developed and executed on a system equipped with an Intel Core i5 processor and 16 GB of
RAM, running the Windows operating system. The application relies on two open-source libraries
licensed under the Apache License: OpenCV version 4.10.0 and NumPy version 2.2.1. Frames are
converted from RGB to a grayscale model, and each frame is divided into smaller fragments through
a grid-based segmentation technique.</p>
      <sec id="sec-4-1">
        <title>3.1. Accuracy of finding the first singular value</title>
        <p>Before evaluating the speed of the selected approaches, we need to check the accuracy of finding
the first singular value for different fragment sizes. The results are presented in Table 1. The
approaches SVD, incomplete SVD, Power iteration, UTV, Lanczos SVD and Randomized SVD show
the same singular values for different fragment sizes. The ULV approach showed a slight deviation.
This deviation may arise because ULV does not explicitly compute singular values like SVD-based
methods, so minor deviations are expected. If high precision for singular values is required, ULV
may not be the best choice; methods like Power SVD, Lanczos SVD, and Randomized SVD are
preferable. However, if the goal is structured decomposition or efficient matrix transformation,
ULV remains a valuable alternative.</p>
        <p>(Table 1: all approaches except ULV return identical first singular values, e.g. 31541.411066 and
572.544123.)</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Comparison of fragments processing time using different methods</title>
        <p>We compared the average calculation time for fragments of different sizes. The results are
presented in Figure 4. Methods with full decomposition showed the longest time for fragments of
high dimension, while methods optimized for finding the first singular value showed the best time.</p>
        <p>Power SVD was the best optimized approach across fragment sizes; in Table 2, the best result is
marked in blue and the worst in red.</p>
        <p>The total processing time of the entire frame using different methods is presented in Figure 5. If
the frame is divided into a 5x5 grid, the transformation is applied to 25 matrices of size 144x256;
with a division into 50x50 there are 2500 matrices of size 14x25. The time is reported as an average,
since the processing speed of each fragment depends on the sparsity of the matrix. Randomized
SVD performed the worst on small matrices: this approach was designed for high-dimensional
matrices and is not efficient for low-dimensional ones. Approaches using full decomposition are
quite slow on both small and large matrices.</p>
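        <p>A minimal timing harness in the spirit of this comparison (our own sketch; absolute numbers depend on hardware and BLAS, and this is not the benchmark behind Figure 4) contrasts full decomposition with a singular-values-only call on the two fragment sizes discussed above:</p>

```python
import time
import numpy as np

def avg_time(fn, A, reps=10):
    # Average wall-clock time of fn(A) over reps runs.
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(A)
    return (time.perf_counter() - t0) / reps

full_svd = lambda A: np.linalg.svd(A)                   # full U, Sigma, V^T
sv_only = lambda A: np.linalg.svd(A, compute_uv=False)  # singular values only

# 5x5 grid -> 144x256 fragments; 50x50 grid -> 14x25 fragments (1280x720 frame)
for shape in [(144, 256), (14, 25)]:
    A = np.random.default_rng(0).standard_normal(shape)
    print(shape, avg_time(full_svd, A), avg_time(sv_only, A))
```
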
        <p>Power SVD was the best optimized approach for different fragment sizes; in Table 3, the best
result is marked in blue and the worst in red.</p>
      </sec>
    </sec>
    <sec id="sec-4-3">
      <title>Conclusions</title>
      <p>Video fragment analysis involves dividing the frame into geometric regions of different sizes
depending on the task. For motion detection, dividing the frame into a 5x5 or 10x10 grid is enough
to determine the area of interest, that is, to find the part of the frame where the movement occurs.
Of course, the object's size must be smaller than the size of the fragment. To determine the contours
of the object, we must increase the scale, and as a result we obtain either several high-order
matrices or many low-order matrices. It should also be noted that on fragments of small size, but
with a large number of fragments, the worst results were shown by Randomized SVD. At the same
time, as fragment sizes increase, the performance of the UTV and ULV algorithms deteriorates, so
they are the worst choice for motion detection. For practical real-time video analysis, Power SVD is
best suited. For offline analysis, all of the algorithms are fairly effective.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] J. Gallier, J. Quaintance, Linear Algebra and Optimization with Applications to Machine Learning - Volume I, World Scientific Publishing Co Pte Ltd, 2020. doi:10.1142/11446.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Q. Guo, C. Zhang, Y. Zhang, H. Liu, An Efficient SVD-Based Method for Image Denoising, IEEE Trans. Circuits Syst. Video Technol. 26.5 (2016) 868-880. doi:10.1109/tcsvt.2015.2416631.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. Motylinski, A. J. Plater, J. E. Higham, Re-scaling images using a SVD-based approach, Signal, Image Video Process. 19.3 (2025). doi:10.1007/s11760-025-03825-1.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] H. R. Swathi, S. Sohini, Surbhi, G. Gopichand, Image compression using singular value decomposition, IOP Conf. Ser. 263 (2017) 042082. doi:10.1088/1757-899x/263/4/042082.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] S. Mashtalir, D. Lendel, Video pre-motion detection by fragment processing, in: 12th International Scientific and Practical Conference “Information Control Systems and Technologies”, 2024. https://ceur-ws.org/Vol-3790/paper30.pdf</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] M. Koliada, Ky Fan norm application for video segmentation, Her. Adv. Inf. Technol. 3.1 (2020) 345-351. doi:10.15276/hait01.2020.1.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] S. V. Mashtalir, D. P. Lendel, Video fragment processing by Ky Fan norm, Appl. Asp. Inf. Technol. 7.1 (2024) 59-68. doi:10.15276/aait.07.2024.5.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] S. V. Mashtalir, D. P. Lendel, Moving object shape detection by fragment processing, Her. Adv. Inf. Technol. 7.4 (2024) 414-423. doi:10.15276/hait.07.2024.30.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. De Castro-Sánchez, J. A. Moríñigo, F. Terragni, R. Mayo-García, Analysis of the SVD Scaling on Large Sparse Matrices, in: 2024 Winter Simulation Conference (WSC), IEEE, 2024, pp. 2523-2534. doi:10.1109/wsc63780.2024.10838971.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] D. Keyes, H. Ltaief, Y. Nakatsukasa, D. Sukkari, High-Performance SVD Partial Spectrum Computation, in: SC '23: International Conference for High Performance Computing, Networking, Storage and Analysis, ACM, New York, NY, USA, 2023. doi:10.1145/3581784.3607109.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] X. Feng, W. Yu, Y. Xie, J. Tang, Algorithm xxx: Faster Randomized SVD with Dynamic Shifts, ACM Trans. Math. Softw. (2024). doi:10.1145/3660629.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] A. Noorizadegan, C. S. Chen, R. Cavoretto, A. De Rossi, Efficient truncated randomized SVD for mesh-free kernel methods, Comput. &amp; Math. With Appl. 164 (2024) 12-20. doi:10.1016/j.camwa.2024.03.021.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] R. Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] G. A. Watson, D. F. Griffiths, Numerical Analysis 1993, Taylor &amp; Francis Group, 2020. doi:10.1201/9781003062257.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] M. Vandecappelle, L. D. Lathauwer, Updating the multilinear UTV decomposition, IEEE Trans. Signal Process. (2022) 1-15. doi:10.1109/tsp.2022.3187814.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] T. Chen, The Lanczos algorithm for matrix functions: a handbook for scientists, 2024. doi:10.48550/arXiv.2410.11090.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Y. Nakatsukasa, N. J. Higham, Stable and Efficient Spectral Divide and Conquer Algorithms for the Symmetric Eigenvalue Decomposition and the SVD, SIAM J. Sci. Comput. 35.3 (2013) A1325-A1349. doi:10.1137/120876605.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] D. Janekovic, D. Bojanjac, Randomized Algorithms for Singular Value Decomposition: Implementation and Application Perspective, in: 2021 International Symposium ELMAR, IEEE, 2021. doi:10.1109/elmar52657.2021.9550979.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>