<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Quantum Accelerated Approach for the Central Path Method in Linear Programming</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vijay Adoni</string-name>
          <email>vijay.adoni@uleth.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sajad Fathi Hafshejani</string-name>
          <email>sajad.fathihafshejan@uleth.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daya Gaur</string-name>
          <email>daya.gaur@uleth.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <!-- Keywords: Quantum Algorithms, Central Path Method, Interior Point Methods -->
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Math and Computer Science, University of Lethbridge</institution>
          ,
          <addr-line>Lethbridge, AB</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vancouver</institution>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The central path method is a crucial technique for solving a wide range of optimization problems. The method relies on an equation-solving step, which hits its limit for very large instances in practice. This paper proposes to explore the use of quantum approaches to enhance the central path method’s performance when solving very large linear programs. We will go through the potential benefits and limitations of replacing the iterative equation-solving step with the HHL quantum algorithm. We experiment with several instance types using each proposed algorithm and evaluate their effectiveness through numerical simulations to find a promising approach for the central path method.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Linear programming (LP) is a fundamental tool in theoretical and practical science and engineering domains. It has
been extensively used to solve optimization problems in
various areas, including operations research, engineering,
economics, or even in more abstract mathematical areas
such as combinatorics. LP has applications in machine
learning and numerical optimization. Some of the
examples of application of LP are ℒ1-regularized support
vector machines (SVMs) [1], basis pursuit (BP) problems [2],
sparse inverse covariance matrix estimation (SICE) [3],
non-negative matrix factorization (NMF) [4], MAP
inference [5], and adversarial deep learning [6, 7]. Fung et al. [8] introduced a technique for learning a kernel function that is a linear combination of other positive semi-definite kernels. They demonstrated how diagonal dominance constraints can be utilized to obtain an approximate kernel through linear programming. This method can be employed for feature selection using a blend of kernels. This is an important use of linear programming in computing kernels for support vector machines. The results mentioned in this paragraph are illustrative of the utility of linear programming in solving optimization problems arising in machine learning and data science.</p>
      <p>Joint Workshops at 49th International Conference on Very Large Data Bases (VLDBW’23), International Workshop on Quantum Data Science and Management (QDSM’23), August 28 – September 1, 2023. ∗Corresponding author (D. Gaur). †These authors contributed equally. ORCID: 0000-0002-5731-8234 (S. F. Hafshejani); 0000-0001-6876-6000.</p>
      <p>When it comes to solving linear programs, many methods are available. Depending on the specifics of the problem at hand, different methods may be more appropriate than others. Factors like problem size and structure, desired level of accuracy, and computational efficiency all come into play when determining which method to use. Regardless of which technique is employed, the ultimate goal is to identify the feasible solution space and then optimize the objective function within that space efficiently. LP problems can be complex and time-consuming to solve exactly, so approximation algorithms are used, together with a bound on the proximity of the solution to the optimal. Another way to reduce the time it takes to solve an LP is to use quantum computation for the expensive steps instead of the traditional classical algorithm, resulting in a possible speed-up, the object of study in this paper.</p>
      <p>Quantum computing is a new and exciting way to process information that utilizes the principles of quantum mechanics. Unlike traditional computing, which uses bits to represent data and relies on binary logic, quantum computing uses qubits that can be in a superposition of states, possibly entangled. In November 2022, IBM created the largest quantum computer, Osprey, with 433 qubits, capable of holding 2⁴³³ states simultaneously. Ongoing research is making quantum computing practical for solving certain problems in optimization and simulation, which are difficult or impossible with classical methods. As the field continues to develop and advances are made in hardware and software, quantum methods are becoming more practical for a wide variety of applications. One of the most promising quantum methods for solving linear equations is the HHL algorithm [10], created by Harrow, Hassidim, and Lloyd, which offers an exponential speedup over classical methods for well-conditioned sparse systems.</p>
      <p>Over the years, the complexity of solving linear programs
has substantially improved due to algorithm design
techniques and computational hardware advancements. The
key developments that have played a crucial role in this
improvement will be discussed in this section.
Algorithms used for solving linear optimization programs
date back to 1939 with the introduction of the
graphical method. However, the simplex method, introduced
by Dantzig [11], has become a more prominent and
frequently used method. Karmarkar [12] proposed interior
point methods (IPMs) and showed that they could solve
linear programs much faster than the simplex method in
the worst case. This led to extensive research on interior
point methods, resulting in the development of many
other algorithms since then.</p>
      <p>The existence of an interior path, known as the central
path, that converges to an optimal solution paved the
way for efficient algorithms that find the optimal solution more quickly than the simplex method. There have been
several successful attempts at traversing this central path.</p>
      <p>Methods where the solutions lie in the interior of the
feasible region at each iteration while maintaining feasibility
at each step are known as feasible interior point methods
(see Roos et al. [13]). Methods where the solutions are
strictly positive, with an initial solution outside the
feasible region, and which converge to an optimal are known
as infeasible IPMs. One of the seminal papers on such
methods is by Wright [14], which shows a sub-quadratic
complexity can be achieved given that the starting point
is a positive infeasible solution and the problem has a
strict complementarity solution. Comparisons have been
made between feasible and infeasible interior point
methods. In [15], Wright states that although feasible IPMs are theoretically faster than infeasible IPMs by a factor of n, where n is the size of the input, in practice both methods can be highly effective for solving linear programs.</p>
      <p>IPMs have been implemented with great success in
recent years. In fact, major optimization software such
as CPLEX and Gurobi provide the ability to use IPM to
solve LPs. For the purposes of this paper, we use the feasible IPM described in [9].</p>
    </sec>
    <sec id="sec-2">
      <title>3. Motivation and Contributions</title>
      <p>Most IPMs employ iterative techniques that solve
complex and time-consuming linear systems of equations as
an intermediate stage. Solving such systems with
numerous variables and constraints becomes particularly
challenging. Consequently, a demand exists for more
effective and precise approaches to address these problems.</p>
      <p>Our objective is to alleviate the computational burden associated with solving linear equations in the classical domain by leveraging quantum algorithms. This is achieved through the use of the HHL algorithm.</p>
      <sec id="sec-2-1">
        <title>Our contributions include,</title>
        <p>• Implementation of a novel modification to the feasible Interior Point Method for solving LPs. The modification replaces the equation-solving step with the quantum HHL method.</p>
        <p>• An extensive analysis of the modification to IPM using simulation in Qiskit.</p>
        <p>• To establish a baseline for comparison, we consider the basic IPM, an IPM which uses the full Newton system (with the non-linear terms), and a linearization using McCormick relaxations. The simulation results for the baseline using McCormick relaxation are not reported here; please see the thesis by Adoni [22] for details on the baseline algorithms.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Preliminaries</title>
      <p>In this section, we will discuss two important concepts
that are referred to frequently in this paper. The first
is the feasible IPM, as introduced in [9]. This is used to
solve a maximization problem given below:
max cᵀx
s.t. Ax ≤ b, x ≥ 0   (1)
In this problem, A ∈ ℝᵐˣⁿ, x ∈ ℝⁿ represents the primal variable, c ∈ ℝⁿ, and b ∈ ℝᵐ.</p>
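      <p>As a concrete illustration of problem (1), a small instance can be solved with an off-the-shelf LP solver. The sketch below assumes SciPy is available; the data A, b, c are made-up toy values, not from the paper:</p>

```python
# Illustrative only: an instance of problem (1), max cᵀx subject to Ax ≤ b, x ≥ 0.
# SciPy's linprog minimizes, so we negate c to maximize.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
# res.x is the maximizer and -res.fun the optimal objective value
```
      <p>Interior point methods such as the ones studied in this paper are one of the solver backends such packages can use.</p>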
      <p>Later on, we will provide a brief description of the
HHL algorithm, which is used to solve a set of linear
equations.</p>
      <sec id="sec-3-1">
        <title>4.1. Feasible IPM</title>
        <p>Feasible Interior Point Methods (IPMs) are techniques
employed to solve linear optimization problems when an
initial point and a path leading to the optimal solution
exist within the solution space. In order to begin the
process, the Lagrangian of the log-barrier problem (2) is
analyzed, as explained in [9].</p>
        <p>L(x, y, s) = cᵀx + μ(∑ⱼ log xⱼ + ∑ᵢ log sᵢ) + yᵀ(b − Ax − s)</p>
        <p>where y is a vector of dual variables, 0 &lt; μ &lt; ∞, and s is a vector of slack variables. Critical points are points where the first-order derivatives vanish. The following equations are satisfied by critical points.</p>
        <p>Setting the first-order derivatives of L to zero, ∀i = 1, 2, ..., m and ∀j = 1, 2, ..., n, yields equations that can be simplified to
Ax + s = b   (3)
Aᵀy − z = c   (4)
Xz = μe   (5)
Ys = μe   (6)
where we substitute z = μX⁻¹e, and X and Y are the diagonal matrices given by the vectors x and y, respectively.</p>
        <p>In feasible IPM, we start with an arbitrary point (x, y, z, s) that is either primal or dual feasible. It is essential that each vector in the point (x, y, z, s) is non-negative. The next step involves determining the direction (Δx, Δy, Δz, Δs) that the point must traverse to converge towards the μ-centre on the central path. This step brings the new point (x + Δx, y + Δy, z + Δz, s + Δs) closer to the μ-centre. To do this, we modify equations 3–6 by replacing x with x + Δx and the other variables as necessary. Finally, we solve the resulting set of equations to obtain the direction.</p>
        <p>⎡ A   0   0   I ⎤ ⎡ Δx ⎤       ⎡ Ax + s − b   ⎤
⎢ 0   Aᵀ  −I  0 ⎥ ⎢ Δy ⎥ = −  ⎢ Aᵀy − z − c ⎥   (7)
⎢ Z   0   X   0 ⎥ ⎢ Δz ⎥       ⎢ Xz − μe      ⎥
⎣ 0   S   0   Y ⎦ ⎣ Δs ⎦       ⎣ Ys − μe      ⎦</p>
        <p>where we substitute a variable ρ in place of b − Ax − s, σ in place of c − Aᵀy + z, and ν in place of μe − Xz and μe − Ys.</p>
        <p>Computing a solution to Equation 7 is known as the Newton step. The Newton step has a classical worst-case time complexity of O(Ndκ log(1/ε)), where κ is the condition number of the input matrix, ε is the desired output precision, and d is the sparsity of the matrix. Standard methods used to solve the linear set of equations require storing the entire matrix of coefficients in memory, along with intermediate results obtained during multiple passes through the matrix. This leads to a significant growth in memory requirement when using classical algorithms.</p>
        <p>A Newton step can be executed more efficiently using the HHL algorithm, provided we can load the right-hand side efficiently and the solution vector (or a near approximation) can be recovered efficiently. The first assumption is not limiting; however, the second assumption is strong and requires more progress. The right-hand side b is loaded as the state ∑ᵢ bᵢ|i⟩, where N is the size of vector b and bᵢ denotes the i-th entry.</p>
        <p>To solve a linear system of equations, we require a
circuit that can determine and invert the eigenvalues of
matrix A. This circuit will produce a state that matches
the solution to the system of equations. Figure 1 shows
one circuit that can accomplish this.</p>
        <p>where θ = 2 arcsin(C/λ), for some constant C bounded by the smallest eigenvalue λ. A measurement of 1 in the auxiliary qubit after applying the gate gives us the inverse eigenvalues. Inverting matrices of large size is expensive classically. However, in quantum algorithms, matrix inversion can be performed much more efficiently using an RY gate conditioned on the eigenvalues:
RY(θ) = ⎡ cos θ/2   −sin θ/2 ⎤
        ⎣ sin θ/2    cos θ/2 ⎦</p>
        <p>We use separate step sizes for the primal variables (α₁) and the dual variables (α₂) in our implementation, as this leads to faster convergence.</p>
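      <p>Numerically, the conditioned rotation behaves as described. A self-contained sketch with a made-up eigenvalue (the values C and λ are illustrative assumptions):</p>

```python
# Sketch of the eigenvalue-inversion rotation: RY(theta) with
# theta = 2*arcsin(C/lam) puts amplitude C/lam on the ancilla's |1⟩ component,
# which is the 1/lam weighting HHL needs after phase estimation.
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

C, lam = 0.25, 1.0                         # C is bounded by the smallest eigenvalue
theta = 2 * np.arcsin(C / lam)
state = ry(theta) @ np.array([1.0, 0.0])   # rotate the ancilla from |0⟩
# state[1] equals C/lam; measuring 1 post-selects the inverted-eigenvalue component
```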
        <p>In order to use the HHL algorithm for solving a linear system of equations, it’s important to provide the correct type of input. There are a few key things to keep in mind, such as:</p>
        <p>1. Matrix type: The input matrix should be Hermitian because it must have real eigenvalues to perform phase estimation accurately. This also allows for simulation of the Hamiltonian.</p>
        <p>2. Matrix condition number: The condition number of the matrix is an important consideration. A well-conditioned matrix with a low condition number is easier to invert using the HHL algorithm. In contrast, a poorly conditioned matrix can result in more auxiliary qubits and longer runtimes.</p>
        <p>3. Number of registers available: The HHL algorithm provides an approximate solution with a precision that depends on the number of auxiliary qubits and the accuracy of the phase estimation step.</p>
        <p>4. Matrix sparsity: The sparsity of the matrix, i.e., whether it has only a few non-zero entries, affects the number of gates required for the algorithm and the overall runtime.</p>
        <p>We can control the type and the number of registers. However, the sparsity pattern and the condition number are not under our control, and affect the run time of HHL.</p>
        <p>Algorithm 1: Quantum Accelerated Central Path Algorithm
Input: c ∈ ℝⁿ, A ∈ ℝᵐˣⁿ, b ∈ ℝᵐ
Output: an ε-solution (x, y, z, s)
Set: (x, y, z, s) &gt; 0
1 while μ ≥ ε do
2   Set: ρ = b − Ax − s
3   Set: σ = c − Aᵀy + z
4   Set: ν = μe − Xz
5   Set: μ = θμ, 0 &lt; θ &lt; 1
6   Compute step directions (Δx, Δy, Δz, Δs) using HHL (8)
7   Compute step sizes α₁ and α₂
8   x = x + α₁Δx
9   y = y + α₂Δy
10  z = z + α₂Δz
11  s = s + α₁Δs</p>
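        <p>The loop of Algorithm 1 can be sketched end-to-end as a classical simulation. In the sketch below, np.linalg.solve stands in for the HHL call, and the step sizes α₁, α₂ use a standard fraction-to-the-boundary rule, an assumption where the paper’s exact rule is not recoverable; the demo data is made up:</p>

```python
# A runnable sketch of Algorithm 1 transcribed into NumPy (not the paper's code).
# The HHL step-direction solve is replaced by np.linalg.solve as a stand-in.
import numpy as np

def step_len(v, dv, eta=0.9):
    # largest step in (0, 1] that keeps v + alpha*dv strictly positive
    dv_safe = np.where(dv >= 0, -1.0, dv)
    ratios = np.where(dv >= 0, np.inf, -v / dv_safe)
    return min(1.0, eta * float(ratios.min()))

def central_path(A, b, c, eps=1e-6, theta=0.9):
    m, n = A.shape
    x = np.ones(n); s = b - A @ x          # primal pair, assumed positive
    y = np.ones(m); z = A.T @ y - c        # dual pair, assumed positive
    mu = 1.0
    while mu >= eps:
        X, Z, Y, S = np.diag(x), np.diag(z), np.diag(y), np.diag(s)
        # Block rows follow system (7): [A 0 0 I; 0 Aᵀ -I 0; Z 0 X 0; 0 S 0 Y]
        K = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n)), np.eye(m)],
            [np.zeros((n, n)), A.T,              -np.eye(n),       np.zeros((n, m))],
            [Z,                np.zeros((n, m)), X,                np.zeros((n, m))],
            [np.zeros((m, n)), S,                np.zeros((m, n)), Y],
        ])
        rhs = -np.concatenate([A @ x + s - b, A.T @ y - z - c,
                               X @ z - mu * np.ones(n), Y @ s - mu * np.ones(m)])
        d = np.linalg.solve(K, rhs)        # stand-in for the HHL call in step 6
        dx, dy, dz, ds = np.split(d, [n, n + m, n + m + n])
        a1 = min(step_len(x, dx), step_len(s, ds))   # primal step size alpha1
        a2 = min(step_len(y, dy), step_len(z, dz))   # dual step size alpha2
        x, s = x + a1 * dx, s + a1 * ds
        y, z = y + a2 * dy, z + a2 * dz
        mu = theta * mu                    # shrink the barrier parameter
    return x

x_star = central_path(np.array([[2.0, 1.0], [1.0, 3.0]]),
                      np.array([4.0, 6.0]), np.array([1.0, 1.0]))
```
        <p>Replacing the np.linalg.solve line with a call into an HHL circuit is exactly the modification studied in this paper.</p>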
      </sec>
      <sec id="sec-3-2">
        <title>5.1. Complexity Analysis</title>
        <p>To determine the runtime of the algorithm, we can
calculate the number of iterations required by the IPM and
the complexity of the HHL algorithm in each iteration,
which is expressed below.</p>
        <p>Theorem 5.1. Consider the linear optimization problem defined by (1). The complexity to get an ε-solution, i.e., a solution that satisfies xᵀz + yᵀs ≤ ε, is O(√n d²κ²(log N)²/ε), where κ and d are the condition number and sparsity of the matrix in (3)–(6). If A is dense then we can use a version of HHL [25] with complexity O(√N(log N)κ²/ε) to obtain a bound as above. ■</p>
      </sec>
      <sec id="sec-3-3">
        <title>5.2. Discussion</title>
        <p>Before delving into the section on experiments, we will
address some limitations and suggest modifications for a
more practical implementation of the new central path
approach. Here are some essential points to consider:
• QPE precision and computation time have a tradeoff relationship. Increasing the precision may result in longer computation time and vice versa.
• The HHL algorithm has restrictions on input, so it is crucial to start with optimal instances to ensure favourable outcomes. This is especially important for practical applications.
• Generating a time-evolved matrix from Hermitian input is difficult in practical settings. So far, only local and sparse Hamiltonians have been identified as satisfying the necessary criteria.
• QPE has a range from 0 to 2π. Input values outside of this range cannot be used for the QPE process. It is important to consider the range of values to effectively utilize the QPE method.</p>
      </sec>
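      <p>The Hermitian-input restriction noted above is usually met by a standard reduction. The sketch below is our own illustration of the usual construction (the helper name and the 2×3 example matrix are assumptions, not from the paper): embed M in the Hermitian dilation [[0, M], [Mᴴ, 0]] and pad the dimension up to a power of two:</p>

```python
# Embed a (possibly non-Hermitian, non-power-of-2) matrix into a Hermitian
# matrix whose dimension is a power of two, as the HHL circuit expects.
import numpy as np

def hermitian_power2_embed(M):
    M = np.atleast_2d(M)
    r, c = M.shape
    H = np.block([[np.zeros((r, r)), M],
                  [M.conj().T, np.zeros((c, c))]])     # Hermitian dilation
    dim = H.shape[0]
    size = 2 ** int(np.ceil(np.log2(dim)))             # next power of two
    P = np.zeros((size, size), dtype=H.dtype)
    P[:dim, :dim] = H                                  # zero-pad the remainder
    return P

P = hermitian_power2_embed(np.array([[1.0, 2.0, 0.0],
                                     [0.0, 1.0, 2.0]]))
# a 2x3 input dilates to 5x5 and is padded up to 8x8
```
      <p>Zero padding keeps the matrix Hermitian but singular; implementations typically place an identity block on the padded diagonal so the enlarged system remains invertible. Either way, the blow-up in dimension is what inflates the simulated state space reported later in the experiments.</p>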
    </sec>
    <sec id="sec-4">
      <title>6. Measures of Performance</title>
      <p>In the HHL method of Qiskit, we set the circuit’s accuracy
as ε = 10⁻³ in the constructor. Furthermore, we outline
several metrics that will be examined in the outcomes.</p>
      <sec id="sec-4-1">
        <title>6.1. State Fidelity</title>
        <p>State fidelity is a way to measure how similar the actual
state of a quantum system is to the desired target state.
It helps us evaluate how well quantum operations and
algorithms are performing. State fidelity values range
from 0 to 1, with a score of 1 indicating an exact match
between the actual and target states. If the score is less
than 1, there is some deviation from the desired state. We can use the built-in state_fidelity() method in Qiskit to calculate this value.</p>
        <p>6.2. SSE</p>
        <p>We assess the effectiveness of the HHL approach using the Sum of Squared Estimate of Errors (SSE) metric. This measures the difference between the actual data and the estimation model. To calculate the SSE, we subtract the estimated data from the actual data, square the difference, and add up all the squared differences. The SSE serves as a numerical indicator of how accurately the HHL method solves the linear system of equations. The smaller the SSE, the higher the accuracy of the HHL method, and conversely.</p>
        <p>SSE = ∑ᵢ₌₁ⁿ (yᵢ − f(yᵢ))², where yᵢ is a given value and f(yᵢ) is the result (output) of the Quantum Central Path Method.</p>
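        <p>Both metrics are a few lines of classical code. A minimal sketch with our own helper names (the first mirrors what Qiskit’s built-in state_fidelity() computes for pure states):</p>

```python
# State fidelity between pure states as |⟨psi|phi⟩|², and SSE between
# target data and method output.
import numpy as np

def state_fidelity(psi, phi):
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    return np.abs(np.vdot(psi, phi)) ** 2

def sse(y, f_y):
    diff = np.asarray(y, dtype=float) - np.asarray(f_y, dtype=float)
    return float(np.sum(diff ** 2))

# identical states have fidelity 1; an exact solver output gives SSE 0
```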
        <p>The JuMP library is an excellent tool that works well
with different solvers. We’ll be using the Gurobi solver because it has a wide range of parameters that allow you to control how it works. Two parameters that stand out as particularly useful are ”Method” and ”NonConvex.” For
example, by setting the ”Method” parameter to 2, you can
instruct the solver to use interior point methods when
solving linear programming problems. Doing this makes
comparing results obtained from the in-built optimizer
with the Central Path Method easier. Moreover, setting
the ”NonConvex” parameter to 2 allows Gurobi to tackle nonlinear problems more efficiently by introducing McCormick relaxations.</p>
        <p>To experiment with the HHL algorithm, we used the Qiskit 0.36.1 library in Python instead of creating our own implementation in Julia. We integrate with the Qiskit package in our Julia code using the ”PyCall” package. We can then use the HHL circuit as a black box for our experimentation.</p>
        <p>Simulation of the Quantum Central Path Method can be computationally demanding. This is true for complex problems requiring large intermediate memory and intricate constraints. Clusters, on the other hand, are specifically designed to handle large and complex calculations. They have specialized hardware and software that enable highly parallel and efficient processing. In this paper, we utilize the Cedar supercomputer provided by Alliance Canada to tackle these challenges.</p>
        <p>We will establish naming conventions for the different types of Central Path Methods used. To ensure clarity, we will refer to the Central Path Method that utilizes Newton’s method as the ”Linear Central Path Method.” The approach that uses the non-convex solver provided by Gurobi will be called the ”Nonlinear Central Path Method.” Lastly, we will refer to the Central Path Method that implements the quantum HHL algorithm as the ”Quantum Central Path Method.” This way, we can easily distinguish between the different methods being considered.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>7. Experimental Setup</title>
      <p>We rely on specialized software packages to ensure the
effective implementation and operation of the Central
Path Method. The Julia programming language is the
foundation for our work due to its compatibility with
other languages and systems, making it easy to integrate
with tools such as Qiskit. Additionally, Julia has a
growing ecosystem of packages dedicated to linear
optimization and mathematical programming, including the
popular JuMP library for modelling and solving optimization
problems. There are several reasons why this library is
essential for this work, including its user-friendly syntax,
this paper, we utilize the Cedar supercomputer provided
by Alliance Canada to tackle these challenges.</p>
      <p>To maximize the resources and save time, we submit
jobs on Cedar as job arrays using the SLURM workload
manager. Each job is limited to a maximum of 7 days and
has access to 500 GB of memory. Job execution occurs
on one of the 96 available nodes in this memory category.
This approach allows for effective parallel processing and
management of tasks, as they share many similarities.</p>
      <p>We summarize the CPU usage statistics in Table 1. The
table shows that 8.08 core years were used, equivalent
to continuously running computations on a single CPU
core for about eight years. These experiments require a
high level of computational demand. Therefore, utilizing
a cluster is essential.</p>
      <sec id="sec-5-1">
        <title>7.1. Instance Generation</title>
        <p>In this study, we focus on tridiagonal matrices, which have non-zero elements only on the main, lower, and upper diagonals. These matrices have well-studied algorithms for various operations, such as solving linear equations, finding eigenvalues and eigenvectors, and inverting matrices. We refer to them as ”efficient” instances because they allow for a more accurate simulation of the HHL algorithm in Qiskit.</p>
        <p>Table 1: Resource cedar-compute. Total CPU Usage: 8.08 core years; Projected CPU Usage: 9.27 core years.</p>
        <p>Instances for the Linear and Nonlinear Central Path Methods are as follows:</p>
        <p>• Efficient Instances: The input matrix is tridiagonal, the vector b and the slack variables are vectors of ones. We start with a matrix size of 2 × 2 and go up to a size of 25 × 25.</p>
        <p>Instances for the Quantum Central Path Method take the following form:</p>
        <p>• Efficient Instances: The input matrix is tridiagonal, the vector b and the slack variables are vectors of ones. We start with a matrix size of 2 × 2 and go up to a size of 5 × 5.</p>
      </sec>
    </sec>
    <sec id="sec-5b">
      <title>8. Results and Discussion</title>
      <p>Our main goal is to evaluate the performance of the quantum-accelerated HHL algorithm. To do this, we will compare it with the classical method of solving the Newton step. For the baseline results for the non-linear Newton step and the use of McCormick relaxations, see [22]. We will analyze each method regarding accuracy, efficiency, and scalability. We will conduct numerical experiments on different test instances to gain further insights into their strengths and limitations. Our study will provide valuable information for future research on developing new mathematical methods and quantum algorithms that involve solving linear systems of equations.</p>
      <p>Table 2 displays the outcomes of the Linear Central Path Method for efficient instances. Column 2 displays the number of iterations completed. Columns 3, 4 are for the objective function value and time as computed by Gurobi’s built-in IPM solver. Column 5 shows the optimal, and the final column, column 6, shows the amount of time taken by the Linear Central Path Method as implemented by us. We modify this reference implementation and replace the equation-solving step with the HHL algorithm.</p>
      <p>The data shown in Table 2 reveals that the Linear Central Path Method needs more iterations as the instance size grows. This implies that solving larger and more complex instances may need more computational resources. Nevertheless, the optimal values obtained from IPM Optimal are identical to those of the optimal.</p>
      <p>Table 3 displays the results of the nonlinear Newton step on efficient instances. The first column lists the instances, while the next three columns show the number of iterations, objective function value, and the time it takes for Gurobi’s built-in interior point solver. The last two columns reveal the objective value and the time taken by our implementation of the nonlinear Newton step when Gurobi is used to solve the nonlinear program (with option non-convex = 2), which is explained in detail in [22]. However, compared to the results obtained using the Linear Central Path Method, this method’s computation time and optimal solution are less favourable. For instance, for a matrix size of 12 × 12, the Nonlinear Central Path Method took almost 10 hours to find a solution of value 6, while the optimal solution is 4. The last row of the table shows that when solving a 14 × 14 matrix, a large number of feasible solutions were searched and over 500 GB of memory was consumed. To improve the performance of the Nonlinear Central Path Method, implementing stricter bounds could potentially address this issue.</p>
      <p>The Quantum Central Path Method was used to analyze efficient instances and the results are presented in Table 4. The data indicates that the SSE values are low and the gap values are small, demonstrating the reliability and precision of the method. It is important to note that the 5 × 5 instance reached its maximum runtime limit of 7 days, but the final iteration values were still recorded. However, the results for the efficient instances are promising. The simulation of HHL is computationally expensive due to the input matrix not being Hermitian and the size not being a power of 2. To address this issue, a reduction is used to make it Hermitian and of the right size, which increases the state space that needs to be explored by the simulation. Table 5 shows that the state space for a 5 × 5 input is 2⁶⁴.</p>
      <p>In Table 5, we can find detailed information about the computing resources used by the Quantum Central Path Method. As the instance size increases, the input matrix size required for the HHL algorithm also increases. This is because the matrix needs to be changed into a Hermitian matrix. Our experiments have found that the largest matrix size used was 64 × 64, which required approximately 50 GB of memory. The CPU Efficiency column shows how long it would take to run the job with the given resources. For instance, for the 5 × 5 instance, the job took about 37 days to run.</p>
    </sec>
    <sec id="sec-6">
      <title>9. Conclusion</title>
      <p>Note that the 4x4 instance requires 18 iterations, but the number of iterations needed for the other four cases is similar to the best classical method. The simulation took a long time because of the large search space. To conduct a definitive study, more efficient simulation methods are necessary due to the enormous state space. According to the data in Table 5, simulation studies are currently not feasible even on 6x6 instances. Using a quantum computer to run the proposed algorithm is also currently not possible until more efficient methods, or quantum memory that can read and write state vectors, are developed.</p>
      <p>Let A be of size N × N with sparsity s. If b is encoded as ∑ᵢ₌₀ᴺ⁻¹ bᵢ|i⟩ then (log N) qubits are needed. The HHL algorithm has complexity O((log N)s²κ²/ε) that depends on the sparseness s and the condition number κ [10]. So only for sparse matrices is an exponential speedup observed compared to the best classical algorithms, whose complexity is at least linear in N. If A is dense then a modification of HHL by Wossnig et al. [25] has complexity O(√N(log N)κ²/ε). This algorithm for dense matrices is faster by a quadratic factor; the speedup for dense matrices is not exponential. There is a related issue of determining the quantum state from the measurements using a process known as quantum tomography. This step is needed to extract the solution from the state vector |x⟩. The complete solution can be determined by measuring the complete set of observables whose expectations characterize the state [26]. For most quantum states the probabilities of observing some specific basis state can be very low (exponentially small in N). This means an exponential number of measurements (in N) are needed to determine x completely. For states that are matrix product states (MPS), efficient methods are known for estimating x [27]. The complexity of the proposed algorithms with methods using quantum tomography to recover x from |x⟩ is Ω(N + √N(log N)κ²/ε). This also explains the large running times in Table 5. No efficient methods are known for quantum tomography in general. However, there are several types of MPS states that can be efficiently estimated from the quantum state, and for such states the method proposed here can be efficient [27].</p>
      <p>Our research proposes a new way to tackle linear programming problems by combining the HHL algorithm with the central path method. We examined three versions of the Central Path Method: the Linear Central Path Method, the Nonlinear Central Path Method, and the Quantum Central Path Method, and conducted extensive experimentation on tridiagonal instances. Though the HHL algorithm was impressive, the quantum accelerated central path algorithm simulation took too long and could not complete some instances in the allocated time. The issues of loading the right-hand side, solution extraction, and the intermediate matrices’ condition numbers need to be further examined. However, integrating quantum algorithms with classical optimization methods can solve large-scale linear programs. Our study highlights the potential of quantum algorithms in solving optimization problems. It suggests that further optimization of the integration process and exploration of the practical applications of this approach in data science are necessary.</p>
      <p>Acknowledgments</p>
      <p>The authors thank Robert Benkoczi for valuable discussions. The authors thank the reviewers for suggestions that helped improve the presentation and highlight the relevance of this work to Data Science. The authors also thank David Neufeld for detailed comments on a draft which helped improve the readability. The NSERC Discovery Grant provided support for this research. Additionally, the Digital Research Alliance of Canada (https://alliancecan.ca) played a part in enabling this research.</p>
      <p>References</p>
      <p>[1] J. Zhu, S. Rosset, R. Tibshirani, T. Hastie, 1-norm support vector machines, Advances in Neural Information Processing Systems 16 (2003).
[2] J. Yang, Y. Zhang, Alternating direction algorithms for ℓ1-problems in compressive sensing, SIAM Journal on Scientific Computing 33 (2011) 250–278.</p>
<p>[3] M. Yuan, High dimensional inverse covariance matrix estimation via linear programming, The Journal of Machine Learning Research 11 (2010) 2261–2286.</p>
      <p>[4] B. Recht, C. Re, J. Tropp, V. Bittorf, Factoring nonnegative matrices with linear programs, Advances in Neural Information Processing Systems 25 (2012).</p>
      <p>[5] O. Meshi, A. Globerson, An alternating direction method for dual MAP LP relaxation, in: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2011, Athens, Greece, September 5-9, 2011, Proceedings, Part II 22, Springer, 2011, pp. 470–483.</p>
      <p>[6] L. Weng, H. Zhang, H. Chen, Z. Song, C.-J. Hsieh, L. Daniel, D. Boning, I. Dhillon, Towards fast computation of certified robustness for ReLU networks, in: International Conference on Machine Learning, PMLR, 2018, pp. 5276–5285.</p>
      <p>[7] E. Wong, Z. Kolter, Provable defenses against adversarial examples via the convex outer adversarial polytope, in: International Conference on Machine Learning, PMLR, 2018, pp. 5286–5295.</p>
      <p>[8] G. Fung, R. Rosales, R. B. Rao, Feature selection and kernel design via linear programming, in: IJCAI, 2007, pp. 786–791.</p>
      <p>[9] R. J. Vanderbei, et al., Linear Programming, Springer, 2020.</p>
      <p>[10] A. W. Harrow, A. Hassidim, S. Lloyd, Quantum algorithm for linear systems of equations, Physical Review Letters 103 (2009) 150502.</p>
      <p>[11] G. B. Dantzig, Maximization of a linear function of variables subject to linear inequalities, Activity Analysis of Production and Allocation 13 (1951) 339–347.</p>
      <p>[12] N. Karmarkar, A new polynomial-time algorithm for linear programming, in: Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, 1984, pp. 302–311.</p>
      <p>[13] C. Roos, T. Terlaky, J.-P. Vial, Interior Point Methods for Linear Optimization (2005).</p>
      <p>[14] S. Wright, A path-following infeasible-interior-point algorithm for linear complementarity problems, Optimization Methods and Software 2 (1993) 79–106.</p>
      <p>[15] S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, 1997.</p>
      <p>[16] S. Barz, I. Kassal, M. Ringbauer, Y. O. Lipp, B. Dakić, A. Aspuru-Guzik, P. Walther, A two-qubit photonic quantum processor and its application to solving systems of linear equations, Scientific Reports 4 (2014) 6115.</p>
      <p>[17] X.-D. Cai, C. Weedbrook, Z.-E. Su, M.-C. Chen, M. Gu, M.-J. Zhu, L. Li, N.-L. Liu, C.-Y. Lu, J.-W. Pan, Experimental quantum computing to solve systems of linear equations, Physical Review Letters 110 (2013) 230501.</p>
      <p>[18] Y. Subaşı, R. D. Somma, D. Orsucci, Quantum algorithms for systems of linear equations inspired by adiabatic quantum computing, Physical Review Letters 122 (2019) 060504.</p>
      <p>[19] P. A. Casares, M. A. Martin-Delgado, A quantum interior-point predictor–corrector algorithm for linear programming, Journal of Physics A: Mathematical and Theoretical 53 (2020) 445305.</p>
      <p>[20] M. Mohammadisiahroudi, R. Fakhimi, T. Terlaky, Efficient use of quantum linear system algorithms in interior point methods for linear optimization, arXiv preprint arXiv:2205.01220 (2022).</p>
      <p>[21] Z. Wu, M. Mohammadisiahroudi, B. Augustino, X. Yang, T. Terlaky, An inexact feasible quantum interior point method for linearly constrained quadratic optimization, arXiv preprint arXiv:2301.05357 (2023).</p>
      <p>[22] V. Adoni, A quantum accelerated approach for the central path method in linear programming, Master's thesis, University of Lethbridge, Faculty of Arts and Science, 2023.</p>
      <p>[23] J. Peng, C. Roos, T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Mathematical Programming 93 (2002) 129–171.</p>
      <p>[24] A. Abbas, S. Andersson, A. Asfaw, A. Corcoles, L. Bello, Y. Ben-Haim, M. Bozzo-Rey, S. Bravyi, N. Bronn, L. Capelluto, et al., Learn Quantum Computation Using Qiskit, chapter: Investigating Quantum Hardware Using Quantum Circuits: Measurement Error Mitigation (2020).</p>
      <p>[25] L. Wossnig, Z. Zhao, A. Prakash, Quantum linear system algorithm for dense matrices, Physical Review Letters 120 (2018).</p>
      <p>[26] K. Vogel, H. Risken, Determination of quasiprobability distributions in terms of probability distributions for the rotated quadrature phase, Physical Review A 40 (1989) 2847.</p>
      <p>[27] M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D. Bartlett, O. Landon-Cardinal, D. Poulin, Y.-K. Liu, Efficient quantum state tomography, Nature Communications 1 (2010) 149.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>