CEUR-WS Vol-3072, paper 24. Fast Classical and Quantum Algorithms for Online k-server Problem on Trees. https://ceur-ws.org/Vol-3072/paper24.pdf
        Fast Classical and Quantum Algorithms for
           Online k-server Problem on Trees ⋆ ⋆⋆

    Ruslan Kapralov1 , Kamil Khadiev1,2 , Joshua Mokut1 , Yixin Shen3 , and
                             Maxim Yagafarov1
                       1
                         Kazan Federal University, Kazan, Russia
    2
        Zavoisky Physical-Technical Institute, FRC Kazan Scientific Center of RAS,
                                      Kazan, Russia
               3
                 Université de Paris, CNRS, IRIF, F-75006 Paris, France




         Abstract. We consider online algorithms for the k-server problem on
         trees. Chrobak and Larmore proposed a k-competitive algorithm for this
         problem that has the optimal competitive ratio. However, a naive imple-
         mentation of their algorithm has time complexity O(n) to process each
         request, where n is the number of nodes. We propose a new time-efficient
         implementation of this algorithm that has O(n log n) time complexity
         for preprocessing and O(k² + k · log n) time for processing a request.
         We also propose a quantum algorithm for the case where the nodes of
         the tree are presented using string paths. In this case, no preprocessing
         is needed, and the time complexity for each request is O(k²√n log n).
         When the number of requests is o(√n/k²), we obtain a quantum speed-up
         on the total runtime compared to our classical algorithm.
         We also give a simple quantum algorithm to find the first marked el-
         ement in a collection of m objects that works even in the presence of
         two-sided bounded errors on the input oracle. It has worst-case query
         complexity O(√m). In the particular case of one-sided errors on the in-
         put, it has expected query complexity O(√x), where x is the position of
         the first marked element. Compared with previous work, our algorithm
         can handle errors in the input oracle.

         Keywords: online algorithms · k-server problem on trees · quantum
         computing · binary search.

⋆
   Copyright © 2021 for this paper by its authors. Use permitted under Creative
   Commons License Attribution 4.0 International (CC BY 4.0)
⋆⋆
   This work on Quantum algorithm for k-server problem was supported by Russian
   Science Foundation Grant 19-19-00656. The research in Section 4 is funded by the
   subsidy allocated to Kazan Federal University for the state assignment in the sphere
   of scientific activities, project No. 0671-2020-0065; supported by the Kazan Federal
   University Strategic Academic Leadership Program. The research is also supported
   in part by the ERA-NET Cofund in Quantum Technologies project QuantAlgo and
   the French ANR Blanc project RDAM.
1   Introduction

Online optimization is a field of optimization theory that deals with optimization
problems having no knowledge of the future [27]. An online algorithm reads an
input piece by piece and returns an answer piece by piece immediately, even if the
answer can depend on future pieces of the input. The goal is to return an answer
that minimizes an objective function (the cost of the output). The most standard
method to define the effectiveness of an online algorithm is the competitive
ratio [20]. The competitive ratio is the approximation ratio achieved by the
algorithm. That is the worst-case ratio between the cost of the solution found
by the algorithm and the cost of an optimal solution. If the ratio is c, then the
online algorithm is called c-competitive. In the general setting, online algorithms
have unlimited computational power. Nevertheless, many papers consider them
with different restrictions. Some of them are restrictions on memory [6, 9, 24, 2,
5, 18, 23, 21, 26, 25, 22], others are restrictions on time complexity [15, 31].
    In this paper, we focus on efficient online algorithms in terms of time com-
plexity. We consider the k-server problem on trees. Chrobak and Larmore [11]
proposed a k-competitive algorithm for this problem where the competitive ratio
k is the best possible for deterministic algorithms for this problem. The existing
implementation of their algorithm has O(n) time complexity for each request,
where n is the number of nodes in the tree. For general graphs, there exists a
time-efficient algorithm for the k-server problem [31] that uses min-cost-max-
flow algorithms. However, in the special case of a tree, this algorithm is not
optimal. We propose a new time-efficient implementation of the algorithm from
[11]. It has O(n log n) time complexity for preprocessing and O(k² + k log n)
for processing a request. It is based on fast algorithms for computing Lowest
Common Ancestor (LCA) [7] and the binary lifting technique [8].
    We also revisit the problem of finding the first marked element in a collection
of m objects. It is well-known that it can be solved in expected O(√m) queries
when given quantum oracle access to the input, and in expected O(√x) queries,
where x is the position of the first marked element [13], [29, Theorem 10] and
[28, Theorem 6]. We give an O(√m)-query algorithm that works even in the
presence of two-sided bounded errors in the input. We also provide an expected
O(√x)-query algorithm in the case where the input has one-sided errors only.
Note that contrary to the previous results, we assume that the oracle can have
errors, whereas [13, 29, 28] assume that the oracle is perfect.
    We then consider the k-server problem in the case where the description of
the tree is given by string paths. The string path of a node in a rooted tree
is a sequence of length h, where h is the height of the node, describing the
path from the root to the node. Such a way of representing the trees is useful,
for example, as a path to a file in file systems. Assuming oracle access to the
length of the string path of each node and to each element on the string path
with time complexity O(1), we obtain a quantum algorithm for this problem with
O(k²√n log n) running time to process each request and without preprocessing.
This algorithm is based on our improved quantum search algorithm. When the


number of requests is o(√n/k²), the total runtime of our quantum algorithm is
smaller than the classical one.
   The structure of the paper is the following. Section 2 contains preliminaries.
The classical algorithm is described in Section 3. Section 4 contains our improved
quantum search algorithm. The quantum algorithm for the k-server problem is
described in Section 5.


2     Preliminaries

2.1   Online algorithms

An online minimization problem consists of a set I of inputs and a cost func-
tion. Each input I = (x1, . . . , xn) is a sequence of requests, where n = |I| is the
length of the input. Furthermore, a set of feasible outputs (or solutions) Ô(I)
is associated with each I; an output is a sequence of answers O = (y1, . . . , yn).
The cost function assigns a positive real value cost(I, O) to I ∈ I and O ∈ Ô(I).
An optimal solution for I ∈ I is O_opt(I) = arg min_{O ∈ Ô(I)} cost(I, O).
    Let us define an online algorithm for this problem. A deterministic online
algorithm A computes the output sequence A(I) = (y1 , . . . , yn ) such that yi is
computed based on x1 , . . . , xi . We say that A is c-competitive if there exists a
constant α ≥ 0 such that, for every n and for any input I of size n, we have:
cost(I, A(I)) ≤ c · cost(I, O_opt(I)) + α. The minimal c that satisfies the previous
condition is called the competitive ratio of A.


2.2   Rooted Trees

Consider a rooted tree G = (V, E), where V is the set of nodes/vertices, and
E is the set of edges. Let n = |V | be the number of nodes, or equivalently the
size of the tree. We denote by 1 the root of the tree. A path P is a sequence
of nodes (v1 , . . . , vh ) that are connected by edges, i.e. (vi , vi+1 ) ∈ E for all
i ∈ {1, . . . , h − 1}, such that there are no duplicates among v1 , . . . , vh . Here h
is the length of the path. Between any two nodes v and u on the tree, there is a
unique path. The distance dist(v, u) is the length of this path. For each node v
we can define a parent node Parent(v) which is the first node on the unique
path from v to root 1. We have dist (1, Parent(v))+1 = dist(1, v). Additionally,
we can define the set of children Children(v) = {u : Parent(u) = v}. Any
node y on the unique path from root 1 to node v is an ancestor of node v.
    Lowest Common Ancestor (LCA). Given two nodes u and v of a rooted
tree, the Lowest Common Ancestor is the node w such that w is an ancestor of
both u and v, and w is the closest one to u and v among all such ancestors. The
following result is well-known.

Lemma 1 ([7]). There is an algorithm for the LCA problem with the following
properties: (i) The time complexity of the preprocessing step is O(n); (ii) The
time complexity of computing LCA for two vertices is O(1).


    We call LCA Preprocessing() the subroutine that does the preprocessing
for the algorithm and LCA(u, v) that computes the LCA of two nodes u and v.
    Binary Lifting Technique. This technique from [8] allows us to obtain a
vertex v′ that is at distance z from a vertex v with O(log n) time complexity.
There are two procedures:
    • BL Preprocessing() prepares the required data structures. The time
complexity is O(n log n).
    • MoveUp(v, z) returns a vertex v′ on the path from v to the root and at
distance dist(v′, v) = z. The time complexity is O(log n).
    The technique is well documented in the literature. We present an implementation
in the arXiv version [19] for completeness.
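The two procedures above can be sketched as follows. The paper defers its implementation to the arXiv version [19], so this is only an illustrative rendition of the standard technique; the names `bl_preprocessing` and `move_up` mirror the procedures above but are otherwise assumptions.

```python
# Sketch of the standard binary lifting technique; names are illustrative.
import math

def bl_preprocessing(parent, root=0):
    """parent[v] = parent of v (parent[root] = root).
    Builds up[j][v] = the 2^j-th ancestor of v in O(n log n) time."""
    n = len(parent)
    levels = max(1, math.ceil(math.log2(n)) + 1)
    up = [[0] * n for _ in range(levels)]
    up[0] = list(parent)
    up[0][root] = root
    for j in range(1, levels):
        for v in range(n):
            # the 2^j-th ancestor is the 2^(j-1)-th ancestor, twice
            up[j][v] = up[j - 1][up[j - 1][v]]
    return up

def move_up(up, v, z):
    """Vertex at distance z above v, in O(log n): follow the set bits of z."""
    j = 0
    while z:
        if z & 1:
            v = up[j][v]
        z >>= 1
        j += 1
    return v
```

For a chain 0–1–2–3–4 rooted at 0, `move_up(up, 4, 3)` returns vertex 1.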

2.3    k-server Problem on Trees
Let G = (V, E) be a rooted tree, and we are given k servers that can move among
nodes of G. At each time slot, a request q ∈ V appears. We have to “serve” this
request, that is, to choose one of the k servers and move it to q. The other servers
are also allowed to move. The cost function is the distance by which we move
the servers. In other words, if before the request the servers are at positions
v1 , . . . , vk and after the request they are at v′1 , . . . , v′k , then q ∈ {v′1 , . . . , v′k } and
the cost of the move is Σ_{i=1}^{k} dist(vi , v′i ). The cost of a sequence of requests is the
sum of the costs of serving each request. The problem is to design a strategy
that minimizes the cost of servicing a sequence of requests given online.

2.4    Quantum query model
We use the standard form of the quantum query model. Let f : D → {0, 1}, D ⊆
{0, 1}^m, be an m-variable function. We wish to compute f on an input x ∈ D.
We are given oracle access to the input x, i.e. it is realized by a specific
unitary transformation usually defined as |i⟩|z⟩|w⟩ → |i⟩|z + x_i (mod 2)⟩|w⟩,
where the |i⟩ register indicates the index of the variable we are querying, |z⟩ is the
output register, and |w⟩ is some auxiliary work-space. An algorithm in the query
model consists of alternating applications of arbitrary unitaries independent of
the input and the query unitary, and a measurement in the end. The smallest
number of queries for an algorithm that outputs f(x) with probability ≥ 2/3 on
all x is called the quantum query complexity of the function f and is denoted by
Q(f). We refer the readers to [30, 3, 1] for more details on quantum computing.

Definition 1 (Search problem). Suppose we have a set of objects named
{1, 2, . . . , m}, of which some are targets. Suppose O is an oracle that identi-
fies the targets. The goal of a search problem is to find a target i ∈ {1, 2, . . . , m}
by making queries to the oracle O.

   In search problems, one will try to minimize the number of queries to the
oracle. In the classical setting, one needs O(m) queries to solve such a problem.
Grover, on the other hand, constructed a quantum algorithm that solves the


search problem with only O(√m) queries [16], provided that there is a unique
target. When the number of targets is unknown, Brassard et al. designed a modified
Grover algorithm that solves the search problem with O(√(m/λ)) queries [10],
where λ is the number of targets, which is of the same order as the query com-
plexity of the Grover search.
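The quadratic speed-up can be seen in a small classical statevector simulation of Grover's dynamics. This sketch is purely didactic and not part of the paper's algorithms; it tracks the m real amplitudes directly.

```python
# Didactic statevector simulation of Grover search: after about
# (pi/4)*sqrt(m) iterations, almost all amplitude sits on the target.
import math

def grover_success_probability(m, marked):
    amp = [1.0 / math.sqrt(m)] * m               # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(m))
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: flip marked amplitude
        mean = sum(amp) / m                      # diffusion: reflect about mean
        amp = [2 * mean - a for a in amp]
    return amp[marked] ** 2                      # probability of measuring target
```

With m = 256 and a single target, roughly 13 iterations already give a success probability close to 1, whereas a classical search needs Θ(m) queries on average.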


3     A Fast Online Algorithm for k-server Problem on Trees
      with Preprocessing
We first describe Chrobak-Larmore’s k-competitive algorithm for k-server prob-
lem on trees from [11]. Assume that we have a request on a vertex q, and the
servers are on the vertices v1 , . . . , vk . We say that a server i is active if there are
no other servers on the path from vi to q. In each phase, we move every active
server one step towards the vertex q. After each phase, the set of active servers
can change. We repeat this phase (moving of the active servers) until one of the
servers reaches the queried vertex q.
    The naive implementation of this algorithm has time complexity O(n) for
each request. First, we run a depth-first search with time labels [12], whose
result allows us to check in constant time whether a vertex u is an ancestor of
a vertex v. Recall that time labels record the first timestamp f (v) when the
depth-first search enters a node v, and the last timestamp `(v) when the depth-
first search finishes processing the last child of node v; the timestamp increases
every time a new node is visited. A node u is then an ancestor of v if and only
if the interval [f (v), `(v)] is contained in [f (u), `(u)]. After that, we can move
each active server towards the queried vertex, step by step. Together all active
servers cannot visit more than O(n) vertices.
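The time-label check described above can be sketched as follows. This is an illustrative rendition; the paper cites [12] and does not fix an implementation, so all names here are assumptions.

```python
# Sketch of DFS time labels [12]: u is an ancestor of v iff
# [f(v), l(v)] is contained in [f(u), l(u)]. Iterative DFS is used
# to avoid recursion limits on deep trees.
def time_labels(children, root=0):
    """children[v] = list of children of v; returns (first, last) labels."""
    n = len(children)
    first, last = [0] * n, [0] * n
    t = 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            last[v] = t; t += 1          # all descendants of v processed
        else:
            first[v] = t; t += 1         # first visit of v
            stack.append((v, True))
            for c in children[v]:
                stack.append((c, False))
    return first, last

def is_ancestor(first, last, u, v):
    """O(1) ancestor check once the labels are computed."""
    return first[u] <= first[v] and last[v] <= last[u]
```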
    In the following, we present an effective implementation of Chrobak-Larmore’s
algorithm with preprocessing. The preprocessing part is done once and has
O(n log n) time complexity (Theorem 1). The request processing part is done
for each request and has O(k² + k · log n) time complexity (Theorem 2).

3.1   Preprocessing
We do the following steps for the preprocessing:
     1. We do the required preprocessing for the LCA algorithm (Section 2.2).
     2. We do the required preprocessing for the binary lifting technique (Section 2.2).
     3. Additionally, for each vertex v we compute the distance from the root to v,
i.e. dist(1, v). This can be done using a depth-first search algorithm [12]. We store
all the distances in an array. See the arxiv version [19] for an implementation of
ComputeDistance(u).
Theorem 1. The preprocessing has time complexity O(n log n).
Proof. The time complexity of the preprocessing phase is O(n) for LCA, O(n log n)
for the binary lifting technique and O(n) for ComputeDistance(1). Therefore,
the total time complexity is O(n log n).

Algorithm 1 Preprocessing. The preprocessing procedure.
  LCA Preprocessing()
  BL Preprocessing()
  dist(1, 1) ← 0
  ComputeDistance(1)



3.2   Request Processing
Assume that we have a request on a vertex q, and the servers are on the vertices
v1 , . . . , vk . We do the following steps, implemented in Algorithms 2 and 4.
Step 1. We sort all the servers by their distance to the node q. The distance
dist(v, q) between a node v and the node q can be computed in the following way.
Let l = LCA(v, q) be the lowest common ancestor of v and q; then dist(v, q) =
dist(1, q) + dist(1, v) − 2 · dist(1, l). Using the preprocessing, this quantity can
be computed in constant time. We denote by Sort(q, v1 , . . . , vk ) this sorting
procedure. In the following steps we assume that dist(vi , q) ≤ dist(vi+1 , q) for
i ∈ {1, . . . , k − 1}.
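Step 1's distance formula can be sketched as follows, assuming a precomputed root-distance array `depth` and an O(1) `lca` callable (both names are illustrative, standing in for the preprocessing of Section 3.1).

```python
# Sketch of Step 1: dist(v, q) = dist(1, q) + dist(1, v) - 2*dist(1, l),
# where l = LCA(v, q). `depth` and `lca` are assumed helpers.
def tree_distance(depth, lca, v, q):
    l = lca(v, q)                      # lowest common ancestor, O(1) [7]
    return depth[q] + depth[v] - 2 * depth[l]

def sort_servers(depth, lca, q, servers):
    """Sort servers by their distance to the request q (Step 1)."""
    return sorted(servers, key=lambda v: tree_distance(depth, lca, v, q))
```

On a path graph 0–1–2–3 (where the LCA of two vertices is simply the shallower one), `tree_distance(depth, min, 1, 3)` evaluates to 2.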
Step 2. The first server on v1 processes the request. We move it to the node q.
Step 3. For i ∈ {2, . . . , k} we consider the server on vi . It will become inactive
when some other server with a smaller index arrives on the path between vi and q.
Section 3.3 contains the different cases that can happen and how to compute
the distance d traveled by vi before it becomes inactive. We then move the i-th
server d steps towards the request q. The new position of the i-th server is v′i .

Algorithm 2 Request(q). Request processing procedure.
  Sort(q, v1 , . . . , vk )
  v′1 ← q
  for i ∈ {2, . . . , k} do
      d ← DistanceToInactive(q, i)                                . see Algorithm 3
      v′i ← Move(vi , d)                                          . see Algorithm 4

3.3   Distance to Inactive State
When processing a request, all servers except one will eventually become inactive.
The crucial part of the optimization is to compute when a server becomes inactive
quickly. For the purpose of computing this time, we claim that we can pretend
that servers “never go inactive”. Formally, let q be a request, i be a server, and j
another server with smaller index. We know that i will become inactive because
it is not the closest to the target. However it is possible that this particular
server j is not the one that will render i inactive. Nevertheless, we can pretend
that j will never become inactive and compute the distance i will travel before
going inactive because of j, call this distance dqi,j (the index i is fixed in this
reasoning). We claim the following:
Lemma 2. For any request q and server i > 1 ( i.e. a server that will become
inactive), the distance D^q_i travelled by i before it becomes inactive is equal to
min_j d^q_{i,j} .

Algorithm 4 Move(v, z). Move the server from v by z steps towards the request q.
  l ← LCA(v, q)
  if z ≤ dist(l, v) then
      Result ← MoveUp(v, z)
  else
      z ← z − dist(l, v)
      Result ← MoveUp(q, dist(l, q) − z)
  return Result




Lemma 4. The time complexity of Move is O (log n).

Proof. The time complexity of MoveUp is O(log n) using the binary lifting
technique from Section 2.2 and LCA is in O(1) by Section 2.2. Furthermore,
we can compute the distance between any two nodes in O(1) thanks to the
preprocessing. Therefore, the total complexity is O(log n).
                                                                                 
Theorem 2. The time complexity of the request processing phase is O(k² + k log n).

Proof. The complexity of sorting the servers by distance is O(k log k). For each
server, we compute the distance traveled before becoming inactive as a minimum
of at most k values d^q_{i,j} , each computable in O(1) by Lemma 3. We then move
each server by that distance in time O(log n) by Lemma 4. Therefore, the
complexity of processing one server is O(k + log n), and there are k servers.


4     Binary Search for a Function with Errors
Consider a search space S = {1, . . . , m} and a subset M ⊆ S of marked elements.
Define the indicator function g_M : S² → {0, 1} by g_M (ℓ, r) = 1 if {ℓ, . . . , r} ∩
M ≠ ∅, and 0 otherwise. In other words, g_M (ℓ, r) indicates whether there is a
marked element from M in the interval [ℓ, r]. Now assume that we do not know
M but have access to a two-sided probabilistic approximation g̃ of g_M . Formally,
there is a probability p < 1/2 such that for any ℓ, r ∈ S, g̃(ℓ, r) = g_M (ℓ, r) with
probability at least 1 − p and 1 − g_M (ℓ, r) otherwise. Intuitively, g̃ behaves like
g_M with probability at least 1 − p. However, sometimes it makes mistakes and
returns a completely wrong answer. Note that g̃ has two-sided error: it can return
0 even if the interval [ℓ, r] contains a marked element, but more importantly, it
can also return 1 even though the interval does not contain any marked element.
We further assume that a call to g̃(ℓ, r) takes time T(r − ℓ), where T is some
nondecreasing function. Typically, we assume that T(m) = o(m), i.e. T is strictly
better than a linear search.
    We now consider the problem of finding the first marked element in S, with
probability at least, say, 1/2. A trivial algorithm is to perform a linear search
in O(m) until g̃ returns 1. If g̃ had no errors, we could perform a binary search
in O(T(m)). This does not work very well in the presence of errors because
decisions made are irreversible, and errors accumulate quickly. Our observation
is that if we modify the binary search to boost the success probability of certain
calls to g̃, we can still solve the problem in time O(T(m)).

4.1   Algorithm
The idea is inspired by [4] and very similar to [14] except that the calls to g̃ (the
“comparisons”) do not necessarily have unit cost. The difference with [14] is
apparent in the complexity since we can avoid the extra log factor when the cost
of each comparison becomes high enough. For reasons that become clear in the
proof, we need to boost some calls’ success probability. We do so by repeating


Algorithm 5 Binary search for a function with two-sided errors
  ℓ ← 1, r ← m + 1                                                     . search interval
  d←1                                                              . depth of the search
  while ℓ < r do
     mid ← ⌊(ℓ + r)/2⌋
     vl ← g̃(ℓ, mid)                          . repeat d times and take the majority
     if vl = 0 then
         ℓ ← mid + 1
     else
         r ← mid
     d←d+1



them several times and taking the majority: by this we mean that we take the
most common answer, and return an error in the case of a tie.
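Algorithm 5 can be sketched classically as follows. For simplicity this rendition repeats each call an odd number of times (2d + 1) so that no tie-breaking is needed, a slight deviation from the pseudocode above; `g_noisy` models the noisy interval oracle g̃.

```python
# Sketch of Algorithm 5: binary search over a noisy interval oracle
# g_noisy(l, r) ~ g_M(l, r); the number of repetitions grows with the
# depth d, so late (cheap) calls are boosted the most.
def noisy_binary_search(m, g_noisy):
    l, r = 1, m + 1
    d = 1
    while l < r:
        mid = (l + r) // 2
        votes = sum(g_noisy(l, mid) for _ in range(2 * d + 1))  # odd count
        if votes <= d:      # majority says: no marked element in [l, mid]
            l = mid + 1
        else:
            r = mid
        d += 1
    return l                # first marked position, or m + 1 if none
```

With an exact oracle for M = {42} over m = 100, the search returns 42; with no marked element it returns m + 1.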
Proposition 1. Assume that T satisfies T(m/k) = O(T(m)/k^α) for some α >
0 and every m and k. Then, with probability more than 0.5, Algorithm 5 returns
the position of the first marked element, or m + 1 if none exists. The running
time is O(T(m)).
Proof. The correctness of the algorithm, when there are no errors, is clear. We
need to argue about the complexity and error probability.
    At the u-th iteration of the loop, the algorithm considers a segment [ℓ, r] of
length at most m · 2^{−(u−1)}. The complexity of g̃(ℓ, mid) is at most O(T(r − ℓ)) =
O(T(m · 2^{−(u−1)})), but we repeat it 2u times, so the total complexity of the u-th
iteration is O(u · T(m · 2^{−(u−1)})). The number of iterations is at most log₂ m.
Hence, the total complexity is

  O( Σ_{u=1}^{log₂ m} u · T(m · 2^{−(u−1)}) ) = O( Σ_{u=1}^{log₂ m} T(m) · 2^{−αu} · u )
                                              = O( T(m) Σ_{u=1}^{∞} 2^{−αu} · u )
                                              = O( T(m) · 2^α / (2^α − 1)² ) = O(T(m)).
    Finally, we need to analyze the success probability of the algorithm: at the
u-th iteration, the algorithm will run each test 2u times, and each test has a
constant probability of failure p. Hence, for the algorithm to fail at iteration u,
at least half of the 2u runs must fail: this happens with probability at most
C(2u, u) p^u ≤ (2ue/u)^u p^u ≤ (2ep)^u, where e = exp(1). Hence, the probability that the
algorithm fails is bounded by Σ_{u=1}^{log₂ m} (2ep)^u ≤ Σ_{u=1}^{∞} (2ep)^u ≤ 2ep/(1 − 2ep). By taking p
small enough (say 2ep < 1/3), which is always possible by repeating the calls to
g̃ a constant number of times to boost the probability, we can ensure that the
algorithm fails less than half of the time.
Remark 1. The condition T(m/k) = O(T(m)/k^α) for some α > 0 and every m
and k is clearly satisfied by any function of the form T(m) = m^α log^β m · log^γ log m.


4.2   Application to Quantum Search

A particularly useful application of the previous section is for quantum search,
particularly when g̃ is a Grover-like search. Indeed, Grover's search can decide in
O(√m) queries if a marked element exists in an array of size m, with a constant
probability of error.
     More precisely, assume that we have a function f : {1, . . . , m} → {0, 1} and
the task is to find the minimal x ∈ {1, . . . , m} such that f(x) = 1. If we let
g̃(ℓ, r) = GROVER(ℓ, r, f), then g̃ has query complexity T(m) = √m and fails
with constant probability. Hence, we can apply Proposition 1 and obtain an al-
gorithm to find the first marked element with query complexity T(m) = O(√m)
and constant probability of error. In fact, note that we are not making use of
Proposition 1 to its full strength because g̃ really has one-sided error: it will never
return 1 if there are no marked elements. We will make use of this observation
later.

Proposition 2. There is a quantum algorithm that finds the first marked ele-
ment in an array of size m in O(√m) queries and with error probability < 0.5.

    As observed above, we are not really using Proposition 1 to its full strength
because Grover's search has one-sided error. This suggests that there is room for
improvement. Suppose that we now only have access to a two-sided probabilistic
approximation f̃ of f. In other words, f̃ can now make mistakes: it can return 1
for an unmarked element or 0 for a marked element with some small probability.
Formally, f̃(x) = f(x) with probability at least 1 − p and 1 − f(x) otherwise,
for some probability p < 1/2. We cannot apply Grover's search directly in this
case, but some variants have been developed that can handle bounded errors [17].
Using this result, we can build a two-sided error function g̃ with high probability
of success and time complexity O(√m). Applying Proposition 1 again, we obtain
the following improvement:

Proposition 3. There exists a quantum algorithm FindFirst that finds the
first marked element in an array of size m in O(√m) queries and with error
probability less than 0.5, even when the oracle access to the array has a two-sided
error.

   In quantum computing, f rarely has two-sided errors. For instance, Grover’s
search has a one-sided error only. If we assume that f˜ has one-sided error only,
we can obtain a slightly better version of Proposition 3. Formally, we assume
that f˜(x) = f (x) with probability at least 1 − p and 0 otherwise.

Proposition 4. There exists a quantum algorithm that finds the first marked
element in an array of size m in expected O(√x) queries and with error proba-
bility less than 0.5, where x is the position of the first marked element, or O(√m)
queries if none is marked. Furthermore, it works even when the oracle access to
the array has one-sided error. Additionally, it has a worst-case query complexity
of O(√m) in all cases.


Proof. Let FindFirst denote the algorithm from Proposition 3 and GroverTwoSided
denote the variant of Grover’s algorithm of [17] that works with two-sided error
oracles. Recall that we assume that f˜ has one-sided error, i.e. it may return 0
instead of 1 with small probability but not the other way around. Consider the
following algorithm:

Algorithm 6 FindFirstAdvanced(m, f ). Find the first marked element in an
array.
  r←1                                                         . size of the search space
  while r ≤ m and GroverTwoSided(1, r, f̃) = 0 do
     r ← min(m, 2r)
  return FindFirst(r, f̃)
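The doubling structure of Algorithm 6 can be illustrated with a classical stand-in, where exact helpers play the roles of GroverTwoSided and FindFirst. This sketches only the control flow, not the quantum subroutines; all names are illustrative.

```python
# Classical sketch of Algorithm 6's exponential doubling. The helpers
# any_marked / find_first stand in for GroverTwoSided / FindFirst and
# are exact here; in the paper both are noisy quantum subroutines.
def find_first_advanced(m, f):
    def any_marked(l, r):
        return any(f(i) for i in range(l, r + 1))
    def find_first(r):
        return next((i for i in range(1, r + 1) if f(i)), r + 1)
    r = 1
    while r < m and not any_marked(1, r):
        r = min(m, 2 * r)          # double the searched prefix
    return find_first(r)           # search only the O(x)-sized prefix
```

The doubling stops once the prefix [1, r] contains a marked element, so the final search runs on a prefix of size O(x), which is what yields the expected O(√x) bound in the quantum setting.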



     We now show that this algorithm satisfies the requirements of Proposition 4.
To simplify the proof, we assume that the array always contains a marked ele-
ment; this is without loss of generality because we can add an extra object at
the end that is always marked. Furthermore, we assume that m is a power of 2;
this is again without loss of generality because we can add dummy objects at the
end, at the cost of at most doubling the array size.
     Recall that f̃ has a one-sided error, and the same applies to GroverTwoSided
in this case. Therefore, the test GroverTwoSided(1, r, f̃) = 0 can only fail if
there actually is a marked element in the interval [1, r]. Of course, the problem
is that it can succeed even though there is a marked element in this interval. Let
p be the probability that this happens (i.e. GroverTwoSided fails); we know
that p < 1/2 by [17, Theorem 10]. Let x be the position of the first marked
element and let ℓ_x be such that 2^{ℓ_x} ≤ x < 2^{ℓ_x+1}. Let R be the value of r after the
loop; it is a random variable and always a power of 2. By the above reasoning,
it is always the case that R ≥ x. Furthermore, for any ℓ_x ≤ ℓ < log₂ m, the prob-
ability that R = 2^ℓ is at most p^{ℓ−ℓ_x}(1 − p). The call to FindFirst takes time
O(√R) by Proposition 3. Hence, the expected time complexity of this algorithm
is
     O( Σ_{ℓ=ℓ_x}^{log₂ m} p^{ℓ−ℓ_x} (1 − p) √(2^ℓ) ) = O( √(2^{ℓ_x}) Σ_{ℓ=0}^{∞} p^ℓ √(2^ℓ) )
                                                      = O( √(2^{ℓ_x}) · 1/(1 − √2·p) )
                                                      = O(√x)

where we assume that p is small enough. This is always possible by repeating the
calls to FindFirst a constant number of times to reduce the failure probability
p. Finally, we note that the only way this algorithm can fail is if the (unique)
call to FindFirst fails and this only happen with constant probability.



5   A Fast Quantum Implementation of Online Algorithm
    for k-server Problem on Trees

We consider a special way of storing a rooted tree. Assume that for each vertex
v we have access to a sequence a^v = (a^v_1 , . . . , a^v_d ) for d = dist(1, v) + 1. Here
a^v is the path from the root (the vertex 1) to the vertex v, with a^v_1 = 1, a^v_d = v.
Such a way of describing a tree is not uncommon, for example when the tree
represents a file system. A file path “c:\Users\MyUser\newdoc.txt” is exactly
such a path in the file system tree. Here “c”, “Users”, “MyUser” are ancestors
of “newdoc.txt”, “c” is the root and “newdoc.txt” is the node itself.
    We assume that we have access to the following two oracles in time O(1): (i)
given a vertex u, a (classical) oracle that returns the length of the string path
a^u ; (ii) given a vertex u and an index i, a quantum oracle that returns the i-th
vertex a^u_i of the sequence a^u .
    We can solve the k-server problem on trees using the same algorithm as in Section 3 with the following modifications: (i) the function LCA(u, v) becomes LCP($a^u$, $a^v$), where LCP($a^u$, $a^v$) is the longest common prefix of the two sequences $a^u$ and $a^v$; (ii) MoveUp(v, z) returns the vertex $a^v_{d-z}$, where $a^v$ is the sequence for $v$ and $d = |a^v|$; (iii) we can compute dist(u, v) when $u$ is an ancestor of $v$: it is $d'' - d'$, where $d'$ is the length of $a^u$ and $d''$ is the length of $a^v$. Note that the invocations of dist in Algorithms 4 and 2 are always of this form. The only exception is dist(u, v) in Sort, which uses LCA as a subroutine. The complexity of Sort is thus governed by the complexity of LCA, that is, of LCP in our case.
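The three path-based primitives above are straightforward classically; a minimal sketch (helper names are ours) under the convention that each vertex is given by its root-path tuple:

```python
# Classical versions of the three path-based primitives from the text.
# Each vertex v is represented by its root path a^v as a tuple.

def lcp(au, av):
    """Length of the longest common prefix of two root paths (replaces LCA)."""
    t = 0
    while t < min(len(au), len(av)) and au[t] == av[t]:
        t += 1
    return t  # for t >= 1, the vertex a^u_t (= a^v_t) is the LCA of u and v

def move_up(av, z):
    """MoveUp(v, z): the ancestor z edges above v, i.e. a^v_{d-z} with d = |a^v|."""
    return av[len(av) - z - 1]  # 1-indexed a^v_{d-z} is av[d - z - 1] in Python

def dist_ancestor(au, av):
    """dist(u, v) when u is an ancestor of v: the path-length difference d'' - d'."""
    return len(av) - len(au)
```

For example, with $a^u = (1, 2)$ and $a^v = (1, 2, 5, 7)$, the common prefix has length 2, so $u$ is the LCA, and dist(u, v) = 2.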
    By doing so, we do not need any preprocessing. We now replace the LCP($a^u$, $a^v$) function by a quantum subroutine QLCP($a^u$, $a^v$), presented in Section 5.1, and keep everything else as is. This subroutine runs in time $O(\sqrt{n}\log n)$ with error probability $O(1/n^3)$. This allows us to obtain the following result.


Theorem 3. There is a quantum algorithm that processes a request in time $O(k^2\sqrt{n}\log n)$ with error probability $O(1/n)$. It does not require any preprocessing.

Proof. The complexity of Move is the complexity of LCA (QLCP in our implementation) plus the complexity of MoveUp. The former is $O(\sqrt{n}\log n)$ by Lemma 5, and the latter is $O(1)$ by the oracle. Therefore, the total running time of Move is $O(\sqrt{n}\log n)$.
    The complexity of Request is $O(k^2)$ times the cost of LCA (QLCP in our implementation), plus a call to Move. Additionally, the Sort function invokes LCA to compute distances, so the complexity of Sort is $O(k\log k \cdot \sqrt{n}\log n)$, and the total complexity is $O(k^2\sqrt{n}\log n)$.
    We invoke QLCP at most $4k^2$ times, so the success probability is at least $(1 - 1/n^3)^{4k^2} \ge (1 - 1/n^3)^{4n^2} \ge 1 - O(1/n)$ for sufficiently large $n$ (using $k \le n$). Therefore, the error probability is $O(1/n)$. Note that we do not need any preprocessing.
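The success-probability estimate in the proof follows from Bernoulli's inequality, $(1-1/n^3)^{4n^2} \ge 1 - 4/n$, and can be sanity-checked numerically. The following standalone check (not part of the algorithm; the function name is ours) verifies the bound for a few values of $n$:

```python
# Check that (1 - 1/n^3)^(4 n^2) >= 1 - 4/n, the success-probability
# estimate used in the proof of Theorem 3.

def success_lower_bound(n):
    """The success probability after 4*n^2 calls, each failing with prob 1/n^3."""
    return (1.0 - 1.0 / n**3) ** (4 * n**2)

for n in [10, 100, 1000]:
    assert success_lower_bound(n) >= 1.0 - 4.0 / n
```

For instance, at $n = 10$ the left-hand side is about 0.67 while the bound requires only 0.6.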


5.1   Quantum Algorithm for Longest Common Prefix of Two
      Sequences
Let us consider the Longest Common Prefix (LCP) problem. Given two sequences $(q_1, \ldots, q_d)$ and $(b_1, \ldots, b_s)$, the problem is to find the largest $t$ such that $q_1 = b_1, \ldots, q_t = b_t$; in particular, $q_{t+1} \neq b_{t+1}$ whenever $t < m$, where $m = \min(d, s)$. Consider a function $f : \{1, \ldots, m\} \to \{0, 1\}$ such that $f(i) = 1$ iff $q_i \neq b_i$. If $x$ is the minimal argument such that $f(x) = 1$, then $t = x - 1$; if no such $x$ exists, then $t = m$. The LCP problem is therefore equivalent to finding the first marked element. It can be solved as follows.

Algorithm 7 QLCP(q, b). Quantum algorithm for the longest common prefix.
  m ← min(d, s)
  x ← FindFirst(m, f)               ▷ Repeat 3 log m times and take the majority vote
  if x = NULL then
      x ← m + 1
  return x − 1
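To make the reduction concrete, here is a classical simulation of Algorithm 7 in which the quantum FindFirst subroutine is replaced by a deterministic linear scan. The $O(\sqrt{m})$ speed-up and the majority vote are therefore absent; the function names are ours:

```python
# Classical simulation of Algorithm 7: the quantum FindFirst subroutine
# is replaced by a deterministic scan (no speed-up, no repetitions needed).

def find_first_classical(m, f):
    """Stand-in for FindFirst: smallest i in 1..m with f(i) = 1, else None."""
    for i in range(1, m + 1):
        if f(i):
            return i
    return None

def lcp_length(q, b):
    """Algorithm 7 with the classical stand-in: length of the common prefix."""
    m = min(len(q), len(b))
    f = lambda i: q[i - 1] != b[i - 1]  # f(i) = 1 iff q_i != b_i
    x = find_first_classical(m, f)
    if x is None:  # no mismatch: the shorter sequence is a prefix of the other
        x = m + 1
    return x - 1
```

Replacing `find_first_classical` by the quantum FindFirst (repeated $3\log m$ times with a majority vote) yields the complexity stated in Lemma 5.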



Lemma 5. Algorithm 7 finds the LCP of two sequences of length $m$ in time $O(\sqrt{m}\log m)$ and with error probability $O(1/m^3)$.

Proof. The correctness of the algorithm follows from the definition of $f$. The complexity of FindFirst is $O(\sqrt{m})$ by Proposition 2, so the total running time is $O(\sqrt{m}\log m)$ because of the $3\log m$ repetitions. By a Chernoff bound, the majority vote over these repetitions errs with probability $O(1/m^3)$.


References
 1. Ablayev, F., Ablayev, M., Huang, J.Z., Khadiev, K., Salikhova, N., Wu, D.: On
    quantum methods for machine learning problems part i: Quantum tools. Big Data
    Mining and Analytics 3(1), 41–55 (2019)
 2. Ablayev, F., Ablayev, M., Khadiev, K., Vasiliev, A.: Classical and quantum com-
    putations with restricted memory. LNCS 11011, 129–155 (2018)
 3. Ambainis, A.: Understanding quantum algorithms via query complexity. In: Proc.
    Int. Conf. of Math. 2018. vol. 4, pp. 3283–3304 (2018)
 4. Ambainis, A., Balodis, K., Iraids, J., Khadiev, K., Klevickis, V., Prusis, K., Shen,
    Y., Smotrovs, J., Vihrovs, J.: Quantum lower and upper bounds for 2d-grid and
    Dyck language. In: Proceedings of MFCS 2020. LIPIcs, vol. 170, pp. 8:1–8:14 (2020)
 5. Baliga, G.R., Shende, A.M.: On space bounded server algorithms. In: Proceedings
    of ICCI’93. pp. 77–81. IEEE (1993)
 6. Becchetti, L., Koutsoupias, E.: Competitive analysis of aggregate max in windowed
    streaming. In: ICALP. LNCS, vol. 5555, pp. 156–170 (2009)
 7. Bender, M.A., Farach-Colton, M.: The LCA problem revisited. In: Latin American
    Symposium on Theoretical Informatics. pp. 88–94. Springer (2000)
 8. Bender, M.A., Farach-Colton, M.: The level ancestor problem simplified. Theoret-
    ical Computer Science 321(1), 5–12 (2004)
 9. Boyar, J., Larsen, K.S., Maiti, A.: The frequent items problem in online stream-
    ing under various performance measures. International Journal of Foundations of
    Computer Science 26(4), 413–439 (2015)


10. Boyer, M., Brassard, G., Høyer, P., Tapp, A.: Tight bounds on quantum searching.
    Fortschritte der Physik 46(4-5), 493–505 (1998)
11. Chrobak, M., Larmore, L.L.: An optimal on-line algorithm for k servers on trees.
    SIAM Journal on Computing 20(1), 144–148 (1991)
12. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms.
    McGraw-Hill (2001)
13. Dürr, C., Høyer, P.: A quantum algorithm for finding the minimum. arXiv:quant-
    ph/9607014 (1996)
14. Feige, U., Raghavan, P., Peleg, D., Upfal, E.: Computing with noisy information.
    SIAM Journal on Computing 23(5), 1001–1018 (1994)
15. Flammini, M., Navarra, A., Nicosia, G.: Efficient offline algorithms for the bicriteria
    k-server problem and online applications. Journal of Discrete Algorithms 4(3), 414–
    432 (2006)
16. Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Pro-
    ceedings of STOC96. pp. 212–219. ACM (1996)
17. Høyer, P., Mosca, M., de Wolf, R.: Quantum search on bounded-error inputs. In:
    Automata, Languages and Programming (2003)
18. Hughes, S.: A new bound for space bounded server algorithms. In: Proceedings of
    the 33rd annual on Southeast regional conference. pp. 165–169 (1995)
19. Kapralov, R., Khadiev, K., Mokut, J., Shen, Y., Yagafarov, M.: Fast classical
    and quantum algorithms for online k-server problem on trees. arXiv preprint
    arXiv:2008.00270 (2021)
20. Karlin, A.R., Manasse, M.S., Rudolph, L., Sleator, D.D.: Competitive snoopy
    caching. In: FOCS, 1986., 27th Annual Symposium on. pp. 244–254. IEEE (1986)
21. Khadiev, K.: Quantum request-answer game with buffer model for online algo-
    rithms. application for the most frequent keyword problem. In: CEUR Workshop
    Proceedings. vol. 2850, pp. 16–27 (2021)
22. Khadiev, K., Khadieva, A.: Two-way quantum and classical machines with small
    memory for online minimization problems. In: International Conference on Micro-
    and Nano-Electronics 2018. Proc. SPIE, vol. 11022, p. 110222T (2019)
23. Khadiev, K., Khadieva, A.: Quantum online streaming algorithms with logarithmic
    memory. International Journal of Theoretical Physics 60, 608–616 (2021)
24. Khadiev, K., Khadieva, A., Mannapov, I.: Quantum online algorithms with respect
    to space and advice complexity. Lobachevskii J. of Math. 39(9), 1210–1220 (2018)
25. Khadiev, K., Khadieva, A.: Two-way quantum and classical automata with advice
    for online minimization problems. In: Reversibility in Programming, Languages,
    and Automata (RPLA 2019) proceedings. LNCS, Springer (2020)
26. Khadiev, K., Lin, D.: Quantum online algorithms for a model of the request-answer
    game with a buffer. Uchenye Zapiski Kazanskogo Universiteta. Seriya Fiziko-
    Matematicheskie Nauki 162(3), 367–382 (2020)
27. Komm, D.: An Introduction to Online Computation: Determinism, Randomiza-
    tion, Advice. Springer (2016)
28. Kothari, R.: An optimal quantum algorithm for the oracle identification problem.
    In: Proceedings of STACS 2014. p. 482 (2014)
29. Lin, C.Y.Y., Lin, H.H.: Upper bounds on quantum query complexity inspired by
    the elitzur-vaidman bomb tester. In: Proceedings of CCC 2015 (2015)
30. Nielsen, M.A., Chuang, I.L.: Quantum computation and quantum information.
    Cambridge univ. press (2010)
31. Rudec, T., Baumgartner, A., Manger, R.: A fast work function algorithm for solving
    the k-server problem. Cent Eur J Oper Res 21(1), 187–205 (2013)
