Asymptotical Information Bound of Consecutive
            Qubit Binary Testing

                   Anastasiia Varava and Grygoriy Zholtkevych

                     V.N. Karazin Kharkiv National University
    School of Mathematics and Mechanics, 4, Svobody Sqr., 61022, Kharkiv, Ukraine

                     {nastia.varava,g.zholtkevych}@gmail.com



        Abstract. The problem of estimating the quantity of information that
        can be obtained in the process of consecutive binary measurement of a
        qubit state is considered in the paper. The studied quantity is expressed
        as the Shannon mutual information between the initial qubit state and the
        outcomes of a consecutive measurement. It is demonstrated that the maximum
        of information is reached by projective measurements. The maximum quantity
        of accessible information is calculated. It is proved that in the case of
        an arbitrary binary test this maximum is attained asymptotically by
        consecutive measurements.

        Keywords. accessible classical information, Shannon mutual informa-
        tion, pure qubit state, consecutive qubit binary testing

        Key terms. MathematicalModel, MathematicalModelling, Research


1     Introduction
Industrial applications of quantum communication channels require the development
of sound methods for constructing them. It is well known that Shannon information
theory is the mathematical background for such methods, and the notion of Shannon
mutual information [1, 9] is a basic instrument of the classical theory of
communication. Thus, studying informational quantities of quantum communication
channels is a very important problem for building quantum information theory.
    The amount of accessible information about a quantum system is an important
information-theoretic quantity. This information is obtained by performing
quantum measurements on the given system. To study the possibility of obtaining
it, it is convenient to express it as the Shannon mutual information between the
qubit state and the outcomes of a measurement.
    One of the first problems investigated with respect to the discussed quantity
was the so-called problem of distinguishing quantum states. It consists of the
following: suppose that a quantum system is prepared in a state described by one
of the density operators ρ_i (i = 1, ..., n) with probability p_i. The goal is to
determine which state is given, using a single measurement of the considered
quantum system. Thus, the task is to find measurements providing the maximum of
Shannon mutual information. This problem was investigated by Holevo [4],
Davies [2] and many others. In particular, Holevo proved in [4] that the
obtainable information I is bounded from above by the so-called Holevo bound

$$ I \le S\Bigl(\sum_i p_i \rho_i\Bigr) - \sum_i p_i S(\rho_i), $$

where S(ρ) = −Tr(ρ log ρ) is the von Neumann entropy. It follows that the amount
of information obtainable by a single measurement never exceeds log(dim H),
where H is the state space of the quantum system.
    Consider a more general problem. Imagine that we do not have any preliminary
information on the possible states of a quantum system. In this case all possible
pure states are equiprobable, so we cannot work with a finite set of initial
states. Assume also that we have at our disposal a single measuring instrument.
How much classical information can we now obtain?
    In this paper we consider the simplest case of this problem. Suppose we have
an arbitrary pure qubit state and an instrument performing a binary test on it.
Assume that all possible initial states are equiprobable. Our aim is to obtain
classical information about the qubit state using only the given measuring
instrument. Certainly, we want to get as much classical information as possible,
so we investigate the amount of information accessible in this case.
    It is well known that in the case of a single measurement the maximum of
classical information is obtained by a projective measurement. We present a
simple proof of this fact. In addition, we calculate the maximum value of the
information and then show that in the case of an arbitrary measuring instrument
this value can be attained asymptotically using consecutive testing.
    This paper is organized as follows: in Section 2 the problem is set formally
and the formula for the Shannon mutual information of two random variables is
obtained. The first one describes the distribution of the parameter of the
initial state, and the second one corresponds to the measurement result. In
Section 3 a special kind of measuring instrument, which performs a projective
measurement, is considered. It is demonstrated that in this case we obtain the
maximum of accessible information. Further, in Section 4, it is proved that in
the general case consecutive measurements allow this value to be attained
asymptotically.


2     Shannon Mutual Information between the Qubit State
      and the Outcomes of a Consecutive Measurement
To set the problem formally we firstly need some basic definitions.
In quantum computing, a qubit is the quantum analogue of the classical bit.
Like a bit, a qubit has two basis states: |0⟩ and |1⟩. The difference is that a
qubit can be in a superposition of the basis states at the same time. This means
that a pure qubit state can be represented as a linear combination of |0⟩ and |1⟩:

$$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, $$

where α and β are probability amplitudes, which can in general both be complex
numbers such that |α|² + |β|² = 1.
   We use a density operator as a mathematical model of a qubit state.

Definition 1. Let H_n be an n-dimensional Hilbert space of a quantum system
with n levels. A nonnegative definite operator ρ on H_n is called a density
operator if it satisfies the condition Tr(ρ) = 1.

    A density operator ρ corresponds to a pure state if it is a one-dimensional
ortho-projector. In other cases ρ corresponds to a mixed state.
    We describe the considered pure state of a qubit by the corresponding density
operator ρ(θ, α). It can be represented by the following matrix:

$$ \rho(\theta,\alpha) =
   \begin{pmatrix}
     \dfrac{1}{2}-\theta & e^{i\alpha}\sqrt{\dfrac{1}{4}-\theta^{2}} \\[6pt]
     e^{-i\alpha}\sqrt{\dfrac{1}{4}-\theta^{2}} & \dfrac{1}{2}+\theta
   \end{pmatrix}, $$

where the parameters θ and α satisfy the conditions −1/2 ≤ θ ≤ 1/2 and 0 ≤ α < 2π.
    Let Θ be the equiprobability distribution on the segment [−1/2, 1/2]. It is
used to model the uncertainty of our knowledge about the parameter θ of the
initial state of the qubit.
    Let us describe the given measuring instrument using the following definition
of a qubit binary test [5].

Definition 2. A pair of nonnegative definite operators T = {M_0, M_1} on H_2
such that M_0 + M_1 = 1 is called a qubit binary test.

The mathematical model of a qubit binary test is described and studied in [10].
    Let M be the set of all possible qubit binary tests. Consider T ∈ M,
T = {M_0, M_1}. Let {|0⟩, |1⟩} be the orthonormal basis in H_2 such that |0⟩ and
|1⟩ are eigenvectors of the operator M_0 corresponding to the eigenvalues
0 ≤ m_1 ≤ m_2 ≤ 1, respectively. In this case the operators M_0 and M_1 are
represented as follows:

$$ M_0 = \begin{pmatrix} m_1 & 0 \\ 0 & m_2 \end{pmatrix}, \qquad
   M_1 = \begin{pmatrix} 1 - m_1 & 0 \\ 0 & 1 - m_2 \end{pmatrix}. $$

   Denote by Σ = {0, 1} the set of outcomes for the given qubit binary test.
The probability distribution on this set is defined by the following formulae:

                Pr(0 | ρ) = Tr(ρ · M0 ),   Pr(1 | ρ) = Tr(ρ · M1 ).
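    As a small illustration (our own numerical sketch, not part of the original
development; the function names are ours), the following Python fragment computes
these outcome probabilities for a state ρ(θ, α) and a test with diagonal operators
M_0 = diag(m_1, m_2), M_1 = 1 − M_0:

    import numpy as np

    def rho(theta, alpha):
        # Density matrix of the pure qubit state rho(theta, alpha).
        off = np.exp(1j * alpha) * np.sqrt(0.25 - theta ** 2)
        return np.array([[0.5 - theta, off],
                         [np.conj(off), 0.5 + theta]])

    def outcome_probs(theta, alpha, m1, m2):
        # Pr(0 | rho) = Tr(rho M0),  Pr(1 | rho) = Tr(rho M1).
        M0 = np.diag([m1, m2])
        M1 = np.eye(2) - M0
        r = rho(theta, alpha)
        return np.trace(r @ M0).real, np.trace(r @ M1).real

    # Example: outcome_probs(0.3, 0.0, 0.2, 0.7) returns approximately (0.6, 0.4).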

    As we have already mentioned, our aim is to obtain as much classical
information about the initial state as possible, using only the given instrument
to measure. For this purpose we perform consecutive qubit binary testing. In
other words, we repeat our measurement n times and observe the result of each
iteration. After n iterations we obtain a sequence x^(n) = (x_1, ..., x_n), where
x_i ∈ Σ (i = 1, ..., n). As shown in [10],

$$ \Pr\bigl(x^{(n)} \mid \rho(\theta,\alpha)\bigr) =
   m_1^{k}(1-m_1)^{n-k}\Bigl(\tfrac{1}{2}-\theta\Bigr) +
   m_2^{k}(1-m_2)^{n-k}\Bigl(\tfrac{1}{2}+\theta\Bigr), $$

where k is the number of '1' outcomes in the sequence x^(n).
    In further computations it is convenient to use some properties of the
Bernstein basis polynomials of degree n [7]:

$$ B_{n,k}(z) = \binom{n}{k} z^{k} (1-z)^{n-k}, $$

so we rewrite the last equation as follows:

$$ \Pr\bigl(x^{(n)} \mid \rho(\theta,\alpha)\bigr) =
   \binom{n}{k}^{-1}\Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr) B_{n,k}(m_1) +
   \Bigl(\tfrac{1}{2}+\theta\Bigr) B_{n,k}(m_2)\Bigr]. $$
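    A minimal Python sketch of this formula (our own illustration; the helper
names are ours) computes the probability of observing a particular outcome
sequence containing k ones:

    from math import comb

    def bernstein(n, k, z):
        # Bernstein basis polynomial B_{n,k}(z).
        return comb(n, k) * z ** k * (1 - z) ** (n - k)

    def seq_prob(n, k, theta, m1, m2):
        # Pr(x^(n) | rho(theta, alpha)) for a fixed sequence with k ones,
        # written in the Bernstein form above.
        return ((0.5 - theta) * bernstein(n, k, m1)
                + (0.5 + theta) * bernstein(n, k, m2)) / comb(n, k)

    # Sanity check: summing over all 2^n sequences, i.e. over k with multiplicity
    # comb(n, k), gives 1 for any theta in [-1/2, 1/2] and any 0 <= m1 <= m2 <= 1.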

    Let X^(n) ∈ {0, 1}^n be a random variable describing the outcome sequence.
The conditional density of X^(n) given Θ = θ is defined by the following formula:

$$ p\bigl(x^{(n)} \mid \theta\bigr) = \Pr\bigl(x^{(n)} \mid \rho(\theta,\alpha)\bigr). $$

    The main objective of this section is to compute the Shannon mutual
information [1, 9] of the random variables X^(n) and Θ. It is equal to

$$ I(X^{(n)};\Theta) = H(X^{(n)}) - H(X^{(n)} \mid \Theta), \qquad (1) $$

where H(X^(n)) is the Shannon entropy of X^(n), and H(X^(n)|Θ) is the conditional
entropy [1, 9]:

$$ H(X^{(n)} \mid \Theta) = \int_{-1/2}^{1/2} H(X^{(n)} \mid \Theta=\theta)\,d\theta. $$

    Let us compute the marginal distribution of the random variable X^(n), which
corresponds to the joint density function p(x^(n), θ). The latter is given by

$$ p\bigl(x^{(n)},\theta\bigr) = \binom{n}{k}^{-1}\Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr) B_{n,k}(m_1) +
   \Bigl(\tfrac{1}{2}+\theta\Bigr) B_{n,k}(m_2)\Bigr]. $$

    The marginal distribution p(x^(n)) is defined as follows:

$$ p\bigl(x^{(n)}\bigr) = \int_{-1/2}^{1/2} p\bigl(x^{(n)},\theta\bigr)\,d\theta
   = \binom{n}{k}^{-1}\Biggl[\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\int_{-1/2}^{1/2} d\theta
   - \bigl(B_{n,k}(m_1)-B_{n,k}(m_2)\bigr)\int_{-1/2}^{1/2}\theta\,d\theta\Biggr]
   = \binom{n}{k}^{-1}\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}. $$

    Firstly, let us compute the entropy of X^(n):

$$ H(X^{(n)}) = -\sum_{x^{(n)}\in\{0,1\}^n} p\bigl(x^{(n)}\bigr)\log p\bigl(x^{(n)}\bigr)
   = -\sum_{k=0}^{n}\binom{n}{k}\, p\bigl(x^{(n)}\bigr)\log p\bigl(x^{(n)}\bigr). $$

Substituting the expression for the marginal distribution, we obtain

$$ H(X^{(n)}) = \sum_{k=0}^{n}\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\log\binom{n}{k}
   - \sum_{k=0}^{n}\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\,
     \log\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}. \qquad (2) $$

Secondly, we can compute H(X^(n)|Θ):

$$ H(X^{(n)} \mid \Theta) = \int_{-1/2}^{1/2} H(X^{(n)} \mid \Theta=\theta)\,d\theta
   = -\int_{-1/2}^{1/2}\sum_{k=0}^{n}\binom{n}{k}\,
     p\bigl(x^{(n)}\mid\theta\bigr)\log p\bigl(x^{(n)}\mid\theta\bigr)\,d\theta. $$

By direct calculation it is easy to check that

$$ H(X^{(n)} \mid \Theta) = \sum_{k=0}^{n}\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\log\binom{n}{k}
   - \sum_{k=0}^{n}\int_{-1/2}^{1/2}
     \Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]
     \log\Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]\,d\theta. \qquad (3) $$

    Now, using (1), (2) and (3), one can obtain the expression for the Shannon
mutual information:

$$ I(X^{(n)};\Theta) = \sum_{k=0}^{n}\Biggl\{\int_{-1/2}^{1/2}
   \Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]
   \log\Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]\,d\theta
   - \frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\log\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\Biggr\}. $$

    It is easy to see that if k is such that B_{n,k}(m_1) = B_{n,k}(m_2), then the
corresponding summand is equal to zero:

$$ \int_{-1/2}^{1/2} B_{n,k}(m_1)\log B_{n,k}(m_1)\,d\theta
   - B_{n,k}(m_1)\log B_{n,k}(m_1) = 0. $$

Note that in the case of equality m_1 = m_2 the polynomials B_{n,k}(m_1) and
B_{n,k}(m_2) are equal for each k, so we cannot obtain any information about
the initial state. Thus we further consider the case m_1 < m_2.
    Let A_n be the set of non-negative integers defined as follows:

$$ A_n = \{k \in \{0, ..., n\} \mid B_{n,k}(m_1) \ne B_{n,k}(m_2)\}. $$


    Denote by \overline{A_n} the complement of the set A_n, i.e.
\overline{A_n} = {0, ..., n} \ A_n. Now we could restrict attention to the values
of k from A_n, but we keep working with all summands; we demonstrate further that
this is more convenient. Instead of omitting the zero summands (those with
k ∈ \overline{A_n}), let us present them in the following way:

$$ 0 = \Bigl(B_{n,k}(m_1) - \frac{B_{n,k}(m_1)}{2\ln 2}\Bigr)
     - \frac{2\ln 2 - 1}{2\ln 2}\, B_{n,k}(m_1). $$


    Thus, we have

$$ I(X^{(n)};\Theta) = \sum_{k\in A_n}\Biggl\{\int_{-1/2}^{1/2}
   \Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]
   \log\Bigl[\Bigl(\tfrac{1}{2}-\theta\Bigr)B_{n,k}(m_1)+\Bigl(\tfrac{1}{2}+\theta\Bigr)B_{n,k}(m_2)\Bigr]\,d\theta
   - \frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\log\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\Biggr\}
   + \sum_{k\in\overline{A_n}}\Biggl[\Bigl(B_{n,k}(m_1)-\frac{B_{n,k}(m_1)}{2\ln 2}\Bigr)
   - \frac{2\ln 2-1}{2\ln 2}\,B_{n,k}(m_1)\Biggr]. $$



    After integration we obtain

$$ I(X^{(n)};\Theta) = \sum_{k\in A_n}\Biggl[-\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{4\ln 2}
   + \frac{B_{n,k}(m_2)^{2}\log B_{n,k}(m_2)-B_{n,k}(m_1)^{2}\log B_{n,k}(m_1)}
          {2\bigl(B_{n,k}(m_2)-B_{n,k}(m_1)\bigr)}
   - \frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\log\frac{B_{n,k}(m_1)+B_{n,k}(m_2)}{2}\Biggr]
   + \sum_{k\in\overline{A_n}}\Biggl[\Bigl(B_{n,k}(m_1)-\frac{B_{n,k}(m_1)}{2\ln 2}\Bigr)
   - \frac{2\ln 2-1}{2\ln 2}\,B_{n,k}(m_1)\Biggr]. $$

    Taking into account that for k from the set \overline{A_n} the equality
B_{n,k}(m_1) = B_{n,k}(m_2) holds, we can rewrite this formula as follows:

$$ I(X^{(n)};\Theta) = \sum_{k=0}^{n}\frac{2\ln 2-1}{4\ln 2}\bigl(B_{n,k}(m_1)+B_{n,k}(m_2)\bigr)
   - \sum_{k\in\overline{A_n}}\frac{2\ln 2-1}{2\ln 2}\,B_{n,k}(m_1)
   + \sum_{k\in A_n}\Biggl[\frac{B_{n,k}(m_2)^{2}\log B_{n,k}(m_2)-B_{n,k}(m_1)^{2}\log B_{n,k}(m_1)}
          {2\bigl(B_{n,k}(m_2)-B_{n,k}(m_1)\bigr)}
   - \frac{\bigl(B_{n,k}(m_1)+B_{n,k}(m_2)\bigr)\log\bigl(B_{n,k}(m_1)+B_{n,k}(m_2)\bigr)}{2}\Biggr]. $$
    Simplifying this expression and using the evident equality
Σ_{k=0}^{n} B_{n,k}(z) = 1, we get the following expression for the mutual
information:

$$ I(X^{(n)};\Theta) = \frac{2\ln 2-1}{2\ln 2}
   - \sum_{k\in\overline{A_n}}\frac{2\ln 2-1}{2\ln 2}\,B_{n,k}(m_1)
   + \frac{1}{2}\sum_{k\in A_n}\Biggl[
     \frac{B_{n,k}(m_2)^{2}\log B_{n,k}(m_2)-B_{n,k}(m_1)^{2}\log B_{n,k}(m_1)}
          {B_{n,k}(m_2)-B_{n,k}(m_1)}
   - \frac{\bigl(B_{n,k}(m_2)^{2}-B_{n,k}(m_1)^{2}\bigr)\log\bigl(B_{n,k}(m_1)+B_{n,k}(m_2)\bigr)}
          {B_{n,k}(m_2)-B_{n,k}(m_1)}\Biggr]. \qquad (4) $$
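    As a cross-check of (4), the following Python sketch (our own illustration,
not part of the paper; it assumes 0 < m_1 < m_2 < 1) evaluates I(X^(n); Θ) by
brute-force numerical integration of the definition H(X^(n)) − H(X^(n)|Θ); its
output should agree with the closed form (4) up to the integration error:

    import numpy as np
    from math import comb, log2

    def bernstein(n, k, z):
        return comb(n, k) * z ** k * (1 - z) ** (n - k)

    def mutual_information(n, m1, m2, grid=20000):
        # I(X^(n); Theta) in bits, by midpoint integration over theta.
        dtheta = 1.0 / grid
        thetas = np.linspace(-0.5 + dtheta / 2, 0.5 - dtheta / 2, grid)
        total = 0.0
        for k in range(n + 1):
            b1, b2 = bernstein(n, k, m1), bernstein(n, k, m2)
            p_theta = (0.5 - thetas) * b1 + (0.5 + thetas) * b2   # comb(n,k) * p(x|theta)
            p_avg = (b1 + b2) / 2.0                                # comb(n,k) * p(x)
            total += np.sum(p_theta * np.log2(p_theta)) * dtheta - p_avg * log2(p_avg)
        return total

    # For 0 < m1 < m2 < 1 the value stays below (2 ln 2 - 1)/(2 ln 2) ~ 0.2787 bits
    # and, as shown in Section 4, approaches it as n grows.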


3     Extremal Property of Projective Measurements

In this section we consider a special kind of measurement. Let m_1 = 0 and
m_2 = 1. In this case the qubit binary test T = {M_0, M_1} looks as follows:

$$ M_0 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
   M_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}. $$

    The first interesting property of this measurement is its repeatability.
Indeed, note that M_i · M_j = δ_{i,j} · M_i. So, M_0 and M_1 are orthoprojectors,
and T is a projective measurement. Thus, in this case repeated measurements give
the same result. We demonstrate further that, due to this property, consecutive
testing does not provide any extra information. The second and most remarkable
property is that we can obtain the maximum of the accessible information if and
only if we use a projective measurement.
    First of all, let us compute the amount of information obtainable by the
considered measurement. In this case there are only two values of k for which the
corresponding polynomials B_{n,k}(m_1) and B_{n,k}(m_2) are not equal. So,
A_n = {0, n}, and

$$ \forall k \in \overline{A_n}:\quad B_{n,k}(m_1) = B_{n,k}(m_2) = 0. $$

Now, using (4), it is easy to see that for m_1 = 0 and m_2 = 1 we obtain

$$ I(X^{(n)};\Theta) = \frac{2\ln 2 - 1}{2\ln 2}. $$
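    For orientation (our own numerical remark, not in the original text), this
value evaluates to

$$ \frac{2\ln 2 - 1}{2\ln 2} = 1 - \frac{1}{2\ln 2} \approx 1 - 0.7213 \approx 0.2787\ \text{bits}, $$

and it does not depend on the number of iterations n.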
    Our next goal is to show that the obtained amount of information cannot be
reached using any other qubit binary test. For this purpose we investigate the
function describing the accessible information (4).
    At first, let us rewrite this function in a more suitable way. Consider the
general case, in which 0 < m_1 < m_2 < 1. We will demonstrate further that the
exceptional cases, when 0 = m_1 < m_2 < 1 or 0 < m_1 < m_2 = 1, are similar to the
general one. Taking into account that in the general case B_{n,k}(m_1) > 0 for all
k ∈ {0, ..., n}, let us denote the ratio between B_{n,k}(m_2) and B_{n,k}(m_1) by
a new function:

$$ t_{n,k}(m_1, m_2) = \frac{B_{n,k}(m_2)}{B_{n,k}(m_1)}. $$
    Thus, by direct calculation we obtain the following formula for the mutual
information:

$$ I(X^{(n)};\Theta) = \frac{2\ln 2-1}{2\ln 2}
   - \frac{2\ln 2-1}{2\ln 2}\sum_{k\in\overline{A_n}} B_{n,k}(m_1)
   - \frac{1}{2}\sum_{k\in A_n} B_{n,k}(m_1)\cdot
     \frac{t_{n,k}(m_1,m_2)^{2}\log\dfrac{t_{n,k}(m_1,m_2)}{t_{n,k}(m_1,m_2)+1}
           + \log\bigl(t_{n,k}(m_1,m_2)+1\bigr)}
          {1-t_{n,k}(m_1,m_2)}. \qquad (5) $$


    Let f(t) be the function defined as follows:

$$ f(t) = \frac{t^{2}\log\dfrac{t}{t+1} + \log(t+1)}{1-t}. $$
Now it is easy to see that formula (5) can be written as

$$ I(X^{(n)};\Theta) = \frac{2\ln 2-1}{2\ln 2}
   - \Biggl(\frac{2\ln 2-1}{2\ln 2}\sum_{k\in\overline{A_n}} B_{n,k}(m_1)
   + \frac{1}{2}\sum_{k\in A_n} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)\Biggr). \qquad (6) $$

    We claim that in the considered case (0 < m_1 < m_2 < 1) the mutual
information is less than (2 ln 2 − 1)/(2 ln 2). To prove this fact it is enough to
notice that the variable part of expression (6), i.e. the bracketed term taken
with the minus sign, is always negative. Indeed, we know that B_{n,k}(m_1) > 0,
and t_{n,k}(m_1, m_2) is a ratio of two positive values, so it is also positive.
In addition, it is easy to show in the classical way that f(t) > 0 for all
t ∈ (0, 1) ∪ (1, ∞).
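    These elementary properties of f are easy to confirm numerically (our own
sketch, not part of the proof):

    import numpy as np

    def f(t):
        # f(t) = (t^2 * log2(t/(t+1)) + log2(t+1)) / (1 - t), defined for t != 1.
        return (t ** 2 * np.log2(t / (t + 1)) + np.log2(t + 1)) / (1 - t)

    ts = np.concatenate([np.linspace(1e-6, 0.999, 1000),
                         np.linspace(1.001, 1e3, 1000)])
    assert np.all(f(ts) > 0)        # positivity on (0, 1) and (1, +inf)
    # f(1e-6) is of order 1e-6, consistent with lim_{t -> 0} f(t) = 0,
    # a fact used in Section 4.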
    Now we need to consider the special case in which 0 < m_1 < 1, m_2 = 1. It is
evident that the case m_1 = 0, 0 < m_2 < 1 is similar to it.
    Suppose that 0 < m_1 < 1, m_2 = 1. Then, on the one hand, for all k we have
B_{n,k}(m_1) > 0. On the other hand, B_{n,k}(m_2) = 0 for all k < n, and only for
k = n is B_{n,k}(m_2) = 1. It is easy to see that in this case we obtain

$$ I(X^{(n)};\Theta) = \frac{2\ln 2 - 1}{2\ln 2}
   + \frac{1}{2}\cdot
     \frac{B_{n,n}(m_1)^{2}\log B_{n,n}(m_1) - \bigl(B_{n,n}(m_1)^{2}-1\bigr)\log\bigl(1+B_{n,n}(m_1)\bigr)}
          {B_{n,n}(m_1)-1}
   = \frac{2\ln 2 - 1}{2\ln 2} - \frac{1}{2}\, f\bigl(B_{n,n}(m_1)\bigr)
   < \frac{2\ln 2 - 1}{2\ln 2}. $$
Thus, now we know that

$$ I(X^{(n)};\Theta) \le \frac{2\ln 2 - 1}{2\ln 2}, $$

and, in particular, the equality holds if and only if the considered binary test
is projective.


4     Asymptotic Properties of Consecutive Measurements

In the previous section we have considered the case when the given qubit binary
test is a projective measurement. We have proved that only this type of
measurement allows one to achieve the maximum of information about the initial
state. Since the measurement is projective, repeating the measuring procedure
does not provide any extra information. In addition, we have found the maximum
value of the accessible information:

$$ \max_{T\in M,\, n\in\mathbb{N}} I(X^{(n)};\Theta) = \frac{2\ln 2 - 1}{2\ln 2}. $$

    In this section we return to the general form of a qubit binary test, and we
work with consecutive qubit testing. So, this time we investigate the dependence
of the amount of information on n, the number of iterations. The objective of
this section is to prove that the maximum of accessible information can be
reached asymptotically by performing consecutive measurements using an arbitrary
qubit binary test.
    More precisely, our aim is to prove the following theorem:

Theorem 1. Suppose we have a pure qubit state and we perform consecutive qubit
binary testing using a given test T = {M_0, M_1}. Then for an arbitrary ε > 0
there exists a corresponding number of iterations n(ε) such that for all
subsequent iterations (n > n(ε)) the following inequality holds:

$$ \max_{T\in M,\, m\in\mathbb{N}} I(X^{(m)};\Theta) - I(X^{(n)};\Theta) < \varepsilon. $$
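    Before turning to the proof, the convergence is easy to observe numerically
(our own illustration; it reuses the mutual_information helper and the numpy
import sketched after formula (4)):

    bound = 1 - 1 / (2 * np.log(2))      # (2 ln 2 - 1)/(2 ln 2), about 0.2787 bits
    for n in (1, 5, 20, 80):
        gap = bound - mutual_information(n, m1=0.2, m2=0.7)
        print(n, gap)                     # the gap approaches 0 as n grows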

    In other words, since the mutual information can be written as

$$ I(X^{(n)};\Theta) = \frac{2\ln 2-1}{2\ln 2}
   - \Biggl(\frac{2\ln 2-1}{2\ln 2}\sum_{k\in\overline{A_n}} B_{n,k}(m_1)
   + \frac{1}{2}\sum_{k\in A_n} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)\Biggr), $$

we need to find n_1(ε) such that for all n > n_1(ε)

$$ \sum_{k\in A_n} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr) < \frac{\varepsilon}{2}, \qquad (7) $$

and n_2(ε) such that for all n > n_2(ε)

$$ \sum_{k\in\overline{A_n}} B_{n,k}(m_1) < \frac{\varepsilon}{2}. \qquad (8) $$

Therefore, for n > n(ε) = max{n_1(ε), n_2(ε)} both of these inequalities hold.
    Let us fix a certain positive value of ε. At first we consider the left-hand
side of inequality (7). Let us divide the set A_n into two non-intersecting
subsets:

$$ \Gamma_n(m_1) = \Bigl\{k \in A_n : \Bigl|\frac{k}{n} - m_1\Bigr| < \tilde\delta(m_1,m_2)\Bigr\}, \qquad
   \Delta_n(m_1) = \Bigl\{k \in A_n : \Bigl|\frac{k}{n} - m_1\Bigr| \ge \tilde\delta(m_1,m_2)\Bigr\}, $$

where \tilde\delta(m_1, m_2) is a certain positive function.
    It was demonstrated in [7] that for 0 < m_1 < 1 and δ > 0:

$$ \sum_{k:\,|k/n - m_1| \ge \delta} B_{n,k}(m_1) \le \frac{1}{4n\delta^{2}}, \qquad (9) $$

and in particular this bound applies to the sum over Δ_n(m_1) with
δ = \tilde\delta(m_1, m_2).

    On the one hand, it is easy to see that f(t) is a bounded function. Suppose
that f(t) < C for all t ∈ (0, 1) ∪ (1, ∞), where C is a certain positive constant.
Thus we have

$$ \sum_{k\in\Delta_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)
   \le \frac{C}{4n\tilde\delta^{2}(m_1,m_2)}. $$

So, we can choose a value n_{1,1}(ε) such that for all n > n_{1,1}(ε)

$$ \sum_{k\in\Delta_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr) < \frac{\varepsilon}{4}. $$

    On the other hand, we can see that when k is close to n · m_1, the value of
t_{n,k}(m_1, m_2) goes to zero as n goes to infinity. Since lim_{t→0} f(t) = 0,
there exists a value n_{1,2}(ε) such that for all n > n_{1,2}(ε)

$$ \sum_{k\in\Gamma_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr) < \frac{\varepsilon}{4}. $$

Now let n_1(ε) be the maximum of the values n_{1,1}(ε) and n_{1,2}(ε). Thus, for
all n > n_1(ε) inequality (7) holds.
    Finally, let us consider inequality (8). Note that in the case m_1 < m_2 the
set \overline{A_n} contains at most one element. Indeed, by construction
k ∈ \overline{A_n} if and only if B_{n,k}(m_1) = B_{n,k}(m_2). Solving this
equation for the variable k, we obtain

$$ k_0 = n \cdot \frac{\ln\bigl((1-m_1)/(1-m_2)\bigr)}
                      {\ln\bigl(((1-m_1)\cdot m_2)/((1-m_2)\cdot m_1)\bigr)}. $$

If n, m_1 and m_2 are such that k_0 is an integer, then \overline{A_n} = {k_0};
otherwise the set \overline{A_n} is empty. It is easy to show that
lim_{n→∞} B_{n,k_0}(m_1) = 0, so we can easily find n_2(ε) such that for all
n > n_2(ε) inequality (8) holds.
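    For the reader's convenience (a short derivation we add here; it is not in the
original text), the value k_0 follows directly from equating the two Bernstein
polynomials:

$$ m_1^{k}(1-m_1)^{n-k} = m_2^{k}(1-m_2)^{n-k}
   \;\Longleftrightarrow\;
   k\,\ln\frac{m_2(1-m_1)}{m_1(1-m_2)} = n\,\ln\frac{1-m_1}{1-m_2}
   \;\Longleftrightarrow\;
   k = n \cdot \frac{\ln\bigl((1-m_1)/(1-m_2)\bigr)}
                    {\ln\bigl((1-m_1)m_2/((1-m_2)m_1)\bigr)}. $$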
    Now let us build a rigorous proof of the considered statement using this
heuristic consideration. To do so, we first need several trivial propositions.

Proposition 1. The following statements are correct:

1. for all x ∈ (0, 1) the inequality x > ln(x + 1) is true;
2. for all x > 0 the inequality ln(x/(x + 1)) < −1/(x + 1) is true as well.

This proposition can be easily proved using the Taylor series expansion of ln(x)
and Euler's transform applied to this expansion.

Proposition 2. Let x, y ∈ (0, 1) and x ≠ y; then the following inequality holds:

$$ \Bigl(\frac{x}{y}\Bigr)^{y}\Bigl(\frac{1-x}{1-y}\Bigr)^{1-y} < 1. $$

The proof is omitted.
    Now we can prove the theorem formulated above.

Proof (of Theorem 1). As we already know, the mutual information can be
presented as

$$ I(X^{(n)};\Theta) = \frac{2\ln 2-1}{2\ln 2}
   - \Biggl(\frac{2\ln 2-1}{2\ln 2}\sum_{k\in\overline{A_n}} B_{n,k}(m_1)
   + \frac{1}{2}\sum_{k\in A_n} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)\Biggr). $$

We also know that

$$ \max_{T\in M,\, n\in\mathbb{N}} I(X^{(n)};\Theta) = \frac{2\ln 2 - 1}{2\ln 2}. $$

    To prove the theorem it is enough to show that there exists n(ε) such that
for all n > n(ε)

$$ \sum_{k\in A_n} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)
   + \sum_{k\in\overline{A_n}} B_{n,k}(m_1) < \varepsilon. $$

    Consider an arbitrary ε > 0. Let us divide the set A_n into the subsets
Γ_n(m_1) and Δ_n(m_1) in the following way:

$$ \Gamma_n(m_1) = \Bigl\{k \in A_n : \Bigl|\frac{k}{n} - m_1\Bigr| < \tilde\delta(m_1,m_2)\Bigr\}, \qquad
   \Delta_n(m_1) = \Bigl\{k \in A_n : \Bigl|\frac{k}{n} - m_1\Bigr| \ge \tilde\delta(m_1,m_2)\Bigr\}, $$

where \tilde\delta(m_1, m_2) is a certain positive function of m_1 and m_2 defined
further. Our aim is to prove that there exists n(ε) such that for all n > n(ε):

$$ \sum_{k\in\Delta_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr) < \frac{\varepsilon}{4}, \qquad (10) $$

$$ \sum_{k\in\Gamma_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr) < \frac{\varepsilon}{4}, \qquad (11) $$

$$ \sum_{k\in\overline{A_n}} B_{n,k}(m_1) < \frac{\varepsilon}{2}. \qquad (12) $$

    Firstly, let us consider inequality (10). We have already mentioned that
f(t) ≥ 0 for all t ∈ (0, 1) ∪ (1, ∞), and it is easy to see that for the
considered values of t we have f(t) < 2. So, using the above-mentioned property
(9) of the Bernstein basis polynomials, we have

$$ \sum_{k\in\Delta_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)
   \le \frac{1}{2n\tilde\delta^{2}(m_1,m_2)}. $$

    Let n_{1,1}(ε) be sufficiently large. Then for all n > n_{1,1}(ε) the
considered inequality (10) holds.
    Secondly, let us consider inequality (11). On the one hand, it is not hard to
find δ(ε) such that

$$ \forall t \in (0, \delta(\varepsilon)):\quad
   f(t) = \frac{t^{2}\log\dfrac{t}{t+1} + \log(t+1)}{1-t} < \frac{\varepsilon}{4}. $$

On the other hand, for sufficiently large values of n we have
t_{n,k}(m_1, m_2) < δ(ε) for all k ∈ Γ_n(m_1). It follows that for large values
of n we obtain

$$ \sum_{k\in\Gamma_n(m_1)} B_{n,k}(m_1)\, f\bigl(t_{n,k}(m_1,m_2)\bigr)
   < \frac{\varepsilon}{4}\sum_{k=0}^{n} B_{n,k}(m_1) = \frac{\varepsilon}{4}. $$
    Let

$$ \delta(\varepsilon) = \min\Biggl\{\frac{\sqrt{1+(\varepsilon\ln 2/2)^{2}}-1}{\varepsilon\ln 2/2};\ \frac{1}{2}\Biggr\}. $$

Then for t ∈ (0, δ(ε)) we have t < (ε ln 2/4)(1 − t²). Since 0 < t ≤ 1/2 < 1, we
obtain

$$ \frac{t - \frac{t^{2}}{t+1}}{(1-t)\ln 2} = \frac{t}{(1-t^{2})\ln 2} < \frac{\varepsilon}{4}. $$

If we combine this with Proposition 1, then for all t ∈ (0, δ(ε)) we have

$$ f(t) = \frac{t^{2}\ln\dfrac{t}{t+1} + \ln(t+1)}{(1-t)\ln 2}
   < \frac{t + t^{2}\cdot\bigl(-\tfrac{1}{t+1}\bigr)}{(1-t)\ln 2}
   = \frac{t - \frac{t^{2}}{t+1}}{(1-t)\ln 2} < \frac{\varepsilon}{4}. $$
    Now we need to find n_{1,2}(ε) such that for all k ∈ Γ_n(m_1):
t_{n,k}(m_1, m_2) < δ(ε). By definition,

$$ t_{n,k}(m_1,m_2) = \Bigl(\frac{m_2}{m_1}\Bigr)^{k}\Bigl(\frac{1-m_2}{1-m_1}\Bigr)^{n-k}. $$

Since m_2/m_1 > 1, t_{n,k}(m_1, m_2) strictly increases with respect to k. So, if
k/n − m_1 < \tilde\delta(m_1, m_2), then

$$ t_{n,k}(m_1,m_2) <
   \Biggl(\Bigl(\frac{m_2}{m_1}\Bigr)^{m_1+\tilde\delta(m_1,m_2)}
   \Bigl(\frac{1-m_2}{1-m_1}\Bigr)^{1-m_1-\tilde\delta(m_1,m_2)}\Biggr)^{n}. $$

Consider the right-hand side of this inequality. Note that

$$ \Bigl(\frac{m_2}{m_1}\Bigr)^{m_1+\tilde\delta(m_1,m_2)}
   \Bigl(\frac{1-m_2}{1-m_1}\Bigr)^{1-m_1-\tilde\delta(m_1,m_2)} $$

strictly increases with respect to \tilde\delta(m_1, m_2). It is equal to 1 when
\tilde\delta(m_1, m_2) = \tilde\delta^{*}(m_1, m_2), where

$$ \tilde\delta^{*}(m_1, m_2) =
   \frac{\ln\bigl((1-m_2)/(1-m_1)\bigr)}
        {\ln\bigl((1-m_2)/(1-m_1)\cdot m_1/m_2\bigr)} - m_1. $$
    According to Proposition 2,

$$ \Bigl(\frac{m_2}{m_1}\Bigr)^{m_1}\Bigl(\frac{1-m_2}{1-m_1}\Bigr)^{1-m_1} < 1, $$

so we see that for \tilde\delta(m_1, m_2) ∈ (0, \tilde\delta^{*}(m_1, m_2)) we have

$$ \Bigl(\frac{m_2}{m_1}\Bigr)^{m_1+\tilde\delta(m_1,m_2)}
   \Bigl(\frac{1-m_2}{1-m_1}\Bigr)^{1-m_1-\tilde\delta(m_1,m_2)} < 1. $$
    Let \tilde\delta(m_1, m_2) = \tilde\delta^{*}(m_1, m_2)/2. Now it is easy to
choose n_{1,2}(ε) such that for all n > n_{1,2}(ε) inequality (11) holds.
    Finally, let us find n_2(ε) such that for n > n_2(ε) condition (12) is
satisfied. We have already seen that |\overline{A_n}| ≤ 1, and the equality holds
when

$$ k_0 = n \cdot \frac{\ln\bigl((1-m_1)/(1-m_2)\bigr)}
                      {\ln\bigl(((1-m_1)\cdot m_2)/((1-m_2)\cdot m_1)\bigr)} $$

is an integer. Let us denote the coefficient of n on the right-hand side by
c(m_1, m_2), so that k_0 = n · c(m_1, m_2).
    Referring to the standard way of proving Stirling's formula, we can write the
following inequality:

$$ \sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^{n} \le n! \le e\sqrt{n}\,\Bigl(\frac{n}{e}\Bigr)^{n}. $$
    It follows that

$$ \binom{n}{k} \le \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}}\cdot
   \Bigl(\frac{n}{k}\Bigr)^{k}\Bigl(\frac{n}{n-k}\Bigr)^{n-k}. $$

So, it is now easy to see that

$$ B_{n,k}(m_1) \le \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}}\cdot
   \Bigl(\frac{n\,m_1}{k}\Bigr)^{k}\Bigl(\frac{n(1-m_1)}{n-k}\Bigr)^{n-k}. $$
Substituting k_0 = n · c(m_1, m_2) for k in the last inequality, we get

$$ B_{n,k_0}(m_1) \le \frac{e}{2\pi}\sqrt{\frac{1}{n\, c(m_1,m_2)\bigl(1-c(m_1,m_2)\bigr)}}\cdot
   \Biggl(\Bigl(\frac{m_1}{c(m_1,m_2)}\Bigr)^{c(m_1,m_2)}
   \Bigl(\frac{1-m_1}{1-c(m_1,m_2)}\Bigr)^{1-c(m_1,m_2)}\Biggr)^{n}. \qquad (13) $$

It follows from Proposition 2 that

$$ \Bigl(\frac{m_1}{c(m_1,m_2)}\Bigr)^{c(m_1,m_2)}
   \Bigl(\frac{1-m_1}{1-c(m_1,m_2)}\Bigr)^{1-c(m_1,m_2)} < 1. $$
    Now, using (13), it is not hard to choose n_2(ε) so large that for n > n_2(ε)
inequality (12) holds.
    Finally, let n(ε) = max{n_{1,1}(ε), n_{1,2}(ε), n_2(ε)}. Now for all n > n(ε)

$$ \max_{T\in M,\, m\in\mathbb{N}} I(X^{(m)};\Theta) - I(X^{(n)};\Theta) < \varepsilon. $$

    The theorem is proved. □

5   Conclusions
In this paper the problem of obtaining classical information about a pure qubit
state using a single qubit binary test has been considered. It has been
demonstrated that the maximum of information is reached if and only if the
measurement used is projective. The maximum value of information has been
calculated:

$$ \max_{T\in M,\, n\in\mathbb{N}} I(X^{(n)};\Theta) = \frac{2\ln 2 - 1}{2\ln 2}. $$

It follows, in particular, that to distinguish two arbitrary pure qubit states
using a single binary test it is necessary to have at least four pairs of qubits
prepared in the considered states.
    It has been shown that the maximum of reachable information can be attained
asymptotically using arbitrary consecutive qubit binary testing. Thus, if we have
a certain measuring instrument performing a qubit binary test, we can obtain an
amount of information arbitrarily close to the maximum.
    As is known [3, 6], Yu. Manin and R. Feynman proposed to use quantum systems
to simulate other quantum systems. The results obtained in the paper show that
this idea should be refined: one should take into account all dependences between
the input data, the behaviour of the simulating system, and the structure of the
output data.
    Our further research will deal with generalizing the results of the paper to
the case of an n-level quantum system and a measurement with m outcomes.


References
 1. Cover, T.M., Thomas, J.A.: Elements of Information Theory. John Wiley & Sons,
    Inc. (1991)
 2. Davies, E.B.: Information and Quantum Measurement. IEEE Trans. Inf. Theory
    IT-24, 596–599 (1978)
 3. Feynman, R.P.: Simulating Physics with Computers. Int. J. Theor. Phys., 21, 467–
    488 (1982)
 4. Holevo, A.S.: Bounds for the Quantity of Information Transmitted by a Quantum
    Communication Channel (in Russian). Problemy Peredachi Informatsii, vol. 9, 3,
    3–11 (1973)
 5. Holevo, A.S.: Statistical Structure of Quantum Theory. Springer-Verlag, Berlin
    (2001)
 6. Manin, Yu.I.: Mathematics as metaphor: selected essays of Yuri I. Manin. AMS
    (2007)
 7. Natanson, I.P.: Constructive function theory. Vol. 1, Ungar, New York (1964)
 8. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information,
    10th Anniversary edition. Cambridge University Press, Cambridge (2010)
 9. Shannon C.E., Weaver W.: The Mathematical Theory of Communication. Univer-
    sity of Illinois Press, Urbana (1949)
10. Thawi M., Zholtkevych G.M.: About One Model of Consecutive Qubit Binary
    Testing. Bull. Khark. Nat. Univ., vol. 890, 13, 71–81 (2010)