An Improved Transformation between HILL and
     Metric Conditional Pseudoentropy

                                    Maciej Skorski

             Cryptology and Data Security Group, University of Warsaw
                            maciej.skorski@gmail.com


       Abstract. HILL Entropy and Metric Entropy are generalizations of the
       information-theoretic notion of min-entropy to the setting where an
       adversary is computationally bounded.
       The notion of HILL Entropy appeared in the breakthrough construction
       of a PRG from any one-way function, and has become the most important
       and most widely used definition of computational entropy. In turn,
       Metric Entropy, which is defined as a relaxation of HILL Entropy, has
       been proven to be much easier to handle, in particular in the context of
       computational generalizations of the Dense Model Theorem.
       Fortunately, Metric Entropy can be converted, with some loss in quality,
       to HILL Entropy, as shown by Barak, Shaltiel and Wigderson.
       In this paper we improve their result, slightly reducing the loss in the
       quality of entropy. Interestingly, our bound is independent of the size of
       the probability space, in contrast to the result of Barak et al. Our
       approach is based on the theory of convex approximation in $L_p$-spaces.


1     Introduction
1.1   Computational Entropy
The Idea of Computational Entropy. The notion of entropy, as a measure
of randomness, is a fundamental concept in information theory. The need
for computational versions of entropy comes from the fact that security definitions
based on classical information-theoretic entropy notions are quite often too
strong for “practical” purposes, where the resources of adversaries are limited.
A distribution which is not perfectly random might still look random from the
computational point of view, when finding the real difference is simply infeasible.
The metrics used to quantify the amount and quality of pseudorandomness are
commonly called computational entropies.
HILL Entropy and Metric Entropy. The most popular approach to extending
information-theoretic notions of entropy to the computational case is based
on the notion of computational indistinguishability. We discuss it briefly below.
    Given a class $\mathcal{D}$ of boolean functions and a parameter $\epsilon$, we say that two n-bit
random variables X and Y are $(\mathcal{D}, \epsilon)$-indistinguishable if no function D from the
class $\mathcal{D}$ can distinguish X and Y with advantage better than $\epsilon$. Formally,
\[ |\mathbb{E}\, D(X) - \mathbb{E}\, D(Y)| \le \epsilon \quad \text{for all } D \in \mathcal{D}. \]

If $\mathcal{D}$ is the class of all circuits of size at most s, we abbreviate this notation
and say that X and Y are $(s, \epsilon)$-indistinguishable.
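To make the distinguishing advantage concrete, here is a minimal Python sketch (our illustration, not part of the paper) that estimates $|\mathbb{E}\, D(X) - \mathbb{E}\, D(Y)|$ by sampling; the distributions and the distinguisher are made-up stand-ins:

    import random

    def advantage(D, sample_X, sample_Y, trials=100_000):
        # Monte-Carlo estimate of |E D(X) - E D(Y)| for a boolean distinguisher D
        ex = sum(D(sample_X()) for _ in range(trials)) / trials
        ey = sum(D(sample_Y()) for _ in range(trials)) / trials
        return abs(ex - ey)

    n = 8
    sample_Y = lambda: random.getrandbits(n)       # Y: uniform n-bit string
    sample_X = lambda: random.getrandbits(n) & ~1  # X: last bit forced to 0
    D = lambda x: x & 1                            # distinguisher: read the last bit

    print(advantage(D, sample_X, sample_Y))        # close to 1/2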
    In theoretical computer science, and especially in cryptography, the most
popular notion of entropy is the min-entropy (in contrast to information theory,
where one extensively uses the Shannon entropy), because it measures randomness
in terms of “hardness” of prediction. The min-entropy of X given (possibly
correlated) Z is at least k if one cannot predict X given Z = z, on average¹ over z,
better than with probability $2^{-k}$. Formally, for X and Z one defines the average
min-entropy [4] of X given Z as follows:
\[ \widetilde{H}_\infty(X|Z) \ge k \quad \text{iff} \quad \mathbb{E}_{z \leftarrow Z}\left[ \max_x \Pr[X = x \mid Z = z] \right] \le 2^{-k}. \]
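As a sanity check on this definition, the following Python sketch (ours, not from the paper) computes the average min-entropy directly from a joint probability table, using the identity $\mathbb{E}_{z \leftarrow Z} \max_x \Pr[X=x \mid Z=z] = \sum_z \max_x \Pr[X=x, Z=z]$:

    import math
    from collections import defaultdict

    def avg_min_entropy(joint):
        # joint: dict {(x, z): Pr[X=x, Z=z]}; returns the average min-entropy of X given Z
        best = defaultdict(float)
        for (x, z), p in joint.items():
            best[z] = max(best[z], p)          # max_x Pr[X=x, Z=z]
        return -math.log2(sum(best.values()))

    # X uniform on {0,1}, Z a noisy copy of X (flipped with probability 1/4)
    joint = {(0, 0): 3/8, (0, 1): 1/8, (1, 0): 1/8, (1, 1): 3/8}
    print(avg_min_entropy(joint))              # -log2(3/4) ~ 0.415 bits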

Based on the concepts of indistinguishability and min-entropy, one defines the
computational HILL entropy [10] of X as the maximum amount of min-entropy in
a random variable Y (taking values in the same set as X) which is indistinguishable
from X. This idea was extended to the conditional case (the presence of
side information) by [11]. We can state the definition as follows:
      $H_\infty^{\mathrm{HILL},(s,\epsilon)}(X|Z) \ge k$ if and only if there exists Y jointly distributed with
      Z, of average min-entropy (given Z) at least k, such that $|\mathbb{E}\, D(X, Z) -
      \mathbb{E}\, D(Y, Z)| \le \epsilon$ for all circuits D of size at most s.

Note that this captures the standard notion of a pseudorandom distribution (for
k = n). By switching the order of quantifiers and restricting to deterministic
circuits one obtains a slightly weaker version, called computational metric
entropy [1, 5]:
      $H_\infty^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z) \ge k$ iff for every deterministic [0, 1]-valued circuit
      D of size at most s there exists Y jointly distributed with Z, of average
      min-entropy (given Z) at least k, such that $|\mathbb{E}\, D(X, Z) - \mathbb{E}\, D(Y, Z)| \le \epsilon$.

In both definitions the parameters s, $\epsilon$ quantify the quality of pseudorandomness:
the bigger s and the smaller $\epsilon$, the higher the quality. Metric entropy is known to
be equivalent to HILL entropy with the same amount of entropy and some loss in
the quality parameters [1, 2]. There are very good reasons to introduce and study
metric entropy: quite often it is much easier to prove a statement for metric
entropy and then pass to the HILL version. In fact, this strategy is unavoidable
for the standard proof technique, which uses the min-max theorem to switch the
order of players in a game. Therefore, many facts on HILL entropy use metric
entropy explicitly or implicitly [1, 2, 5, 7, 14, 17]. Perhaps the most spectacular
example is the efficient version of the Dense Model Theorem [5, 14], the key
ingredient of the famous result of Tao and Ziegler on primes in arithmetic
progressions [16]. The efficient version, which found many interesting applications
in complexity theory, was originally proved using the idea of metric computational
entropy in [14] and independently in [5]. A much simpler proof with significant
improvements in quality was given in [6].

¹ Sometimes one uses the stronger notion, called worst-case min-entropy, where we
  require the same upper bound on the guessing probability for every auxiliary input z,
  not only on average.
Conversions between HILL and metric entropy. The following result
states that metric and HILL computational entropy are equivalent up to some
loss in quality.

Theorem 1 (Transformation between Metric and HILL Computational
Entropy [1]). Let X and Z be, respectively, n-bit and m-bit correlated random
variables. Then
\[ H^{\mathrm{HILL},(s',\epsilon')}(X|Z) \ge H^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z) \]
where $s' = O\!\left(s \cdot \delta^2/(n+m)\right)$ and $\epsilon' = \epsilon + \delta$ for arbitrary $\delta \in (0, 1)$.
                              

Remark 1. Since we have $H^{\mathrm{HILL},(s,\epsilon)}(X|Z) \le H^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z)$, the
conversion in the other direction is lossless.

1.2   Our Contribution
Our result. We improve Theorem 1 of Barak, Shaltiel and Wigderson in the
following way:

Theorem 2 (Dimension-independent transformation between metric
and HILL entropy). For any n-bit random variable X and a correlated random
variable Z we have
\[ H^{\mathrm{HILL},(s',\epsilon')}(X|Z) \ge H^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z) \]
where $\delta \in (0, 1)$ is an arbitrary parameter, $s' = O\!\left(s \cdot \delta^2/(\Delta+1)\right)$, $\epsilon' = \epsilon + \delta$,
and $\Delta = n - k$, where k is the amount of metric entropy being converted.
In comparison to Theorem 1 we replace the factor n + m by $\Delta + 1$. Our result
shows that the conversion does not depend on the dimensions of the domains
of X and Z but only on the entropy deficiency $\Delta$. While this does not offer a
significant improvement in the asymptotic setting, it might be of some interest
when m is much larger than n (see for instance the equivalence between
HILL and unpredictability entropy of X given Z for short X [17]) or when the
deficiency $\Delta$ is very small. We also remark that our result automatically improves
the best known parameters for the efficient Dense Model Theorem by a factor
of n.
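For a rough sense of the gain, the following back-of-the-envelope computation (with made-up parameters) compares the circuit-size loss factors of Theorem 1 and Theorem 2:

    # hypothetical parameters: n-bit X, m-bit side information Z,
    # metric entropy k, approximation slack delta
    n, m, k, delta = 128, 1024, 120, 2**-7

    old = delta**2 / (n + m)      # Theorem 1: s' = O(s * delta^2 / (n+m))
    new = delta**2 / (n - k + 1)  # Theorem 2: s' = O(s * delta^2 / (Delta+1))

    print(new / old)              # (n+m)/(Delta+1) = 1152/9 = 128x smaller size loss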
Our techniques. Our result might also be interesting because of the novel proof
technique: instead of using Chernoff bounds to approximate convex hulls with
uniformly small error, as in [1], we show that it is enough to do the approximation
with respect to the p-th norm induced by some appropriately chosen measure,
and to optimize the value of p. There is a lot of research focused on achieving
better rates of convex approximation in $L_p$-spaces for restricted classes of
functions. In the case of the metric-to-HILL transformation (or similar results) it
might be possible to obtain some further improvements by restricting the classes
of adversaries.

1.3   Organization of the Paper
In Section 2 we explain basic notions and provide the necessary definitions. The
proof of our main technical result, together with an improved metric-to-HILL
transformation, appears in Section 3. In Section 4 we demonstrate a simple
application: a slight improvement over the best known parameters for the Dense
Model Theorem.


2     Preliminaries
Probabilities, measures and integrals. By $\mu_X$ or $P_X$ we denote the
probability mass function (distribution) of X, that is, $\mu_X(x) = P_X(x) = \Pr[X = x]$
for all x. A measure $\nu$ on a finite set $\Omega$ is a function $\nu : \Omega \to \mathbb{R}_{\ge 0}$. For
notational convenience, we use the signs of sums and integrals interchangeably.
The integral of a function D over E with respect to a measure $\nu$ is defined as
$\int_E D \,\mathrm{d}\nu = \sum_{x \in E} D(x)\nu(x)$. For the integral over the entire domain we omit the
subscript E.
$L_p$ spaces. Given a finite set $\Omega$ and a measure $\mu$ on $\Omega$, one defines the p-th norm
of a real-valued function D on $\Omega$ as $\|D\|_p = \left(\int_\Omega |D|^p \,\mathrm{d}\mu\right)^{1/p}$.
Convex combinations. Given a set $\mathcal{C}$ of real-valued functions defined on the
same domain, by $\mathrm{conv}_t(\mathcal{C})$ we denote the set of all convex combinations of length
at most t of members of $\mathcal{C}$. That is,
\[ \mathrm{conv}_t(\mathcal{C}) = \left\{ \sum_{i=1}^{t} \alpha_i D_i :\ \sum_{i=1}^{t} \alpha_i = 1,\ \alpha_i \ge 0 \text{ and } D_i \in \mathcal{C} \text{ for } i = 1, \dots, t \right\}.
\]
By $\mathrm{conv}(\mathcal{C})$ we denote convex combinations of arbitrary length.

Computational Entropy notions.
Definition 1 (Conditional HILL Pseudoentropy [11]). Let X, Z be jointly
distributed and suppose that there exists Y of conditional min-entropy at least k
given Z such that for all circuits D of size at most s we have
$|\mathbb{E}\, D(X, Z) - \mathbb{E}\, D(Y, Z)| \le \epsilon$. Then we say that X given Z has k bits of HILL
min-entropy of quality $(s, \epsilon)$ and write $H_\infty^{\mathrm{HILL},(s,\epsilon)}(X|Z) \ge k$.
Remark 2 (HILL entropy against different circuit classes). For conditional HILL
entropy all kinds of circuits (deterministic boolean, deterministic real-valued and
randomized boolean, of the same size s) are equivalent [6].
Definition 2 (Conditional Metric Pseudoentropy [5]). Let X, Z be jointly
distributed and suppose that for every deterministic boolean (respectively:
deterministic real-valued or randomized boolean) circuit D of size at most s
there exists Y of conditional min-entropy at least k given Z such that
$|\mathbb{E}\, D(X, Z) - \mathbb{E}\, D(Y, Z)| \le \epsilon$. Then we say that X given Z has k bits of
deterministic (respectively: deterministic real-valued or randomized boolean) metric
min-entropy of quality $(s, \epsilon)$ and write $H_\infty^{\mathrm{M,det}\{0,1\},(s,\epsilon)}(X|Z) \ge k$ (respectively:
$H_\infty^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z) \ge k$ and $H_\infty^{\mathrm{M,rand}\{0,1\},(s,\epsilon)}(X|Z) \ge k$).

3   Main Result
In this section we prove our main technical result, which immediately implies
Theorem 2.
Lemma 1 (Approximating long convex combinations with respect to
high-min-entropy distributions). Let X be an n-bit random variable, let
Z be a correlated m-bit random variable, and let $\mathcal{C}$ be a class of [0, 1]-valued
functions on $\{0,1\}^n \times \{0,1\}^m$. Let $D \in \mathrm{conv}(\mathcal{C})$. Then for $\ell = 49(n + 1 - k)/\delta^2$
there exists $D_\ell \in \mathrm{conv}_\ell(\mathcal{C})$ such that
\[ \mathbb{E}\, |D(X, Z) - D_\ell(X, Z)| \le \delta \tag{1} \]
and simultaneously
\[ \mathbb{E}\, |D(Y, Z) - D_\ell(Y, Z)| \le \delta \tag{2} \]
for every distribution Y jointly distributed with Z such that $\widetilde{H}_\infty(Y|Z) \ge k$.
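The phenomenon behind Lemma 1 can be observed empirically: picking $\ell$ functions i.i.d. according to the weights of a long convex combination and averaging them yields an approximation whose expected error decays like $\ell^{-1/2}$. The Python sketch below (our illustration with random data, not the proof) demonstrates this on a small domain:

    import random

    N = 2**10                                   # toy domain standing in for {0,1}^{n+m}
    funcs = [[random.random() for _ in range(N)] for _ in range(500)]  # class C
    alpha = [random.random() for _ in range(500)]
    s = sum(alpha)
    alpha = [a / s for a in alpha]              # weights of a long combination D
    D = [sum(a * f[x] for a, f in zip(alpha, funcs)) for x in range(N)]

    def l1_error(ell):
        # D_ell: average of ell functions sampled according to the weights alpha
        picks = random.choices(funcs, weights=alpha, k=ell)
        D_ell = [sum(f[x] for f in picks) / ell for x in range(N)]
        return sum(abs(D[x] - D_ell[x]) for x in range(N)) / N

    for ell in (4, 16, 64, 256):
        print(ell, l1_error(ell))               # shrinks roughly like 1/sqrt(ell)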
Corollary 1. Lemma 1 implies Theorem 2.

Proof (of Corollary 1). If $H_\infty^{\mathrm{HILL},(s',\epsilon')}(X|Z) < k$ then for every Y satisfying
$\widetilde{H}_\infty(Y|Z) \ge k$ we find D of size at most $s'$ such that $|\mathbb{E}\, D(X, Z) - \mathbb{E}\, D(Y, Z)| > \epsilon'$.
Replacing D by its complement $1 - D$ if necessary, we can assume that
$\mathbb{E}\, D(X, Z) - \mathbb{E}\, D(Y, Z) > \epsilon'$ for some D of size $s' + 1$. By applying the min-max
theorem we get that there exists a convex combination $\bar{D}$ of circuits of size at
most $s' + 1$ such that
\[ \mathbb{E}\, \bar{D}(X, Z) - \mathbb{E}\, \bar{D}(Y, Z) > \epsilon' \qquad \forall Y :\ \widetilde{H}_\infty(Y|Z) \ge k. \]
That combination might be very long. But applying Lemma 1 (with $\delta/2$ in place
of $\delta$) we can approximate it by a combination $D'$ of at most $O\!\left((n+1-k)/\delta^2\right)$
circuits of size $s' + 1$, in such a way that the expectations with respect to (X, Z)
and (Y, Z) differ by at most $\delta/2$. This way we obtain
\[ \mathbb{E}\, D'(X, Z) - \mathbb{E}\, D'(Y, Z) > \epsilon' - 2 \cdot \delta/2 = \epsilon \qquad \forall Y :\ \widetilde{H}_\infty(Y|Z) \ge k. \]
Since $D'$ is a convex combination of $O\!\left((n+1-k)/\delta^2\right)$ circuits of size $s' + 1$, it can
be computed by a single [0, 1]-valued circuit of size $O\!\left(s' \cdot (n+1-k)/\delta^2\right) = O(s)$,
which shows $H_\infty^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|Z) < k$ and finishes the proof. ⊓⊔
Now we prove our main approximation result.

Proof (of Lemma 1). Consider the space of all real-valued functions on
$\{0,1\}^n \times \{0,1\}^m$, equipped with the measure $\mu = \mu_{X,Z} + \mu_{U,Z}$, where U is
uniform over $\{0,1\}^n$ and independent of Z (so that $\int 1 \,\mathrm{d}\mu = 2$). We start with
the following trivial observation:

Claim 1. It suffices to show that for some $D' \in \mathrm{conv}_\ell(\mathcal{C})$ we have
$\int |D - D'| \,\mathrm{d}(\mu_{X,Z} + \mu_{Y,Z}) \le \delta$ for all Y such that $\widetilde{H}_\infty(Y|Z) \ge k$.

By applying the Hölder inequality (with respect to $\mu$), we immediately get

Claim 2. For all functions $D, D'$ and all $p, q \ge 1$ such that $\frac{1}{p} + \frac{1}{q} = 1$ we have
\[ \int |D - D'| \,\mathrm{d}(\mu_{X,Z} + \mu_{Y,Z}) \le \|D - D'\|_p \cdot \left\| \frac{\mu_{X,Z} + \mu_{Y,Z}}{\mu} \right\|_q. \tag{3} \]

Now we estimate both factors on the right-hand side of Equation (3).

Claim 3. If $q \in [1, 2]$ then for any Y such that $\widetilde{H}_\infty(Y|Z) \ge k$ we have
\[ \left\| \frac{\mu_{X,Z} + \mu_{Y,Z}}{\mu} \right\|_q \le \left( 2^q + 2^{(q-1)(n+1-k)} \right)^{1/q}. \tag{4} \]

Proof (of Claim 3). Recall the well-known inequality:

Proposition 1. If $a, b \ge 0$ and $q \ge 1$ then $(a+b)^q \le 2^{q-1}(a^q + b^q)$.

From Proposition 1 it follows that
\[ \left\| \frac{\mu_{X,Z} + \mu_{Y,Z}}{\mu} \right\|_q^q \le 2^{q-1}\left( \left\| \frac{\mu_{X,Z}}{\mu} \right\|_q^q + \left\| \frac{\mu_{Y,Z}}{\mu} \right\|_q^q \right). \tag{5} \]

We estimate the two terms in Equation (5) separately. Since $\mu_{X,Z}(x, z) <
\mu_{X,Z}(x, z) + \mu_{U,Z}(x, z) = \mu(x, z)$ for all x, z, we have
\[ \left\| \frac{\mu_{X,Z}}{\mu} \right\|_q^q < \int 1 \,\mathrm{d}\mu = 2. \tag{6} \]

To bound the second term, note that the functional $\mu_{Y,Z} \mapsto \left\| \frac{\mu_{X,Z} + \mu_{Y,Z}}{\mu} \right\|_q$
is convex as a function of $\mu_{Y,Z}$ (being a composition of an affine map and the
q-th norm). Therefore, its maximum over all distributions (Y, Z) satisfying
$\widetilde{H}_\infty(Y|Z) \ge k$, which form a convex set, is attained at an extreme point. This
means that the maximum is attained for a distribution $(Y^*, Z)$ such that the
distribution $Y^*|_{Z=z}$ is flat for every z and the conditional min-entropy of $Y^*$
given Z is exactly k. Since $\mu \ge \mu_{U,Z}$ pointwise, $\mu_{U,Z}(x, z) = \mu_U(x)\mu_Z(z)$ and
$\mu_{Y^*,Z}(x, z) = \mu_{Y^*|Z=z}(x)\mu_Z(z)$, we obtain
\[
\left\| \frac{\mu_{Y,Z}}{\mu} \right\|_q^q \le \int \left( \frac{\mu_{Y^*,Z}}{\mu} \right)^q \mathrm{d}\mu
\le \iint \left( \frac{\mu_{Y^*|Z=z}}{\mu_U} \right)^q \mathrm{d}\mu_U \,\mathrm{d}\mu_Z
= \int 2^{(q-1)\left(n - H_\infty(Y^*|Z=z)\right)} \,\mathrm{d}\mu_Z
= 2^{(q-1)n} \int 2^{-(q-1) H_\infty(Y^*|Z=z)} \,\mathrm{d}\mu_Z.
\]
By applying the Jensen inequality to the function $u \mapsto u^{q-1}$ (which is concave
by the assumption on q) we get
\[
\left\| \frac{\mu_{Y,Z}}{\mu} \right\|_q^q \le 2^{(q-1)n} \left( \int 2^{-H_\infty(Y^*|Z=z)} \,\mathrm{d}\mu_Z \right)^{q-1}
= 2^{(q-1)n} \cdot 2^{-(q-1)\widetilde{H}_\infty(Y^*|Z)} = 2^{(q-1)(n-k)}. \tag{7}
\]

Plugging Equation (7) and Equation (6) into Equation (5) yields
\[ \left\| \frac{\mu_{X,Z} + \mu_{Y,Z}}{\mu} \right\|_q^q \le 2^{q-1}\left( 2 + 2^{(q-1)(n-k)} \right) = 2^q + 2^{(q-1)(n+1-k)}, \]
and Equation (4) follows. ⊓⊔
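Claim 3 can be checked numerically on a toy example. The Python sketch below (ours, following the proof's conventions: $\mu = \mu_{X,Z} + \mu_{U,Z}$, and $Y|_{Z=z}$ flat on $2^k$ points so that $\widetilde{H}_\infty(Y|Z) = k$) evaluates the left- and right-hand sides of Equation (4):

    import random

    n, m, k, q = 4, 2, 2, 1.5                 # toy sizes, q in [1,2]
    X, Z = range(2**n), range(2**m)

    pXZ = {(x, z): random.random() for x in X for z in Z}
    tot = sum(pXZ.values())
    pXZ = {xz: p / tot for xz, p in pXZ.items()}          # random joint (X,Z)
    pZ = {z: sum(pXZ[(x, z)] for x in X) for z in Z}

    # Y|Z=z flat on the first 2^k points of the domain
    pYZ = {(x, z): (pZ[z] / 2**k if x < 2**k else 0.0) for x in X for z in Z}

    mu = {(x, z): pXZ[(x, z)] + pZ[z] / 2**n for x in X for z in Z}

    lhs = sum(((pXZ[xz] + pYZ[xz]) / mu[xz])**q * mu[xz] for xz in mu)**(1 / q)
    rhs = (2**q + 2**((q - 1) * (n + 1 - k)))**(1 / q)
    print(lhs, '<=', rhs)                     # the bound (4) holds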

Claim 4. Suppose that $p \ge 2$. Then for any $D \in \mathrm{conv}(\mathcal{C})$ and $\ell \ge 1$ there exists
$D_\ell \in \mathrm{conv}_\ell(\mathcal{C})$ such that $\|D - D_\ell\|_p < 1.74\sqrt{p/\ell}$.

Proof. The proof relies on the following result on rates of convex approximation,
which generalizes the famous Maurey-Jones-Barron theorem.

Lemma 2 (Convex approximation in $L_p$ spaces [3]). Let E be an $L_p$ space
with $1 \le p < +\infty$. Suppose that $S \subset E$, $f \in \mathrm{conv}(S)$ and let $K > 0$ be such that
for all $g \in S$ we have $\|g - f\|_p \le K$. Then for any $\ell$ we have
\[ \min_{s \in \mathrm{conv}_\ell(S)} \|f - s\|_p \le \frac{K C_p}{\ell^{1 - 1/t}} \]
where $t = \min(2, p)$, and $C_p = 1$ if $1 \le p \le 2$, $C_p = \sqrt{2}\left[\Gamma((p+1)/2)/\sqrt{\pi}\right]^{1/p}$ for
$2 < p < +\infty$.

Remark 3. The constant $C_p$ can be estimated using the following approximation
for the gamma function [12], valid for $x \ge 1$:
\[ \sqrt{\pi}\,(x/e)^x \sqrt{2x + 0.33} < \Gamma(x+1) < \sqrt{\pi}\,(x/e)^x \sqrt{2x + 0.36}. \]
From this we find that $C_p < 0.87\sqrt{p}$ for all $p > 2$.
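This estimate is also easy to verify numerically; here is a quick check of ours against the exact gamma function (via lgamma, to avoid overflow for large p):

    import math

    # C_p = sqrt(2) * (Gamma((p+1)/2) / sqrt(pi))^(1/p), claimed < 0.87 * sqrt(p)
    def C(p):
        return math.sqrt(2) * math.exp(
            (math.lgamma((p + 1) / 2) - 0.5 * math.log(math.pi)) / p)

    for p in (2.5, 4, 8, 64, 1024):
        print(p, C(p), 0.87 * math.sqrt(p))   # C_p stays below the bound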

The claim follows by taking E to be the space of [0, 1]-valued functions on
$\{0,1\}^n \times \{0,1\}^m$ with the norm induced by $\mu$, and $K = \int 1 \,\mathrm{d}\mu = 2$ (indeed,
$|g - D| \le 1$ pointwise, hence $\|g - D\|_p \le \left(\int 1 \,\mathrm{d}\mu\right)^{1/p} \le 2$); then
$K C_p/\sqrt{\ell} < 2 \cdot 0.87 \sqrt{p/\ell} = 1.74\sqrt{p/\ell}$. ⊓⊔

By Claim 3 and Claim 4 combined with Claim 2 and Claim 1, it suffices to find
$p \ge 2$ (which automatically ensures $q \in [1, 2]$) and $\ell$ such that
\[ 1.74\sqrt{p/\ell} \cdot \left( 2^q + 2^{(q-1)(n+1-k)} \right)^{1/q} \le \delta. \]
If $k > n - 1$ then we put $p = q = 2$. Then it suffices to ensure that
$1.74\sqrt{2/\ell}\left(2^2 + 2^2\right)^{1/2} \le \delta$, which is equivalent to $6.96/\sqrt{\ell} \le \delta$. Suppose
now that $k \le n - 1$. By the inequality $(a+b)^r \le a^r + b^r$, valid for $a, b \ge 0$ and
$0 < r \le 1$ (applied with $r = 1/q$, using $(q-1)/q = 1/p$), we see that it suffices
to have $1.74\sqrt{p/\ell}\left( 2 + 2^{(n+1-k)/p} \right) \le \delta$. For $p = n + 1 - k$ we obtain
$6.96\sqrt{p/\ell} \le \delta$. In both cases $\ell = 49(n+1-k)/\delta^2$ suffices, which finishes the
proof. ⊓⊔

4      Application to the Dense Model Theorem
Dense Model Theorem. Given a pair of two distributions W and V over the
same finite domain we say that W is δ-dense in V if and only if Pr[W = x] 6
Pr[V = x]/δ 2 . The dense model theorem [16], specialized to the boolean case,
can be formulated as follows:
Theorem 3 (Dense Model Theorem). Let $\mathcal{D}'$ be a class of n-bit boolean
functions, R be uniform over $\{0,1\}^n$, X be an n-bit random variable and let $X'$ be
δ-dense in X. If X and R are $(\mathcal{D}, \epsilon)$-indistinguishable then there exists a distribution
$R'$ which is δ-dense in R such that $X'$ and $R'$ are $(\mathcal{D}', \epsilon')$-indistinguishable,
where $\epsilon' = (\epsilon/\delta)^{O(1)}$ and $\mathcal{D}$ consists of all functions of the form $g(D_1, \dots, D_\ell)$
where $D_i \in \mathcal{D}'$, $\ell = \mathrm{poly}(1/\delta, 1/\epsilon)$ and g is some function.
Informally, this statement reads as follows: if a distribution $X'$ is dense in
a pseudorandom distribution X, then $X'$ must be indistinguishable from a
distribution dense in the uniform distribution. Note that the indistinguishability
parameters for $X'$ are worse than for X: to achieve $(\mathcal{D}', \epsilon')$-indistinguishability we
need to start with $\epsilon$ smaller than $\epsilon'$ and a class $\mathcal{D}$ sufficiently more complicated
than $\mathcal{D}'$. Note also that for the statement to be computationally meaningful we
need g to be efficient.
Applications of the Dense Model Theorem. Efficient versions of the
Dense Model Theorem have found applications in differential privacy [13],
pseudoentropy and leakage-resilient cryptography [2, 5], graph decompositions [14],
and further applications in additive combinatorics [9]. We refer the reader to [15]
for a survey.
Comparison of different formulations. In Table 1 below we compare different
versions of the Dense Model Theorem. Note that an equivalent statement in the
language of pseudoentropy was given in [5].

Table 1: A quantitative comparison of different versions of the Dense Model Theorem

     Author      Function g        $\ell$ as complexity of $\mathcal{D}$ w.r.t. $\mathcal{D}'$   $\epsilon'$ vs $\epsilon$
     [16]        inefficient       $\ell = \mathrm{poly}(1/(\epsilon/\delta), \log(1/\delta))$   $\epsilon' = O(\delta/\epsilon)$
     [8, 14]     linear threshold  $\ell = \mathrm{poly}(1/(\epsilon/\delta), \log(1/\delta))$   $\epsilon' = O(\delta/\epsilon)$
     [5], [6]    linear threshold  $\ell = O\!\left(n/(\epsilon/\delta)^2\right)$                $\epsilon' = O(\epsilon/\delta)$
     This paper  linear threshold  $\ell = O\!\left(\log(1/\delta)/(\epsilon/\delta)^2\right)$   $\epsilon' = O(\epsilon/\delta)$


Below we show how to derive from our Lemma 1 a version of the Dense Model
Theorem in which the factor n is replaced by log(1/δ), which is typically much
smaller.

Corollary 2. The Dense Model Theorem (Theorem 3) holds with $\epsilon' = O(\epsilon/\delta)$,
g being a linear threshold function, and $\ell = O\!\left(\log(1/\delta)/(\epsilon/\delta)^2\right)$.

Proof. We show how to reduce the formulation of the Dense Model Theorem to
a statement about HILL entropy. We start with the following observation:

Claim 5. $X'$ is δ-dense in X if and only if $X'$ is distributed as $X|_A$ for some
event A of probability δ.

Proof (of Claim 5). Suppose that $X'$ is δ-dense in X and consider a random
variable $A \in \{0,1\}$ jointly distributed with X as follows: $\Pr[X = x, A = 1] =
\delta \Pr[X' = x]$. By the assumption on X and $X'$ we have $\Pr[X = x, A = 1] \le
\Pr[X = x]$, and thus this joint distribution is well defined; in particular we get
$\Pr[A = 1] = \delta$ and $\Pr[X = x \mid A = 1] = \Pr[X' = x]$. On the other hand, if
$X' \stackrel{d}{=} X|_A$ then $\Pr[X' = x] = \Pr[X = x, A]/\Pr[A] \le \Pr[X = x]/\Pr[A]$, and
hence $X'$ is $\Pr[A]$-dense in X. ⊓⊔

The second fact we need is the so-called leakage lemma for metric entropy.

Lemma 3 ([7], reformulated). Let X be a random variable, A be an event
of probability δ, and let $\mathcal{D}$ be a class of [0, 1]-valued functions. Suppose that
there exists $D \in \mathcal{D}$ such that $\mathbb{E}\, D(X|_A) - \mathbb{E}\, D(Y) > \epsilon'$ for all Y of min-entropy
at least $k - \log(1/\Pr[A])$, where $\epsilon' = \epsilon/\Pr[A]$. Then there exists a function $D'$,
being a threshold of some $D \in \mathcal{D}$ (or of its complement), such that
$\mathbb{E}\, D'(X) - \mathbb{E}\, D'(Y) > \epsilon$ for all Y of min-entropy at least k.

The name “leakage lemma” is due to the fact that this implies
$H_\infty^{\mathrm{M,det}[0,1],(s,\epsilon)}(X|_A) \ge H_\infty^{\mathrm{M},\mathcal{D},(s',\epsilon/\Pr[A])}(X) - \log(1/\Pr[A])$ for $s' \approx s$. Now
we are ready to give the proof. Suppose, on the contrary, that the Dense Model
Theorem does not hold with the claimed parameters. Then for some event A of
probability δ, some $\epsilon'$, and every distribution Y of min-entropy $n - \log(1/\delta)$
(which is equivalent to being δ-dense in the uniform distribution) there exists
$D \in \mathcal{D}' \cup (1 - \mathcal{D}')$ such that
\[ \mathbb{E}\, D(X|_A) - \mathbb{E}\, D(Y) > \epsilon'. \]
By applying a min-max theorem we get that there exists a long convex combination
$\bar{D}$ of functions from $\mathcal{D}' \cup (1 - \mathcal{D}')$ such that
\[ \mathbb{E}\, \bar{D}(X|_A) - \mathbb{E}\, \bar{D}(Y) > \epsilon' \qquad \forall Y :\ H_\infty(Y) \ge n - \log(1/\delta). \]
Now we use our Lemma 1, with the class $\mathcal{D}' \cup (1 - \mathcal{D}')$ and δ replaced by $\epsilon'/3$,
to approximate $\bar{D}$ by a convex combination $D'$ of length $\ell = O\!\left(\log(1/\delta)/\epsilon'^2\right)$.
Then we get
\[ \mathbb{E}\, D'(X|_A) - \mathbb{E}\, D'(Y) > \epsilon'/3 \qquad \forall Y :\ H_\infty(Y) \ge n - \log(1/\delta). \]
Note that a threshold of $D'$ is a linear threshold of $\ell$ functions from $\mathcal{D}'$. By
Lemma 3 we can replace $D'$ by $D''$, which is a linear threshold of $\ell$ functions
from $\mathcal{D}'$ and satisfies
\[ \mathbb{E}\, D''(X) - \mathbb{E}\, D''(Y) > \delta\epsilon'/3 \qquad \forall Y :\ H_\infty(Y) \ge n. \]
For $\epsilon' \ge 3\epsilon/\delta$ this contradicts the $(\mathcal{D}, \epsilon)$-indistinguishability of X and R, which
finishes the proof. ⊓⊔

5    Conclusion
In this paper we improve the transformation between conditional metric and
HILL entropy, replacing the dimension factor by the entropy deficiency. This
result immediately translates into a slightly improved version of the Dense Model
Theorem. An interesting open question is to find complexity lower bounds for
such transformations.

References
 1. Barak, B., Shaltiel, R., Wigderson, A.: Computational Analogues of Entropy. In:
    Arora, S. et al. (eds.) RANDOM-APPROX 2003, pp. 200–215, Springer (2003)
 2. Chung, K.-M., Kalai, Y.T., Liu, F.-H., Raz, R.: Memory Delegation. Cryptology
    ePrint Archive, Report 2011/273, http://eprint.iacr.org/ (2011)
 3. Donahue, M.J., Darken, C., Gurvits, L., Sontag, E.: Rates of convex approximation
    in non-Hilbert spaces. Constructive Approximation, vol. 13:2, pp. 187–220,
    Springer-Verlag (1997)
 4. Dodis, Y., Ostrovsky, R., Reyzin, L., Smith, A.: Fuzzy Extractors: How to Generate
    Strong Keys from Biometrics and Other Noisy Data. SIAM J. Comput., vol. 38:1,
    pp. 97–139, Society for Industrial and Applied Mathematics,
    http://dx.doi.org/10.1137/060651380 (2008)
 5. Dziembowski, S., Pietrzak, K.: Leakage-Resilient Cryptography in the Standard
    Model. IACR Cryptology ePrint Archive, Report 2008/240,
    http://eprint.iacr.org/2008/240 (2008)
 6. Fuller, B., Reyzin, L.: Computational Entropy and Information Leakage. Cryptology
    ePrint Archive, Report 2012/466, http://eprint.iacr.org/ (2012)
 7. Fuller, B., Reyzin, L.: A unified approach to deterministic encryption: New
    constructions and a connection to computational entropy. In: TCC 2012, vol. 7194
    of LNCS, pp. 582–599, Springer (2012)
 8. Gowers, W.T.: Decompositions, approximate structure, transference, and the
    Hahn-Banach theorem. ArXiv e-prints,
    http://adsabs.harvard.edu/abs/2008arXiv0811.3103G (2008)
 9. Gowers, W.T., Wolf, J.: Linear Forms and Higher-Degree Uniformity for Functions
    on $\mathbb{F}_p^n$. Geometric and Functional Analysis, vol. 21:1, pp. 36–69, Birkhäuser
    Basel (2011)
10. Håstad, J., Impagliazzo, R., Levin, L.A., Luby, M.: A Pseudorandom Generator
    from any One-way Function. SIAM J. Comput., vol. 28:4, pp. 1364–1396 (1999)
11. Hsiao, C.-Y., Lu, C.-J., Reyzin, L.: Conditional Computational Entropy, or Toward
    Separating Pseudoentropy from Compressibility. In: EUROCRYPT 2007,
    pp. 169–186, Springer-Verlag, Berlin, Heidelberg (2007)
12. Mortici, C.: On Gosper's formula for the gamma function. Journal of Mathematical
    Inequalities, vol. 5:4, pp. 611–614 (2011)
13. Mironov, I., Pandey, O., Reingold, O., Vadhan, S.: Computational Differential
    Privacy. In: CRYPTO 2009, pp. 126–142, Springer-Verlag (2009)
14. Reingold, O., Trevisan, L., Tulsiani, M., Vadhan, S.: Dense Subsets of Pseudorandom
    Sets. In: FOCS 2008, pp. 76–85, IEEE Computer Society,
    http://dx.doi.org/10.1109/FOCS.2008.38 (2008)
15. Trevisan, L.: Dense Model Theorems and Their Applications. In: TCC 2011,
    pp. 55–57, Springer-Verlag (2011)
16. Tao, T., Ziegler, T.: The primes contain arbitrarily long polynomial progressions.
    Acta Mathematica, vol. 201:2, pp. 213–305, Springer Netherlands (2008)
17. Vadhan, S., Zheng, C.J.: Characterizing pseudoentropy and simplifying pseudorandom
    generator constructions. In: STOC 2012, pp. 817–836, ACM, New York (2012)