<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Indistinguishability and Unpredictability Hardcore Lemmas: New Proofs with Applications to Pseudoentropy</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Maciej</forename><surname>Skorski</surname></persName>
							<email>maciej.skorski@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Cryptology and Data Security Group</orgName>
								<orgName type="institution">University of Warsaw</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Indistinguishability and Unpredictability Hardcore Lemmas: New Proofs with Applications to Pseudoentropy</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">48EF5DDD1C9BEB0FFF606EC8623E2EAD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Hardcore lemmas are results in complexity theory which state that average-case hardness must have a very hard "kernel", that is, a subset of instances where the problem is extremely hard. Such results find important applications in hardness amplification. In this paper we revisit two classical results: (a) The hardcore lemma for unpredictability, proved first by Impagliazzo. It states that if a boolean function f is "moderately" hard to predict on average, then there must be a set of noticeable size on which f is "extremely" hard to predict. (b) The hardcore lemma for indistinguishability, proved by Maurer and Tessaro, which states that for two random variables X and Y which are ε-computationally close, there exist events A and B of probability 1 − ε such that the distributions of X|A and Y |B are "almost" identical. We provide alternative proofs and some generalizations of these results in the nonuniform setting. As an interesting application, we show a strengthening of the transformation between the two most popular pseudoentropy variants: HILL and Metric Entropy, and apply it to show how to extract pseudorandomness from a sequence of metric-entropy sources of poor quality. Compared to the best known techniques we significantly improve the security parameters.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Hardcore Lemmas and Their Applications</head><p>Unpredictability Hardcore Lemma. Suppose that we have a predicate f that is mildly hard to predict by a class of circuits; for every circuit D from this class, D(x) and f (x) agree on at most, let's say, a 0.99 fraction of inputs x. One reason which could intuitively explain this behavior is the existence of a "kernel" for this hardness: a set of noticeable size on which f is extremely hard to predict. Quite surprisingly, this intuitive characterization is true. The first such result was proved by Impagliazzo <ref type="bibr" target="#b7">[8]</ref>. Below we present the tight improvement due to Holenstein.</p><p>Theorem 1 (Unpredictability Hardcore Lemma <ref type="bibr" target="#b6">[7]</ref>). Let f : {0, 1} n → {0, 1} be a predicate such that f is ε-unpredictable by circuits of size s, that is</p><formula xml:id="formula_0">Pr x←{0,1} n [D(x) = f (x)] ≤ 1 − ε/2</formula><p>holds for all D of size at most s. Then for any δ ∈ (0, 1) there exists a "hardcore" set S of size ε2 n such that f on S is (1 − δ)-unpredictable by circuits of size s′ = O(sδ 2 /n), that is</p><formula xml:id="formula_1">Pr x←S [D(x) = f (x)] ≤ (1 + δ)/2</formula><p>for all D of size at most sδ 2 /32n.</p><p>Remark 1. Some authors use different conventions for ε-unpredictability. We follow the approach of <ref type="bibr" target="#b6">[7]</ref>. The definition above is intuitive since 1-unpredictability would mean that f is totally unpredictable.</p><p>Note that the size of the hardcore set, guaranteed to be at least ε2 n , is tight. Indeed, if the second part of the theorem is satisfied, i.e. f is almost unpredictable on a set of size ε2 n , it implies that f , on average over the whole domain, cannot be predicted better than 1 − ε(1 − δ)/2 ≈ 1 − ε/2. A uniform version, with the tight hardcore density, is given also in <ref type="bibr" target="#b6">[7]</ref>. 
Constructive versions of the hardcore lemma can be obtained from essentially any boosting algorithm <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b8">9]</ref>; however, such results are typically not tight without additional optimization. Indistinguishability Hardcore Lemma. It is known that if two distributions X 1 , X 2 have statistical distance at most ε, then there exist events A 1 , A 2 of probability at least 1 − ε such that the distributions X 1 |A 1 and X 2 |A 2 are identical. Based on a reduction to the unpredictability hardcore lemma, Maurer and Tessaro proved the following computational generalization of this fact. Theorem 2 (Indistinguishability Hardcore Lemma <ref type="bibr" target="#b10">[11]</ref>). Let X 1 and X 2 be distributions on {0, 1} n with computational distance ε against circuits of size s, that is</p><formula xml:id="formula_2">| E D(X 1 ) − E D(X 2 )| &lt; ε</formula><p>for all D of size s.</p><p>Then there exist events A 1 and A 2 of probability 1 − ε such that X 1 |A 1 and X 2 |A 2 are computationally indistinguishable, that is</p><formula xml:id="formula_3">| E D(X 1 |A 1 ) − E D(X 2 |A 2 )| ≤ δ for all D of size s′ = sδ 2 /128n.</formula><p>The theorem states that if two distributions are (computationally) not too far away from each other, then after conditioning on an event of noticeable probability they are almost indistinguishable. Since the lower bound 1 − ε on the probabilities of hardcore events is tight<ref type="foot" target="#foot_0">1</ref>, this theorem can be viewed as a characterization of computational indistinguishability.</p><p>Applications of hardcore lemmas. Hardcore lemmas are fundamental results in complexity theory and find applications in cryptography and learning theory. They are particularly important in the context of hardness amplification, i.e. transforming somewhat hard problems into hard problems. 
See for instance <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>.</p></div>
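The information-theoretic version of the indistinguishability hardcore lemma mentioned above is fully constructive, and a small sketch makes a useful sanity check: conditioned on suitable events of probability 1 − ε, both distributions collapse to the normalized pointwise minimum of the two probability mass functions. The following Python fragment is our own illustration with toy numbers, not code from the paper.

```python
def statistical_hardcore(pX, pY):
    """For pmfs pX, pY with statistical distance eps, the events A1, A2
    of probability 1 - eps simply 'keep' the overlapping mass: both
    conditional distributions equal min(pX, pY) / (1 - eps)."""
    support = set(pX) | set(pY)
    eps = 0.5 * sum(abs(pX.get(x, 0.0) - pY.get(x, 0.0)) for x in support)
    mass = 1.0 - eps  # total overlapping mass, i.e. sum of min(pX, pY)
    common = {x: min(pX.get(x, 0.0), pY.get(x, 0.0)) / mass for x in support}
    return eps, common

# toy example (hypothetical numbers)
pX = {0: 0.5, 1: 0.3, 2: 0.2}
pY = {0: 0.3, 1: 0.3, 2: 0.4}
eps, common = statistical_hardcore(pX, pY)
```

Scaling `common` back up by 1 − ε never exceeds either pmf, which is exactly why the conditional distributions X 1 |A 1 and X 2 |A 2 can both equal it.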
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Our Results</head><p>An unpredictability hardcore lemma under arbitrary distributions.</p><p>We prove a nonuniform version of the hardcore lemma that holds when inputs are sampled from an arbitrary distribution, not necessarily the uniform one. Due to the connections between machine learning theory and hardcore lemmas, well explained in <ref type="bibr" target="#b8">[9]</ref>, it is clear that there is nothing special about the uniform distribution and qualitatively similar statements could indeed be derived for any distribution. However, our approach has the following advantages:</p><p>(a) The proof strategy is very simple and natural: we observe that it is straightforward to construct a hardcore for any fixed distinguisher and then we use the min-max theorem to "reverse" the quantifiers. We believe that a proof of this form can be useful for deriving complexity lower bounds for the hardcore lemma, which is (besides boosting proofs) an open problem. (b) Our hardcore lemma is quantitatively tight: the weight of the hardcore event for ε-unpredictability is guaranteed to be ε and the loss in complexity is O(δ 2 /n) for δ-closeness, which matches the best known result for the case of the uniform distribution. In fact we slightly improve the security bounds, with better explicit constants and with the dimension n replaced by a smaller factor. (c) The only technical difficulty in the proof, namely showing that if a hardcore can be constructed for any fixed boolean circuit then the same is true for real-valued circuits, is overcome by the technique of explicitly characterizing the "worst case" measures. 
This technique, which might be of independent interest, allows us to give a relatively short and direct (without reducing to the unpredictability version) proof of the indistinguishability hardcore lemma and, more interestingly, a variant of the indistinguishability hardcore lemma dedicated to computational entropy.</p><p>A simplified reduction from indistinguishability hardcores to unpredictability hardcores. Building on our generalized unpredictability hardcore lemma, we show an alternative proof of the indistinguishability hardcore lemma of Maurer and Tessaro. In <ref type="bibr" target="#b10">[11]</ref> the reduction goes from the indistinguishability hardcore lemma to the "standard" unpredictability hardcore lemma, where inputs are sampled from the uniform distribution. By contrast, we find it much easier and more natural to reduce to the unpredictability of a predicate which explicitly depends on the distributions X 1 , X 2 : it is simply the sign of the difference between the probability mass functions. In our reduction, besides better constants and a smaller factor depending on the dimension, we also gain an additional factor in security when the statistical distance of X 1 and X 2 is small. This slightly improves the security bounds, although the improvement seems to be of limited use in cryptographic applications.</p><p>A direct proof of the Indistinguishability Hardcore Lemma. By adapting the proof given for the unpredictability case, we can derive the (nonuniform) Indistinguishability Hardcore Lemma of Maurer and Tessaro directly, that is, without reducing it to unpredictability hardcore lemmas. This can be interesting in the context of lower bounds. Indeed, lower bounds on unpredictability hardcore lemmas, even if they were discovered, would not imply lower bounds for the indistinguishability version via this reduction.</p><p>An Indistinguishability Hardcore Lemma for Pseudoentropy. 
In some situations, for instance in extracting entropy, we do not really need our distribution X to be indistinguishable from one particular Y but rather from a class of distributions Y (which is a weaker requirement). To illustrate this, consider the following alternatives to formalize the statement "X almost has property P ".</p><p>(i) X is (s, δ)-close to having property P , if there exists a distribution Y with property P such that for every circuit D of size s we have ∆ D (X; Y ) ≤ δ (ii) X is (s, δ)-close to having property P , if for every D of size s there exists a distribution Y with property P such that we have ∆ D (X; Y ) ≤ δ.</p><p>where</p><formula xml:id="formula_4">∆ D (X; Y ) = E D(X) − E D(Y )</formula><p>is the advantage of the attacker D. In condition (i) we have the standard computational indistinguishability of X and Y . In turn, the second condition can be thought of as indistinguishability between X and the set of all distributions Y having property P , since it means that there is no D that separates X from all Y . Clearly condition (ii) is a priori weaker, though it is easy to see that for convex properties P (i.e. closed under taking convex combinations) both are equivalent up to the loss of a factor O(δ 2 ) in circuit size <ref type="bibr" target="#b1">[2]</ref>. Surprisingly, it turns out that for many applications the weaker definition is good enough. If, for instance, P means "distribution has min-entropy at least k", then condition (ii), provided that it holds for probabilistic distinguishers, is strong enough to ensure that any (k, ε)-extractor applied to X yields an ε-pseudorandom distribution. The concept of "weak" indistinguishability, i.e. indistinguishability in the sense of (ii), is very useful in studying computational generalizations of entropy. Set the property P to be "having min-entropy at least k". 
For case (i), we obtain the notion of HILL entropy <ref type="bibr" target="#b5">[6]</ref>: X has k bits of pseudoentropy, with quality (s, ε), if there is a distribution Y with k bits of min-entropy such that no circuit of size s can distinguish X and Y with advantage better than ε. In case (ii) we obtain a relaxed notion called Metric Pseudoentropy <ref type="bibr" target="#b1">[2]</ref>: no adversary of size s can distinguish between X and all distributions of min-entropy at least k. As mentioned, metric pseudoentropy is very useful and widely used as a convenient substitute for HILL entropy and finds many applications in studying pseudorandomness <ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b12">[13]</ref>. It is known <ref type="bibr" target="#b1">[2]</ref> that metric entropy with parameters (s, ε) can be converted into HILL entropy with no loss in the amount and with parameters (s′, ε′) = (O(s • δ 2 /n), ε + δ) for any δ. Applying our techniques we obtain a much stronger version of this transformation: if X has metric entropy of quality (s, ε) (even against the weakest, deterministic circuits) then after conditioning on an event of probability 1 − ε, it has the same amount of HILL entropy of quality (O(s • δ 2 /n), δ).</p><p>Application: extracting pseudorandomness from pseudoentropy of poor quality. Using our generalized indistinguishability hardcore lemma, we prove that for a sequence of independent distributions X 1 , . . . , X ℓ , each having metric entropy k with parameters (s, ε) for some large ε and against deterministic circuits of size s, the concatenated string X = X 1 , X 2 , . . . , X ℓ has HILL entropy roughly (1 − ε)ℓk with parameters (s′, δ′) = (sδ 2 ℓ −2 /n, ℓδ). In other words, for a metric pseudoentropy source of quality (s, ε) we achieve, sampling many times, the entropy extraction rate α = 1 − ε with good security. 
Compared to the state of the art we save a quite large factor of δ 2 in security<ref type="foot" target="#foot_1">2</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Outline of the Paper</head><p>Section 2 provides the necessary definitions for hardness of unpredictability, computational indistinguishability and computational entropy. In Section 3 we present a generalization of the unpredictability hardcore lemma and a slightly simplified proof of the indistinguishability hardcore lemma. A hardcore lemma dedicated to pseudoentropy is given in Section 4. An application to the problem of extracting from a pseudoentropy source of very poor quality is discussed in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>Computational and Statistical Indistinguishability. Let X and Y be two random variables taking values in the same space. The advantage of D in distinguishing between X and Y is defined to be</p><formula xml:id="formula_5">∆ D (X; Y ) = E D(X) − E D(Y ).</formula><p>The statistical distance between two random variables X and Y is defined as</p><formula xml:id="formula_6">∆(X; Y ) = 1 2 x | Pr[X = x] − Pr[Y = x] |</formula><p>and is equal to the maximum of ∆ D (X; Y ) over all [0, 1]-valued functions D. The computational distance between X and Y is defined as max D∈D ∆ D (X; Y ) where D is a fixed class of boolean functions. We say that X and Y are (s, ε)-close or (s, ε)-indistinguishable if</p><formula xml:id="formula_7">∆ D (X; Y ) ≤ ε</formula><p>for all D of size at most s.</p></div>
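As a quick illustration of these definitions, the sketch below (our own toy code, not from the paper) computes ∆ D (X; Y ) for explicit probability mass functions and checks that maximizing over all boolean D on a two-point domain recovers the statistical distance.

```python
from itertools import product

def advantage(D, pX, pY):
    """Advantage of a distinguisher D: E D(X) - E D(Y)."""
    xs = set(pX) | set(pY)
    return sum(D(x) * (pX.get(x, 0.0) - pY.get(x, 0.0)) for x in xs)

def statistical_distance(pX, pY):
    xs = set(pX) | set(pY)
    return 0.5 * sum(abs(pX.get(x, 0.0) - pY.get(x, 0.0)) for x in xs)

pX = {0: 0.6, 1: 0.4}
pY = {0: 0.4, 1: 0.6}
# enumerate all four boolean functions on {0, 1}; the maximizer is the
# indicator of the set {x : pX(x) > pY(x)}
best = max(advantage(lambda x, b=bits: b[x], pX, pY)
           for bits in product([0, 1], repeat=2))
sd = statistical_distance(pX, pY)
```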
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hardness of Unpredictability. A boolean function</head><formula xml:id="formula_8">f : {0, 1} n → {0, 1} is said to be (s, δ)-unpredictable if Pr x←{0,1} n [D(x) = f (x)] ≤ 1 − δ/2</formula><p>for all D of size at most s. We also say that f is δ-hard against circuits of size s. We say that f is (s, δ)-unpredictable under the distribution</p><formula xml:id="formula_9">V if Pr x←V [D(x) = f (x)] ≤</formula><p>1 − δ/2 for all D of size at most s.</p><p>Computational Entropy. There are many ways to define computational analogues of entropy. We follow the most popular approach, which is based on the concept of computational indistinguishability.</p><p>Definition 1 (HILL Pseudoentropy <ref type="bibr" target="#b5">[6]</ref>). Let X be a distribution with the following property: there exists Y of min-entropy at least k such that for all circuits D of size at most s we have |∆ D (X; Y )| ≤ ε. Then we say that X has k bits of HILL entropy of quality (s, ε) and denote this by H HILL s,ε (X) ≥ k.</p><p>It is known that for HILL entropy all kinds of circuits: deterministic boolean, deterministic real valued and randomized boolean, are equivalent (with the same size s). The following definition differs in the order of quantifiers.</p><p>Definition 2 (Metric Pseudoentropy <ref type="bibr" target="#b1">[2]</ref>). Let X be a distribution with the following property: for every deterministic boolean (respectively: deterministic real valued or boolean randomized) circuit D of size at most s there exists Y of min-entropy at least k such that |∆ D (X; Y )| ≤ ε. Then we say that X has k bits of deterministic (respectively: deterministic real valued or boolean randomized) metric entropy of quality (s, ε) and denote this by H Metric,det{0,1} s,ε (X) ≥ k (respectively: H Metric,det[0,1] (X) and H Metric,rand (X)).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Hardcore Lemmas</head></div>
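For a fixed boolean circuit D, the defining condition of deterministic boolean metric entropy is easy to decide, because among all Y with min-entropy k (all point masses at most 2^(-k)) the quantity E D(Y) ranges over an explicit interval. The sketch below is our own illustration of this observation, with hypothetical toy parameters; it is not an implementation from the paper.

```python
def metric_condition_holds(D, pX, k, n, eps):
    """For a fixed boolean D on {0,1}^n, check whether some Y with
    min-entropy >= k satisfies |E D(X) - E D(Y)| <= eps.
    E D(Y) ranges over [lo, hi]: hi spreads mass 2^-k over D=1 points,
    lo spreads it over D=0 points."""
    cap = 2.0 ** (-k)
    ones = sum(D(x) for x in range(2 ** n))
    zeros = 2 ** n - ones
    hi = min(1.0, ones * cap)         # as much mass as possible on D = 1
    lo = max(0.0, 1.0 - zeros * cap)  # as much mass as possible on D = 0
    ex = sum(D(x) * pX.get(x, 0.0) for x in range(2 ** n))
    dist = max(lo - ex, ex - hi, 0.0)  # distance from E D(X) to [lo, hi]
    return dist <= eps
```

For instance, with n = 2 and k = 1, a point mass X on 0 against D = indicator of {0} gives E D(X) = 1 while hi = 1/2, so the condition holds for eps = 0.5 but fails for eps = 0.4.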
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Approximating Convex Hulls</head><p>The following standard facts, derived via the Hoeffding-Chernoff inequality, are useful when we want to approximate possibly long convex combinations of functions by a combination of few functions; for instance, when we use the min-max theorem and need to approximate any mixed strategy by an efficient strategy.</p><p>Lemma 1 ([12], Lemma 2.1). Let X be a finite domain, ν be a distribution on X , G be any set of functions g : X → [−1, 1] and let g be a convex combination of functions from G. Then for any ε ∈ (0, 1) and for some k ≤ 4ε −2 log(2/ε), there exist functions g 1 , . . . , g k ∈ G such that</p><formula xml:id="formula_10">E x←ν |g(x) − 1 k k i=1 g i (x)| ≤ ε</formula><p>Lemma 2 ([2]). Let X be a finite domain, ν be a distribution on X , G be any set of functions g : X → [−1, 1] and let g be a convex combination of functions from G. Then for any ε ∈ (0, 1) and for some k ≤ log |X |/(2ε 2 ), there exist functions g 1 , . . . , g k such that</p><formula xml:id="formula_11">max x∈X |g(x) − 1 k k i=1 g i (x)| ≤ ε</formula></div>
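The sampling argument behind these lemmas is easy to demonstrate empirically: draw k functions i.i.d. according to the mixture weights and average them; Hoeffding's inequality then bounds the deviation at every point. A small self-contained illustration, with toy functions and parameters of our choosing:

```python
import random

def approximate_convex_combination(funcs, weights, k, rng):
    """Approximate g = sum_i w_i * f_i / sum_i w_i by the average of k
    functions sampled i.i.d. proportionally to the weights
    (the Hoeffding-Chernoff argument of Lemmas 1 and 2)."""
    sample = rng.choices(funcs, weights=weights, k=k)
    return lambda x: sum(f(x) for f in sample) / k

rng = random.Random(0)
domain = range(16)
# 50 toy functions with values in [-1, 1]: signs of the low bits of x
funcs = [lambda x, i=i: 1.0 if (x >> (i % 4)) & 1 else -1.0 for i in range(50)]
weights = [1.0] * 50
g = lambda x: sum(w * f(x) for w, f in zip(weights, funcs)) / sum(weights)
g_hat = approximate_convex_combination(funcs, weights, k=2000, rng=rng)
err = max(abs(g(x) - g_hat(x)) for x in domain)  # small with high probability
```

With k = 2000 samples the per-point standard deviation is about 1/sqrt(2000), so even the maximum over the 16-point domain stays far below the crude bound 0.25 used in the check.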
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Hardcore Lemma for Unpredictability under Arbitrary Distributions</head><p>Below we prove a hardcore lemma, with the optimal weight of the hardcore, valid for an arbitrary distribution. We also obtain better constants than in Theorem 1 and an improvement for the case when ε is bounded away from 0: we then replace the dimension n by log(1/δ), which is typically much smaller.</p><p>Theorem 3 (Unpredictability Hardcore Lemma for arbitrary distributions). Let V be an arbitrary distribution on {0, 1} n and suppose that an n-bit boolean function f is (s, ε)-unpredictable under sampling from the distribution V . Then for any δ there exists an event A of probability at least ε such that f is</p><formula xml:id="formula_12">(s′, 1 − δ)-unpredictable under V |A, where s′ = max( s • 2δ 2 /n, s • 4ε 2 δ 2 /log(4/εδ) ).</formula><p>Note that f is essentially almost unbiased under V |A: by applying the trivial distinguishers D ≡ 1 and D ≡ 0 we get</p><formula xml:id="formula_13">1 2 − δ ≤ Pr[f (V |A) = 1] ≤ 1 2 + δ.</formula><p>For some technical reasons we need the following observation, which states that the hardcore event "preserves" unbiased predicates.</p><p>Corollary 1 (Unpredictability Hardcore Lemma for unbiased predicates). Suppose that Theorem 3 holds for f and V such that P (f (V ) = 0) = 1 2 = P (f (V ) = 1). Then the hardcore event A can be chosen in such a way that</p><formula xml:id="formula_14">P (f (V |A) = 0) = P (f (V |A) = 1) = 1 2 ,</formula><p>with an additional loss of a factor 3 in circuit size.</p><p>The proof of Corollary 1 appears in the full version of the paper. It is relatively simple and uses the idea of "mass-shifting". 
The proof of Theorem 3 consists of three steps: (a) the easy observation that for any fixed boolean function D of size s we can find a hardcore event, (b) the observation that we can find a hardcore for any [0, 1]-valued function of approximately the same size s and (c) using the min-max theorem and the approximation lemmas to switch the order of quantifiers and find one hardcore event that works for all functions of size s′. Only step (b) is non-trivial and novel (and it allows us to slightly improve Holenstein's bounds). We prove (b) by assuming that there exists a function D for which one cannot find a suitable hardcore measure. We then argue that, if this is the case, one cannot find a hardcore measure for some threshold transformation of D either: we characterize the measure which is "closest" to being a hardcore measure and use it to show that the same measure is optimal for all threshold transformations of D. By taking an appropriate threshold we conclude that no hardcore measure exists for some threshold version of D, which is impossible, since that version is boolean and this contradicts the assumptions. The proof appears in the full version of the paper.</p></div>
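Step (a) above is worth making concrete. For one fixed boolean D with success at most 1 − ε/2 under V, the measure that keeps all of V's mass on D's mistakes and pads with just enough mass from the correct inputs to reach total weight ε already bounds D's success by 1/2. The following is a hypothetical sketch of this step; the function name and toy data are ours.

```python
def hardcore_for_fixed_D(pV, f, D, eps):
    """Build a measure M <= pV of total weight eps (or the full error
    mass, if larger) on which the fixed boolean predictor D succeeds
    with probability at most 1/2, assuming
    Pr_{x<-V}[D(x) = f(x)] <= 1 - eps/2 (step (a) of the proof sketch)."""
    M = {x: p for x, p in pV.items() if D(x) != f(x)}  # all error mass
    deficit = max(eps - sum(M.values()), 0.0)  # pad from correct points
    for x, p in pV.items():
        if deficit <= 0.0:
            break
        if D(x) == f(x):
            take = min(p, deficit)
            M[x] = M.get(x, 0.0) + take
            deficit -= take
    return M

# toy example: uniform V on 8 points, f = parity of x, D always answers 0
pV = {x: 1.0 / 8 for x in range(8)}
f = lambda x: x & 1
D = lambda x: 0
M = hardcore_for_fixed_D(pV, f, D, eps=0.8)
```

Since the error mass is at least ε/2, the correct mass inside M is at most ε − ε/2, so D's conditional success is at most 1/2; the hard part of the proof is precisely that no single M of this kind is handed to us for free once D ranges over all circuits.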
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Hardcore Lemma for Indistinguishability -Reduction to Unpredictability Case</head><p>The following lemma shows that indistinguishability of two distributions is equivalent to the hardness of predicting some boolean function, which explicitly depends on these distributions. This function is quite natural: it equals the sign of the difference between the probability mass functions.</p><p>Lemma 3. Let D be a class of boolean functions, X, Y ∈ {0, 1} n be random variables, and let ∆ = ∆(X; Y ) be different from 0. Then the following are equivalent:</p><p>(a) X and Y are (D, ε)-indistinguishable (b) f (x) is (D, 1 − ε/∆)-unpredictable under V , where f (x) is the indicator of the set {x : P X (x) &gt; P Y (x)} and the distribution of V is given by P</p><formula xml:id="formula_15">V (x) = |P X (x) − P Y (x)| /2∆.</formula><p>Proof. For any boolean D we obtain</p><formula xml:id="formula_16">E D(X) − E D(Y ) = x (P X (x) − P Y (x))D(x) = 2∆ (Pr[f (V ) = 1] E[D(V )|f (V ) = 1] − Pr[f (V ) = 0] E[D(V )|f (V ) = 0]) Observe that Pr[f (V ) = 1] = Pr[f (V ) = 0] = 1 2 . Therefore E D(X) − E D(Y ) = 2∆ − 1 2 + 1 2 E[D(V )|f (V ) = 1] + 1 2 E[(1 − D(V ))|f (V ) = 0] .</formula><p>Since D is boolean, the last equation is equivalent to</p><formula xml:id="formula_17">E D(X) − E D(Y ) = 2∆ Pr[D(V ) = f (V )] − 1 2 ,</formula><p>which finishes the proof.</p><p>Based on Lemma 3 we prove the following result.</p><p>Theorem 4 (Indistinguishability Hardcore Lemma). Suppose that X and Y are arbitrary distributions that are (s, ε)-indistinguishable by boolean circuits. Then there exist events A(X), A(Y ), both of equal probability at least 1 − ε, such that X|A(X) and</p><formula xml:id="formula_18">Y |A(Y ) are (O(s • δ 2 /n), ∆(X; Y ) • δ)-indistinguishable.</formula><p>Proof. From the construction of V , we know that f is (1 − ε/∆(X, Y ))-unpredictable under V . 
From Theorem 3 we obtain that there exists a hardcore event A of probability at least 1 − ε/∆(X, Y ) such that f is extremely unpredictable under V |A. This hardcore event can be described as follows: there exists a measure M = M A that satisfies M (x) ≤ P V (x) and P(A) = µ(M ) ≥ 1 − ε/∆(X, Y ), and such that f (x) is unpredictable when sampling according to M , i.e.</p><formula xml:id="formula_19">P x←M (D(x) = f (x)) &lt; 1/2 + δ.</formula><p>The distribution V |A is then defined by P V |A = P M . Consider the events S − = {x : f (x) = 0} and S + = {x : f (x) = 1}. From the definition of V and f it follows that P V (S − ) = P V (S + ) = 1 2 . As shown in Corollary 1, the sets S + , S − can be assumed to be perfectly unbiased also under V |A. Define now two measures M 0 = M X and M 1 = M Y as follows:</p><formula xml:id="formula_20">M 0 (x) = P X (x) − 2∆(X, Y ) (P V (x) − M (x)) if P X (x) &gt; P Y (x) P X (x) otherwise<label>(1)</label></formula><p>and similarly,</p><formula xml:id="formula_21">M 1 (x) = P Y (x) − 2∆(X, Y ) (P V (x) − M (x)) if P X (x) &lt; P Y (x) P Y (x) otherwise<label>(2)</label></formula><p>Note that both measures are well defined since</p><formula xml:id="formula_22">P V (x) = |P X (x) − P Y (x)| / 2∆(X, Y ) and M (x) ≤ P V (x).</formula><p>Then from the definition of (V, A) and the definition of f it follows that</p><formula xml:id="formula_23">µ (M 0 ) = 1 − 2∆(X, Y ) x: f (x)=1 P V (x) + 2∆(X, Y ) x: f (x)=1 M (x) = 1 − ∆(X, Y ) + 2∆(X, Y )P(A) • P V |A S + = 1 − ∆(X, Y )P(A c ) ≥ 1 − ε<label>(3)</label></formula><p>and similarly the same estimate holds for µ (M 1 ). Observe also that since S + and S − are perfectly unbiased with respect to M , and since the same holds for V , we have µ (M 0 ) = µ (M 1 ). These measures give rise to the joint distributions (X, A(X)) and (Y, A(Y )) for some events A(X), A(Y ) with probabilities at least µ (M 0 ) = µ (M 1 ). It remains to calculate the advantage in distinguishing. 
Let V ′ and f ′ be the distribution and the predicate corresponding to X|A(X) and Y |A(Y ) according to the statement of Lemma 3. Observe that M 0 (x) &gt; M 1 (x) if and only if f (x) = 1, hence f ′ (x) = f (x). Since |M 0 (x) − M 1 (x)| = 2∆(X, Y )M (x) for every x, we get P V ′ (x) = M (x)/µ(M ) = P V |A (x) and ∆(X|A(X); Y |A(Y )) = ∆(X, Y ). Therefore</p><formula xml:id="formula_24">∆ D (X|A(X); Y |A(Y )) = ∆(X, Y ) • (2P x←V ′ (D(x) = f ′ (x)) − 1) = ∆(X, Y ) • 2P x←V |A (D(x) = f (x)) − 1 &lt; ∆(X, Y ) • δ,<label>(4)</label></formula><p>and we have finished the proof.</p><p>Remark 2. We note that without Corollary 1 we would obtain a slightly weaker version of the indistinguishability hardcore lemma where the probability of the hardcore events is only guaranteed to be at least 1 − ε − δ, which is very close to the optimal 1 − ε and equally good in applications.</p></div>
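The identity at the heart of Lemma 3 can be checked mechanically. The sketch below, with toy distributions of our choosing, builds f and V from two pmfs and verifies E D(X) − E D(Y) = 2∆(Pr[D(V) = f(V)] − 1/2) for a sample distinguisher.

```python
def reduction_objects(pX, pY):
    """Lemma 3: predicate f = indicator of {x : pX(x) > pY(x)} and
    distribution pV(x) = |pX(x) - pY(x)| / (2 * Delta)."""
    xs = set(pX) | set(pY)
    delta = 0.5 * sum(abs(pX.get(x, 0.0) - pY.get(x, 0.0)) for x in xs)
    f = {x: 1 if pX.get(x, 0.0) > pY.get(x, 0.0) else 0 for x in xs}
    pV = {x: abs(pX.get(x, 0.0) - pY.get(x, 0.0)) / (2 * delta) for x in xs}
    return delta, f, pV

pX = {0: 0.4, 1: 0.4, 2: 0.2}
pY = {0: 0.2, 1: 0.3, 2: 0.5}
delta, f, pV = reduction_objects(pX, pY)
D = lambda x: 1 if x <= 1 else 0          # an arbitrary boolean distinguisher
adv = sum(D(x) * (pX[x] - pY[x]) for x in pX)
succ = sum(p for x, p in pV.items() if D(x) == f[x])
# identity from the proof of Lemma 3: adv == 2 * delta * (succ - 1/2)
```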
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Indistinguishability Hardcore Lemma for Pseudoentropy</head><p>In this section we prove the following theorem, discussed in the introduction, which gives the existence of a "HILL-entropy hardcore" for metric pseudoentropy.</p><p>Theorem 5 (Indistinguishability Hardcore Lemma for pseudoentropy).</p><p>Suppose that H Metric,det{0,1} s,ε (X) ≥ k. Then for any δ and s′ = O(s • δ 2 /n) there exists an event A of probability 1 − ε such that H HILL s′,δ (X|A) ≥ k − log(1/(1 − ε)). This theorem shows that metric entropy not only can be converted to HILL entropy with a loss of δ in advantage and δ 2 in circuit size; it actually has a hardcore of HILL entropy with the same quality parameters.</p><p>Remark 3. The constant hidden in the big "O" term is at most 2/3. Before we give the proof, let us observe that this result implies the transformation between metric and HILL entropy (up to the loss of at most one bit).</p><p>Corollary 2 (Metric entropy - HILL entropy transformation <ref type="bibr" target="#b1">[2]</ref>). Suppose that H Metric,det{0,1} s,ε (X) ≥ k. Then H HILL s′,ε′ (X) ≥ k − 1, where s′ = O(s • δ 2 /n) and ε′ = ε + δ.</p><p>Proof (of Corollary 2). We apply Theorem 5, obtaining a distribution Y |A which is (s′, δ)-indistinguishable from X|A, and then we define Y ′ by Pr</p><formula xml:id="formula_25">[Y ′ = x] = Pr[A] • Pr[Y = x|A] + 2 −n Pr[A c ]. Note that H ∞ (Y ′ ) ≥ k − 1 and Y ′ is (s′, ε + δ)- indistinguishable from X.</formula><p>We remark that one can actually avoid the loss of 1 bit, because Theorem 5 is actually slightly stronger than stated; namely, H HILL s′,δ (X|A) ≥ k − log(1/(1 − ε)) can be replaced by the following: X|A is (s′, δ)-indistinguishable from Y |A where Y has k bits of min-entropy.</p><p>The proof strategy for Theorem 5 is exactly the same as in the case of Theorem 3; the proof appears in the full version of this paper. 
Note that a version of Theorem 5 with much worse parameters follows by converting metric entropy into HILL entropy using Corollary 2 and then applying Theorem 2. This way we lose a factor of δ 4 in circuit size.</p><p>Corollary 3 (Direct proof of the Indistinguishability Hardcore Lemma). The proof of Theorem 5 can be easily adapted to give a direct proof of Theorem 4, without reducing it to Theorem 3. Namely, in the proof we replace the condition M 2 (x) ≤ 2 −k by M 2 (x) ≤ P Y (x).</p></div>
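The padding step in the proof of Corollary 2 is simple enough to check numerically: mixing Y|A with the uniform distribution costs at most one bit of min-entropy. A toy sketch with our own illustrative parameters:

```python
import math

def pad_to_full_support(pY_given_A, prA, n):
    """Define Y' by Pr[Y' = x] = Pr[A] * Pr[Y = x | A] + 2^-n * Pr[A^c].
    If H_inf(Y|A) >= k and k <= n, then max_x Pr[Y' = x] is at most
    Pr[A] * 2^-k + 2^-n <= 2^-(k-1), i.e. H_inf(Y') >= k - 1."""
    u = 2.0 ** (-n)
    return {x: prA * pY_given_A.get(x, 0.0) + (1.0 - prA) * u
            for x in range(2 ** n)}

n, k, prA = 4, 3, 0.9
pY_given_A = {x: 2.0 ** (-k) for x in range(2 ** k)}  # flat source, H_inf = k
pYp = pad_to_full_support(pY_given_A, prA, n)
h_inf = -math.log2(max(pYp.values()))
```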
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Applications: Extracting from Metric Pseudoentropy of Poor Quality</head><p>Suppose that we have a source of metric pseudoentropy that produces samples secure against deterministic adversaries of high complexity but only with a very big advantage (for instance, ε = 0.25). Since the metric entropy is only against deterministic adversaries, for which it is not known whether we can extract pseudorandomness directly<ref type="foot" target="#foot_2">3</ref>, one needs to convert it into HILL entropy. However, this still does not solve the problem of large ε. In the next step one can use Theorem 2 to prove that a concatenated sequence of many samples has large HILL entropy, with a rate of roughly 1 − ε. This approach loses O(δ 4 ) in security.</p><p>Below we show that these two steps can be done at the same time, which allows us to save a factor of O(δ 2 ) in security.</p><p>Theorem 6. Suppose that X i , for i = 1, . . . , ℓ, are independent n-bit random variables such that H Metric,det{0,1} s,ε (X i ) ≥ k. Then for any γ &gt; 0 we have H HILL s′,δ′ (X) ≥ (1 − ε − γ)ℓ(k − log(1/(1 − ε))), where s′ = O(s • δ 2 /nℓ 2 ) and δ′ = ℓδ + 2 exp(−2ℓγ 2 ).</p><p>Proof. Fix δ and let s′ = O(s • δ 2 /n). We apply Theorem 5 to X i , for i = 1, . . . , ℓ, obtaining hardcore events A i of probability at least 1 − ε such that H HILL s′,δ (X i |A i ) ≥ k − log(1/(1 − ε)). By the Chernoff Bound we know that the probability that m = (1 − ε − γ)ℓ of them happen simultaneously is at least 1 − 2 exp(−2ℓγ 2 ). The result follows now from the observation that concatenating random variables Y 1 , . . . , Y ℓ of HILL entropy k 1 , . . . , k ℓ with parameters (s′, δ) yields a distribution of HILL entropy k 1 + k 2 + . . . + k ℓ with parameters (s′, ℓδ) (the proof is by a standard hybrid argument).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>An interesting open problem is to determine whether the unpredictability hardcore lemma can be derived from the indistinguishability hardcore lemma, that is, to show a reduction in the other direction than in <ref type="bibr" target="#b10">[11]</ref> and this paper. Another problem worth mentioning is the question of lower bounds on the necessary loss in security for hardcore lemmas. To the best of our knowledge, nothing is known about negative results so far, except the lower bounds for proofs based on boosting, which follow from general machine learning theory <ref type="bibr" target="#b8">[9]</ref>.</p></div>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">By similar reasoning to the unpredictability case.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">We note that the following issues make this problem challenging: (a) since ε is large, no hybrid technique can be applied and (b) pseudoentropy is only against deterministic adversaries, so no extractor can be applied directly.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">The problem of randomized vs. deterministic adversaries is a matter of metric entropy only; as already mentioned, for HILL entropy all kinds of circuits are equivalent.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The uniform hardcore lemma via approximate bregman projections</title>
		<author>
			<persName><forename type="first">B</forename><surname>Barak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA &apos;09</title>
				<meeting>the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA &apos;09<address><addrLine>Philadelphia, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Society for Industrial and Applied Mathematics</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1193" to="1200" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Computational analogues of entropy</title>
		<author>
			<persName><forename type="first">B</forename><surname>Barak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Shaltiel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wigderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">RANDOM-APPROX</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Arora</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Jansen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">D P</forename><surname>Rolim</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Sahai</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="volume">2764</biblScope>
			<biblScope unit="page" from="200" to="215" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A unified approach to deterministic encryption: New constructions and a connection to computational entropy</title>
		<author>
			<persName><forename type="first">B</forename><surname>Fuller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Reyzin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">TCC 2012</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">7194</biblScope>
			<biblScope unit="page" from="582" to="599" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Leakage-resilient cryptography in the standard model</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dziembowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Pietrzak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IACR Cryptology ePrint Archive</title>
		<imprint>
			<biblScope unit="page">240</biblScope>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">O</forename><surname>Goldreich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Nisan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wigderson</surname></persName>
		</author>
		<title level="m">Studies in complexity and cryptography. chapter On Yao&apos;s XOR-lemma</title>
				<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="273" to="301" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A pseudorandom generator from any one-way function</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hastad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Impagliazzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Levin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Luby</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SIAM J. Comput</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1364" to="1396" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Key agreement from weak bit agreement</title>
		<author>
			<persName><forename type="first">T</forename><surname>Holenstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirty-seventh Annual ACM Symposium on Theory of Computing, STOC &apos;05</title>
				<meeting>the Thirty-seventh Annual ACM Symposium on Theory of Computing, STOC &apos;05<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="664" to="673" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Hard-core distributions for somewhat hard problems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Impagliazzo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th Annual Symposium on Foundations of Computer Science, FOCS &apos;95</title>
				<meeting>the 36th Annual Symposium on Foundations of Computer Science, FOCS &apos;95<address><addrLine>Washington, DC, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page">538</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Boosting and hard-core sets</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Klivans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Servedio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 40th Annual Symposium on Foundations of Computer Science, FOCS &apos;99</title>
				<meeting>the 40th Annual Symposium on Foundations of Computer Science, FOCS &apos;99<address><addrLine>Washington, DC, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page">624</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Amplification of chosen-ciphertext security</title>
		<author>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tessaro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Cryptology EUROCRYPT 2013</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">Thomas</forename><surname>Johansson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Phong</forename><forename type="middle">Q</forename><surname>Nguyen</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">7881</biblScope>
			<biblScope unit="page" from="503" to="519" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A hardcore lemma for computational indistinguishability: Security amplification for arbitrarily weak prgs with optimal stretch</title>
		<author>
			<persName><forename type="first">U</forename><surname>Maurer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tessaro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Theory of Cryptography, TCC&apos;10</title>
				<meeting>the 7th International Conference on Theory of Cryptography, TCC&apos;10<address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="237" to="254" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Regularity, boosting, and efficiently simulating every high-entropy distribution</title>
		<author>
			<persName><forename type="first">L</forename><surname>Trevisan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tulsiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vadhan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2009 24th Annual IEEE Conference on Computational Complexity, CCC &apos;09</title>
				<meeting>the 2009 24th Annual IEEE Conference on Computational Complexity, CCC &apos;09<address><addrLine>Washington, DC, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="126" to="136" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Characterizing pseudoentropy and simplifying pseudorandom generator constructions</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vadhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Zheng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 44th symposium on Theory of Computing, STOC &apos;12</title>
				<meeting>the 44th symposium on Theory of Computing, STOC &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="817" to="836" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
