                         Generating Artificial Data for Private Deep Learning

                                              Aleksei Triastcyn and Boi Faltings
                                               Artificial Intelligence Laboratory
                                           Ecole Polytechnique Fédérale de Lausanne
                                                     Lausanne, Switzerland
                                     {aleksei.triastcyn, boi.faltings}@epfl.ch



                            Abstract

  In this paper, we propose generating artificial data that retain statistical properties of real data as the means of providing privacy for the original dataset. We use generative adversarial networks to draw privacy-preserving artificial data samples and derive an empirical method to assess the risk of information disclosure in a differential-privacy-like way. Our experiments show that we are able to generate labelled data of high quality and use it to successfully train and validate supervised models. Finally, we demonstrate that our approach significantly reduces vulnerability of such models to model inversion attacks.

Figure 1: Architecture of our solution. Sensitive data is used to train a GAN to produce a private artificial dataset, which then can be used by any ML model.
                     1     Introduction
Following recent advancements in deep learning, more and more people and companies get interested in putting their data to use and employing machine learning (ML) to generate a wide range of benefits that span financial, social, medical, security, and other aspects. At the same time, however, such models are able to capture a fine level of detail in training data, potentially compromising privacy of individuals whose features sharply differ from others. Recent research (Fredrikson, Jha, and Ristenpart 2015) suggests that even without access to internal model parameters it is possible to recover (up to a certain degree) individual examples, e.g. faces, from the training set.

The latter result is especially disturbing knowing that deep learning models are becoming an integral part of our lives, making their way to phones, smart watches, cars, and appliances. And since these models are often trained on customers' data, such training set recovery techniques endanger privacy even without access to the manufacturer's servers where these models are being trained.

One direction to tackle this problem is enforcing privacy during training (Abadi et al. 2016; Papernot et al. 2016; 2018). We will refer to these techniques as model release methods. While these approaches perform well in ML tasks and provide strong privacy guarantees, they are often restrictive. First and foremost, releasing a single trained model does not provide much flexibility in the future. For instance, it would significantly reduce possibilities for combining models trained on data from different sources. Evaluating a variety of such models and picking the best one is also complicated by the need to adjust private training for each of them. Moreover, most of these methods assume (implicitly or explicitly) access to public data of similar nature, which may not be possible in areas like medicine.

In contrast, we study the task of privacy-preserving data release, which has many immediate advantages. First, any ML model could be trained on released data without additional assumptions. Second, data from different sources could be easily pooled to build stronger models. Third, released data could be traded on data markets[1], where anonymisation and protection of sensitive information is one of the biggest obstacles. Finally, data publishing would facilitate transparency and reproducibility of research studies.

[1] https://www.datamakespossible.com/value-of-data-2018/dawn-of-data-marketplace

In particular, we are interested in solving two problems. First, how to preserve high utility of data for ML algorithms while protecting sensitive information in the dataset. Second, how to quantify the risk of recovering private information from the published dataset, and thus, the trained model.

The main idea of our approach is to use generative adversarial networks (GANs) (Goodfellow et al. 2014) to create artificial datasets to be used in place of real data for training. This method has a number of advantages over the earlier work (Abadi et al. 2016; Papernot et al. 2016; 2018; Bindschaedler, Shokri, and Gunter 2017). First of all, our solution allows releasing entire datasets, thereby possessing all the benefits of private data release as opposed to model release. Second, it achieves high accuracy without pre-training on similar public data. Third, it is more intuitive and flexible, e.g. it does not require a complex distributed architecture.
To estimate potential privacy risks, we design an ex post analysis framework for generated data. We use KL divergence estimation and Chebyshev's inequality to find statistical bounds on expected privacy loss for a dataset in question.

Our contributions in this paper are the following:

• we propose a novel, yet simple, approach for private data release, and to the best of our knowledge, this is the first practical solution for complex real-world data;

• we introduce a new framework for statistical estimation of potential privacy loss of the released data;

• we show that our method achieves the learning performance of model release methods and is resilient to model inversion attacks.

The rest of the paper is structured as follows. In Section 2, we give an overview of related work. Section 3 contains some preliminaries. In Section 4, we describe our approach and privacy estimation framework, and discuss its limitations. Experimental results and implementation details are presented in Section 5, and Section 6 concludes the paper.

                     2    Related Work

In recent years, as machine learning applications become commonplace, a body of work on security of these methods grows at a rapid pace. Several important vulnerabilities and corresponding attacks on ML models have been discovered, raising the need of devising suitable defences. Among the attacks that compromise privacy of training data, model inversion (Fredrikson, Jha, and Ristenpart 2015) and membership inference (Shokri et al. 2017) received high attention.

Model inversion (Fredrikson, Jha, and Ristenpart 2015) is based on observing the output probabilities of the target model for a given class and performing gradient descent on an input reconstruction. Membership inference (Shokri et al. 2017) assumes an attacker with access to similar data, which is used to train a "shadow" model, mimicking the target, and an attack model. The latter predicts if a certain example has already been seen during training based on its output probabilities. Note that both attacks can be performed in a black-box setting, without access to the model's internal parameters.

To protect privacy while still benefiting from the use of statistics and ML, many techniques have been developed over the years, including k-anonymity (Sweeney 2002), l-diversity (Machanavajjhala et al. 2007), t-closeness (Li, Li, and Venkatasubramanian 2007), and differential privacy (DP) (Dwork 2006). The latter has been recognised as a rigorous standard and is widely accepted by the research community. Its generic formulation, however, makes it hard to achieve and to quantify potential privacy loss of the already trained model. To overcome this, we build upon notions of empirical DP (Abowd, Schneider, and Vilhuber 2013) and on-average KL privacy (Wang, Lei, and Fienberg 2016).

Most of the ML-specific literature in the area concentrates on the task of privacy-preserving model release. One take on the problem is to distribute training and use disjoint datasets. For example, Shokri and Shmatikov (2015) propose to train a model in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses (Abadi et al. 2016; Papernot et al. 2016). An alternative technique suggested by Papernot et al. (2016) also uses disjoint training sets and builds an ensemble of independently trained teacher models to transfer knowledge to a student model by labelling public data. This result has been extended in (Papernot et al. 2018) to achieve state-of-the-art image classification results in a private setting (with single-digit DP bounds). A different approach is taken by Abadi et al. (2016). They suggest using differentially private stochastic gradient descent (DP-SGD) to train deep learning models in a private manner. This approach achieves high accuracy while maintaining low DP bounds, but may also require pre-training on public data.

A more recent line of research focuses on private data release and providing privacy via generating synthetic data (Bindschaedler, Shokri, and Gunter 2017; Huang et al. 2017; Beaulieu-Jones et al. 2017). In this scenario, DP is hard to guarantee, and thus, such models either relax the DP requirements or remain limited to simple data. In (Bindschaedler, Shokri, and Gunter 2017), the authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points, which are then filtered by a privacy test based on a plausible deniability criterion. This procedure would be rather expensive for complex data, such as images. Huang et al. (2017) introduce the notion of generative adversarial privacy and use GANs to obfuscate real data points w.r.t. pre-defined private attributes, enabling privacy for more realistic datasets. Finally, a natural approach to try is training GANs using DP-SGD (Beaulieu-Jones et al. 2017; Xie et al. 2018; Zhang, Ji, and Wang 2018). However, it proved extremely difficult to stabilise training with the necessary amount of noise, which scales as √m w.r.t. the number of model parameters m. This makes these methods inapplicable to more complex datasets without resorting to unrealistic (at least for some areas) assumptions, like access to public data from the same distribution.

Similarly, our approach uses GANs, but data is generated without real seeds or applying noise to gradients. Instead, we verify experimentally that out-of-the-box GAN samples can be sufficiently different from real data, and that expected privacy loss is empirically bounded by single-digit numbers.

                     3    Preliminaries

This section provides necessary definitions and background. Let us commence with approximate differential privacy.

Definition 1. A randomised function (mechanism) M : D → R with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d′ ∈ D and for any outcome o ∈ R the following holds:

    Pr[M(d) = o] ≤ e^ε Pr[M(d′) = o] + δ.    (1)

Definition 2. Privacy loss of a randomised mechanism M : D → R for inputs d, d′ ∈ D and outcome o ∈ R takes the following form:

    L(M(d)‖M(d′)) = log ( Pr[M(d) = o] / Pr[M(d′) = o] ).    (2)
Definition 3. The Gaussian noise mechanism achieving (ε, δ)-DP, for a function f : D → R^m, is defined as

    M(d) = f(d) + N(0, σ²),    (3)

where σ > C √(2 log(1.25/δ)) / ε and C is the L2-sensitivity of f.

For more details on differential privacy and the Gaussian mechanism, we refer the reader to (Dwork and Roth 2014).
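As a concrete numerical illustration of Definition 3 (our addition, with example values that do not appear in the paper), the sufficient noise scale can be computed directly; for C = 1, ε = 1 and δ = 10⁻⁵ it gives σ ≈ 4.84:

```python
import math

def gaussian_mechanism_sigma(epsilon, delta, l2_sensitivity=1.0):
    """Noise scale satisfying the condition of Definition 3:
    sigma > C * sqrt(2 * log(1.25 / delta)) / epsilon."""
    return l2_sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

print(gaussian_mechanism_sigma(epsilon=1.0, delta=1e-5))  # ~4.84
```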
In our privacy estimation framework, we also use some classical notions from probability and information theory.

Definition 4. The Kullback–Leibler (KL) divergence between two continuous probability distributions P and Q with corresponding densities p, q is given by:

    DKL(P‖Q) = ∫_{−∞}^{+∞} p(x) log( p(x) / q(x) ) dx.    (4)

Note that the KL divergence between the distributions of M(d) and M(d′) is nothing but the expectation of the privacy loss random variable, E[L(M(d)‖M(d′))].

Finally, Chebyshev's inequality is used to obtain tail bounds. In particular, as we expect the distribution to be asymmetric, we use the version with semi-variances (Berck and Hihn 1982) to get a sharper bound:

    Pr(x ≥ E[x] + kσ) ≤ (1/k²) (σ₊² / σ²),    (5)

where σ₊² = ∫_{E[x]}^{+∞} p(x)(x − E[x])² dx is the upper semi-variance.
                     4    Our Approach

In this section, we describe our solution, its further improvements, and provide details of the privacy estimation framework. We then discuss limitations of the method. More background on privacy can be found in (Dwork and Roth 2014).

The main idea of our approach is to use artificial data for learning and publishing instead of real data (see Figure 1 for a general workflow). The intuition behind it is the following. Since it is possible to recover training examples from ML models (Fredrikson, Jha, and Ristenpart 2015), we need to limit the exposure of real data during training. While this can be achieved by DP training (e.g. DP-SGD), it would have the limitations mentioned earlier. Moreover, certain attacks can still be successful if DP bounds are loose (Hitaj, Ateniese, and Pérez-Cruz 2017). Removing real data from the training process altogether would add another layer of protection and limit the information leakage to artificial samples. What remains to show is that artificial data is sufficiently different from real data.

4.1   Differentially Private Critic

Despite the fact that the generator does not have access to real data in the training process, one cannot guarantee that generated samples will not repeat the input. To alleviate this problem, we propose to enforce differential privacy on the output of the discriminator (critic). This is done by employing the Gaussian noise mechanism (Dwork and Roth 2014) at the second-to-last layer: clipping the L2 norm of the input and adding Gaussian noise. To be more specific, activations a(x) of the second-to-last layer become ã(x) = a(x) / max(‖a(x)‖₂, 1) + N(0, σ²). We refer to this version of the critic as the DP critic.
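A minimal PyTorch sketch of this DP layer (our illustration, not the authors' released code; the module name is hypothetical and the default σ follows the value reported in Section 5.2):

```python
import torch
import torch.nn as nn

class DPLayer(nn.Module):
    """Gaussian noise mechanism on the critic's second-to-last layer:
    a_tilde(x) = a(x) / max(||a(x)||_2 / C, 1) + N(0, sigma^2)."""

    def __init__(self, sigma=1.5, clip_norm=1.0):
        super().__init__()
        self.sigma = sigma
        self.clip_norm = clip_norm

    def forward(self, a):
        # Per-example L2 norm over the feature dimension.
        norms = a.norm(p=2, dim=1, keepdim=True)
        a = a * self.clip_norm / torch.clamp(norms, min=self.clip_norm)
        return a + self.sigma * torch.randn_like(a)
```

In the critic described in Section 5.2, such a layer would sit between the d-dimensional feature vector and the final linear classification layer.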
Note that if the chosen GAN loss function were directly differentiable w.r.t. the generator output, i.e. if the critic could be treated as a black box, this modification would enforce the same DP guarantees on the generator parameters, and consequently, on all generated samples. Unfortunately, this is not the case for practically all existing versions of GANs, including WGAN-GP (Gulrajani et al. 2017), used in our experiments.

As our evaluation shows, this modification has a number of advantages. First, it improves diversity of samples and decreases similarity with real data. Second, it allows us to prolong training, and hence, obtain higher quality samples. Finally, in our experiments, it significantly improves the ability of GANs to generate samples conditionally.

4.2   Privacy Estimation Framework

Our framework builds upon ideas of empirical DP (EDP) (Abowd, Schneider, and Vilhuber 2013; Schneider and Abowd 2015) and on-average KL privacy (Wang, Lei, and Fienberg 2016). The first can be viewed as a measure of sensitivity on posterior distributions of outcomes (Charest and Hou 2017) (in our case, generated data distributions), while the second relaxes the DP notion to the case of an average user.

As we don't have access to exact posterior distributions, a straightforward EDP procedure in our scenario would be the following: (1) train a GAN on the original dataset D; (2) remove a random sample from D; (3) re-train the GAN on the updated set; (4) estimate probabilities of all outcomes and the maximum privacy loss value; (5) repeat (1)–(4) sufficiently many times to approximate ε and δ.

If the generative model is simple, this procedure can be used without modification. Otherwise, for models like GANs, it becomes prohibitively expensive due to repetitive re-training (steps (1)–(3)). Another obstacle is estimating the maximum privacy loss value (step (4)). To overcome these two issues, we propose the following.

First, to avoid re-training, we imitate the removal of examples directly on the generated set D̃. We define a similarity metric sim(x, y) between two data points x and y that reflects important characteristics of data (see Section 5 for details). For every randomly selected real example i, we remove the k nearest artificial neighbours to simulate the absence of this example in the training set and obtain D̃−i. Our intuition behind this operation is the following. Removing a real example would result in a lower probability density in the corresponding region of space. If this change is picked up by a GAN, which we assume is properly trained (e.g. there is no mode collapse), the density of this region in the generated examples space should also decrease. The number of neighbours k is a hyper-parameter. In our experiments, it is chosen heuristically by computing the KL divergence between the real and artificial data distributions and assuming that all the difference comes from one point.

Second, we propose to relax the worst-case privacy loss bound in step (4) to the expected-case bound, in the same manner as on-average KL privacy. This relaxation allows us to use a high-dimensional KL divergence estimator (Pérez-Cruz 2008) to obtain the expected privacy loss for every pair of adjacent datasets (D̃ and D̃−i). There are two major advantages of this estimator: it converges almost surely to the true value of the KL divergence; and it does not require intermediate density estimates to converge to the true probability measures. Also, since this estimator uses nearest neighbours to approximate the KL divergence, our heuristic described above is naturally linked to the estimation method.

Finally, after obtaining sufficiently many samples of different pairs (D̃, D̃−i), we use Chebyshev's inequality to bound the probability γ = Pr(E[L(M(D)‖M(D′))] ≥ µ) of the expected privacy loss (Dwork and Rothblum 2016) exceeding a predefined threshold µ. To deal with the problem of insufficiently many samples, one could use a sample version of the inequality (Saw, Yang, and Mo 1984) at the cost of looser bounds.
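To make the procedure concrete, the following sketch (our illustration under stated assumptions, not the authors' code) wires the steps above together; sim is the similarity metric of Section 5 and kl_estimate is a nearest-neighbour KL divergence estimator such as the one sketched in Section 5.2:

```python
import numpy as np

def expected_privacy_loss_samples(real_feats, fake_feats, sim, kl_estimate,
                                  n_trials=100, k=5):
    """For randomly chosen real examples, drop the k most similar artificial
    examples (imitating D~_{-i}) and estimate the expected privacy loss as the
    KL divergence between the full and the reduced artificial sets."""
    losses = []
    for i in np.random.choice(len(real_feats), size=n_trials, replace=False):
        sims = np.array([sim(real_feats[i], x) for x in fake_feats])
        keep = np.argsort(sims)[:-k]   # drop the k nearest (most similar) neighbours
        losses.append(kl_estimate(fake_feats, fake_feats[keep]))
    return np.array(losses)

def chebyshev_gamma(losses, mu):
    """Semi-variance Chebyshev bound (Eq. 5) on gamma = Pr(E[privacy loss] >= mu);
    assumes mu lies above the sample mean."""
    mean, var = losses.mean(), losses.var()
    upper_semi_var = np.mean(np.clip(losses - mean, 0.0, None) ** 2)
    k = (mu - mean) / np.sqrt(var)
    return upper_semi_var / (k ** 2 * var)
```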
4.3   Limitations

Our empirical privacy estimator could be improved in a number of ways. For instance, providing worst-case privacy loss bounds would be largely beneficial. Furthermore, simulating the removal of training examples currently depends on heuristics and the chosen similarity metric, which may not lead to representative samples and may therefore yield poor guarantees.

We provide bounds on expected privacy loss based on ex post analysis of the artificial dataset, which is not equivalent to the traditional formulation of DP and has certain limitations (Charest and Hou 2017) (e.g. it only concerns a given dataset). Nevertheless, it may be useful in situations where strict privacy guarantees are not required or cannot be achieved by existing methods, or when one wants to get a better idea about expected privacy loss rather than the highly unlikely worst case.

Lastly, all existing limitations of GANs (or generative models in general), such as training instability or mode collapse, will apply to this method. Hence, at the current state of the field, our approach may be difficult to adapt to inputs other than image data. Yet, there is still a number of privacy-sensitive applications, e.g. medical imaging or facial analysis, that could benefit from our technique. And as generative methods progress, new uses will be possible.

                       5    Evaluation

In this section, we describe the experimental setup and implementation, and evaluate our method on the MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011), and CelebA (Liu et al. 2015) datasets.

Table 1: Accuracy of student models for the non-private baseline, PATE (Papernot et al. 2016), and our method.

    Dataset    Non-private    PATE     Our approach
    MNIST      99.2%          98.0%    98.3%
    SVHN       92.8%          82.7%    87.7%

Table 2: Empirical privacy parameters: expected privacy loss bound µ and probability γ of exceeding it.

    Dataset    Method                   µ        γ
    MNIST      WGAN-GP                  5.80     10⁻⁵
    MNIST      WGAN-GP (DP critic)      5.36     10⁻⁵
    SVHN       WGAN-GP                 13.16     10⁻⁵
    SVHN       WGAN-GP (DP critic)      4.92     10⁻⁵
    CelebA     WGAN-GP                  6.27     10⁻⁵
    CelebA     WGAN-GP (DP critic)      4.15     10⁻⁵

5.1   Experimental Setting

We evaluate our method in two major ways. First, we show that not only is it feasible to train ML models purely on generated data, but it is also possible to achieve high learning performance (Section 5.3). Second, we compute empirical bounds on expected privacy loss and evaluate the effectiveness of artificial data against model inversion attacks (Section 5.4).

Learning performance experiments are set up as follows:

1. Train a generative model (teacher) on the original dataset using only the training split.

2. Generate an artificial dataset with the obtained model and use it to train ML models (students).

3. Evaluate students on a held-out test set.

Note that there is no dependency between teacher and student models. Moreover, student models are not constrained to neural networks and can be implemented as any type of machine learning algorithm.
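A compact sketch of this teacher/student pipeline (every callable below is a hypothetical placeholder; the concrete models are described in Section 5.2):

```python
def run_experiment(real_train, real_test, train_gan, train_student, evaluate):
    """Steps 1-3 of the setup above; any classifier can play the student role."""
    teacher = train_gan(real_train)                  # 1. train the generative model on real data
    artificial = teacher.sample(n=len(real_train))   # 2. draw a labelled artificial dataset
    student = train_student(artificial)              #    the student never sees real training data
    return evaluate(student, real_test)              # 3. accuracy on the held-out real test set
```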
We choose three commonly used image datasets for our experiments: MNIST, SVHN, and CelebA. MNIST is a handwritten digit recognition dataset consisting of 60000 training examples and 10000 test examples; each example is a 28x28 greyscale image. SVHN is also a digit recognition task, with 73257 images for training and 26032 for testing. The examples are coloured 32x32 pixel images of house numbers from Google Street View. CelebA is a facial attributes dataset with 202599 images, each of which we crop to 128x128 and then downscale to 48x48.

5.2   Implementation Details

For our experiments, we use Python and the PyTorch framework[2]. We implement, with some minor modifications, a Wasserstein GAN with gradient penalty (WGAN-GP) by Gulrajani et al. (2017). More specifically, the critic consists of four convolutional layers with SELU (Klambauer et al. 2017) activations (instead of ReLU) followed by a fully connected linear layer which outputs a d-dimensional feature vector (d = 64). For the DP critic, we implement the Gaussian noise mechanism (Dwork and Roth 2014) by clipping the L2-norm of this feature vector to C = 1 and adding Gaussian noise with σ = 1.5 (we refer to it as the DP layer). Finally, it is passed through a linear classification layer. The generator starts with a fully connected linear layer that transforms noise and labels into a 4096-dimensional feature vector which is then passed through a SELU activation and three deconvolution layers with SELU activations. The output of the third deconvolution layer is downsampled by max pooling and normalised with a tanh activation function.

[2] http://pytorch.org

Similarly to the original paper, we use a classical WGAN value function with the gradient penalty that enforces a Lipschitz constraint on the critic. We also set the penalty parameter λ = 10 and the number of critic iterations ncritic = 5. Furthermore, we modify the architecture to allow for conditioning the WGAN on class labels. Binarised labels are appended to the input of the generator and to the linear layer of the critic after convolutions. Therefore, the generator can be used to create labelled datasets for supervised learning.

Both networks are trained using Adam (Kingma and Ba 2015) with learning rate 10⁻⁴, β1 = 0, β2 = 0.9, and a batch size of 64.

The student network is constructed of two convolutional layers with ReLU activations, batch normalisation and max pooling, followed by two fully connected layers with ReLU, and a softmax output layer. Note that this network does not achieve state-of-the-art performance on the used datasets, but we are primarily interested in evaluating the relative performance drop compared to a non-private model.
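A PyTorch sketch of a student classifier matching this description (filter counts and hidden sizes are our assumptions, since they are not specified in the text):

```python
import torch.nn as nn

def make_student(in_channels=1, n_classes=10):
    """Two conv blocks (ReLU, batch norm, max pooling), two fully connected
    layers with ReLU, and a softmax output, as described above."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.BatchNorm2d(32), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.BatchNorm2d(64), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(256), nn.ReLU(),
        nn.Linear(256, n_classes),
        nn.Softmax(dim=1),  # drop this layer if training with CrossEntropyLoss
    )
```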
To estimate privacy loss, we carry out the procedure presented in Section 4. Specifically, based on recent ideas in image quality evaluation, e.g. FID and Inception Score, we compute image features with the Inception V3 network (Szegedy et al. 2016) and use inverse distances between features as the sim function. We implement the KL divergence estimator (Pérez-Cruz 2008) and use k-d trees (Bentley 1975) for fast nearest neighbour searches. For privacy evaluation, we implement the model inversion attack.
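A sketch of such a nearest-neighbour KL divergence estimator built on k-d trees (our illustration of the general 1-NN formula in the spirit of Pérez-Cruz 2008, not the exact implementation used in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def kl_divergence_nn(p_samples, q_samples):
    """1-nearest-neighbour estimate of KL(P || Q) from samples:
    D ~= (d/n) * sum_i log(nu_i / rho_i) + log(m / (n - 1)),
    where rho_i is the distance from x_i to its nearest neighbour among the
    other P samples and nu_i is its nearest-neighbour distance in Q."""
    p_samples = np.asarray(p_samples, dtype=float)
    q_samples = np.asarray(q_samples, dtype=float)
    n, d = p_samples.shape
    m = q_samples.shape[0]
    rho = cKDTree(p_samples).query(p_samples, k=2)[0][:, 1]  # k=2 skips the point itself
    nu = cKDTree(q_samples).query(p_samples, k=1)[0]
    eps = 1e-12  # guard against zero distances for duplicated points
    return d * np.mean(np.log((nu + eps) / (rho + eps))) + np.log(m / (n - 1))
```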


5.3   Learning Performance

First, we evaluate the generalisation ability of a student model trained on artificial data. More specifically, we train a student model on generated data and report test classification accuracy on a held-out real set.

As noted above, most of the work on privacy-preserving ML focuses on model release methods and assumes (explicitly or implicitly) access to similar "public" data in one form or another (Abadi et al. 2016; Papernot et al. 2016; 2018; Zhang, Ji, and Wang 2018). On the other hand, existing data release solutions struggle with high-dimensional data (Zhu et al. 2017). This limits the choice of methods for comparison.

We chose to compare learning performance with the current state-of-the-art model release technique, PATE by Papernot et al. (2018), which uses a relatively small set of unlabelled "public" data. Since our approach does not require any "public" data, in order to make the evaluation more appropriate, we pick the results of PATE corresponding to the least number of labelling queries.

Table 1 shows test accuracy for the non-private baseline model (trained on the real training set), PATE, and our method. We observe that artificial data allows us to achieve 98.3% accuracy on MNIST and 87.7% accuracy on SVHN, which is comparable to or better than the corresponding results of PATE. These results demonstrate that our approach does not compromise learning performance, and may even improve it, while enabling the full flexibility of data release methods.

Additionally, we train a simple logistic regression model on artificial MNIST samples, and obtain 91.69% accuracy, compared to 92.58% on the original data, confirming that student models are not restricted to a specific type.

Furthermore, we observe that one could use artificial data for validation and hyper-parameter tuning. In our experiments, correlation coefficients between real and artificial validation losses range from 0.7197 to 0.9972 for MNIST and from 0.8047 to 0.9810 for SVHN.

Figure 2: Results of the model inversion attack. Top to bottom: real target images, reconstructions from the non-private model, our method, and the DP model.

Figure 3: Privacy-accuracy trade-off curve and corresponding image reconstructions from a multi-layer perceptron trained on the artificial MNIST dataset.

Table 3: Face detection and recognition rates (pairs with distances below 0.99) for non-private, our method, and DP.

                   Non-private    Our approach    DP
    Detection      63.6%          1.3%            0.0%
    Recognition    11.0%          0.3%            −
5.4   Privacy Analysis

Using the privacy estimation framework (see Section 4), we fix the probability γ of exceeding the expected privacy loss bound µ in all experiments to 10⁻⁵ and compute the corresponding µ for each dataset and two versions of WGAN-GP (vanilla and with DP critic). Table 2 summarises our findings. It is worth noting that our µ should not be viewed as an empirical estimation of the ε of DP, since the former bounds the expected privacy loss while the latter bounds the maximum. These two quantities, however, turn out in our experiments to be similar to deep learning DP bounds found in recent literature (Abadi et al. 2016; Papernot et al. 2018). This may be explained by tight concentration of the privacy loss random variable (Dwork and Rothblum 2016) or by loose estimation. Additionally, the DP critic helps to bring down µ values in all cases.

The lack of theoretical privacy guarantees for our method necessitates assessing the strength of the provided protection. We perform this evaluation by running the model inversion attack (Fredrikson, Jha, and Ristenpart 2015) on a student model. Note that we also experimented with another well-known attack on machine learning models, membership inference (Shokri et al. 2017). However, we did not include it in the final evaluation because of the attacker's poor performance in our setting (nearly random-guess accuracy for the given datasets and models even without any protection).

In order to run the attack, we train a student model (a simple multi-layer perceptron with two hidden layers of 1000 and 300 neurons) in three settings: real data, artificial data generated by the GAN (with DP critic), and real data with differential privacy (using DP-SGD with a small ε < 1). As facial recognition is a more privacy-sensitive application, and provides a better visualisation of the attack, we picked the CelebA attribute prediction task to run this experiment.
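A condensed sketch of the model inversion attack we run (gradient descent on the input that maximises the target class probability, following Fredrikson, Jha, and Ristenpart 2015; the step size, iteration count, and initialisation here are illustrative choices):

```python
import torch

def model_inversion(student, target_class, input_shape, steps=1000, lr=0.1):
    """Reconstruct a representative input for target_class by minimising
    1 - p(target_class | x) with respect to the input x."""
    x = torch.zeros(1, *input_shape, requires_grad=True)
    optimiser = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        loss = 1.0 - student(x)[0, target_class]
        loss.backward()
        optimiser.step()
        x.data.clamp_(0.0, 1.0)  # keep the reconstruction a valid image
    return x.detach()
```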
Figure 2 shows the results of the model inversion attack. The top row presents the real target images. The following rows depict reconstructed images from a non-private model, a model trained on GAN samples, and a DP model, correspondingly. One can observe a clear information loss in reconstructed images going from the non-private model, to artificial data, to DP. The latter is superior in decoupling the model and the training data, and is a preferred choice in the model release setting and/or if public data is accessible for pre-training. The non-private model, albeit trained with abundant data (∼200K images), reveals facial features, such as skin and hair colour, expression, etc. Our method, despite failing to conceal general shapes in training images (i.e. faces), seems to achieve a trade-off, hiding most of the specific features. The obtained reconstructions are either very noisy (columns 1, 2, 6, 8), much like DP, or converge to some average feature-less faces (columns 4, 5, 7).

We also analyse real and reconstructed image pairs using OpenFace (Amos et al. 2016) (see Table 3). It confirms our initial findings: in images reconstructed from a non-private model, faces were detected (recognised) 63.6% (11%) of the time, while for our method, detection succeeded only in 1.3% of cases and the recognition rate was 0.3%, well within state-of-the-art error margins. For DP, both rates were at 0%.

To evaluate our privacy estimation method, we look at how the privacy loss bound µ correlates with the success of the attack. Figure 3 depicts the privacy-accuracy trade-off curve for an MLP (64-32-10) trained on artificial data. In this setting, we use a stacked denoising autoencoder to compress images to 64-dimensional feature vectors and facilitate the attack performance. Along the curve, we plot examples of the model inversion reconstruction at corresponding points. We see that with growing µ, meaning lower privacy, both model accuracy and reconstruction quality increase.

Finally, as an additional measure, we perform visual inspection of generated examples and corresponding nearest neighbours in real data. Figures 4 and 5 depict generated and the corresponding most similar real images from the SVHN and CelebA datasets. We observe that, despite general visual similarity, generated images differ from real examples in details, which is normally more important for privacy. For SVHN, digits vary either in shape, colour or surroundings. A lot of pairs come from different classes. For CelebA, the pose and lighting may be similar, but such details as gender, skin colour, and facial features are usually significantly different.

Figure 4: Generated (a) and closest real (b) examples for SVHN.

Figure 5: Generated (a) and closest real (b) examples for CelebA.
                     6    Conclusions

We investigate the problem of private data release for complex high-dimensional data. In contrast to the commonly studied model release setting, this approach enables important advantages and applications, such as data pooling from multiple sources, a simpler development process, and data trading.

We employ generative adversarial networks to produce artificial privacy-preserving datasets. The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure. Unlike many prior approaches, our method does not assume access to similar publicly available data. In our experiments, we show that student models trained on artificial data can achieve high accuracy on the MNIST and SVHN datasets. Moreover, models can also be validated on artificial data.

We propose a novel technique for estimating the privacy of released data via empirical bounds on expected privacy loss. We compute privacy bounds for samples from WGAN-GP on MNIST, SVHN, and CelebA, and demonstrate that the expected privacy loss is bounded by single-digit values. To evaluate the provided protection, we run a model inversion attack and show that training with a GAN reduces information leakage (e.g. face detection drops from 63.6% to 1.3%) and that attack success correlates with the estimated privacy bounds.

Additionally, we introduce a simple modification to the critic: a differential privacy layer. Not only does it improve privacy loss bounds and ensure DP guarantees for the critic output, but it also acts as a regulariser, improving stability of training and the quality and diversity of generated images.

Considering the rising importance of privacy research and the lack of good solutions for private data publishing, there is a lot of potential future work. In particular, a major direction for advancing the current work would be achieving differential privacy guarantees for generative models while still preserving high utility of generated data. A step in another direction would be to improve the privacy estimation framework, e.g. by bounding maximum privacy loss, or finding a more principled way of sampling from outcome distributions.
                          References
Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.; and Zhang, L. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–318. ACM.
Abowd, J. M.; Schneider, M. J.; and Vilhuber, L. 2013. Differential privacy applications to bayesian and linear mixed model estimation. Journal of Privacy and Confidentiality 5(1):4.
Amos, B.; Ludwiczuk, B.; Satyanarayanan, M.; et al. 2016. Openface: A general-purpose face recognition library with mobile applications.
Beaulieu-Jones, B. K.; Wu, Z. S.; Williams, C.; and Greene, C. S. 2017. Privacy-preserving generative deep neural networks support clinical data sharing. bioRxiv 159756.
Bentley, J. L. 1975. Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9):509–517.
Berck, P., and Hihn, J. M. 1982. Using the semivariance to estimate safety-first rules. American Journal of Agricultural Economics 64(2):298–300.
Bindschaedler, V.; Shokri, R.; and Gunter, C. A. 2017. Plausible deniability for privacy-preserving data synthesis. Proceedings of the VLDB Endowment 10(5).
Charest, A.-S., and Hou, Y. 2017. On the meaning and limits of empirical differential privacy. Journal of Privacy and Confidentiality 7(3):3.
Dwork, C., and Roth, A. 2014. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 9(3–4):211–407.
Dwork, C., and Rothblum, G. N. 2016. Concentrated differential privacy. arXiv preprint arXiv:1603.01887.
Dwork, C. 2006. Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), volume 4052, 1–12. Venice, Italy: Springer Verlag.
Fredrikson, M.; Jha, S.; and Ristenpart, T. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 1322–1333. ACM.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2672–2680.
Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; and Courville, A. C. 2017. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, 5769–5779.
Hitaj, B.; Ateniese, G.; and Pérez-Cruz, F. 2017. Deep models under the gan: information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 603–618. ACM.
Huang, C.; Kairouz, P.; Chen, X.; Sankar, L.; and Rajagopal, R. 2017. Context-aware generative adversarial privacy. Entropy 19(12):656.
Kingma, D., and Ba, J. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations.
Klambauer, G.; Unterthiner, T.; Mayr, A.; and Hochreiter, S. 2017. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, 972–981.
LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.
Li, N.; Li, T.; and Venkatasubramanian, S. 2007. t-closeness: Privacy beyond k-anonymity and l-diversity. In Data Engineering, 2007. ICDE 2007. IEEE 23rd International Conference on, 106–115. IEEE.
Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV).
Machanavajjhala, A.; Kifer, D.; Gehrke, J.; and Venkita-
subramaniam, M. 2007. l-diversity: Privacy beyond k-
anonymity. ACM Transactions on Knowledge Discovery
from Data (TKDD) 1(1):3.
Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and
Ng, A. Y. 2011. Reading digits in natural images with unsu-
pervised feature learning. In NIPS workshop on deep learn-
ing and unsupervised feature learning, volume 2011, 5.
Papernot, N.; Abadi, M.; Erlingsson, Ú.; Goodfellow, I.; and
Talwar, K. 2016. Semi-supervised knowledge transfer for
deep learning from private training data. arXiv preprint
arXiv:1610.05755.
Papernot, N.; Song, S.; Mironov, I.; Raghunathan, A.; Tal-
war, K.; and Erlingsson, Ú. 2018. Scalable private learning
with pate. arXiv preprint arXiv:1802.08908.
Pérez-Cruz, F. 2008. Kullback-leibler divergence estimation
of continuous distributions. In Information Theory, 2008.
ISIT 2008. IEEE International Symposium on, 1666–1670.
IEEE.
Saw, J. G.; Yang, M. C.; and Mo, T. C. 1984. Chebyshev
inequality with estimated mean and variance. The American
Statistician 38(2):130–132.
Schneider, M. J., and Abowd, J. M. 2015. A new method for
protecting interrelated time series with bayesian prior distri-
butions and synthetic data. Journal of the Royal Statistical
Society: Series A (Statistics in Society) 178(4):963–975.
Shokri, R., and Shmatikov, V. 2015. Privacy-preserving
deep learning. In Proceedings of the 22nd ACM SIGSAC
conference on computer and communications security,
1310–1321. ACM.
Shokri, R.; Stronati, M.; Song, C.; and Shmatikov, V. 2017.
Membership inference attacks against machine learning
models. In Security and Privacy (SP), 2017 IEEE Sympo-
sium on, 3–18. IEEE.
Sweeney, L. 2002. K-anonymity: A model for protecting
privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst.
10(5):557–570.
Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna,
Z. 2016. Rethinking the inception architecture for computer
vision. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 2818–2826.
Wang, Y.-X.; Lei, J.; and Fienberg, S. E. 2016. On-average
kl-privacy and its equivalence to generalization for max-
entropy mechanisms. In International Conference on Pri-
vacy in Statistical Databases, 121–134. Springer.
Xie, L.; Lin, K.; Wang, S.; Wang, F.; and Zhou, J. 2018.
Differentially private generative adversarial network. arXiv
preprint arXiv:1802.06739.
Zhang, X.; Ji, S.; and Wang, T. 2018. Differentially pri-
vate releasing via deep generative model. arXiv preprint
arXiv:1801.01594.
Zhu, T.; Li, G.; Zhou, W.; and Philip, S. Y. 2017. Dif-
ferentially private data publishing and analysis: a survey.
IEEE Transactions on Knowledge and Data Engineering
29(8):1619–1638.