Perfectly Privacy-Preserving AI
What is it and how do we achieve it?

Patricia Thaine, University of Toronto, pthaine@cs.toronto.edu
Gerald Penn, University of Toronto, gpenn@cs.toronto.edu

ABSTRACT
Many AI applications need to process huge amounts of sensitive
information for model training, evaluation, and real-world integra-
tion. These tasks include facial recognition, speaker recognition,
text processing, and genomic data analysis. Unfortunately, one of
the following two scenarios occurs when training models to perform
the aforementioned tasks: either models end up being trained on
sensitive user information, making them vulnerable to malicious
actors, or their evaluations are not representative of their abilities
since the scope of the test set is limited. In some cases, the models
never get created in the first place.
   There are a number of approaches that can be integrated into AI
algorithms in order to maintain various levels of privacy, namely
differential privacy, secure multi-party computation, homomorphic
encryption, federated learning, secure enclaves, and automatic data
de-identification. We will briefly explain each of these methods and
describe the scenarios in which they would be most appropriate.
   Recently, several of these methods have been applied to machine
learning models. We will cover some of the most interesting examples
of privacy-preserving ML, including the integration of differential
privacy with neural networks to avoid unwanted inferences from being
made about a network’s training data.
   Finally, we will discuss how the privacy-preserving machine
learning approaches that have been proposed so far would need to be
combined in order to achieve perfectly privacy-preserving machine
learning.

1 MOTIVATION
Data privacy has been called “the most important issue in the next
decade,”1 and has taken center stage thanks to legislation like the
European Union’s General Data Protection Regulation (GDPR) and the
California Consumer Privacy Act (CCPA). Companies, developers, and
researchers are scrambling to keep up with the requirements2. In
particular, “Privacy by Design”3 is integral to the GDPR and will
likely only gain in popularity this decade. When privacy-preserving
techniques are used, legislation suddenly becomes less daunting, as
does ensuring data security, which is central to maintaining user
trust. Data privacy is a central issue in training and testing AI
models, especially ones that train and infer on sensitive data. Yet,
to our knowledge, there have been no guides published regarding what
it means to have perfectly privacy-preserving AI. We introduce the
four pillars required to achieve perfectly privacy-preserving AI and
discuss various technologies that can help address each of the
pillars. We back our claims up with relatively new research in the
quickly growing subfield of privacy-preserving machine learning.

1 https://www.forbes.com/sites/marymeehan/2019/11/26/data-privacy-will-be-the-most-important-issue-in-the-next-decade/#3211e2821882
2 https://www.theverge.com/2019/12/31/21039228/california-ccpa-facebook-microsoft-gdpr-privacy-law-consumer-data-regulation
3 https://www.ipc.on.ca/wp-content/uploads/resources/7foundationalprinciples.pdf

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). PrivateNLP ’20, February 7, 2020, Houston, TX, USA.

[Figure 1: The Four Pillars of perfectly privacy-preserving AI.]

2 THE FOUR PILLARS OF PERFECTLY PRIVACY-PRESERVING AI
During our research, we identified four pillars of privacy-preserving
machine learning (Figure 1). These are:
   (1) Training Data Privacy: The guarantee that a malicious actor
       will not be able to reverse-engineer the training data.
   (2) Input Privacy: The guarantee that a user’s input data cannot
       be observed by other parties, including the model creator.
   (3) Output Privacy: The guarantee that the output of a model is
       not visible to anyone except for the user whose data is being
       inferred upon.
   (4) Model Privacy: The guarantee that the model cannot be stolen
       by a malicious party.
   While 1–3 deal with protecting data creators, 4 is meant to
protect the model creator.

3 TRAINING DATA PRIVACY
While it may be slightly more difficult to gather information about
training data and model weights than it is from plaintext (the
technical term for unencrypted) input and output data, recent
research has demonstrated that reconstructing training data and
reverse-engineering models is not as huge a challenge as one would
hope.

Evidence

   In [? ], Carlini and Wagner calculate just how quickly generative
sequence models (e.g., character language models) can memorize rare
information within a training set. They train a character language
model on 5% of the Penn Treebank Dataset (PTD) into which a “secret”
has been inserted exactly once: “the random number is ooooooooo”,
where ooooooooo is meant to be a (fake) social security number. They
then calculate the network’s amount of memorization and track the
exposure of the hidden secret. Memorization peaks when the test set
loss is lowest, and this coincides with the peak exposure of the
secret.

Metrics

   So how can we quantify how likely it is that a secret can be
reverse-engineered from model outputs? [? ] develops a metric known
as exposure:

   exposure_θ(s[r]) = log₂(|R|) − log₂(rank_θ(s[r]))

   Given a canary s[r], a model with parameters θ, and the randomness
space R, the exposure of s[r] is the quantity above: the log of the
size of the randomness space minus the log of the canary’s rank,
where the rank is the index at which the true secret (or canary)
appears among all possible secrets, ordered by the model’s perplexity
on them. The smaller the rank, the greater the likelihood that the
sequence appears in the training data, so the goal is to minimize the
exposure of a secret, which is something that Carlini and Wagner
achieve by using differentially private gradient descent (see
Solutions below).
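   To make the metric concrete, here is a minimal Python sketch of
how exposure could be computed once a model can assign a perplexity
to every candidate canary in the randomness space; the candidate set
and perplexity values are toy stand-ins, not data or code from [? ].

import math

def exposure(candidate_perplexities, true_secret):
    # Rank candidates from most likely (lowest perplexity) to least likely.
    ranked = sorted(candidate_perplexities, key=candidate_perplexities.get)
    rank = ranked.index(true_secret) + 1   # 1-indexed rank of the true canary
    r_size = len(candidate_perplexities)   # |R|, the size of the randomness space
    return math.log2(r_size) - math.log2(rank)

# Toy usage: four candidate "secrets"; the model is most confident about the
# true one, so its rank is 1 and its exposure is maximal (log2 |R| = 2).
perplexities = {"0000": 12.1, "1234": 3.2, "9999": 15.8, "5555": 14.0}
print(exposure(perplexities, "1234"))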
   Another exposure metric is presented in [? ], in which the authors
calculate how much information can be leaked from a latent
representation of private data sent over an insecure channel. While
that paper falls more into the category of input data privacy
analysis, it is still worth comparing the metric it proposes with the
one presented in [? ]. In fact, the authors propose two privacy
metrics: one for demographic variables (relevant to tasks such as
sentiment analysis and blog post topic classification), and one for
named entities (relevant to tasks such as news topic classification).
Their privacy metrics are (a small illustration follows the list):
   (1) Demographic variables: “1 − X, where X is the average of the
       accuracy of the attacker on the prediction of gender and age,”
   (2) Named entities: “1 − F, where F is an F-score computed over
       the set of binary variables in z that indicate the presence of
       named entities in the input example,” where “z is a vector of
       private information contained in a [natural language text].”
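   As a small illustration of these two scores, here is a sketch that
computes them from a hypothetical attacker’s predictions; the toy
labels and the simple accuracy and F-score helpers are illustrative
assumptions, not code from [? ].

def attacker_accuracy(preds, truth):
    # Fraction of private attributes (e.g., gender, age) the attacker guesses correctly.
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def f_score(pred_flags, true_flags):
    # F-score over binary indicators of which named entities appear in the input.
    tp = sum(p and t for p, t in zip(pred_flags, true_flags))
    fp = sum(p and not t for p, t in zip(pred_flags, true_flags))
    fn = sum(t and not p for p, t in zip(pred_flags, true_flags))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Demographic-variable privacy: 1 - X, with X the mean attacker accuracy on gender and age.
gender_acc = attacker_accuracy(["f", "m", "f"], ["f", "f", "f"])
age_acc = attacker_accuracy(["20s", "30s", "20s"], ["30s", "30s", "20s"])
demographic_privacy = 1 - (gender_acc + age_acc) / 2

# Named-entity privacy: 1 - F, with z a vector of entity-presence indicators.
named_entity_privacy = 1 - f_score([1, 0, 1, 1], [1, 1, 0, 1])
print(demographic_privacy, named_entity_privacy)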
   When looking at the evidence, it is important to keep in mind that
this subfield of AI (privacy-preserving AI) is brand new, so there
are likely a lot of potential exploits that either have not been
analyzed or have not even been thought of yet.

Solutions

   There are two main proposed solutions for the problem of training
data memorization, which not only guarantee privacy but also improve
the generalizability of machine learning models (a sketch of each
follows the list):
   (1) Differentially Private Stochastic Gradient Descent (DPSGD)
       [? ? ]: While differential privacy was originally created to
       allow one to make generalizations about a dataset without
       revealing any personal information about any individual within
       the dataset, the theory has been adapted to preserve training
       data privacy within deep learning systems.
   (2) Papernot’s PATE [? ]: Professor Papernot created PATE as a
       more intuitive alternative to DPSGD. PATE can be thought of as
       an ensemble approach and works by training multiple models on
       iid subsets of the dataset. At inference, if the majority of
       the models agree on the output, then the output doesn’t reveal
       any private information about the training data and can
       therefore be shared.
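   To illustrate the core idea behind DPSGD, namely clipping each
per-example gradient and adding Gaussian noise before every update,
here is a minimal NumPy sketch. The logistic-regression loss, the
hyperparameter values, and the random data are hypothetical
placeholders, not the reference implementation from [? ? ].

import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DPSGD step: clip per-example gradients, average, add Gaussian noise."""
    clipped = []
    for x, y in zip(X_batch, y_batch):
        pred = 1.0 / (1.0 + np.exp(-x @ w))     # sigmoid prediction
        g = (pred - y) * x                      # per-example log-loss gradient
        clipped.append(g / max(1.0, np.linalg.norm(g) / clip_norm))  # bound each example's influence
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(X_batch), size=w.shape)
    return w - lr * (np.mean(clipped, axis=0) + noise)

# Toy usage on random data.
X, y = rng.normal(size=(32, 5)), rng.integers(0, 2, size=32)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)

   And here is a similarly minimal sketch of PATE-style noisy
aggregation: the teachers vote on a label, Laplace noise is added to
the vote counts, and only the noisy majority label is released. The
teacher votes are mocked, and training the teachers and the student
model is omitted.

import numpy as np

def pate_aggregate(teacher_votes, num_classes, laplace_scale=1.0, rng=None):
    """Return the noisy-majority label from an array of teacher votes."""
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy_counts = counts + rng.laplace(0.0, laplace_scale, size=num_classes)
    return int(np.argmax(noisy_counts))

# Toy usage: ten teachers vote on a three-class problem; most vote for class 2.
votes = np.array([2, 2, 2, 2, 2, 2, 1, 2, 0, 2])
print(pate_aggregate(votes, num_classes=3))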
4 INPUT AND OUTPUT PRIVACY
Input user data and the model outputs inferred from that data should
not be visible to any parties except for the user, in order to comply
with the four pillars of perfectly privacy-preserving AI. Preserving
user data privacy is not only beneficial for the users themselves,
but also for the companies processing potentially sensitive
information. Privacy goes hand in hand with security: having proper
security in place means that data leaks are much less likely to
occur, leading to the ideal scenario of no loss of user trust and no
fines for improper data management.

Evidence

   This is important to ensure that private data do not:
   • get misused (e.g., location tracking as reported in the NYT4),
   • fall into the wrong hands due to, say, a hack, or
   • get used for tasks that a user had either not expected or had
     not explicitly consented to (e.g., Amazon admits employees
     listen to Alexa conversations5).
   While it is standard for data to be encrypted in transit and (if a
company is responsible) at rest as well, data is vulnerable when it
is decrypted for processing.

4 https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html
5 https://www.independent.co.uk/life-style/gadgets-and-tech/news/amazon-alexa-echo-listening-spy-security-a8865056.html

Solutions

   (1) Homomorphic Encryption: homomorphic encryption allows
       additions and multiplications (and thus polynomial functions)
       to be computed directly on encrypted data. For machine
       learning, this means training and inference can be performed
       on encrypted data without ever decrypting it. Homomorphic
       encryption has successfully been applied to random forests,
       naive Bayes, and logistic regression [? ]. [? ] designed
       low-degree polynomial algorithms that classify encrypted data.
       More recently, there have been adaptations of deep learning
       models to the encrypted domain [? ? ? ] (see the first sketch
       after this list).
   (2) Secure Multi-Party Computation (MPC): the idea behind MPC is
       that two or more parties who do not trust each other can
       transform their inputs into “nonsense” which gets sent into a
       function whose output is only sensical when the correct number
       of inputs are used (see the second sketch after this list).
       Among other applications, MPC has been used for genomic
       diagnosis using the genomic data owned by different hospitals
       [? ], and for linear regression, logistic regression, and
       neural networks for classifying MNIST images [? ]. [? ] is a
       prime example of the kind of progress that can be made by
       having access to sensitive data if privacy is guaranteed.
       There are a number of tasks which cannot be accomplished with
       machine learning due to the lack of data required to train
       classification and generative models. This is not because the
       data isn’t out there, but because the sensitive nature of the
       information means that it cannot be shared or sometimes even
       collected; this spans everything from medical data to
       speaker-specific metadata which might help improve automatic
       speech recognition systems (e.g., age group, location, first
       language).
   (3) Federated Learning: federated learning is basically on-device
       machine learning. It is only truly made private when combined
       with differentially private training (see DPSGD in the
       previous section) and with MPC for secure model aggregation
       [? ], so that the data used to train a model cannot be
       reverse-engineered from the weight updates output by a single
       phone (see the third sketch after this list). In practice,
       Google has deployed federated learning on Gboard (see their
       blog post about it6) and Apple introduced federated learning
       support in CoreML37.

6 https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
7 https://developer.apple.com/documentation/coreml
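   As a concrete illustration of the first solution, here is a
minimal sketch of inference on encrypted inputs, assuming the
python-paillier (phe) package, which implements the additively
homomorphic Paillier scheme; the weights and features are toy values,
and this is a sketch of the general idea rather than any of the
systems cited above.

from phe import paillier

# The user generates a keypair and encrypts their features.
public_key, private_key = paillier.generate_paillier_keypair()
features = [1.5, -2.0, 3.0]
encrypted_features = [public_key.encrypt(x) for x in features]

# The server evaluates a linear model directly on ciphertexts:
# Paillier supports ciphertext + ciphertext and ciphertext * plaintext scalar.
weights, bias = [0.4, 0.1, -0.2], 0.05
encrypted_score = public_key.encrypt(bias)
for w, enc_x in zip(weights, encrypted_features):
    encrypted_score = encrypted_score + w * enc_x

# Only the user, who holds the private key, can decrypt the result.
print(private_key.decrypt(encrypted_score))

   The “nonsense” intuition behind MPC can be seen in additive secret
sharing, one of its standard building blocks: each party splits its
input into random shares that individually reveal nothing, yet the
shares can be combined to compute a sum. This is a toy sketch over a
finite field, not a full MPC protocol, and the hospital counts are
made up.

import random

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split `secret` into n additive shares; fewer than n shares look random."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Two hospitals secret-share their patient counts among three compute servers.
shares_a = share(1200, 3)
shares_b = share(834, 3)

# Each server adds the shares it holds; no server ever sees 1200 or 834.
partial_sums = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

# Recombining the partial results reveals only the total.
print(sum(partial_sums) % PRIME)  # 2034

   Finally, a minimal sketch of the federated averaging idea behind
the third solution: each device trains locally and only model updates
are sent to the server, which averages them. The local update rule
and the random client data are placeholders, and the secure
aggregation and DP noise discussed above would be layered on top in a
truly private deployment.

import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.01):
    # Placeholder for on-device training: one gradient step of linear regression.
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def federated_average(client_weights, client_sizes):
    # Weighted average of client models, proportional to local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print(global_w)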
5 MODEL PRIVACY
AI models can be a company’s bread and butter, and many companies
provide their models’ predictive capabilities to developers through
APIs or, more recently, through downloadable software. Model privacy
is the last of the four pillars that must be considered and is also
core to both user and company interests. Companies will have little
motivation to provide interesting products and to spend money on
improving AI capabilities if their competitors can easily copy their
models (an act which is not straightforward to investigate).

Evidence

   Machine learning models form the core product and IP of many
companies, so having a model stolen is a severe threat and can have
significant negative business implications. A model can be stolen
outright or can be reverse-engineered based on its outputs [? ].

Solutions

   (1) There has been some work on applying differential privacy to
       model outputs in order to prevent model inversion attacks (a
       sketch of the general idea follows this list). Differential
       privacy usually means compromising model accuracy; however,
       [? ] presents a method that does not sacrifice accuracy in
       exchange for privacy.
   (2) Homomorphic encryption can be used not only to preserve input
       and output privacy, but also model privacy, if one chooses to
       encrypt a model in the cloud. This comes at significant
       computational cost, however, and does not prevent model
       inversion attacks.
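   As a generic illustration of the first idea (and not the specific
accuracy-preserving method of [? ]), here is a minimal sketch of
perturbing a model’s output probabilities with Laplace noise before
they are returned to the caller:

import numpy as np

def private_predict(model_probs, epsilon=1.0, rng=None):
    """Return class probabilities with Laplace noise added, limiting what
    repeated queries can reveal about the model and its training data."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(model_probs, dtype=float)
    noisy = noisy + rng.laplace(0.0, 1.0 / epsilon, size=noisy.shape)
    noisy = np.clip(noisy, 0.0, None)      # probabilities cannot be negative
    if noisy.sum() == 0:                   # degenerate case: fall back to uniform
        return np.full(noisy.shape, 1.0 / noisy.size)
    return noisy / noisy.sum()             # renormalize to a distribution

print(private_predict([0.7, 0.2, 0.1], epsilon=2.0))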
6 SATISFYING ALL FOUR PILLARS
As can be seen from the previous sections, there is no blanket
technology that will cover all privacy problems. Rather, to have
perfectly privacy-preserving AI (something that both the research
community and industry have yet to achieve), one must combine
technologies:
   • Homomorphic Encryption + Differential Privacy
   • Secure Multi-Party Computation + Differential Privacy
   • Federated Learning + Differential Privacy + Secure Multi-Party
     Computation
   • Homomorphic Encryption + PATE
   • Secure Multi-Party Computation + PATE
   • Federated Learning + PATE + Homomorphic Encryption
   Other combinations also exist, including some with alternative
technologies that do not have robust mathematical guarantees yet;
namely, (1) secure enclaves (e.g., Intel SGX) which allow for
computations to be performed without even the system kernel having
access, (2) data de-identification, and (3) data synthesis. For now,
perfectly privacy-preserving AI is still a research problem, but
there are a few tools that can address some of the most urgent
privacy needs.
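   To make one of these combinations concrete, here is a minimal
sketch of the third bullet above (Federated Learning + Differential
Privacy + Secure Multi-Party Computation): clients clip their local
updates, mask them with pairwise random masks that cancel when
summed (a toy stand-in for secure aggregation), and the server only
ever sees the noisy aggregate. The update values, clip norm, and
noise scale are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
DIM, CLIP, NOISE_STD = 4, 1.0, 0.5

def clip(update):
    # Bound each client's influence, as in differentially private training.
    return update / max(1.0, np.linalg.norm(update) / CLIP)

# Toy local updates from three clients (in practice: gradients from on-device training).
updates = [clip(rng.normal(size=DIM)) for _ in range(3)]

# "Secure aggregation": for each pair (i, j), client i adds a random mask and
# client j subtracts the same mask, so the masks cancel in the sum but hide
# every individual update from the server.
n = len(updates)
masks = {(i, j): rng.normal(size=DIM) for i in range(n) for j in range(i + 1, n)}
masked = []
for i, u in enumerate(updates):
    m = u.copy()
    for j in range(n):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

# The server sums the masked updates (the masks cancel) and adds Gaussian
# noise for differential privacy before updating the global model.
aggregate = sum(masked) + rng.normal(0.0, NOISE_STD, size=DIM)
print(aggregate / n)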
7 PRIVACY-PRESERVING MACHINE LEARNING TOOLS
   • Differential privacy in Tensorflow8
   • MPC and Federated Learning in PyTorch9
   • MPC in Tensorflow10
   • On-device Machine Learning with CoreML311

8 ACKNOWLEDGMENTS
Many thanks to Pieter Luitjens and Dr. Siavash Kazemian for their
feedback on earlier drafts of this write-up.

8 https://github.com/tensorflow/privacy
9 https://github.com/OpenMined/PySyft
10 https://github.com/mpc-msri/EzPC
11 https://developer.apple.com/documentation/coreml