<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Is It Possible to Preserve Privacy in the Age of AI?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vijayanta Jain</string-name>
          <email>vijayanta.jain@maine.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sepideh Ghanavati</string-name>
          <email>sepideh.ghanavati@maine.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Maine</institution>
          ,
          <addr-line>Orono, Maine</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>3</fpage>
      <lpage>7</lpage>
      <abstract>
<p>Artificial Intelligence (AI) promises a positive paradigm shift in technology by providing new features and personalized experiences in our digital and physical world. In the future, almost all of our digital services and physical devices will be enhanced by AI to provide us with better features. However, because training artificially intelligent models requires a large amount of data, AI poses a threat to user privacy: its increasing prevalence promotes data collection. To address these concerns, some research efforts have been directed towards developing techniques to train AI systems while preserving privacy and towards helping users preserve their privacy. In this paper, we survey the literature and identify privacy-preserving approaches that can be employed to preserve privacy. We also suggest some future directions based on our analysis. We find that privacy-preserving research, specifically for AI, is in its early stages and requires more effort to address the current challenges and research gaps.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Privacy → Privacy protections.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
Artificial Intelligence (AI) is increasingly ubiquitous in our lives through its growing presence in the digital services we use and the physical devices we own. AI already powers our most commonly used digital services, such as search (Google, Bing), music (Spotify, YouTube Music), entertainment (Netflix, YouTube), and social media (Facebook, Instagram, Twitter). These services rely heavily on AI or Machine Learning (ML) (the two terms are used interchangeably in this paper) to provide users with personalized content and better features, such as relevant search results, content the users would like, and people they might know. AI/ML also enhances several physical devices that we own (or can own). For example, smart speakers, such as Google Hub and Amazon Echo, rely on natural language processing to detect voice, understand commands, and execute them, such as controlling lights, changing the temperature, or adding groceries to a shopping list. Using AI to provide a highly personalized experience benefits users as well as providers: users get positive engagement with these platforms, and providers get engaged users who spend more time on their services. The number of applications and devices that use AI will also increase in the near future. This is evident from the increasing number of smartphones with dedicated chips for machine learning [
<xref ref-type="bibr" rid="ref1 ref2 ref27 ref3">1–3, 27</xref>
] and devices, such as Amazon's Smart Oven (https://www.amazon.com/Amazon-Smart-Oven/dp/B07PB21SRV) and Echo Frames (https://www.amazon.com/Echo-Frames/dp/B01G62GWS4), that come integrated with personal assistants.
      </p>
      <p>
The proliferation of AI poses direct and indirect threats to user privacy: the direct threat is the inference of personal information, and the indirect threat is the promotion of data collection. Movies such as Her accurately portray the utopian AI future some companies hope to provide users as they increase the ubiquity of ML in their digital and physical products. However, because training AI systems, such as deep neural networks, requires a large amount of data, companies collect usage data whenever users interact with any of their services. There are two major problems with this collection: first, the collected usage data is used to infer information such as personal interests, habits, and behavior patterns, thus invading privacy; second, to improve the personalization, intelligent features, and AI capabilities of their services, companies will continuously collect more data from users, leading to an endless loop of data collection that threatens user privacy (see Figure 2). Moreover, the collected data is often used for ad personalization or shared with third parties, which does not meet users' expectations and thus violates user privacy [
        <xref ref-type="bibr" rid="ref23">23</xref>
]. For example, when you interact with Google's Home Mini, the text from these recordings may be used for ad personalization (see Figure 1), which does not meet the privacy expectations of the users [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        Privacy violations in recent times have motivated research
efforts to develop techniques and methodologies to preserve privacy.
Previous research has developed tools that provide users with more effective notice and choice [
        <xref ref-type="bibr" rid="ref18 ref19 ref31 ref9">9, 18, 19, 31</xref>
]. With increasing concerns about privacy because of AI, some efforts have also been directed towards training machine learning models while preserving privacy [
        <xref ref-type="bibr" rid="ref29 ref4">4, 29</xref>
]. User-focused techniques provide users with the necessary tools to preserve their privacy, whereas privacy-preserving machine learning helps companies use machine learning for their services while still preserving user privacy. In this work, we survey these methods to understand the methodologies that can be employed when users are surrounded by digital services and physical devices that use AI. The contributions of this paper are two-fold:
• We survey machine learning based privacy-preserving methodologies and techniques.
      </p>
<p>• We identify research gaps and suggest future directions.</p>
<p>The rest of the paper is organized as follows: in Section 2, we report the results of our survey; in Section 3, we discuss related work; Section 4 identifies the challenges and suggests future directions; finally, in Section 5, we conclude our work.</p>
    </sec>
    <sec id="sec-3">
      <title>ANALYSIS OF THE CURRENT LITERATURE</title>
      <p>
In this section, we report on our survey of machine learning based techniques that have been developed to preserve user privacy. We divide this section into two groups: i) privacy-preserving machine learning approaches and ii) techniques that provide users with notice and choices.
</p>
<p>
Recent research efforts have been directed towards developing privacy-preserving machine learning techniques [
<xref ref-type="bibr" rid="ref24 ref4">4, 24</xref>
]. Prior to machine learning, differential privacy provided a strong standard for preserving privacy in statistical analyses of public datasets. In this technique, whenever a statistical query is made to a database containing sensitive information, a randomized function k adds noise to the query result, which preserves privacy while also maintaining the usability of the database [
        <xref ref-type="bibr" rid="ref13">13</xref>
]. Some work has used differential privacy to train machine learning models [
        <xref ref-type="bibr" rid="ref4 ref7">4, 7</xref>
]. Chaudhuri and Monteleoni [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
use this technique to develop a privacy-preserving algorithm for
logistic regression. Abadi et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
] also use this technique to train deep neural networks by developing a noisy Stochastic Gradient Descent (SGD) algorithm. However, a key problem with differential privacy is that repeated queries to the database can average out the noise and thus reveal the underlying sensitive information in the database [
        <xref ref-type="bibr" rid="ref13">13</xref>
]. To solve this, Dwork proposes a privacy budget, which treats each query to the database as a privacy cost against a per-session budget [
        <xref ref-type="bibr" rid="ref11 ref13">11, 13</xref>
]. Once the session's budget has been exhausted, no further query results are returned.
      </p>
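<p>The mechanism described above (a randomized function that adds noise to each query result, metered by a per-session privacy budget) can be sketched in a few lines of Python. This is an illustrative toy; the class and parameter names are our own, not from the cited work:</p>

```python
import random

class PrivateCounter:
    """Toy differentially private counting queries with a privacy budget."""

    def __init__(self, data, epsilon_per_query=0.5, budget=1.0):
        self.data = data
        self.epsilon = epsilon_per_query
        self.budget = budget

    def _laplace_noise(self, sensitivity=1.0):
        # Laplace(0, sensitivity/epsilon): the difference of two i.i.d.
        # exponential variables is Laplace-distributed.
        scale = sensitivity / self.epsilon
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def count(self, predicate):
        # Each query spends epsilon from the session budget; once the
        # budget is exhausted, no further query results are returned.
        if self.epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.budget -= self.epsilon
        true_answer = sum(1 for row in self.data if predicate(row))
        return true_answer + self._laplace_noise()

db = PrivateCounter([{"age": a} for a in (23, 35, 41, 52)])
print(db.count(lambda r: r["age"] > 30))  # noisy answer near the true count 3
print(db.count(lambda r: r["age"] > 50))  # noisy answer near 1
# A third query would raise: the session budget (1.0) is now spent.
```

<p>Averaging out the noise over repeated queries is exactly what the budget prevents: the mechanism simply refuses to answer once the accumulated privacy cost reaches the session limit.</p>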
      <p>
Other work in this area has developed methods to train neural networks on the device itself, without sending the data back to servers [
        <xref ref-type="bibr" rid="ref24 ref25 ref29">24, 25, 29</xref>
        ]. Shokri and Shmatikov [
        <xref ref-type="bibr" rid="ref29">29</xref>
] present a system that allows several participants to jointly train similar neural networks on their own input data without sharing that data; instead, they selectively share model parameters with each other to avoid local minima. Similarly, in line with Shokri and Shmatikov's goal of not sharing data, McMahan et al. [
        <xref ref-type="bibr" rid="ref24">24</xref>
]
propose Federated Learning, which allows developers to train neural networks in a decentralized and privacy-preserving manner. The idea behind their work is that the neural network models to be trained are sent to the mobile devices that hold the users' sensitive data, where SGD is run locally to update the parameters. The models are then sent back to a central server, which "averages" the updates from all the models to obtain a better model. They term this algorithm FederatedAveraging. Similarly, Papernot et al. [
        <xref ref-type="bibr" rid="ref25">25</xref>
] propose Private Aggregation of Teacher Ensembles (PATE), a method to train machine learning models while preserving privacy. In their approach, several "teacher" models are trained on disjoint subsets of the dataset, and a "student" model is then trained on the aggregated output of the "teachers" to accurately "mimic the ensemble". The goal of this work is to address the information leakage problem [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
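<p>The FederatedAveraging loop described above can be sketched with a toy two-parameter linear model (our own simplification, not the authors' implementation): clients run SGD locally on their private data, and the server only ever sees and averages the weights they return.</p>

```python
def local_sgd(weights, data, lr=0.1, epochs=5):
    # One client: run SGD on private data for the model y = w*x + b.
    # The raw (x, y) pairs never leave this function.
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def federated_averaging(global_weights, client_datasets, rounds=50):
    # Server loop: send the current model out, collect locally trained
    # weights from every client, and average them into the new model.
    for _ in range(rounds):
        updates = [local_sgd(global_weights, d) for d in client_datasets]
        n = len(updates)
        global_weights = (sum(u[0] for u in updates) / n,
                          sum(u[1] for u in updates) / n)
    return global_weights

# Three clients each privately hold samples of the same relation y = 2x + 1.
clients = [[(x, 2 * x + 1) for x in (0.0, 1.0)],
           [(x, 2 * x + 1) for x in (0.5, 1.5)],
           [(x, 2 * x + 1) for x in (2.0, 3.0)]]
w, b = federated_averaging((0.0, 0.0), clients)
print(round(w, 2), round(b, 2))  # approaches w=2, b=1 without pooling any data
```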
      <p>
The goal of the work outlined above is to develop new algorithms and methods to train neural networks on-device or to use differentially private algorithms. However, information leakage still poses a threat to user privacy. Information leakage refers to the phenomenon in which a neural network implicitly retains sensitive information it was trained on, as demonstrated in [
<xref ref-type="bibr" rid="ref15 ref30">15, 30</xref>
]. This is an active research topic, and new methods, such as PATE, aim to resolve this issue by not exposing the dataset to the deployed machine learning model.
</p>
    </sec>
    <sec id="sec-4">
      <title>Mechanisms to Control User’s Data</title>
      <p>
The primary goal in this field of research has been to provide users with better notice, give them choices, and provide them with the means to control their personal information. Notice and choice is one of the fundamental methods of preserving privacy and is based on the Openness principle of the OECD Fair Information Principles [
        <xref ref-type="bibr" rid="ref16">16</xref>
].
In the notice-and-choice mechanism, the primary goal has been to improve privacy policies and extract the information relevant to users from them. This is because privacy policies are lengthy, and it is infeasible for users to read the privacy policies of all the digital and physical services they use or own [
        <xref ref-type="bibr" rid="ref10">10</xref>
]. Therefore, research has focused on providing users with better notice and choice, such as in [
        <xref ref-type="bibr" rid="ref20 ref22 ref28">20, 22, 28</xref>
]. Other work has achieved similar results by applying machine learning techniques. Harkous et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
] develop PriBot, a Q&amp;A chatbot that analyzes a privacy policy and then provides users with the sections of the policy that answer their questions. Some work has focused on assessing the quality of privacy policies. For example, Costante et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
] use text categorization and machine learning to categorize paragraphs of privacy policies and assess their completeness with a grade. The grade is calculated from the weight the user assigns to each category and the coverage of that category in a selected section. This method helps users inspect a privacy policy in a structured way and read only the paragraphs that interest them. Zimmeck et al. introduce Privee [
        <xref ref-type="bibr" rid="ref36">36</xref>
] which integrates Costante's classification method with Sadeh's crowdsourcing approach. In Privee, if privacy analysis results are available in the repository, they are returned to the user; otherwise, the privacy policy is automatically classified and the result is then returned. PrivacyGuide [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]
uses classification techniques, such as Naïve Bayes and Support
Vector Machines (SVM), to categorize privacy policies based on
the EU GDPR [
        <xref ref-type="bibr" rid="ref14">14</xref>
], summarize them, and then allocate risk factors. The work above certainly improves on the previous "state-of-the-art" method of notice &amp; choice, the privacy policy, by giving users information in a succinct form. However, privacy policies often contain ambiguities that are difficult for technology to answer, for example, the number of third parties the data is shared with or how long the data will be stored by the companies.
      </p>
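<p>To make the paragraph-categorization approach concrete, here is a minimal Naïve Bayes classifier of the kind these tools apply to privacy policy paragraphs. This is our own illustrative sketch with invented category names and training snippets, not code or data from the cited systems:</p>

```python
from collections import Counter, defaultdict
import math

def train_nb(labelled_paragraphs):
    # labelled_paragraphs: list of (paragraph_text, category) pairs.
    word_counts = defaultdict(Counter)  # per-category word frequencies
    cat_counts = Counter()              # per-category document counts
    vocab = set()
    for text, cat in labelled_paragraphs:
        cat_counts[cat] += 1
        for word in text.lower().split():
            word_counts[cat][word] += 1
            vocab.add(word)
    return word_counts, cat_counts, vocab

def classify(model, text):
    word_counts, cat_counts, vocab = model
    total_docs = sum(cat_counts.values())
    best, best_score = None, -math.inf
    for cat in cat_counts:
        # log P(category) + sum of log P(word | category), Laplace-smoothed.
        score = math.log(cat_counts[cat] / total_docs)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[cat][word] + 1) / denom)
        if score > best_score:
            best, best_score = cat, score
    return best

model = train_nb([
    ("we share your data with third party advertisers", "third-party sharing"),
    ("data may be disclosed to partners and advertisers", "third-party sharing"),
    ("we retain your information for two years", "data retention"),
    ("records are stored and retained on our servers", "data retention"),
])
print(classify(model, "your information is shared with advertisers"))
# prints "third-party sharing"
```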
      <p>
Another active research topic in giving users control of their privacy is modeling privacy preferences. The goal of this line of research is to give users more control over what information mobile applications or other users can access. Lin et al. [
        <xref ref-type="bibr" rid="ref21">21</xref>
]
create a small number of profiles of users' privacy preferences using clustering and then, based on those profiles, analyze whether a user from a given profile would allow certain permissions or not. Similar to their work, Wijesekera et al. [
        <xref ref-type="bibr" rid="ref32">32</xref>
] develop a contextually-aware permission system that dynamically grants Android applications access to private data based on the user's preferences. They argue that their permission system is better than Android's default Ask-On-First-Use (AOFU) permission system because context, "what [users] were doing on their mobile devices at the time that data was requested" [
        <xref ref-type="bibr" rid="ref32">32</xref>
], affects users' privacy preferences. In their system, they use an SVM classifier, trained on contextual information and user behavior, to make permission decisions. They also conduct a usability study to model the preferences of 37 users and test their system [
        <xref ref-type="bibr" rid="ref33">33</xref>
]. Similarly, contextual information has also been used to model privacy preferences in web-based services. Yuan et al. [
        <xref ref-type="bibr" rid="ref34">34</xref>
] propose a model that uses contextual information to share images with other users at different granularities. In their work, based on the semantic features of an image and the contextual features of a requester, they train logistic regression, SVM, and Random Forest models to predict whether the user would share, not share, or partially share the requested image. Similarly, Bilogrevic et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
] develop the Smart Privacy-aware Information Sharing Mechanism, a system that shares personal information with other users, third parties, online services, or mobile apps based on the user's privacy preferences and contextual information. They use Naïve Bayes, SVM, and logistic regression to model preferences. They also conduct a user study to understand users' preferences and the factors influencing their decisions. Using contextual information and providing different levels of information access is a great step towards giving users greater control of their data, but certain challenges remain. Primarily, most of these systems have not been evaluated in usability studies that examine the user's view, which inhibits translating such research into the real world.
      </p>
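<p>The preference-modeling classifiers above can be illustrated with a small stand-in: a logistic-regression model (rather than the SVM used by Wijesekera et al.) trained on invented contextual features to reproduce a user's past allow/deny decisions. All feature names and data points here are hypothetical:</p>

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(examples, lr=0.5, epochs=200):
    # examples: (feature_vector, label) pairs; label 1 = allow, 0 = deny.
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def decide(model, x):
    w, b = model
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return "allow" if p > 0.5 else "deny"

# Hypothetical features: [app_in_foreground, wants_location, recently_denied]
history = [
    ([1, 1, 0], 1),  # foreground app asked for location: user allowed
    ([1, 0, 0], 1),
    ([0, 1, 0], 0),  # background location request: user denied
    ([0, 0, 1], 0),
    ([0, 1, 1], 0),
]
model = train_logreg(history)
print(decide(model, [1, 1, 0]))  # prints "allow", matching past behavior
print(decide(model, [0, 1, 0]))  # prints "deny"
```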
<p>Overall, we find that this line of work has focused on giving users mechanisms to understand privacy practices and control their data. Giving users control of their data is important; however, this approach puts the burden of preserving privacy on the users, which can be difficult for less tech-savvy users, as the privacy settings of websites are often hidden under layers of menus.</p>
    </sec>
    <sec id="sec-5">
      <title>RELATED WORK</title>
      <p>
        Papernot et al. [
        <xref ref-type="bibr" rid="ref26">26</xref>
] provide a Systematization of Knowledge (SoK) of security and privacy challenges in machine learning. Their work surveys the existing literature to identify security and privacy threats as well as the defenses developed to mitigate them. Based on this analysis, they also argue for developing a framework for understanding the sensitivity of ML algorithms to their training data, to better reason about the security and privacy implications of ML algorithms. Our analysis is similar in that it evaluates the privacy implications of these machine learning algorithms, but our work provides a more detailed discussion of the privacy challenges compared to [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Zhu et al. [
        <xref ref-type="bibr" rid="ref35">35</xref>
] survey different methods developed for publishing and analyzing differentially private data. They analyze differentially private data publishing based on the type of input data, the number of queries, accuracy, and efficiency, and they evaluate differentially private data analysis based on the Laplace/Exponential framework, such as [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and Private Learning
Framework, such as [
        <xref ref-type="bibr" rid="ref4">4</xref>
]. The paper also presents some future directions for differential privacy, such as performing more local differential privacy. This work is the closest to ours, as it surveys a privacy-preserving analysis technique and suggests future work. However, in our analysis, we also incorporate the technologies that help users preserve their privacy. Overall, our work differs from [
<xref ref-type="bibr" rid="ref26 ref35">26, 35</xref>
] as we look at the big picture of privacy-preserving technologies, specifically in light of the increasing use of AI.
      </p>
    </sec>
    <sec id="sec-6">
      <title>DISCUSSION</title>
<p>In this paper, we discussed techniques and methodologies developed to preserve user privacy. Primarily, we identified two groups of work: (1) privacy-preserving machine learning, such as noisy SGD and federated learning, and (2) techniques that provide users with tools to protect their own privacy. In this section, we discuss the advantages of each category of approaches, their existing challenges, and the research gaps, and we suggest some potential future work to address them. We summarize our analysis in Table 1.</p>
      <p>
Differential Privacy and Machine Learning Approaches: Differential privacy provides a strong state-of-the-art method for data analysis by introducing noise into query results [
<xref ref-type="bibr" rid="ref12">12</xref>
], and this method has also been used to train deep neural networks [
<xref ref-type="bibr" rid="ref4">4</xref>
]. One of the biggest advantages of these approaches is the simplicity and efficiency of the methodology. Some companies have even started to use differential privacy in some of their applications (see https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf). Using differential privacy for deep learning offers great potential for researchers and developers. However, a better understanding of the trade-offs between privacy and utility for specific tasks, models, optimizers, and similar factors would further help developers use differentially private machine learning. Some initial work has been done in this area [
<xref ref-type="bibr" rid="ref5">5</xref>
], but future work can explore this in detail.
      </p>
      <p>
Federated Learning: Federated learning provides a unique approach to machine learning by training models on-device instead of on a central server [
        <xref ref-type="bibr" rid="ref24">24</xref>
]. By keeping the data on the device, it prevents sharing with third parties and even profiling user data for ad personalization. A key challenge is the complexity of using federated learning; small-scale companies and developers might find differential privacy easier to optimize and employ at a smaller scale. Another challenge with this approach is information leakage from the gradients of the neural network [
        <xref ref-type="bibr" rid="ref15 ref30">15, 30</xref>
]. There has been some effort to address this issue by developing different privacy-preserving machine learning methodologies [
        <xref ref-type="bibr" rid="ref25">25</xref>
]. However, a critical gap in this area is that few research efforts have looked into providing users with mechanisms to control the data being used for federated learning. Future work can address this gap. Another future direction is to combine differentially private data with federated learning. Initial work has been done in this direction, such as [
        <xref ref-type="bibr" rid="ref17">17</xref>
], but future work could expand the analysis by evaluating different differential privacy algorithms for privatizing data.
      </p>
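<p>A common recipe for this combination, sketched below as a toy (our own illustration: the clipping threshold and noise scale are arbitrary, and a real deployment would calibrate the noise to a target epsilon/delta guarantee), is to clip each client's update to a bounded norm and add noise before it leaves the device:</p>

```python
import random

def privatize_update(update, clip=1.0, noise_std=0.1):
    # Clip the update to L2 norm at most `clip`, then add Gaussian noise
    # scaled by the clip bound, so no single client's contribution can
    # dominate the average or be read back exactly by the server.
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    return [u + random.gauss(0.0, noise_std * clip) for u in clipped]

raw_update = [3.0, 4.0]                  # L2 norm 5.0, above the clip bound
private = privatize_update(raw_update)   # norm clipped to 1.0, then noised
print(private)
```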
      <p>
User-Focused Privacy Preserving: Several methods have been proposed that use machine learning to preserve user privacy [
        <xref ref-type="bibr" rid="ref18 ref32 ref6">6,
18, 32</xref>
], giving users the necessary notices and mechanisms to control their data. Some of these methods [
        <xref ref-type="bibr" rid="ref18">18</xref>
] employ Natural Language Processing (NLP) to understand privacy policy text. Future work in this direction can employ more advanced architectures for this task to improve accuracy and relevance. Another future direction is to help companies and developers create applications and systems that preserve users' privacy.
      </p>
<p>Based on our analysis of current data practices and research developments, we believe that it will be difficult to preserve privacy in the age of AI. As the ubiquity of AI and the economic incentives to use it increase, AI will passively promote data collection and thus pose a threat to user privacy. The techniques developed to preserve user privacy are not yet as effective as the current data practices that violate it. Increased research effort, along with legal action, will be required to preserve privacy in the age of AI.</p>
    </sec>
    <sec id="sec-7">
<title>CONCLUSION</title>
<p>In this work, we provide a brief survey of machine learning based techniques to preserve user privacy, identify the challenges of these techniques, and suggest some future work to address them. We argue that privacy-preserving technologies specifically for AI are in their early stages and that it will be difficult to preserve privacy in the age of AI. We identify research gaps and suggest future work that can address some of these gaps and lead to more effective privacy-preserving technologies for AI. In the future, we plan to expand this work with a more critical analysis of different algorithms and an evaluation of their efficacy for different use cases.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
[1] [n.d.]. <source>iPhone 11 Pro</source>
          . https://www.apple.com/iphone-11-pro/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] [n.d.].
          <source>OnePlus 7 Pro</source>
          . https://www.oneplus.com/7pro#/specs.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
[3] [n.d.].
<source>Samsung Galaxy S10 Intelligence - Virtual Assistant &amp; AR Photo</source>
. https://www.samsung.com/us/mobile/galaxy-s10/intelligence/.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
[4]
<string-name><given-names>Martin</given-names> <surname>Abadi</surname></string-name>, <string-name><given-names>Andy</given-names> <surname>Chu</surname></string-name>, <string-name><given-names>Ian</given-names> <surname>Goodfellow</surname></string-name>, <string-name><given-names>H. Brendan</given-names> <surname>McMahan</surname></string-name>, <string-name><given-names>Ilya</given-names> <surname>Mironov</surname></string-name>, <string-name><given-names>Kunal</given-names> <surname>Talwar</surname></string-name>, and <string-name><given-names>Li</given-names> <surname>Zhang</surname></string-name>.
<year>2016</year>.
<article-title>Deep learning with differential privacy</article-title>.
In <source>Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security</source>. ACM,
<fpage>308</fpage>-<lpage>318</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Brendan</given-names>
            <surname>Avent</surname>
          </string-name>
          , Javier Gonzalez, Tom Diethe,
          <string-name>
            <given-names>Andrei</given-names>
            <surname>Paleyes</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Borja</given-names>
            <surname>Balle</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Automatic Discovery of Privacy-Utility Pareto Fronts</article-title>
. arXiv preprint arXiv:1905.10862 (<year>2019</year>).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Igor</given-names>
            <surname>Bilogrevic</surname>
          </string-name>
          , Kévin Huguenin, Berker Agir, Murtuza Jadliwala, Maria Gazaki, and
          <string-name>
            <surname>Jean-Pierre Hubaux</surname>
          </string-name>
          .
          <year>2016</year>
          .
<article-title>A machine-learning based approach to privacy-aware information-sharing in mobile social networks</article-title>
          .
          <source>Pervasive and Mobile Computing</source>
          <volume>25</volume>
          (
          <year>2016</year>
          ),
          <fpage>125</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Kamalika</given-names>
            <surname>Chaudhuri</surname>
          </string-name>
          and
          <string-name>
            <given-names>Claire</given-names>
            <surname>Monteleoni</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Privacy-preserving logistic regression</article-title>
          .
          <source>In Advances in neural information processing systems</source>
          .
<fpage>289</fpage>
-
<lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Elisa</given-names>
            <surname>Costante</surname>
          </string-name>
          , Yuanhao Sun, Milan Petković, and Jerry den Hartog.
          <year>2012</year>
          .
          <article-title>A machine learning solution to assess privacy policy completeness:(short paper)</article-title>
          .
          <source>In Proceedings of the 2012 ACM workshop on Privacy in the electronic society. ACM</source>
          ,
<fpage>91</fpage>
-
<lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
<given-names>Lorrie Faith</given-names>
<surname>Cranor</surname>
          </string-name>
          .
          <year>2003</year>
          .
          <article-title>P3P: Making privacy policies more useful</article-title>
          .
          <source>IEEE Security &amp; Privacy</source>
          <volume>1</volume>
          ,
          <issue>6</issue>
          (
          <year>2003</year>
          ),
          <fpage>50</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
[10]
<string-name><given-names>Lorrie Faith</given-names> <surname>Cranor</surname></string-name>.
<year>2012</year>.
<article-title>Necessary but not sufficient: Standardized mechanisms for privacy notice and choice</article-title>.
<source>J. on Telecomm. &amp; High Tech. L.</source>
<volume>10</volume>
(<year>2012</year>),
<fpage>273</fpage>.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Cynthia</given-names>
            <surname>Dwork</surname>
          </string-name>
          .
          <year>2011</year>
          .
<article-title>Differential privacy</article-title>
          .
          <source>Encyclopedia of Cryptography and Security</source>
          (
          <year>2011</year>
          ),
          <fpage>338</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Cynthia</given-names>
            <surname>Dwork</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Aaron</given-names>
            <surname>Roth</surname>
          </string-name>
          , et al.
          <year>2014</year>
          .
          <article-title>The algorithmic foundations of differential privacy</article-title>
          .
          <source>Foundations and Trends® in Theoretical Computer Science</source>
          <volume>9</volume>
          ,
          <issue>3-4</issue>
          (
          <year>2014</year>
          ),
          <fpage>211</fpage>
          -
          <lpage>407</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Aaruran</given-names>
            <surname>Elamurugaiyan</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>A Brief Introduction to Differential Privacy</article-title>
          . https://medium.com/georgian-impact-blog/a-brief-introduction-to-differential-privacy-eacf8722283b
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <collab>EU GDPR</collab>
          [n.d.].
          <article-title>"The EU General Data Protection Regulation (GDPR)"</article-title>
          .
          <source>EU GDPR</source>
          . https://eugdpr.org.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Matt</given-names>
            <surname>Fredrikson</surname>
          </string-name>
          , Somesh Jha, and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Ristenpart</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Model inversion attacks that exploit confidence information and basic countermeasures</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM</source>
          ,
          <fpage>1322</fpage>
          -
          <lpage>1333</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Ben</given-names>
            <surname>Gerber</surname>
          </string-name>
          . [n.d.]. OECDprivacy.org. http://www.oecdprivacy.org/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Robin C</given-names>
            <surname>Geyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Tassilo</given-names>
            <surname>Klein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Moin</given-names>
            <surname>Nabi</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Differentially private federated learning: A client level perspective</article-title>
          .
          <source>arXiv preprint arXiv:1712.07557</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Hamza</given-names>
            <surname>Harkous</surname>
          </string-name>
          , Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, and
          <string-name>
            <given-names>Karl</given-names>
            <surname>Aberer</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Polisis: Automated Analysis and Presentation of Privacy Policies Using Deep Learning</article-title>
          .
          <source>In USENIX Security Symposium.</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Patrick Gage</given-names>
            <surname>Kelley</surname>
          </string-name>
          , Joanna Bresee, Lorrie Faith Cranor, and Robert W Reeder.
          <year>2009</year>
          .
          <article-title>A nutrition label for privacy</article-title>
          .
          <source>In Proceedings of the 5th Symposium on Usable Privacy and Security. ACM</source>
          ,
          <fpage>4</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Marc</given-names>
            <surname>Langheinrich</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>A privacy awareness system for ubiquitous computing environments</article-title>
          .
          <source>In International Conference on Ubiquitous Computing</source>
          . Springer,
          <fpage>237</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Jialiu</given-names>
            <surname>Lin</surname>
          </string-name>
          , Bin Liu, Norman Sadeh, and Jason I Hong
          .
          <year>2014</year>
          .
          <article-title>Modeling users' mobile app privacy preferences: Restoring usability in a sea of permission settings</article-title>
          .
          <source>In 10th Symposium On Usable Privacy and Security (SOUPS</source>
          <year>2014</year>
          ).
          <fpage>199</fpage>
          -
          <lpage>212</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Fei</given-names>
            <surname>Liu</surname>
          </string-name>
          , Rohan Ramanath, Norman Sadeh, and Noah A Smith
          .
          <year>2014</year>
          .
          <article-title>A step towards usable privacy policy: Automatic alignment of privacy statements</article-title>
          .
          <source>In Proceedings of COLING</source>
          <year>2014</year>
          ,
          <source>the 25th International Conference on Computational Linguistics: Technical Papers</source>
          .
          <fpage>884</fpage>
          -
          <lpage>894</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Nathan</given-names>
            <surname>Malkin</surname>
          </string-name>
          , Joe Deatrick, Allen Tong, Primal Wijesekera, Serge Egelman, and
          <string-name>
            <given-names>David</given-names>
            <surname>Wagner</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Privacy Attitudes of Smart Speaker Users</article-title>
          .
          <source>Proceedings on Privacy Enhancing Technologies</source>
          <year>2019</year>
          ,
          <volume>4</volume>
          (
          <year>2019</year>
          ),
          <fpage>250</fpage>
          -
          <lpage>271</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>H Brendan</given-names>
            <surname>McMahan</surname>
          </string-name>
          , Eider Moore, Daniel Ramage,
          <string-name>
            <given-names>Seth</given-names>
            <surname>Hampson</surname>
          </string-name>
          , et al.
          <year>2016</year>
          .
          <article-title>Communication-efficient learning of deep networks from decentralized data</article-title>
          .
          <source>arXiv preprint arXiv:1602.05629</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Papernot</surname>
          </string-name>
          , Martín Abadi, Ulfar Erlingsson, Ian Goodfellow, and
          <string-name>
            <given-names>Kunal</given-names>
            <surname>Talwar</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Semi-supervised knowledge transfer for deep learning from private training data</article-title>
          .
          <source>arXiv preprint arXiv:1610.05755</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Papernot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Patrick</given-names>
            <surname>McDaniel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Arunesh</given-names>
            <surname>Sinha</surname>
          </string-name>
          , and Michael P Wellman.
          <year>2018</year>
          .
          <article-title>SoK: Security and privacy in machine learning</article-title>
          .
          <source>In 2018 IEEE European Symposium on Security and Privacy (EuroS&amp;P)</source>
          . IEEE,
          <fpage>399</fpage>
          -
          <lpage>414</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>Brian</given-names>
            <surname>Rakowski</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Pixel 4 is here to help</article-title>
          . https://blog.google/products/pixel/pixel-4/.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Joel R</given-names>
            <surname>Reidenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N Cameron</given-names>
            <surname>Russell</surname>
          </string-name>
          , Alexander J Callen, Sophia Qasir, and Thomas B Norton.
          <year>2015</year>
          .
          <article-title>Privacy harms and the effectiveness of the notice and choice framework</article-title>
          .
          <source>ISJLP</source>
          <volume>11</volume>
          (
          <year>2015</year>
          ),
          <fpage>485</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Reza</given-names>
            <surname>Shokri</surname>
          </string-name>
          and
          <string-name>
            <given-names>Vitaly</given-names>
            <surname>Shmatikov</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Privacy-preserving deep learning</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security. ACM</source>
          ,
          <fpage>1310</fpage>
          -
          <lpage>1321</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Reza</given-names>
            <surname>Shokri</surname>
          </string-name>
          , Marco Stronati, Congzheng Song, and
          <string-name>
            <given-names>Vitaly</given-names>
            <surname>Shmatikov</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Membership inference attacks against machine learning models</article-title>
          .
          <source>In 2017 IEEE Symposium on Security and Privacy (SP)</source>
          .
          <source>IEEE</source>
          ,
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Welderufael B.</given-names>
            <surname>Tesfay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Hofmann</surname>
          </string-name>
          , Toru Nakamura, Shinsaku Kiyomoto, and
          <string-name>
            <given-names>Jetzabel</given-names>
            <surname>Serna</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>PrivacyGuide: Towards an Implementation of the EU GDPR on Internet Privacy Policy Evaluation</article-title>
          .
          <source>In Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics (IWSPA '18)</source>
          . ACM, New York, NY, USA,
          <fpage>15</fpage>
          -
          <lpage>21</lpage>
          . https://doi.org/10.1145/3180445.3180447
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Lynn</given-names>
            <surname>Tsai</surname>
          </string-name>
          , Primal Wijesekera, Joel Reardon, Irwin Reyes, Serge Egelman, David Wagner,
          <string-name>
            <given-names>Nathan</given-names>
            <surname>Good</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jung-Wei</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Turtle guard: Helping android users apply contextual privacy preferences</article-title>
          .
          <source>In Thirteenth Symposium on Usable Privacy and Security (SOUPS</source>
          <year>2017</year>
          ).
          <fpage>145</fpage>
          -
          <lpage>162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>Primal</given-names>
            <surname>Wijesekera</surname>
          </string-name>
          , Joel Reardon, Irwin Reyes, Lynn Tsai,
          <string-name>
            <given-names>Jung-Wei</given-names>
            <surname>Chen</surname>
          </string-name>
          , Nathan Good, David Wagner,
          <string-name>
            <given-names>Konstantin</given-names>
            <surname>Beznosov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Serge</given-names>
            <surname>Egelman</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Contextualizing privacy decisions for better prediction (and protection)</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM</source>
          ,
          <fpage>268</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>Lin</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joël</given-names>
            <surname>Theytaz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Touradj</given-names>
            <surname>Ebrahimi</surname>
          </string-name>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Context-dependent privacy-aware photo sharing based on machine learning</article-title>
          .
          <source>In IFIP International Conference on ICT Systems Security and Privacy Protection</source>
          . Springer,
          <fpage>93</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>Tianqing</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Gang</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Wanlei</given-names>
            <surname>Zhou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Philip S</given-names>
            <surname>Yu</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Differentially private data publishing and analysis: A survey</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>29</volume>
          ,
          <issue>8</issue>
          (
          <year>2017</year>
          ),
          <fpage>1619</fpage>
          -
          <lpage>1638</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Zimmeck</surname>
          </string-name>
          and
          <string-name>
            <given-names>Steven M</given-names>
            <surname>Bellovin</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Privee: An Architecture for Automatically Analyzing Web Privacy Policies</article-title>
          .
          <source>In USENIX Security</source>
          , Vol.
          <volume>14</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>