<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Differential Privacy Preserving Regression Analysis and Deep Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Xintao Wu</string-name>
          <email>xintaowu@uark.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Arkansas</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p />
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Abstract</title>
      <p>Many data mining and machine learning methods, such as
regression models, involve optimizing an objective function. The
functional mechanism (FM), which perturbs the coefficients of the
polynomial representation of the objective function, has been shown
to be an effective way to achieve differential privacy. Although the
learned model guarantees protection against attempts to infer whether
a subject was included in the training set, it is not designed to
protect attribute privacy when model inversion attacks are launched.
In a model inversion attack, an adversary uses the released model to
predict sensitive attributes of a target individual when
some background information is available. In the first part of this
talk, we present an approach that leverages the FM but
effectively balances the privacy budget between sensitive and non-sensitive
attributes when learning the model. As a result, the approach can
effectively prevent model inversion attacks and retain model utility
while preserving privacy. In the second part of this talk, we
concentrate on recent research on privacy-preserving deep learning.
In particular, we present a differentially private deep
auto-encoder based on the FM. Finally, we present challenges and
findings from applying the developed techniques to healthcare and
genome-wide association studies.
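As a rough, hypothetical sketch (not code from the talk), the functional mechanism for linear regression can be illustrated as follows: the squared-loss objective is expanded into its degree-2 polynomial coefficients, Laplace noise calibrated to the privacy budget is added to those coefficients, and the noisy objective is then minimized. The function name and the ridge constant below are illustrative, and the sensitivity constant 2(d+1)^2 assumes features and labels rescaled to [-1, 1].

```python
import numpy as np

def functional_mechanism_linreg(X, y, epsilon, rng=None):
    """Differentially private linear regression via the functional
    mechanism: perturb the polynomial coefficients of the squared loss.

    Assumes each feature and each label has been scaled to [-1, 1],
    which bounds the L1-sensitivity of the coefficients by 2*(d+1)^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    sensitivity = 2.0 * (d + 1) ** 2
    scale = sensitivity / epsilon  # Laplace scale = sensitivity / epsilon

    # Coefficients of the degree-2 polynomial objective
    #   L(w) = w^T (X^T X) w - 2 (X^T y)^T w + y^T y
    quad = X.T @ X   # coefficients of the w_j * w_k monomials
    lin = X.T @ y    # coefficients of the w_j monomials (up to the -2 factor)

    # Perturb each monomial coefficient with Laplace noise
    quad_noisy = quad + rng.laplace(0.0, scale, size=quad.shape)
    lin_noisy = lin + rng.laplace(0.0, scale, size=lin.shape)

    # Symmetrize so the noisy quadratic form is well defined
    quad_noisy = (quad_noisy + quad_noisy.T) / 2.0

    # Minimize the noisy objective by solving its first-order condition;
    # a tiny ridge term keeps the (possibly indefinite) system solvable.
    w = np.linalg.solve(quad_noisy + 1e-6 * np.eye(d), lin_noisy)
    return w
```

With a very large privacy budget the noise vanishes and the result approaches the ordinary least-squares solution; smaller budgets trade accuracy for stronger privacy.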
Keywords: differential privacy; regression; deep learning.</p>
    </sec>
    <sec id="sec-2">
      <title>Biography</title>
      <p>Dr. Xintao Wu is a Professor and the Charles D. Morgan/Acxiom
Endowed Graduate Research Chair in Database in the Department of Computer
Science and Computer Engineering at the University of Arkansas.
He was a faculty member in the College of Computing and
Informatics at the University of North Carolina at Charlotte from 2001 to
2014. Dr. Wu&#8217;s major research interests include data mining,
privacy and security, database application testing, and big data analysis.
His recent research has developed privacy-preserving
techniques for mining tabular data, social network data, healthcare
data, and GWAS data, and spectral-analysis-based fraud
detection.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>