<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>SEBD</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Julia Stoyanovich</string-name>
          <email>stoyanovich@nyu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science and Engineering</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>New York University</institution>, <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>31</volume>
      <fpage>02</fpage>
      <lpage>05</lpage>
      <abstract>
        <p>Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair and interpretable machine learning. Yet, much of this work has been limited to the “last mile” of data analysis, disregarding both the data lifecycle and the lifecycle of a system's design, development, and use. In my talk, I argued that the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build, and that our responsibility for the operation of these systems does not stop once they are deployed. I discussed technical work, and placed this work into the broader context of policy, education, and public outreach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Short Biography</title>
      <p>Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University. Her goal is to make “Responsible AI” synonymous with “AI”. She works towards this goal by engaging in academic research, education, and technology regulation, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia's research interests include AI ethics and legal compliance, data management and AI systems, and computational social choice. She has co-authored over 100 academic publications, and has written for the New York Times, the Wall Street Journal, and Le Monde. Julia has been teaching courses on responsible data science and AI to students, practitioners, and the general public. She is a co-author of the “Data, Responsibly” and “We are AI” comic book series. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics &amp; Statistics from UMass Amherst. Julia's work has been generously supported by the US National Science Foundation, Pivotal Ventures, JP Morgan Chase, and Meta Responsible AI, among others. She is a recipient of an NSF CAREER award and a Senior Member of the ACM.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>