<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Classification Parity, Causal Equal Protection and Algorithmic Fairness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marcello Di Bello</string-name>
          <email>mdibello@asu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicolò Cangiotti</string-name>
          <email>nicolo.cangiotti@polimi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Loi</string-name>
          <email>michele.loi@polimi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Arizona State University</institution>
          ,
          <addr-line>975 S. Myrtle Ave P.O.Box 874302, Tempe, AZ 85287</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Polytechnic University of Milan</institution>
          ,
          <addr-line>via Bonardi 9, Campus Leonardo, 20133 Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The literature in computer science and philosophy has formulated several criteria of algorithmic fairness. One of these is classification parity also known under different names. It requires (roughly) that people who belong to different socially salient groups (say groups defined by race or gender) should have the same prospects of erroneous positive and negative classification by a predictive algorithm. At first blush, this is an intuitive criterion of algorithmic fairness. A number of authors in the legal and philosophical literature, however, have argued that classification parity is not a criterion of algorithmic fairness we should take seriously [1, 3, 5]. On the other hand, independently of discussions about algorithmic fairness, other authors [2] have defended an analogous yet different principle-equal protection-that applies to decisions in criminal trials. This principle requires that the risks of a mistaken positive classification (say, a conviction) be equal across factually negative (say, innocent) individuals who belong to different relevant groups. Equal protection is a form of classification parity for false positives in which the true value of the target variable is “innocence” and the “relevant group” is picked out by any feature used as a statistical profile. Given the similarity between classification parity and equal protection, we explore the relationships between the two. In particular, we seek to address two questions. First, is equal protection threatened by the criticisms of classification parity as a plausible criterion of algorithmic fairness? Conversely, if equal protection can be defended as a criterion of algorithmic fairness, to what extent does this contribute to make classification parity plausible in general as a principle of fairness for prediction-based decisions? 
To keep the discussion manageable, we focus on classification parity in the context of trial decisions to which equal protection was originally intended to apply.</p>
      </abstract>
      <kwd-group>
<kwd>Algorithmic Fairness</kwd>
        <kwd>Classification Parity</kwd>
        <kwd>Equal protection</kwd>
        <kwd>Counterfactual Fairness</kwd>
        <kwd>Statistical Profile</kwd>
        <kwd>Diagnostic Evidence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Causal Equal Protection</title>
      <p>
        Inspired by discussions in Kusner [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], Hedden [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], Long [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and Beigang [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] we begin by drawing a
distinction between classification parity and what we will call causal equal protection. Unlike
classification parity, causal equal protection requires that innocent individuals not be exposed to higher
risks of conviction because they belong to a specific profiled group. The two requirements are not the
same: suppose that judges’ decisions are in part guided by the statistical profile that young individuals
are more likely to commit criminal acts. In this case, classification parity can be violated across two
groups, say Orange and Green, even if this violation only happens because Orange includes a higher
proportion of young individuals compared to Green. Instead, causal equal protection would not be
violated under similar circumstances so long as Orange individuals are not more likely to be
misclassified because they are Orange. But causal equal protection would be violated if the feature of
being Orange were to guide trial decisions. It would also be violated even when the feature of being
Orange was not intentionally used as a criterion in trial decisions, but decisions to convict were not
counterfactually fair [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for members of Orange and Green. In other words, causal equal protection can
be viewed as counterfactual fairness conditional on the target variable.
      </p>
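      <p>The contrast can be illustrated with a small simulation. This is a toy sketch: the age-profile risks, the groups' age compositions, and the colors Orange and Green are all hypothetical assumptions, not data. Verdicts on innocent defendants are driven only by an age-based profile; Orange merely skews younger than Green.</p>
      <preformat>
```python
import random
from operator import lt

random.seed(0)

# Hypothetical numbers: the false-conviction risk for innocents depends
# only on the age profile ("young people are more likely to offend"),
# never on color.
RISK_BY_AGE = {"young": 0.30, "old": 0.10}
P_YOUNG = {"Orange": 0.7, "Green": 0.3}   # Orange merely skews younger

def convicted(age, color, u):
    # The verdict is a function of age and exogenous noise u alone;
    # the color argument is ignored, so color plays no causal role.
    return lt(u, RISK_BY_AGE[age])

# False-positive rate among innocents, per color group.
fpr = {}
for color in ("Orange", "Green"):
    n = 100_000
    hits = 0
    for _ in range(n):
        age = "young" if lt(random.random(), P_YOUNG[color]) else "old"
        u = random.random()
        if convicted(age, color, u):
            hits += 1
    fpr[color] = hits / n

# Classification parity is violated across the two groups...
print(fpr)   # roughly {'Orange': 0.24, 'Green': 0.16}

# ...but causal equal protection holds: a counterfactual flip of color,
# holding age and noise fixed, never changes the verdict.
age, u = "young", random.random()
assert convicted(age, "Orange", u) == convicted(age, "Green", u)
```
      </preformat>
      <p>Classification parity fails (Orange innocents face a higher overall risk of false conviction), yet flipping an individual's color while holding age and noise fixed never changes the verdict; so causal equal protection, read as counterfactual fairness conditional on innocence, is satisfied.</p>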
    </sec>
    <sec id="sec-3">
      <title>3. Statistical Profiles</title>
      <p>With the distinction between classification parity and causal equal protection in hand, we ask whether
the intuitive unfairness of violating classification parity or causal equal protection depends on whether
the decision is guided by a statistical profile rather than by evidence of some other kind.</p>
      <p>We begin by drawing a distinction between statistical profiles (or predictive evidence) and
diagnostic evidence, based on causal ordering. Predictive evidence lies upstream in the causal structure
relative to the target variable, while diagnostic evidence lies downstream. When diagnostic evidence is
used to guide a decision, the causal path from group membership to the decision is typically mediated
by the target variable. When predictive evidence is used, the causal path from group membership to
the decision is unmediated by the target variable. So the use of a group characteristic (e.g., Orange) as
a statistical profile will violate causal equal protection. But no such violation should occur when
diagnostic evidence guides the decision: insofar as the evidence is causally downstream relative to the
target variable, any causal influence of the group membership onto the decision would be blocked by
conditioning on the target variable.</p>
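      <p>A toy structural model can make the blocking claim concrete (all probabilities are hypothetical): a predictive feature, age, sits upstream of the target variable, while a diagnostic signal, here an eyewitness identification, sits downstream of it; conditioning on innocence therefore screens off the diagnostic path from group membership but not the predictive one.</p>
      <preformat>
```python
import random
from operator import lt

random.seed(1)

# Toy structural model (all probabilities hypothetical):
# group influences age (predictive evidence, upstream of the target);
# guilt influences the eyewitness hit (diagnostic evidence, downstream).
P_YOUNG = {"Orange": 0.7, "Green": 0.3}
P_EYEWITNESS = {True: 0.8, False: 0.1}   # depends on guilt only, not group

def sample_innocent(group):
    age = "young" if lt(random.random(), P_YOUNG[group]) else "old"
    eyewitness = lt(random.random(), P_EYEWITNESS[False])  # innocent
    return age, eyewitness

rates = {}
for group in ("Orange", "Green"):
    n = 100_000
    draws = [sample_innocent(group) for _ in range(n)]
    rates[group] = {
        "young": sum(1 for a, _ in draws if a == "young") / n,
        "eyewitness": sum(1 for _, e in draws if e) / n,
    }

# Conditional on innocence, the diagnostic signal is screened off from
# group membership, while the predictive feature is not:
print(rates["Orange"])   # roughly {'young': 0.70, 'eyewitness': 0.10}
print(rates["Green"])    # roughly {'young': 0.30, 'eyewitness': 0.10}
```
      </preformat>
      <p>Among innocents, the eyewitness rate is the same in both groups (the causal influence of group is blocked by conditioning on the target variable), whereas the age profile remains group-dependent; this is why its use as evidence violates causal equal protection.</p>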
      <p>We argue that cases in which statistical profiles are deployed in decision-making are clearly
unfair because, by construction, they violate causal equal protection (and not merely classification
parity).</p>
    </sec>
    <sec id="sec-4">
      <title>4. A Pro Tanto Principle</title>
      <p>
        We conclude by showing that causal equal protection can be defended from the objections in Hedden
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and Long [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] against classification parity. When classification parity is violated because statistical
profiles are used (as opposed to diagnostic evidence), causal equal protection is violated and this
violation is pro tanto unfair irrespective of the nature of the group in question. In contrast, when
classification parity is violated by a decision-making process that relies on diagnostic information, it is
implausible to regard this violation as morally problematic in the general case. The pro tanto reasons
for blocking predictions causally influenced by group features must, however, be weighed against
consequentialist considerations, including the loss of predictive accuracy and its distributive effects.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Beigang</surname>
          </string-name>
          (
          <year>2023</year>
          ),
          <article-title>Reconciling Algorithmic Fairness Criteria</article-title>
          ,
          <source>Philosophy and Public Affairs</source>
          <volume>51</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Di Bello</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>C.</given-names>
            <surname>O'Neil</surname>
          </string-name>
          (
          <year>2020</year>
          ),
          <article-title>Profile Evidence, Fairness, and the Risks of Mistaken Convictions</article-title>
          ,
          <source>Ethics</source>
          <volume>130</volume>
          (
          <issue>2</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hedden</surname>
          </string-name>
          (
          <year>2021</year>
          ),
          <article-title>On Statistical Criteria of Algorithmic Fairness</article-title>
          ,
          <source>Philosophy &amp; Public Affairs</source>
          <volume>49</volume>
          (
          <issue>2</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kusner</surname>
          </string-name>
          et al. (
          <year>2018</year>
          ),
          <article-title>Counterfactual Fairness</article-title>
          , preprint: https://arxiv.org/abs/1703.06856.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Long</surname>
          </string-name>
          (
          <year>2021</year>
          ),
          <article-title>Fairness in Machine Learning: Against False Positive Rate Equality</article-title>
          ,
          <source>Journal of Moral Philosophy</source>
          <volume>19</volume>
          (
          <issue>1</issue>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>