<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Proof System with Causal Labels (Part II): checking Counterfactual Fairness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Leonardo Ceragioli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Primiero</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Milan</institution>
          ,
          <addr-line>Via Festa del Perdono 7, Milano, 20122</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>In this article we propose an extension of the typed natural deduction calculus TNDPQ to model verification of counterfactual fairness in probabilistic classifiers. This is obtained by formulating specific structural conditions for causal labels and checking that evaluation is robust under their variation. The calculus TPTND (Trustworthy Probabilistic Typed Natural Deduction [1, 2]) is designed to evaluate post-hoc the trustworthiness of the behavior of opaque systems. The system is implemented for verification of dataframes in the tool BRIO [3, 4]. In [5], we introduced TNDPQ (Typed Natural Deduction for Probabilistic Queries), a variation of the previous system in which a probabilistic output is associated to a target variable when a Data Point - consisting of a list of value attributions for a set of variables - is provided. In this paper, we extend this system with tools to verify counterfactual fairness of classifiers. We start with a formal definition of classifiers. Let A be a set of protected variables A1, . . . , An, X be a (disjoint) set of non-protected variables X1, . . . , Xm, and Y be a target variable. Moreover, let VAi be a set of values ai^1, ai^2, . . . , ai^ki that Ai can receive, VA the set of all VAi; let VXj be a set of values xj^1, xj^2, . . . , xj^kj that Xj can receive, VX the set of all VXj; and let VY be the set y1, y2, . . . , yh of values that Y can receive. Let us use W1, . . . , Wn+m to denote elements of A ∪ X (that is, variables regardless of their protected or unprotected status), and w1, . . . , wk to denote the values that W can receive. We use A : a (respectively X : x) to express the judgment that variable A receives value a (respectively, variable X receives value x), and Y : y1^α1, . . . , yh^αh to express the probabilistic judgment that y1, . . . , yh are all the possible values that variable Y can receive and that, for 1 ≤ i ≤ h, it receives value yi with probability αi.1 We use JA for the set of all the judgments about protected variables, JX for the set of all the judgments about non-protected variables, and JP for the set of all probabilistic judgments. Moreover, we use ΔA to express a set of judgments about protected variables such that each element of A receives at most one value, ΔX to express a set of judgments about non-protected variables such that each element of X receives at most one value, ΣA to refer to the set of all ΔA, and ΣX to refer to the set of all ΔX. Δ is used to express the union of a ΔA and a ΔX, and Σ is used to refer to the set of all Δ. More formally:</p>
      </abstract>
      <kwd-group>
        <kwd>Fairness</kwd>
        <kwd>Probabilistic Logic</kwd>
        <kwd>Causal Graphs</kwd>
        <kwd>Structural Rules</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>ΣA = { ΔA ⊆ JA | ∀(A : a ∈ ΔA ∧ A : a′ ∈ ΔA → a = a′)}</p>
      <p>7th International Workshop on Artificial Intelligence and Formal Verification, Logic, Automata, and Synthesis (OVERLAY 2025),
October 26, 2025, Bologna, Italy
* Corresponding author.
†These authors contributed equally.
Email: leonardo.ceragioli@unimi.it (L. Ceragioli); giuseppe.primiero@unimi.it (G. Primiero)
ORCID: 0000-0001-5250-9720 (L. Ceragioli); 0000-0003-3264-7100 (G. Primiero)</p>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
1 We assume that the values for Y (also for the elements of A and X, but this is irrelevant here) are all mutually exclusive, and so ∑i αi = 1. Note that we make no assumption regarding whether Y ∈ A ∪ X, and so on whether VY = VAi or VY = VXj for some i, j.</p>
      <p>ΣX = { ΔX ⊆ JX | ∀(X : x ∈ ΔX ∧ X : x′ ∈ ΔX → x = x′)}</p>
      <p>A classifier f̂ ∈ F̂ is a function from Σ to JP, where each Δ ∈ Σ describes a Data Point, that is, what we know about a subject, and the probabilistic judgment Y : y1^α1, . . . , yh^αh in JP represents the output of the classifier regarding the probability distribution of the possible values for the target variable Y.</p>
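<p>For readers who prefer an operational view, the definition above can be sketched in a few lines of code (an illustrative sketch only: the feature names and the decision rule below are invented and not part of the calculus):</p>

```python
# A Data Point (a Δ in Σ): each variable receives at most one value.
# A classifier f̂ maps a Data Point to a probabilistic judgment,
# i.e. a distribution over the values of the target variable Y.

def toy_classifier(dp: dict) -> dict:
    """Invented decision rule, purely for illustration."""
    p_yes = 0.6 if dp.get("GAI", 0) >= 50 else 0.3
    return {"yes": p_yes, "no": 1 - p_yes}

dist = toy_classifier({"Age": 27, "GAI": 65})
# The values of Y are mutually exclusive and exhaustive (note 1),
# so the probabilities must sum to 1.
assert abs(sum(dist.values()) - 1.0) < 1e-9
```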
      <p>
        TNDPQ is a proof system working with sequents describing the results of queries for classifiers. More
precisely, each classifier f̂ is characterized by a set of ground sequents of the form:2
Δ |∼ Y : y^α (1)
For readability reasons, the sequents focus on only one possible value for the target variable at a time. In
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] we show how to extend TNDPQ with logically complex sequents – possibly
non-atomic variables receiving possibly non-atomic values. As an example, the following sequent
expresses the probability that a non-white 27 years old man who is married or divorced and has a gross
annual income of 65000 receives a loan:
Age : 27, Gend. : m, M.S. : m + d, Eth. : wh⊥, GAI : 65 |∼ Loan : yes^0.60
      </p>
      <p>
        TNDPQ was initially designed to investigate the preservation of trustworthiness under the
composition of logically simpler queries and then extended with causal labels to verify individual and
intersectional fairness for a probabilistic classifier via structural properties, see [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this paper,
we focus only on the atomic fragment and provide a criterion to verify counterfactual fairness for a
probabilistic classifier.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Counterfactual Fairness</title>
      <p>
        Counterfactual fairness requires that a subject would not have been treated differently had their
protected attributes been different [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Formally, it can be defined as follows:
Definition 2.1 (Counterfactual Fairness (CF)). An algorithm is counterfactually fair regarding a
protected variable A if, given a Data Point Δ describing an actual individual, the algorithm gives the
same output to both Δ and to the Data Point Δ′ describing how the individual would have been, had the
protected variable A received a different value.
      </p>
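<p>Ignoring causal structure for a moment, Definition 2.1 can be read operationally as follows (a sketch with invented names; as discussed next, this naive reading is too weak once causal relations are considered):</p>

```python
def naive_cf_check(classifier, dp, protected, other_value):
    """Naive CF check: flip only the protected variable and
    compare the two output distributions of the classifier."""
    dp_cf = dict(dp)
    dp_cf[protected] = other_value
    return classifier(dp) == classifier(dp_cf)

# A toy classifier that ignores "Gend.", so the naive check succeeds.
def classifier(dp):
    p = 0.6 if dp.get("GAI", 0) >= 50 else 0.3
    return {"yes": p, "no": 1 - p}

print(naive_cf_check(classifier, {"Gend.": "m", "GAI": 65}, "Gend.", "f"))
```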
      <p>As an example, we may wonder whether the probability of receiving a loan in the previous example would still have been 60%, had the subject been a woman. If we do not consider the connections between the features, this question just corresponds to whether or not the following sequent is derivable in the system:3
Age : 27, Gend. : f, M.S. : m + d, Eth. : wh⊥, GAI : 65 |∼ Loan : yes^0.60
However, when causal relations are taken into account, it is a trivial observation that gender influences other features (both directly and indirectly). For example, both through objective physical differences and prejudices, gender influences job opportunities, and therefore the counterpart of a subject having a different gender would probably not have the same GAI. This makes the identification of the Data Point Δ′ a lot more complicated.</p>
      <p>
        Hence, assessing CF requires more precise formal tools for counterfactual situations. The usual
approaches are possible worlds semantics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and causal models [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In the following we choose the
second method.
      </p>
      <p>Figure 1: (a) Graph of the Factual Data Point; (b) graph of the Counterfactual Data Point after the intervention assigning f to Gender. Nodes: Gender, M.S., Degree, GAI, SAT score, Experience, Loan.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Causal Relations</title>
      <p>We first consider how counterfactual situations are dealt with in causal models, and then internalize both
causal relations and the characterization of counterfactuals in our calculus TNDPQ. For the purposes
of this work, we define causal graphs as follows:
Definition 3.1 (Causal Graph). A causal graph is an acyclic directed graph with nodes representing
events (variables receiving values) and edges representing immediate causal relations.</p>
      <p>By closing edges under transitivity, we obtain the notion of mediate cause. For purely formal reasons,
we close the notion of cause under reflexivity as well. The usual extension of deterministic causal
graphs with functions to compute the value of a node on the basis of those of all the immediate parent
nodes is here expressed by sequents like in equation 1.4</p>
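<p>The closure that yields the mediate-cause relation can be computed mechanically; a minimal sketch (edges are ordered pairs; the node names are invented for the example):</p>

```python
def mediate(immediate):
    """Reflexive-transitive closure of the immediate-cause
    relation ◁, giving the mediate-cause relation ▶."""
    nodes = {n for edge in immediate for n in edge}
    closure = {(n, n) for n in nodes} | set(immediate)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

edges = {("Age", "GAI"), ("GAI", "Loan")}
print(("Age", "Loan") in mediate(edges))  # transitivity gives a mediate cause
```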
      <p>
        As already shown, to capture counterfactuals one cannot just change the protected attribute and leave
all other variables fixed: the properties of the subject that do not depend on the protected variable need
to be identified and kept fixed. For this, the usual approach (followed, for example, in [
        <xref ref-type="bibr" rid="ref7 ref9">9, 7</xref>
        ]) is to rely on
the distinction between exogenous and endogenous variables: exogenous variables represent attributes
that have no direct cause in the graph, while endogenous ones represent their consequences. By keeping
fixed the values of the exogenous variables (possibly with the exclusion of the protected variable), we make
sure that the causal graph represents the counterfactual situation. In fact, exogenous variables cannot
be consequences of the protected variable. Moreover, under the assumptions of causal models, it is
possible to calculate the values of the endogenous variables from those of the exogenous ones.
      </p>
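<p>Under this distinction, the exogenous variables of a causal graph are simply those with no incoming edge; a one-function sketch (names invented):</p>

```python
def exogenous(immediate, variables):
    """Exogenous variables: those with no direct cause in the graph."""
    effects = {b for (_, b) in immediate}
    return {v for v in variables if v not in effects}

edges = {("Gend.", "GAI"), ("Age", "GAI")}
print(exogenous(edges, {"Gend.", "Age", "GAI"}))  # set order may vary
```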
      <p>According to this usual approach, the protected variable can be either exogenous (such as gender) or
endogenous (such as marital status, which, for example, depends on gender: there are more widows
than widowers). When the protected variable is endogenous, we erase all the edges that enter it, since
we want to assign its value ad arbitrium.</p>
      <p>
        In summary, to capture counterfactual situations, the usual approach prescribes to intervene on the
graph as follows [
        <xref ref-type="bibr" rid="ref7 ref9">9, 7</xref>
        ]:
      </p>
      <p>1. impose some value on the protected variable;
2. keep all the (other) exogenous variables fixed;
3. erase all the edges that enter the protected variable;
4. calculate the values of the endogenous variables (particularly of the target), using those of the exogenous variables.</p>
      <p>2 Technically, we should add a subscript in the equation specifying the classifier we are focusing on. However, this will not be needed here, since we will not compare outputs of different classifiers.
3 Note that this would make counterfactual and individual fairness indistinguishable from one another.
4 However, note that in causal models the function does not assign values in a probabilistic way, and probabilities come into play only at a later stage. On the contrary, our equations are probabilistic in the strictest sense.</p>
      <p>The third point is relevant only when the protected variable is not exogenous; otherwise, it is vacuously
satisfied. Now, to check CF we just need to verify whether the resulting value of the target variable is
the same.</p>
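<p>Steps 1-3 of this recipe operate purely on the graph and the value assignments and can be sketched directly (all names are invented; step 4 requires the structural equations, which in our setting are replaced by the classifier's sequents, so it is left out):</p>

```python
def intervene(immediate, exo_values, protected, value):
    """Steps 1-3: impose `value` on the protected variable, keep the
    other exogenous values fixed, and erase all edges entering the
    protected variable. Step 4 (recomputing endogenous values) is
    not modeled here."""
    new_edges = {(a, b) for (a, b) in immediate if b != protected}
    new_values = {v: w for v, w in exo_values.items() if v != protected}
    new_values[protected] = value
    return new_edges, new_values

# "M.S." is endogenous here (it has an incoming edge), so step 3 applies.
edges = {("Age", "M.S."), ("M.S.", "Loan")}
g, vals = intervene(edges, {"Age": 27, "M.S.": "div"}, "M.S.", "single")
print(g, vals)
```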
      <p>We slightly modify this approach to apply it to ML classifiers. The assignment of a value to the target variable depends on the selection of a set of exogenous variables, so we have to be sure to work with an adequate set of such variables. Moreover, while in causal models we can require a sufficiently large set of exogenous variables, we cannot ask the same of the set of entries of a classifier, which we cannot change. For this reason, instead of relying only on the exogenous variables, we will use all and only the variables that are not (direct or indirect) effects of the protected one, and ignore all the others. This allows us to use all the factual information that remains valid in the counterfactual situation in order to derive the counterfactual classification. An example of the graphs corresponding to a factual and a counterfactual Data Point is shown in figure 1.</p>
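<p>The modified selection criterion - keep all and only the variables that are not effects of the protected one - can be sketched as follows (an illustration with invented names, not the calculus itself):</p>

```python
def surviving_features(immediate, protected, dp):
    """Keep the features of the Data Point that are not (direct or
    indirect) effects of the protected variable."""
    desc, frontier = set(), {protected}
    while frontier:                       # walk the ◁ edges forward
        frontier = {b for (a, b) in immediate if a in frontier} - desc
        desc |= frontier
    return {v: w for v, w in dp.items() if v != protected and v not in desc}

edges = {("Gend.", "GAI"), ("GAI", "Loan"), ("Age", "Loan")}
print(surviving_features(edges, "Gend.", {"Gend.": "m", "GAI": 65, "Age": 27}))
# {'Age': 27}
```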
    </sec>
    <sec id="sec-4">
      <title>4. Adding Counterfactuals to TNDPQ</title>
      <p>
        To express the fact that a Data Point is the counterfactual of another, we need to internalize both causal
relations and interventions in our calculus TNDPQ. For this purpose, we use the methodology of labeled
calculi [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. First, we extend the language with the following relational predicates for variables and
expressions for interventions on Data Points, as already introduced in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]:
Immediate Causal Relations: W ◁ W′ = W is an immediate cause of W′.
      </p>
      <p>Mediate Causal Relations: W ▶Π W′ = W is a mediate cause of W′, with intermediate causes Π.
Intervention on Data Point: [◁f̂ , Δ](A : a) = an intervention assigning the value a to the variable A is operated on the Data Point Δ and its associated graph ◁f̂.5
Then, we reformulate TNDPQ sequents by extending their left-hand side:</p>
      <p>◁f̂ , Δ |∼ Y : y^α    [◁f̂ , Δ](A : a) |∼ Y : y^β (2)</p>
      <p>Let us use Var(Δ) to indicate the set of variables that occur in Δ. We use ◁f̂ to indicate all the immediate causal relations among features in the classifier. ▶f̂ denotes all the mediate causal relations in the resulting graph and is derivable as the closure of ◁f̂ under reflexivity and transitivity. We will use ◁′f̂ and ▶′f̂ respectively for different sets of direct and indirect causal relations. Hence, the first sequent of equation 2 gives an output for the target Y in the actual situation (that is, Δ), also specifying the immediate causal relations that hold between the features of the classifier (◁f̂), while the second is a hypersequent that gives an output for the target Y in the counterfactual situation resulting from ◁f̂ , Δ by the intervention that assigns a to A. We call A the variable of intervention. Although ◁f̂ is not actually used by the classifier to evaluate Δ, it is relevant in the calculus to check whether a Data Point is the counterfactual of another.</p>
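<p>As a data structure, the two sides of equation 2 can be encoded by pairing the causal labels with the Data Point, with the counterfactual hypersequent additionally recording the intervention (a hypothetical encoding, not prescribed by the calculus):</p>

```python
# Factual sequent: ◁ relations, Data Point Δ, and output Y : y^α.
factual = {
    "edges": {("Gend.", "GAI")},       # hypothetical ◁ relations
    "dp": {"Gend.": "m", "GAI": 65},   # Δ
    "output": ("Loan", "yes", 0.60),   # Y : y^α
}
# Counterfactual hypersequent: the same data plus the intervention (A : a).
counterfactual = dict(factual, intervention=("Gend.", "f"))
print(counterfactual["intervention"])
```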
      <p>Example 1 (Factual and Counterfactual sequents). The following sequents express, respectively, that the probability of receiving a loan for a 27 years old person with a gross annual income of 40.000 is 60%, and that it would have been 50% had they been 35 years old:
◁f̂ , Age : 27, GAI : 40 |∼ Loan : yes^0.60
[◁f̂ , Age : 27, GAI : 40](Age : 35) |∼ Loan : yes^0.50
5 Notice that we focus on interventions on protected variables, although interventions on unprotected variables could also be studied.
While sequents describing actual decisions of a classifier, such as the one on the left of equation 2, are assumptions of TNDPQ, those describing counterfactual decisions, such as the one on the right of the equation, are derivable. To derive them, we start with a plausible sequent for the counterfactual, which can be obtained using only the features that do not depend on the variable of intervention to run the classifier:</p>
      <p>◁′f̂ , Δ′ |∼ Y : y^γ
Let us call this the counterfactual candidate. Then, we apply the rule C-Weakening in table 1, adding [◁f̂ , Δ](A : a). The resulting hypersequent [◁f̂ , Δ](A : a), ◁′f̂ , Δ′ |∼ Y : y^γ can be interpreted as saying that the classifier assigns probability γ to the value assignment Y : y for the Data Point Δ′, which we regard as a counterfactual candidate for the Data Point Δ after the intervention assigning to A the value a. The formulas ◁f̂ and ◁′f̂ represent, respectively, the graph describing the causal relations between the variables of the classifier and the same graph after the intervention.</p>
      <p>If ◁′f̂ , Δ′ is really the counterfactual of ◁f̂ , Δ after the intervention (A : a), by applying the rules of Cut in table 1 we end with a sequent of the form:
[◁f̂ , Δ](A : a) |∼ Y : y^γ
More precisely, the rule A-Cut erases A : a from the premise, that is, the value assignment to the protected variable imposed by the intervention. Hence, if the counterfactual candidate contains A : a, it can be erased. The rule ◁-Cut erases W ◁ W′, under the condition that W′ ≠ A, enabling the erasure of all direct causal relations that do not enter the protected variable. The rule W-Cut erases W : w, that is, the assignment of value given to W by the original Data Point, under the condition that W is not a consequence of A. This is established by checking ▶f̂, that is, the set of mediate causal relations resulting from the graph before the intervention. Hence, all the assignments of values to the variables that are not consequences of the protected variable in the original graph can be erased.</p>
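<p>The erasure discipline of these Cut rules can be simulated to decide whether a candidate is fully erasable (a sketch of the side conditions only, with invented names; the actual rules live in table 1):</p>

```python
def fully_erasable(cand_edges, cand_dp, edges, dp, protected, value):
    """True iff every formula of the counterfactual candidate is erased:
    the imposed assignment, the edges not entering the protected
    variable, and the assignments to non-consequences of it."""
    desc, frontier = set(), {protected}
    while frontier:                       # consequences of A via ▶
        frontier = {b for (a, b) in edges if a in frontier} - desc
        desc |= frontier
    rest = {v: w for v, w in cand_dp.items()
            if not (v == protected and w == value)              # imposed value
            and not (v not in desc and dp.get(v) == w)}         # non-consequence
    rest_edges = {(a, b) for (a, b) in cand_edges
                  if not (b != protected and (a, b) in edges)}  # kept edges
    return not rest and not rest_edges

edges = {("Age", "Gend."), ("Gend.", "GAI")}
dp = {"Age": 27, "Gend.": "m", "GAI": 65}
print(fully_erasable({("Gend.", "GAI")}, {"Age": 27, "Gend.": "f"},
                     edges, dp, "Gend.", "f"))  # True
```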
      <p>The label C of these rules refers to the fact that they can be seen as contractions of Cut applications of the following kind:6
[◁′f̂ , Δ′](A : a) |∼ A : a^1    ◁f̂ , Δ, A : a |∼ Y : y^γ
[◁′f̂ , Δ′](A : a), ◁f̂ , Δ |∼ Y : y^γ</p>
      <p>where the first premise says that in the counterfactual Data Point obtained from ◁′f̂ , Δ′ by the intervention (A : a), variable A receives the value a with probability 1 (that is, with certainty), and the second premise gives the probability of Y : y in the counterfactual candidate. ◁-Cut and W-Cut are contractions of similar Cut applications.
6 See note 1 for the occurrence of A as target variable.</p>
      <p>If by applying all these rules we can erase all the formulas in the counterfactual candidate, then this is really the Data Point corresponding to the counterfactual of the factual Data Point. Now, all we have to do to check CF is to compare the probabilities α and β. This can be done either by requiring their identity or by requiring a threshold on their difference.</p>
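<p>The final comparison is a one-liner; identity is the special case of a zero threshold (a sketch; the threshold policy is a design choice left open in the text):</p>

```python
def cf_satisfied(alpha, beta, threshold=0.0):
    """CF holds when the factual and counterfactual probabilities
    coincide, or differ at most by `threshold`."""
    return abs(alpha - beta) <= threshold

print(cf_satisfied(0.60, 0.60))        # identity
print(cf_satisfied(0.60, 0.50, 0.05))  # difference above the threshold
```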
      <p>Example 2 (Evaluation of CF for a classifier). Let us use ◁f̂ for the causal relations resulting from graph (a) of figure 1 and ◁′f̂ for the causal relations resulting from graph (b) of the same figure. Moreover, let us assume that the first of the following sequents describes a decision of a classifier, while the second is the hypersequent we want to derive:
◁f̂ , M.S. : div, Gend. : m, SAT : 1100, GAI : 65, Deg. : Tech, Exp : 5 |∼ Loan : yes^0.60
[◁f̂ , M.S. : div, Gend. : m, SAT : 1100, GAI : 65, Deg. : Tech, Exp : 5](Gend. : f) |∼ Loan : yes^0.60</p>
      <p>The derivation can be constructed as follows, using [. . .] for [◁f̂ , M.S. : div, Gend. : m, SAT : 1100, GAI : 65, Deg. : Tech, Exp : 5] and relying only on the rules in table 1:
◁′f̂ , M.S. : div, Gend. : f, SAT : 1100, Deg. : Tech |∼ Loan : yes^0.60
[. . .](Gend. : f), ◁′f̂ , M.S. : div, Gend. : f, SAT : 1100, Deg. : Tech |∼ Loan : yes^0.60 (C-W)
[. . .](Gend. : f), ◁′f̂ , M.S. : div, SAT : 1100, Deg. : Tech |∼ Loan : yes^0.60 (A-Cut)
[. . .](Gend. : f), M.S. : div, SAT : 1100, Deg. : Tech |∼ Loan : yes^0.60 (◁-Cut)
[◁f̂ , M.S. : div, Gend. : m, SAT : 1100, GAI : 65, Deg. : Tech, Exp : 5](Gend. : f) |∼ Loan : yes^0.60 (W-Cut)
Moreover, since the probability that the variable Loan receives value yes is the same in both sequents, this application of the classifier satisfies CF.</p>
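<p>Example 2 can be replayed numerically by simply looking up and comparing the two probabilities (the stored values stand in for the classifier's sequents; feature names follow figure 1):</p>

```python
# P(Loan : yes) for the factual Data Point and for the counterfactual
# candidate that keeps only the features that are not effects of Gend.
factual_p = 0.60    # ◁f̂, M.S. : div, Gend. : m, SAT : 1100, ... |∼ Loan : yes^0.60
candidate_p = 0.60  # ◁′f̂, M.S. : div, Gend. : f, SAT : 1100, Deg. : Tech |∼ Loan : yes^0.60
print("CF satisfied:", factual_p == candidate_p)
```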
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This work focuses on formal tools to check counterfactual fairness of probabilistic classifiers. We have seen how causal models allow us to formalize counterfactuals, and argued that a modified version of their approach is suitable to test fairness for classifiers. We have proposed a typed natural deduction calculus TNDPQ extended with labels representing causal relations and expressions for interventions to internalize this approach.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was supported by the Ministero dell’Università e della Ricerca (MUR) through PRIN 2022
Project SMARTEST – Simulation of Probabilistic Systems for the Age of the Digital Twin (20223E8Y4X),
and through the Project “Departments of Excellence 2023-2027” awarded to the Department of
Philosophy “Piero Martinetti” of the University of Milan.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Grammarly for grammar and spelling checking. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] F. A. D'Asaro, F. A. Genco, G. Primiero,
          <article-title>Checking trustworthiness of probabilistic computations in a typed natural deduction system</article-title>
          ,
          <source>Journal of Logic and Computation</source>
          (
          <year>2025</year>
          ) exaf003. URL: https://doi.org/10.1093/logcom/exaf003. doi:10.1093/logcom/exaf003.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kubyshkina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Primiero</surname>
          </string-name>
          ,
          <article-title>A possible worlds semantics for trustworthy non-deterministic computations</article-title>
          ,
          <source>International Journal of Approximate Reasoning</source>
          <volume>172</volume>
          (
          <year>2024</year>
          )
          <article-title>109212</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S0888613X24000999. doi:10.1016/j.ijar.2024.109212.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] G. Coraglia, F. A. D'Asaro, F. A. Genco, D. Giannuzzi, D. Posillipo, G. Primiero, C. Quaggio,
          <article-title>Brioxalkemy: a bias detecting tool</article-title>
          , in: G. Boella, F. A. D'Asaro, A. Dyoub, L. Gorrieri, F. A. Lisi, C. Manganini, G. Primiero (Eds.),
          <source>Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023), Rome, Italy, November 6, 2023</source>
          , volume
          <volume>3615</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>44</fpage>
          -
          <lpage>60</lpage>
          . URL: https://ceur-ws.org/Vol-3615/paper4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] G. Coraglia, F. A. Genco, P. Piantadosi, E. Bagli, P. Giuffrida, D. Posillipo, G. Primiero,
          <article-title>Evaluating AI fairness in credit scoring with the BRIO tool</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2406.03292. arXiv:2406.03292.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ceragioli</surname>
          </string-name>
          , G. Primiero,
          <article-title>Trustworthiness preservation by copies of machine learning systems (submitted)</article-title>
          . URL: https://doi.org/10.48550/arXiv.2506.05203.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] L. Ceragioli, G. Primiero,
          <article-title>A proof system with causal labels (part i): checking individual fairness and intersectionality</article-title>
          , in:
          <source>Proceedings of the 7th International Workshop on Artificial Intelligence and Formal Verification, Logic, Automata, and Synthesis (OVERLAY 2025), Bologna, Italy, October 26, 2025</source>
          , CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kusner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Loftus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Silva</surname>
          </string-name>
          , Counterfactual fairness, volume
          <volume>30</volume>
          , Massachusetts Institute of Technology Press,
          <year>2017</year>
          , pp.
          <fpage>4067</fpage>
          -
          <lpage>4077</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Lewis</surname>
          </string-name>
          , Counterfactuals, Blackwell, Malden, Mass.,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Glymour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jewell</surname>
          </string-name>
          , Causal Inference in Statistics: A Primer, Wiley,
          <year>2017</year>
          . URL: https://books.google.it/books?id=L3G-CgAAQBAJ.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] L. Viganò, Labelled Non-Classical Logics, Springer, New York,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] S. Negri, J. von Plato
          ,
          <article-title>Proof Analysis: A Contribution to Hilbert's Last Problem</article-title>
          , Cambridge University Press, Cambridge,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>