<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Privacy Implications of Explainable AI in Data-Driven Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fatima Ezzeddine</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università della Svizzera italiana</institution>
          ,
          <addr-line>Lugano</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Machine learning (ML) models, demonstrably powerful, suffer from a lack of interpretability. The absence of transparency, often referred to as the black-box nature of ML models, undermines trust and motivates efforts to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques such as Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving this goal. Furthermore, high-quality and diverse data remain the foundational element for robust and trustworthy ML applications. In many applications, the data used to train ML models and XAI explainers contain sensitive information. In this context, numerous privacy-preserving techniques, such as differential privacy, can be employed to safeguard sensitive information in the data. Consequently, a conflict between XAI and privacy solutions emerges due to their opposing goals. Because XAI techniques provide reasoning for model behavior, they reveal information about ML models, such as their decision boundaries, feature values, or the gradients of deep learning models, when explanations are exposed to a third party. Attackers can exploit these explanations to mount privacy-breaching attacks, performing model extraction, inference, and membership attacks. This dilemma underscores the challenge of finding the right equilibrium between understanding ML decision-making and safeguarding privacy.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Context and Motivation</title>
      <p>
        In recent years, advancements in Artificial Intelligence (AI) have expanded beyond the primary
objective of predictive capabilities. Although accurate predictions are crucial, an equally
important goal has emerged: ensuring explainability. Explainability in Machine Learning (ML)
models has become a critical objective for making clear and justifiable predictions, especially in
high-stakes social decisions. It is essential for these models to offer clear and comprehensible
reasons for their predictions and decisions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In this context, Explainable AI (XAI) has emerged
as a crucial field of investigation. XAI methodologies are specifically designed to unveil the
decision-making processes of complex, opaque models, often referred to as black boxes. With
the use of XAI techniques, researchers can gain valuable insights into the reasoning behind
model decisions, after they have already been made [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. XAI techniques employ various
methods to interpret the inner workings of complex ML models and generate different
types of explanations, e.g., feature importance and counterfactual explanations. Producing
tailored explanations requires a combination of data, interpretable models, and explanatory
techniques, and often incorporates user interaction. XAI therefore starts with the foundational
element of data, which must be diverse and of high quality; this data is used not only to
train AI models but also to build the explainers.
      </p>
      <p>
        In many applications, the data used to train AI and XAI models contain sensitive information
about individuals, such as medical records or financial transactions, which the GDPR [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
seeks to safeguard. Different approaches have been proposed to protect sensitive information in
data, such as differential privacy (DP) and federated learning (FL). These approaches affect
predictive performance to some extent, yet they maintain
an acceptable level of accuracy. Subsequently, a conflict between ensuring transparency
through XAI and ensuring privacy emerges due to their opposing goals. XAI aims to provide
insights into model behavior for transparency, while privacy-preserving solutions obscure data
to prevent data leakage. Moreover, the output of XAI can unintentionally expose model decision
boundaries, leading to potential attacks on privacy [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. For instance, attackers can exploit
XAI explanations such as counterfactual explanations (CFs), which describe the minimal
feature-value change needed to alter the model decision and therefore return instances close
to the decision boundary. Feature Importance (FI), which scores the contribution and impact
of each feature on the model output, exposes information about the gradients in Deep Neural
Networks (DNNs) or about the feature values themselves. In
this context, attackers can initiate attacks from these explanations to perform model extraction,
inference, and membership attacks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], especially when the model is shared or deployed publicly
on the cloud as ML as a Service (MLaaS). This dilemma underscores the challenge of finding the
right equilibrium between explainability and safeguarding private information [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background on Explainable Artificial Intelligence</title>
      <sec id="sec-2-1">
        <title>2.1. Motivation and Definition</title>
        <p>
          In order to enhance transparency, XAI techniques provide the necessary tools to open up
complex black boxes and shed light on how AI decisions are made [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], promoting fairness,
transparency, and accountability within real-world organizations. Moreover, XAI has proven to
play a pivotal role in ensuring that AI is trusted and used responsibly. By answering essential
“How?” and “Why?” inquiries regarding AI systems, XAI serves as a valuable tool for tackling
the increasing ethical and legal issues associated with them. XAI serves diverse stakeholders,
including researchers, model developers such as engineers and data scientists, and practitioners.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Post-hoc Explainability</title>
        <p>Post-hoc explainability is a technique used to gain insight into the decision-making process of a
trained ML model. In this context, post-hoc means that the model’s interpretability is addressed
after training, regardless of the model’s complexity or the algorithms used. The approach primarily
revolves around querying the model with diverse sets of input data to observe how
it reacts to different scenarios. Through these interactions, we can effectively map out the
decision boundaries the model uses, shedding light on which factors influence its predictions.</p>
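As a minimal illustration of this query-based probing, the sketch below sweeps a grid of synthetic inputs over a toy black-box classifier (a hypothetical stand-in; only its query interface is assumed available) and records where the predicted label flips:

```python
# Illustrative sketch: probe a black-box model on a grid of inputs and
# approximate where its decision flips. The classifier is an assumed toy.
def black_box_predict(x0, x1):
    """Toy target model; its internals are treated as hidden."""
    return int(2.0 * x0 - x1 > 0.5)

n = 21
xs = [i / (n - 1) for i in range(n)]                      # query grid over [0, 1]^2
label = [[black_box_predict(xs[i], xs[j]) for j in range(n)] for i in range(n)]

# Approximate decision boundary: adjacent grid points with differing labels.
boundary = [(xs[i], xs[j]) for i in range(n - 1) for j in range(n)
            if label[i][j] != label[i + 1][j]]
```

Collecting such label flips over many probes yields exactly the boundary information that, as later sections discuss, an adversary can also abuse.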
        <p>Visualizations and explanations can then be applied to make these insights more accessible and
human-friendly, ultimately enabling a better understanding of the model’s predictions. These
visual aids are essential in making the insights accessible to data scientists, end users,
and domain experts who want to understand why the model makes specific predictions.
Through this process, post-hoc explainability plays a vital role in improving model
transparency and building trust in its performance.</p>
        <p>Understanding an AI system with XAI relies on its training data, process, and model.
Therefore, XAI can be applied throughout the entire AI development pipeline. Specifically, it can
be applied at different stages of modeling: before, during, and after (post-modeling
explainability). In this work, the primary emphasis will be on post-modeling XAI (Post-hoc),
since ML models are often developed with only predictive performance in mind.</p>
        <p>
          <bold>Feature Importance.</bold> Feature Importance (FI) explanations involve assigning a quantitative
measure in the form of a numerical score to each input feature within a given model. The
primary goal of calculating FI is to discern which features have influential effects on the model’s
predictions and which ones have a relatively lesser impact. These importance scores help
practitioners and data scientists gain insights into which factors are most critical in influencing
those decisions. Features that, when modified, cause more substantial shifts in the model’s
output are considered more important because they have a greater influence on the final
prediction. For deep learning models, many feature-based explanation functions are gradient-based
techniques that analyze the gradient flow through a model. Approaches in this family include Layer-wise
Relevance Propagation (LRP) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and Deep Learning Important FeaTures (DeepLIFT) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
<bold>Counterfactual Explanations.</bold> CFs leverage the concept of potential outcomes to assess causal
relationships within a data-model framework. CFs empower informed decision-making and
the implementation of explainable, accountable, and ultimately more ethically responsible
AI [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. They achieve this by constructing a hypothetical scenario, distinct from the observed
data, and evaluating the corresponding model output under this scenario. The generation of
informative and interpretable CFs necessitates the optimization of well-defined metrics [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]
such as diversity, validity, proximity, and user constraints. Model-specific methods
tailor the cost function optimization process to leverage the inherent characteristics of the
employed model. For instance, in the case of diferentiable models, gradients play a critical role
in guiding the optimization towards finding CFs [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Conversely, model-agnostic methods
achieve generalization across diverse model architectures [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ].
        </p>
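The gradient-guided search for CFs in differentiable models can be sketched as follows for a toy logistic regressor; the weights, step size, and proximity weight lam are illustrative assumptions, loosely in the spirit of Wachter-style CF optimization:

```python
import numpy as np

# Toy differentiable model: logistic regression with fixed (assumed) weights.
w = np.array([1.5, -2.0])
b = 0.25

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x0, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Minimize (f(x) - target)^2 + lam * ||x - x0||^2 by gradient descent."""
    x = x0.copy()
    for _ in range(steps):
        p = predict_proba(x)
        # Chain rule: d/dx (p - target)^2 = 2 (p - target) p (1 - p) w
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x - x0)
        x -= lr * grad
    return x

x0 = np.array([-1.0, 1.0])   # classified 0 (probability well below 0.5)
xcf = counterfactual(x0)     # nearby point pushed across the decision boundary
```

The proximity term lam keeps the counterfactual close to the query point, which is precisely why released CFs hug, and thereby reveal, the decision boundary.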
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Related Work: Interplay between XAI and Privacy</title>
      <sec id="sec-3-1">
        <title>3.1. Context and Problem Formulation</title>
        <p>Data protection and privacy are among the primary dimensions in ML and AI. They involve ensuring
that the data used to train and test ML models does not expose sensitive information about
individuals or entities. This is particularly critical when dealing with datasets that contain
personally identifiable information or confidential details. Techniques such as anonymization and
DP have emerged as valuable tools in the data-privacy field: they allow us to protect the
privacy of individuals represented in the data, even as we leverage it to train models.</p>
        <p>Beyond data privacy, model privacy is also a pressing concern. The architecture of ML models can be
susceptible to privacy breaches. Models may unintentionally encode information about the
training data they were exposed to, which poses risks when they are shared or deployed publicly
on the cloud as MLaaS. Attacks such as model extraction, inversion, or membership inference
can exploit these vulnerabilities (details in the following sections and Fig. 1). However, privacy
is not part of the default behavior of most ML algorithms: they tend to learn not just
general trends but also the specifics of the data, potentially revealing sensitive information
when the model is made public. In an ideal scenario, these algorithms would focus on
extracting general trends and patterns from the data while deliberately avoiding the inclusion
of specific details, capturing the fundamental, common insights that are valuable for
decision-making without risking individual privacy.</p>
        <p>XAI can inadvertently compromise privacy by revealing sensitive
information about the model’s decision boundaries. Moreover, returning real data
points as CFs can inadvertently expose specific instances or behaviors from the training set,
and assigning FI scores exposes gradient values and the feature values
themselves. This conflict makes striking the right balance between model explainability and
data privacy crucial to ensuring that XAI enhances our understanding of AI systems without
leaking individual privacy.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Attacks on Machine Learning Models</title>
        <sec id="sec-3-2-1">
          <title>3.2.1. Membership inference Attacks</title>
          <p>
            A membership inference attack (MIA) is a privacy-related threat in ML where an adversary
attempts to determine whether a specific data point was part of the training dataset of a deployed
model [
            <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
            ]. MIAs are particularly concerning because they can compromise the privacy of
individuals whose data was part of the training dataset. If an attacker can determine that a
specific data point was included in the training data, it may reveal sensitive information about
that individual, even if the model’s output does not directly disclose such information. To
perform membership attacks, [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] proposes a shadow training process that mimics the target
model with shadow models, and trains the attack model using data that is extracted using data
synthesis. Also, [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ] discusses and proves that points with a very high loss tend to be far from
the decision boundary and are more likely to be non-members. Regarding how explanations
can facilitate MIA, [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ] quantifies information leakage in model predictions when
explanations are provided. The authors evaluate feature-based explanations, highlighting how
back-propagation-based explanations reveal decision boundaries.
          </p>
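The loss-based intuition above (training members tend to incur low loss) can be sketched with synthetic loss distributions; the exponential scales and the threshold are illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses of an overfitted model: members of the
# training set tend to have systematically lower loss than non-members.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

def mia_predict(loss, threshold=0.5):
    """Loss-threshold attack: claim 'member' whenever the loss is small."""
    return loss < threshold

tpr = mia_predict(member_losses).mean()      # members correctly flagged
fpr = mia_predict(nonmember_losses).mean()   # non-members wrongly flagged
advantage = tpr - fpr                        # gain over random guessing
```

A positive advantage means the attacker distinguishes members from non-members better than chance, which is the privacy leakage MIAs quantify.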
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Model Extraction Attack</title>
          <p>
            Model extraction (MEA) is a class of attacks where an adversary tries to reverse-engineer a
target model by observing its behavior and querying it. MEA can potentially lead to the theft of
intellectual property, compromising proprietary models [
            <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
            ]. Authors in [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] discuss weaknesses in ML services that accept incomplete inputs and return confidence
values, and show successful attacks on different ML models, such as decision trees, SVMs, and DNNs,
using equation-solving and path-finding algorithms. Regarding how explanations can facilitate MEA, FIs and CFs have
proven their ability to reveal the decision boundary of a target model [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ].
          </p>
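A minimal sketch of such query-based extraction, under the assumption of a toy linear target that the attacker can only label-query, trains a surrogate on the stolen input-label pairs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target model: the attacker observes only its hard labels.
w_true, b_true = np.array([2.0, -1.0]), 0.3
def target_label(X):
    return (X @ w_true + b_true > 0).astype(float)

# 1) Query the target on attacker-chosen inputs.
X = rng.normal(size=(2000, 2))
y = target_label(X)

# 2) Fit a logistic surrogate on the stolen (input, label) pairs.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y                                # gradient of the logistic loss
    w -= 0.1 * X.T @ g / len(X)
    b -= 0.1 * g.mean()

# 3) Measure label agreement with the target on fresh queries.
X_test = rng.normal(size=(1000, 2))
agreement = ((X_test @ w + b > 0) == (target_label(X_test) > 0.5)).mean()
```

High agreement on fresh queries is the usual success metric for MEA; explanations such as CFs reduce the number of queries needed by handing the attacker points near the boundary.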
          <p>
            Authors in [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ] perform the attack by jointly minimizing a task-classification loss and a task-explanation loss.
Authors in [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] show how gradient-based explanations quickly reveal the model itself and
highlight the power of gradients. Regarding CFs, [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ] proposes a strategy to target the decision
boundary shift by taking not only the CF but also the CF of the CF as pairs of training samples.
          </p>
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Model Inversion Attack</title>
          <p>
            A model inversion attack (MINA) is a privacy-related threat in ML where an adversary attempts
to reconstruct sensitive or private information about individual data points from trained model
predictions. In other words, the MINA task is to reconstruct the input data, that is, the original
training dataset of the target model. The work in [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ] discusses how providing explanations harms privacy
and studies this risk for image-based MINA on private image data from model explanations.
The authors developed several CNN architectures that achieve significantly higher inversion
performance than using only the target model prediction. To minimize the risk of MINA, [
            <xref ref-type="bibr" rid="ref25">25</xref>
            ]
presents a generative noise injector for model FI explanations by perturbing model explanations.
          </p>
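As a simple illustration of perturbing explanations before release (plain additive Laplace noise here; the generative injector in the cited work is more sophisticated, and the scale below is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_explanation(fi_scores, scale=0.1):
    """Release FI scores with additive Laplace noise, so that the exact
    values an inversion attack would exploit are perturbed."""
    return fi_scores + rng.laplace(scale=scale, size=fi_scores.shape)

fi = np.array([0.60, 0.25, 0.10, 0.05])   # hypothetical FI scores
released = noisy_explanation(fi)
```

The trade-off is the usual one: a larger scale hides more about the underlying gradients and feature values but degrades the fidelity of the released explanation.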
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Questions and Objectives</title>
      <p>We pose the following research questions (RQs):
1. To what extent does the utilization of known privacy-preserving techniques, such as
DP, effectively safeguard privacy and prevent information leakage when combined with
explanations provided by XAI?
2. Can we produce high-quality XAI explanations while safeguarding privacy to mitigate
potential vulnerabilities to attacks?
3. Which approach, privacy-preserving XAI or privacy-preserving ML, offers a more effective
solution for safeguarding sensitive information in XAI systems?</p>
      <p>To address RQ1, we aim to evaluate the trade-off and assess the effectiveness of existing
privacy-preserving techniques (e.g., DP) in mitigating information leakage when combined with
XAI explanations such as CFs and FI. This will involve investigating the extent to which explanations
can be exploited for privacy attacks such as MIA, MEA, or MINA.</p>
      <p>To address RQ2, we aim to explore the possibility of generating high-fidelity XAI explanations
while simultaneously safeguarding privacy.</p>
      <p>Such approaches aim to develop an XAI framework that concurrently optimizes two objectives:
i) generating high-quality CFs, and ii) adhering to pre-defined privacy constraints. Furthermore,
the integration of DP during the backpropagation of gradients for FI computation is another
promising avenue for investigation.</p>
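One way to realize the DP-during-backpropagation idea is sketched below in the style of DP-SGD gradient sanitization; the clip norm, noise multiplier, and batch size are illustrative assumptions, not a fixed design:

```python
import numpy as np

rng = np.random.default_rng(7)

def dp_gradient_fi(per_example_grads, clip=1.0, sigma=0.5):
    """Clip each example's gradient to a bounded norm, add Gaussian noise
    to the sum, and derive FI scores from the sanitized average gradient."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * clip,
                                                 per_example_grads.shape[1])
    return np.abs(noisy_sum) / len(per_example_grads)   # |avg gradient| per feature

grads = rng.normal(size=(256, 4))   # hypothetical per-example input gradients
fi_scores = dp_gradient_fi(grads)
```

Because each example's contribution is bounded by the clip and masked by the noise, FI scores computed from the sanitized gradient reveal less about any individual training point.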
      <p>To address RQ3, we will conduct a comparative analysis of privacy-preserving XAI and
privacy-preserving ML techniques. This analysis will evaluate their strengths and weaknesses
in safeguarding sensitive information within XAI systems. By comprehensively assessing these
aspects, we aim to identify the approach that offers a more robust and enduring mechanism for
privacy protection within XAI applications, covering different types of data.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results and contributions to date</title>
      <p>In the initial research, I explored CF generation through Reinforcement Learning (RL), with the specific goal of constructing
an explainer that operates independently of input data. The investigation then progressed to a
more in-depth examination of CFs, focusing on their potential for information leakage and their
ability to reveal the decision boundaries of ML models. To reach this aim, a new methodology
is proposed to carry out MEA through a concept known as knowledge distillation (KD). I also
delved into the domain of explainable deep learning methods within distributed systems, such as
Vertical Split Learning (VSL), aiming to evaluate the potential information disclosure resulting
from FI across various entities. In addition, I analyzed the impact of DP on the explainability of
anomaly detection (AD) models. More specifically:
1. Explored how RL can be leveraged to generate CF explanations without relying on
the dataset as input to the explainer. The main aim is to let the CF generator learn
generalizable patterns from the training data without exposing it. The explainer determines
which features to modify and by how much, by maximizing a custom reward function
designed to jointly optimize various metrics.
2. Designed a new attack approach to evaluate the use of KD for an MEA in scenarios
where CFs are given to an attacker. I benefit from the property of KD and the process
of transferring knowledge from a large model to a smaller one. The findings reveal that
employing KD in the presence of CFs can indeed yield a successful MEA.
3. Proposed an approach to generate private CFs. I introduce the concept of DP within the
GAN-based CF generation pipeline to generate CFs that deviate from the statistical properties of
the confidential dataset, offering a layer of protection against potential privacy breaches.
4. Explored VSL strategies and performed experiments to assess the risk of
information leakage regarding the original features using gradient-based explanations (Integrated
Gradients (IG) and DeepLIFT). My application of VSL focused on a use case related to Network Function
Virtualization. My findings highlight how an attacker on the server side can exploit XAI
techniques to achieve additional tasks, without access to the original features.
5. Explored DP with AD: analyzed the trade-off between privacy achieved by DP and
explainability achieved using SHAP.</p>
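To make the KD-based extraction idea in item 2 concrete, here is a minimal sketch of the distillation objective an attacker would minimize; the temperature and the logit values are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs: the signal
    an extraction attacker minimizes when the target's soft predictions
    (possibly augmented with CF queries) are observable."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.0]])
loss_far = kd_loss(np.array([[0.0, 0.0, 4.0]]), teacher)    # poor surrogate
loss_close = kd_loss(np.array([[3.9, 1.1, 0.1]]), teacher)  # near-copy
```

Driving this loss down over many queries transfers the target's behavior into the attacker's smaller student model, which is what makes KD effective for MEA.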
    </sec>
    <sec id="sec-6">
      <title>6. Expected next steps and final contribution to knowledge</title>
      <p>This PhD research aims to achieve significant advancements in bridging the critical gap between
XAI and data privacy. We will address the inherent conflict between providing users with clear
explanations of AI models and protecting their sensitive data (privacy). We aim to develop a
defense mechanism in the form of high-quality explanations while simultaneously ensuring
privacy.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z. C.</given-names>
            <surname>Lipton</surname>
          </string-name>
          ,
          <article-title>The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery</article-title>
          .,
          <source>Queue</source>
          <volume>16</volume>
          (
          <year>2018</year>
          )
          <fpage>31</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          , G. Langs,
          <string-name>
            <given-names>H.</given-names>
            <surname>Denk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zatloukal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Causability and explainability of artificial intelligence in medicine</article-title>
          ,
          <source>Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</source>
          <volume>9</volume>
          (
          <year>2019</year>
          )
          <article-title>e1312</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Regulation</surname>
          </string-name>
          ,
          <article-title>General data protection regulation</article-title>
          ,
          <source>Intouch</source>
          <volume>25</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shokri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Strobel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zick</surname>
          </string-name>
          ,
          <article-title>On the privacy risks of model explanations</article-title>
          ,
          <source>in: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>231</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Goethals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sörensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Martens</surname>
          </string-name>
          ,
          <article-title>The privacy issue of counterfactual explanations: explanation linkage attacks</article-title>
          ,
          <source>ACM Transactions on Intelligent Systems and Technology</source>
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rigaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <article-title>A survey of privacy attacks in machine learning</article-title>
          ,
          <source>ACM Computing Surveys</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Speith</surname>
          </string-name>
          ,
          <article-title>A review of taxonomies of explainable artificial intelligence (xai) methods</article-title>
          , in
          <source>: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>2239</fpage>
          -
          <lpage>2250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Binder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Montavon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Klauschen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-R.</given-names>
            <surname>Müller</surname>
          </string-name>
          , W. Samek,
          <article-title>On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</article-title>
          ,
          <source>PloS one 10</source>
          (
          <year>2015</year>
          )
          <article-title>e0130140</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shrikumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Greenside</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kundaje</surname>
          </string-name>
          ,
          <article-title>Learning important features through propagating activation differences</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3145</fpage>
          -
          <lpage>3153</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanations without opening the black box: Automated decisions and the gdpr</article-title>
          ,
          <source>Harv. JL &amp; Tech. 31</source>
          (
          <year>2017</year>
          )
          <fpage>841</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Boonsanong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Hines</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Dickerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanations and algorithmic recourses for machine learning: A review</article-title>
          ,
          <source>arXiv preprint arXiv:2010.10596</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vasconcelos</surname>
          </string-name>
          ,
          <article-title>SCOUT: Self-aware discriminant counterfactual explanations</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>8981</fpage>
          -
          <lpage>8990</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kanamori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Takagi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kobayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Uemura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Arimura</surname>
          </string-name>
          ,
          <article-title>Ordered counterfactual explanation by mixed-integer linear optimization</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>35</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>11564</fpage>
          -
          <lpage>11574</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Quinn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanation with multi-agent reinforcement learning for drug target prediction</article-title>
          ,
          <source>arXiv preprint arXiv:2103.12983</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shokri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stronati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shmatikov</surname>
          </string-name>
          ,
          <article-title>Membership inference attacks against machine learning models</article-title>
          ,
          <source>in: 2017 IEEE symposium on security and privacy (SP)</source>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shokri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zick</surname>
          </string-name>
          ,
          <article-title>Model explanations with differential privacy</article-title>
          ,
          <source>in: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1895</fpage>
          -
          <lpage>1904</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sablayrolles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Douze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schmid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ollivier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jégou</surname>
          </string-name>
          ,
          <article-title>White-box vs black-box: Bayes optimal strategies for membership inference</article-title>
          ,
          <source>in: International Conference on Machine Learning</source>
          , PMLR,
          <year>2019</year>
          , pp.
          <fpage>5558</fpage>
          -
          <lpage>5567</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shukla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kanade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shevade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ganapathy</surname>
          </string-name>
          ,
          <article-title>ActiveThief: Model extraction using active learning and unannotated public data</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>34</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>865</fpage>
          -
          <lpage>872</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Tramèr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Juels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ristenpart</surname>
          </string-name>
          ,
          <article-title>Stealing machine learning models via prediction APIs</article-title>
          ,
          <source>in: 25th USENIX security symposium (USENIX Security 16)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>601</fpage>
          -
          <lpage>618</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hasegawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shibahara</surname>
          </string-name>
          ,
          <article-title>MEGEX: Data-free model extraction attack against gradient-based explainable AI</article-title>
          ,
          <source>arXiv preprint arXiv:2107.08909</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Towards explainable model extraction attacks</article-title>
          ,
          <source>International Journal of Intelligent Systems</source>
          <volume>37</volume>
          (
          <year>2022</year>
          )
          <fpage>9936</fpage>
          -
          <lpage>9956</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Milli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Dragan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <article-title>Model reconstruction from model explanations</article-title>
          ,
          <source>in: Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Miao</surname>
          </string-name>
          ,
          <article-title>DualCF: Efficient model extraction attack from counterfactual explanations</article-title>
          ,
          <source>in: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1318</fpage>
          -
          <lpage>1329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <article-title>Exploiting explanations for model inversion attacks</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF international conference on computer vision</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>682</fpage>
          -
          <lpage>692</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H.</given-names>
            <surname>Jeong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Son</surname>
          </string-name>
          ,
          <article-title>Learning to generate inversion-resistant model explanations</article-title>
          , in:
          <string-name>
            <given-names>S.</given-names>
            <surname>Koyejo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Belgrave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Oh</surname>
          </string-name>
          (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>35</volume>
          ,
          <publisher-name>Curran Associates, Inc.</publisher-name>
          ,
          <year>2022</year>
          , pp.
          <fpage>17717</fpage>
          -
          <lpage>17729</lpage>
          . URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/70d638f3177d2f0bbdd9f400b43f0683-Paper-Conference.pdf.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>