<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>European Workshop on Algorithmic Fairness, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Living with Opaque Technologies: Insights for AI from Digital Simulations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eugenia Cacciatori</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enzo Fenoglio</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emre Kazim</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bayes Business School</institution>
          ,
          <addr-line>City</addr-line>
          ,
          <institution>University of London</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Holistic AI</institution>
          ,
          <addr-line>London</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University College London, University of London</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>7</fpage>
      <lpage>09</lpage>
      <abstract>
        <p>This study explores transparency challenges in algorithmic fairness. After reviewing progress in technical and regulatory transparency, we suggest that some level of opacity is inherent to AI systems. Drawing on the relational approach and Polanyi's work on tacit knowledge, we propose studying how society has dealt with other opaque technologies. Using digital simulation modeling as an example, we discuss the similarities and differences between simulations and AI systems in terms of accuracy and transparency. Further research is recommended to advance algorithmic fairness and responsible practices.</p>
      </abstract>
      <kwd-group>
        <kwd>AI ethics</kwd>
        <kwd>AI transparency</kwd>
        <kwd>digital simulations</kwd>
        <kwd>algorithmic fairness</kwd>
        <kwd>responsible algorithms</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There have been significant developments both in the technical properties that make an
AI system transparent and in adequate regulation and legislation. Transparency is increasingly becoming
a shorthand to refer to the many aspects of the design, training, and implementation of an
AI system needed to ensure that the whys and hows of AI-based decisions are accessible to
humans. This includes issues related to all the elements of an AI system: the data, the system,
and the business models [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Transparency is also increasingly used in the practice and policy
contexts to encompass a variety of technical terms such as explainability, explicability, and
interpretability, whose usage and meaning are far from settled [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], but that aim to describe the
internal algorithmic logic of an AI system, providing information adapted to the expertise of
the stakeholder concerned (e.g., layperson, regulator, or researcher) so that they can perceive it
as transparent.
      </p>
      <p>
        In this paper, we propose that useful insights for future research on AI transparency can
be drawn from research on the adoption of other opaque technologies, in particular digital (or
computer) simulations [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Exploring Transparency Issues in Algorithmic Fairness</title>
      <sec id="sec-2-1">
        <title>2.1. Transparency and Fairness</title>
        <p>
          Fairness, accountability, and transparency are tightly connected in the literature on the ethics
of artificial intelligence [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Transparency is an important condition for fairness since the
perception of fairness depends on the ability to understand the rationale and the process behind
a decision. Transparency is also a precondition for accountability. However, fairness is a broader
concept than transparency, in particular because it includes the need for decisions to be free
from bias and discrimination, which is difficult to achieve in practice.
        </p>
        <p>
          Beyond fairness, transparency is critical for many other functions, such as, for instance,
enabling learning, which in turn improves the design of AI systems [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Current Approaches to Transparency</title>
        <p>
          There are now techniques and methods that can reconstruct, or at least gain some insight into,
how an AI system reached its decisions [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Several studies have highlighted the potential of
post-hoc model-agnostic local explanation methods [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which focus on explaining individual
predictions of any black-box model. Methods, such as LIME (Local Interpretable Model-agnostic
Explanations) and SHAP (Shapley Additive Explanations), have been employed in various
domains. Notable examples can be found in high-stakes applications such as healthcare to
interpret the decisions made by medical diagnosis models, finance to interpret credit scoring
models, or transportation to interpret the decisions made by the self-driving system [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>Model-agnostic explanations are sensitive to sparsity: when most features have no impact,
missing features can make explanations difficult to construct. SHAP and LIME differ in how they
handle sparsity, and only SHAP excels at identifying important features in sparse regions.</p>
        <p>
          One prominent model-agnostic local explanation is the use of counterfactuals. Counterfactual
explanations do not attempt to clarify the internal decision-making process, focusing instead
on identifying external factors that could be different to achieve the desired outcome [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
Because of their nature, counterfactual explanations work around the need to understand the
workings of the model directly. They are thus a promising way forward that retains the benefits
of employing technically opaque but efficient AI systems while also potentially ensuring a level
of transparency that is socially acceptable.
        </p>
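        <p>As a minimal sketch of the counterfactual idea, assuming a hypothetical credit-scoring rule (the rule, its threshold, and all figures are invented for illustration), one can search for the smallest change to an input that flips the decision, without ever inspecting the model internally:</p>

```python
# A hypothetical credit-scoring rule standing in for a black-box classifier;
# the rule and its threshold are invented for illustration.
def approve(income, debt):
    return income - 0.5 * debt >= 50.0

def income_counterfactual(income, debt, step=1.0, max_steps=1000):
    """Search for the smallest income increase that flips the decision.
    Full counterfactual methods minimise a distance-plus-loss objective
    over all features; this sketch varies a single feature, treating
    the model purely as a black box to be queried."""
    for k in range(max_steps + 1):
        candidate = income + k * step
        if approve(candidate, debt):
            return candidate  # "had your income been this, you'd be approved"
    return None  # no counterfactual found within the search budget

cf = income_counterfactual(income=40.0, debt=20.0)
```

        <p>The returned counterfactual supports exactly the kind of statement these explanations aim at: what would have needed to be different for the desired outcome, with no claim about the model's internal logic.</p>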
        <p>
          From a regulatory point of view, there has been a recognition that transparency is a relational
property of AI systems, which emerges from the interaction of particular AI systems in relation
to specific issues, specific users, and specific contexts [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Thus, the issue of transparency cannot
be solved by technical approaches alone but requires complex social infrastructures [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]
as well.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. The Black Box Issue</title>
        <p>
          While the push towards more transparency is necessary and important, there are good reasons
to believe that total transparency is unlikely to be achievable. Epistemological arguments
suggest that complete explainability might not be possible. This viewpoint builds upon the
longstanding research tradition, including the influential work of Polanyi [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], which emphasizes
the inherent presence of tacit knowledge within human understanding. Tacit knowledge
depends on the focus of attention, with a conscious part open to scrutiny and a tacit part in the
background, so that decision-making always balances explicit and tacit knowledge. This framework suggests
that full transparency and articulation of all knowledge may be unattainable. Recognizing
tacit knowledge implies that some aspects cannot be fully expressed or understood. Instead,
explanation becomes contextual and relational, shaped by the interplay of tacit and explicit
knowledge.
        </p>
        <p>
          If explanations are relational [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], it is unlikely that we can achieve transparency in every
circumstance, and it may not always be essential [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. We do not normally require absolute
transparency for every technology we use: widespread usage and established regulatory
frameworks have made such technologies an accepted, albeit occasionally controversial, part of
our lives. There is thus a general acceptance of treating them as black boxes. A lot of work
in regulation and legislation therefore aims at creating a similar social infrastructure of trust for AI.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. AI Systems and Mechanistic Tacit Knowledge</title>
        <p>
          Similarly to human knowledge, the knowledge embedded in AI systems is also characterized
by tacit elements, so-called mechanistic tacit knowledge, which encompasses unobservable and
distributed processes in AI systems [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The term "mechanistic" highlights knowledge produced
through the unknown mechanisms and processes of artificial neural networks (ANNs), which are not directly
observable or manipulable by humans. For example, while engineers program a robot with
explicit knowledge (the algorithm) for tasks like riding a bike, the robot’s execution relies on
inaccessible mechanistic tacit knowledge. The robot lacks explicit knowledge of the algorithm
but can still perform the task successfully [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          The unobservable nature of mechanistic tacit knowledge hinders transparent explanations
of AI system behavior, similar to the challenges in explaining human decision-making. This
concept is crucial in AI explainability discussions, indicating that certain aspects of AI
decision-making may perpetually remain opaque or hard to comprehend [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], akin to human knowledge
in modern societies. In addition, even if we had full access to an AI system's internal workings
and complete transparency, our comprehension might still be limited because of the challenge of
understanding fundamentally different cognitive processes [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Thus, integrating mechanistic
tacit knowledge and counterfactual explanations provides a promising framework for gaining
insights into how AI systems reach decisions.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Dealing with Opacity: Directions for Further Research</title>
      <p>
        Further insights into how to address the opacity of AI systems can come from experience
with other opaque technologies. Digital simulations are characterized by an essential epistemic
opacity because “no human can examine and justify every element of the computational processes
that produce the output of a computer simulation” [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Further, simulations are also opaque
because the limited possibilities for experimentation mean that it is often difficult to assess the
truthfulness of a simulation’s result [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], a trait they share with at least some AI systems, such
as those in healthcare, for which the correctness of decisions is often difficult to assess [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Simulations, while opaque, offer valuable problem-solving capabilities. The technical
literature today offers extensive normative guidance on how to establish validation and verification
procedures, including sensitivity analyses and comparisons with real-world data. Yet these
practices did not emerge fully formed in handbooks; even today, the guidance in handbooks
falls well short of accounting for the realities of decision-making through simulations in
organizations [
        <xref ref-type="bibr" rid="ref16 ref4">4, 16</xref>
        ]. It is these practices that, in the end, determine how opacity is managed
and how it shapes the way decisions are taken. The literature on simulations suggests that
accounting for the realities of managing AI system opacity in organizations is crucial to make
sure that the debate on AI transparency does not remain concerned only with technical issues
or a broad regulatory architecture framework, which might have limited effects at best or
counterproductive ones at worst [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
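      <p>A small sketch of one such validation practice, a one-at-a-time sensitivity analysis, illustrates how analysts probe an opaque model from the outside: perturb each input and record how the output moves. The toy queueing simulation below, its rates, and its sample size are all invented for illustration.</p>

```python
import random

# A toy single-server queue simulation standing in for an opaque model;
# it returns the average waiting time for given arrival and service rates.
def simulate(arrival_rate, service_rate, n=2000, seed=0):
    rng = random.Random(seed)
    t_arrive = t_free = 0.0
    total_wait = 0.0
    for _ in range(n):
        t_arrive += rng.expovariate(arrival_rate)  # next customer arrives
        start = max(t_arrive, t_free)              # waits if the server is busy
        total_wait += start - t_arrive
        t_free = start + rng.expovariate(service_rate)
    return total_wait / n

def sensitivity(base, step=0.05):
    """One-at-a-time sensitivity analysis: perturb each input by +/-5%
    and estimate the output's slope, without opening the model itself."""
    effects = {}
    for name, value in base.items():
        up = simulate(**{**base, name: value * (1 + step)})
        down = simulate(**{**base, name: value * (1 - step)})
        effects[name] = (up - down) / (2 * step * value)  # finite-difference slope
    return effects

effects = sensitivity({"arrival_rate": 0.8, "service_rate": 1.0})
```

      <p>The signs of the estimated slopes (waiting time rises with the arrival rate and falls with the service rate) let the analyst check the simulation against expectations, even though no line of its computational process is individually examined.</p>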
      <p>
        The ability to use simulations effectively despite their opacity developed gradually through
trial-and-error processes. The result of these incremental changes, which emerged to
accommodate the specific balance between tacit and explicit knowledge afforded by simulation,
fundamentally altered the nature of decision-making. For instance, the use of simulations
engendered a longstanding debate in science about the nature of the evidence that simulations
provide—when can a simulation result be considered evidence for a theory? Over time, a
distinctive mode of knowing through simulations emerged, with its own rules about which problems
can be addressed and what counts as evidence [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. As AI systems introduce an unavoidable
and new (mechanistic) tacit dimension into our decision processes, we need to investigate the
specific organizational practices through which this remaining opacity is managed and how
this contributes to reshaping how decisions are reached in organizations.
      </p>
      <p>Finally, as with any other technology, the adoption of simulation is associated with shifts
in power between occupations. If a tacit dimension is unavoidable, a debate on transparency
within a framework of ethical AI needs to consider how AI systems shift the balance between
explicit knowledge, human tacit knowledge, and mechanistic tacit knowledge; and how this
changes the nature of decision-making and power balances.</p>
      <p>Adapting to the opacity of AI systems will not be straightforward, as the experience with simulations indicates. The
adoption of AI necessitates new decision-making practices and organizational processes tailored
to how it balances tacit and explicit knowledge. These practices may give rise to a new class
of professionals. While current transparency tools and regulations will be relevant to shaping
these practices, empirical studies are needed to understand the impact of AI on decision-making
by individuals and organizations.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This paper summarizes transparency approaches and highlights the need to understand the
limitations of explaining complex AI systems. We argue that addressing transparency in AI
models requires acknowledging the persistence of opacity. Research on digital simulations
suggests that this opacity will not fade away with trust infrastructure alone. Adapting to the
opacity of AI systems will lead to subtle adjustments that, over time, create
new types of decision-making processes. Research on transparency should engage with these
changes to ensure ethical AI deployment in society.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. Barredo Arrieta, et al., Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion 58 (2020) 82-115. doi:10.1016/j.inffus.2019.12.012.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] E. Fenoglio, E. Kazim, AI Explainability, Interpretability, and Transparency, 2023. (forthcoming).</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] P. Humphreys, The philosophical novelty of computer simulation methods, Synthese 169 (2009) 615-626. doi:10.1007/s11229-008-9435-2.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] D. E. Bailey, P. M. Leonardi, S. R. Barley, The lure of the virtual, Organization Science 23 (2012) 1485-1504. doi:10.1287/orsc.1110.0703.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] D. Shin, Y. J. Park, Role of fairness, accountability, and transparency in algorithmic affordance, Computers in Human Behavior 98 (2019) 277-284. doi:10.1016/j.chb.2019.04.019.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] R. Confalonieri, L. Coba, B. Wagner, T. R. Besold, A historical perspective of explainable artificial intelligence, WIREs Data Mining Knowl Discov 11 (2021) e1391. doi:10.1002/widm.1391.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Saranya, R. Subhashini, A systematic review of explainable artificial intelligence models and applications: Recent developments and future trends, Decision Analytics Journal 7 (2023) 100230. doi:10.1016/j.dajour.2023.100230.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] S. Wachter, B. Mittelstadt, C. Russell, Counterfactual explanations without opening the black box: automated decisions and the GDPR, Harv. J.L. &amp; Tech. 31 (2018).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence 267 (2019) 1-38. doi:10.1016/j.artint.2018.07.007.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] M. Polanyi, The logic of tacit inference, Philosophy 41 (1966) 1-18. doi:10.1017/S0031819100066110.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] B. Brożek, M. Furman, M. Jakubiec, B. Kucharzyk, The black box problem revisited. Real and imaginary challenges for automated legal decision making, Artificial Intelligence and Law (2023) 1-14. doi:10.1007/s10506-023-09356-9.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] M. Héder, D. Paski, Autonomous robots and tacit knowledge, Appraisal 9 (2012).</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] L. J. J. Wittgenstein, Philosophical Investigations, New York, US: Wiley-Blackwell, 1953.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] J. Weinkle, R. Pielke Jr., The truthiness about hurricane catastrophe models, Science, Technology, &amp; Human Values 42 (2017) 547-576. doi:10.1177/0162243916671201.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] S. Lebovitz, N. Levina, H. Lifshitz-Assaf, Is AI Ground Truth Really 'True'? The Dangers of Training and Evaluating AI Tools Based on Experts' Know-What, 2021.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] E. Cacciatori, P. Jarzabkowski, R. Bednarek, K. Chalkias, What's in a Model? Computer Simulations and the Management of Ignorance, Academy of Management Proceedings 2019 (2019) 18102. doi:10.5465/AMBPP.2019.250.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] A. Bell, I. Solano-Kamaiko, O. Nov, J. Stoyanovich, It's Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy, in: ACM FAccT 2022, New York, NY, USA, 2022, pp. 248-266. doi:10.1145/3531146.3533090.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] M. Morrison, Reconstructing Reality: Models, Mathematics, and Simulations, Oxford University Press, 2015. doi:10.1093/acprof:oso/9780199380275.001.0001.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>