<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>European Workshop on Algorithmic Fairness, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Multidomain Relational Framework to Guide Institutional AI Research and Adoption</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincent J. Straub</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deborah Morgan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Youmna Hashem</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>John Francis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Saba Esnaashari</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jonathan Bright</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alan Turing Institute</institution>
          ,
          <addr-line>British Library, 96 Euston Rd., London NW1 2DB</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of Bath</institution>
          ,
          <addr-line>Claverton Down, Bath BA2 7AY</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>7</fpage>
      <lpage>09</lpage>
      <abstract>
        <p>Calls for new metrics, technical standards, and governance mechanisms to guide and evaluate the adoption of ethical Artificial Intelligence (AI) in institutions are now commonplace. Yet, most research and policy efforts do not fully account for all the different approaches and issues potentially relevant to the institutional adoption of AI. In this position paper, we contend that this omission stems, in part, from what we call the 'relational problem': the persistence of differing value-based terminologies to categorize and assess institutional AI systems, and the prevalence of conceptual isolation in the fields that study them, including ML, human factors, and social science. After developing this critique, we propose a basic ontological framework to bridge ideas across fields, consisting of three horizontal, discipline-agnostic domains for organizing foundational concepts into themes: Operational, Epistemic, and Normative.</p>
      </abstract>
      <kwd-group>
        <kwd>Multidomain approach to AI</kwd>
        <kwd>socio-technical topics</kwd>
        <kwd>institutions</kwd>
        <kwd>conceptual framework</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <p>
        prevalence of conceptual isolation in the fields that study institutional AI [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which we call the
‘relational problem’.
      </p>
      <p>
        While myriad ways of categorizing issues related to AI development and adoption have
been proposed, many ultimately rely on unidimensional thinking. That is, they rely heavily on
a single viewpoint or concept to discuss institutional AI, where a concept is understood both as
an abstract idea that offers a point of view for understanding some aspect of experience (e.g.,
bias) and, relatedly, as a mental image that can be operationalized (e.g., measurement bias).
Much work in the social and policy sciences stresses the ethical and legal challenges at stake in
the adoption of AI, while research in computer science tends to highlight the computational and
operational aspects that need to be considered. Yet, as pointed out by [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] in discussing scholarship on AI
ethics and governance, addressing this shortcoming and uniting the field requires sustained
interdisciplinary effort and a richer consideration of the multi-faceted relation between concepts.
      </p>
      <p>
        To address this relational gap, we propose a basic ontological framework, described briefly
below, to help bridge terms across fields. It consists of three discipline-agnostic domains for
organizing relevant concepts: Operational, Epistemic, and Normative. The framework has two
key aims: (1) disciplinary reach, i.e., bridging different subcommunities (ML, human
factors, social science, policy, etc.), and (2) providing impetus for an intellectual shift that reframes
how researchers and key stakeholders (decision-makers, policy creators, advocates, etc.) think
about institutional AI systems. Our framework is ontological in the sense that it is composed of
three simple domains or meta-concepts that aim to act loosely as semantic fields [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to guide
researchers engaged in studying and conceptualizing institutional AI systems (Figure 1).
      </p>
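      <p>As a purely illustrative sketch, not part of the framework itself, the Python snippet below shows one way the three domains and the example concepts listed in Figure 1 could be encoded as a simple lookup structure; the names FRAMEWORK and domains_for are hypothetical.</p>
      <preformat preformat-type="code">
# Illustrative only: index the example concepts from Figure 1 by domain.
FRAMEWORK = {
    "operational": {"accuracy", "efficiency", "reliability", "robustness"},
    "epistemic": {"explainability", "interpretability", "reproducibility", "transparency"},
    "normative": {"equality", "fairness", "priority", "welfare"},
}

def domains_for(concept: str) -> list[str]:
    """Return every domain whose example concepts include the given term."""
    return [domain for domain, concepts in FRAMEWORK.items()
            if concept.lower() in concepts]

print(domains_for("Fairness"))    # ['normative']
print(domains_for("robustness"))  # ['operational']
      </preformat>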
      <p>Operational Domain The first field is the ‘operational domain’, which aims to represent the
topics, issues, and methods related to the routine activities and functionality of institutional AI
systems. It is meant to capture terms that are mainly, but not exclusively, defined, operationalized,
and studied in a technical, applied context. More specifically, it is meant to enable researchers to
group together all relevant concepts that can be employed both as an abstract
idea (e.g., ‘accuracy’) and readily operationalized to quantitatively measure a specific attribute of
a particular institutional AI system (i.e., ‘percentage of correct predictions’).</p>
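      <p>To illustrate how such a concept can be operationalized in the sense described here, the minimal sketch below computes accuracy as the percentage of correct predictions; the function name and inputs are hypothetical, not a prescribed metric implementation.</p>
      <preformat preformat-type="code">
# Minimal sketch: operationalizing the abstract concept of 'accuracy'
# as the percentage of correct predictions.
def classification_accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Percentage of predictions that match the true labels."""
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("inputs must be non-empty and of equal length")
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return 100.0 * correct / len(y_true)

print(classification_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
      </preformat>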
      <p>Epistemic Domain The epistemic domain aims to capture knowledge-related topics and
issues connected to a particular AI system or institutional AI in general. That is, the epistemic
domain is meant to help researchers group together concepts that seek to describe properties
which pertain to the interface between AI applications and human actors, both in terms of the
knowledge, beliefs, and intentions of those using AI applications (e.g., a desire for transparency)
and the internal properties of the system itself (e.g., its interpretability).</p>
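      <p>As a hedged illustration of the kind of epistemic-domain method listed in Figure 1 (e.g., documentation checklists supporting transparency), the sketch below defines a hypothetical documentation record and reports which items remain undocumented; the field names are assumptions, not a standard schema.</p>
      <preformat preformat-type="code">
# Illustrative sketch only: a lightweight documentation record of the kind that
# epistemic-domain standards (e.g., documentation checklists) might require.
# All field names are hypothetical, not a prescribed schema.
from dataclasses import dataclass, fields

@dataclass
class SystemDocumentation:
    intended_use: str = ""
    training_data_description: str = ""
    known_limitations: str = ""
    human_oversight_mechanism: str = ""  # e.g., a human-in-the-loop review step

    def missing_items(self) -> list[str]:
        """Checklist view: names of fields left undocumented."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

doc = SystemDocumentation(intended_use="triage support for caseworkers")
print(doc.missing_items())  # remaining documentation gaps
      </preformat>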
      <p>
        Normative Domain The meaning and uses of concepts in the normative domain, the final
domain we propose, collectively relate to the entitlements, values, and principles of political
morality that stakeholders and affected parties hold towards a particular algorithmic system
or institutional AI in general. The term ‘political morality’ is used here to refer to normative
principles and ideals regulating and structuring the political domain [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
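      <p>As an illustration of how a normative concept can be given an operational reading, the sketch below computes a simple classification-parity check (the rate of positive predictions per group), one of the metrics listed under the normative domain in Figure 1; the function name and data are hypothetical.</p>
      <preformat preformat-type="code">
# Illustrative sketch only: 'classification parity' as one way a normative
# concept such as fairness can be operationalized, by comparing the rate of
# positive predictions across groups. Names and data are hypothetical.
from collections import defaultdict

def positive_rate_by_group(groups: list[str], y_pred: list[int]) -> dict[str, float]:
    """Share of positive predictions within each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, y_pred):
        totals[g] += 1
        positives[g] += int(p == 1)
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(["a", "a", "b", "b"], [1, 0, 0, 0])
print(rates, "parity gap:", max(rates.values()) - min(rates.values()))
      </preformat>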
      <p>Taken together, the framework’s utility derives from the fact that it is discipline-agnostic.
More specifically, it aims to be instructive for the individual researcher studying institutional
AI, both by helping to organize the concepts used to study AI systems and, perhaps more
importantly, by drawing attention to whether all potential topics, by virtue of being relevant to
one or more of the three proposed domains, have been accounted for. Overall, our contribution
aims to benefit the algorithmic fairness community by facilitating a constructive dialog around
the challenges we face as a diverse, interdisciplinary field.</p>
      <fig id="fig1">
        <label>Figure 1</label>
        <caption>
          <p>Overview of the proposed framework. The scale of inquiry ranges from micro (a single application) through meso (a set of applications) to macro (all institutional AI). Each domain is characterized by a scope, example concepts, and example methods, the latter contingent on application context: Operational (routine machine activities and technical functionality; accuracy, efficiency, reliability, robustness; metrics such as classification accuracy and standards such as performance benchmarks and reporting cards); Epistemic (knowledge-related issues at the human-machine interface; explainability, interpretability, reproducibility, transparency; mechanisms such as human-in-the-loop and standards such as documentation checklists); Normative (moral entitlements, principles, and values of affected groups; equality, fairness, priority, welfare; mechanisms such as algorithmic impact assessments and metrics such as classification parity).</p>
        </caption>
      </fig>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Margetts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dorobantu</surname>
          </string-name>
          ,
          <article-title>Rethink government with AI</article-title>
          ,
          <source>Nature</source>
          <volume>568</volume>
          (
          <year>2019</year>
          )
          <fpage>163</fpage>
          -
          <lpage>165</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Laufer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Heidari</surname>
          </string-name>
          ,
          <article-title>Four years of FAccT: A reflexive, mixed-methods analysis of research contributions, shortcomings, and future prospects</article-title>
          ,
          <source>in: 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>401</fpage>
          -
          <lpage>426</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mökander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Watson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <article-title>The switch, the ladder, and the matrix: Models for classifying AI systems</article-title>
          ,
          <source>Minds and Machines</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Birhane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kalluri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Card</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Agnew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dotan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <article-title>The values encoded in machine learning research</article-title>
          ,
          <source>in: 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Leslie</surname>
          </string-name>
          ,
          <article-title>Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies</article-title>
          ,
          <source>AI and Ethics</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wierzbicka</surname>
          </string-name>
          ,
          <article-title>Semantic primitives and semantic fields</article-title>
          ,
          <source>Frames, fields, and contrasts</source>
          (
          <year>1992</year>
          )
          <fpage>209</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Erman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Furendal</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence and the political legitimacy of global governance</article-title>
          ,
          <source>Political Studies</source>
          (
          <year>2022</year>
          )
          <fpage>00323217221126665</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>