<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Designing organizational control mechanisms for consequential AI systems: towards a situated methodology</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shan Amin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roel Dobbe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sander Renes</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Delft University of Technology</institution>
          ,
          <addr-line>Jafalaan 5, 2628BX Delft</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Artificial intelligence (AI) holds both potential benefits and significant risks for organizations, including biases, discrimination, opacity, and reduced human accountability. Technical systems, including AI, must be regulated to safeguard stakeholders' interests and maintain proper functioning over time. However, the problem of designing practical controls for specific AI systems and organizations largely remains unresolved. To address this gap, we propose an initial methodology focusing on identifying and contextualizing stakeholders' values within their local environments. We validate our approach through a case study in the Japanese life insurance industry, aiming to assess its repeatability and potential improvements. Our design method comprises 10 steps that AI system developers can use to situate high-level institutions in the local context to control their AI systems. The validation efforts highlight the contextual nature of designing controls for AI systems, emphasizing the need for diverse control mechanisms to comply with stakeholders' values.</p>
      </abstract>
      <kwd-group>
        <kwd>Control by design</kwd>
        <kwd>AI applications</kwd>
        <kwd>System safety</kwd>
        <kwd>Design for Values</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>
        governance level, there is currently a gap in the literature that aids in conceptualizing and empirically
validating the design of control mechanisms for AI systems in their situational context [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>
        Our paper addresses this gap by providing a repeatable process for situating identified values
into practical organizational controls in the context of specific AI applications. We build our core
contribution on earlier approaches that provide ways to tackle parts of the problem. Building
on Van De Poel [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we identify social values and norms relevant to AI systems. Using the
safety control structure, a methodology to map and evaluate how different processes relate to
the functioning of algorithmic systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we position and operationalize the identified norms and
values as requirements on concrete processes and their responsible actors, in the form of
feedback mechanisms between different processes and their actors. The emerging approach
helps identify a wide range of potential risks, which must then be narrowed down to a set of feasible
and effective requirements for the particular AI application and use case. Building on Garst et al.
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we propose several steps to reduce the dimensionality of reporting, yielding a contextually
relevant selection of norms that organizations can control. Furthermore, we then lean on
Mäntymäki et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] to further contextualize and incorporate the resulting set of norms within
the organization for a particular AI application and use case. This combined approach informs
a pragmatic framework for understanding the need for, and informing the design of, organizational
control mechanisms for AI systems.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N. G.</given-names>
            <surname>Leveson</surname>
          </string-name>
          ,
          <article-title>Engineering a safer world: systems thinking applied to safety</article-title>
          ,
          <source>Choice Reviews Online</source>
          <volume>49</volume>
          (
          <year>2012</year>
          )
          <fpage>49</fpage>
          -
          <lpage>6305</lpage>
          . doi:10.5860/choice.49-6305.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dobbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. W.</given-names>
            <surname>Gilbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mintz</surname>
          </string-name>
          ,
          <article-title>Hard choices in artificial intelligence</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>300</volume>
          (
          <year>2021</year>
          ) article 103555. doi:10.1016/j.artint.2021.103555.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Anagnostou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Karvounidou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Katritzidaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kechagia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Melidou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mpeza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Konstantinidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kapantai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Berberidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Magnisalis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Peristeras</surname>
          </string-name>
          ,
          <article-title>Characteristics and challenges in the industries towards responsible AI: a systematic literature review</article-title>
          ,
          <source>Ethics and Information Technology</source>
          <volume>24</volume>
          (
          <year>2022</year>
          )
          <fpage>37</fpage>
          . URL: https://link.springer.com/10.1007/s10676-022-09634-1. doi:10.1007/s10676-022-09634-1.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zuiderwijk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Salem</surname>
          </string-name>
          ,
          <article-title>Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda</article-title>
          ,
          <source>Government Information Quarterly</source>
          <volume>38</volume>
          (
          <year>2021</year>
          )
          <fpage>101577</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0740624X21000137. doi:10.1016/j.giq.2021.101577.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Van De Poel</surname>
          </string-name>
          ,
          <article-title>Translating values into design requirements</article-title>
          , in:
          <source>Philosophy of Engineering and Technology</source>
          (
          <year>2013</year>
          ) pp.
          <fpage>253</fpage>
          -
          <lpage>266</lpage>
          . doi:10.1007/978-94-007-7762-0_20.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Garst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Maas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Suijs</surname>
          </string-name>
          ,
          <article-title>Materiality assessment is an art, not a science: Selecting ESG topics for sustainability reports</article-title>
          ,
          <source>California Management Review</source>
          <volume>65</volume>
          (
          <year>2022</year>
          )
          <fpage>64</fpage>
          -
          <lpage>90</lpage>
          . doi:10.1177/00081256221120692.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mäntymäki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Minkkinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Birkstedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Viljanen</surname>
          </string-name>
          ,
          <article-title>Defining organizational AI governance</article-title>
          ,
          <source>AI and Ethics</source>
          <volume>2</volume>
          (
          <year>2022</year>
          )
          <fpage>603</fpage>
          -
          <lpage>609</lpage>
          . doi:10.1007/s43681-022-00143-x.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>