<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Ecology of AI - Reflections for Researchers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Retno Larasati</string-name>
          <email>retno.larasati@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Venetia Brown</string-name>
          <email>venetia.brown@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Soraya Kouadri Mostéfaoui</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Syed Mustafa Ali</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tracie Farrell</string-name>
          <email>tracie.farrell@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Knowledge Media Institute, The Open University UK</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computing and Communications, The Open University UK</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <fpage>26</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>We summarise the first Workshop on Ecology of AI (EcAI 2023), co-located with the 2nd International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023) and held on 26 June 2023 in Munich, Germany, as part of HHAI-WS 2023: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Pratyusha Kalluri of the Radical AI Network has proposed that asking whether AI is good or
fair is not the right question if we want to look at potential benefits and harms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. We have to
look at power. More specifically, we have to look at how AI impacts power relationships that
preserve inequality within our society in very real and material terms, particularly through
the vehicle of industrial racialised capitalism. One can consider the meta-ethical question of
goodness or badness in many different ways, but the project of “ethical AI” often becomes
conflated with specific ideas of morality, shifting the conversation toward the cultural arena
(which makes it easier for considerations of ethical AI to be dismissed or siloed). We can see
evidence of this in the abstraction of ethics in research (e.g. the lack of attention to specific
harms), which is based primarily on White, Western notions of harm, focuses on “single issue”
conversations about equity (rather than global and historical power asymmetries), and is mitigated
through mathematical means [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>One of the aims of this workshop was to have a frank discussion, with multiple, diverse
stakeholders, in which money, power and influence are part of the conversation about the
potential harms and benefits of AI. To support broader, world-systems thinking around harm
and benefit in the short, medium and long term, we propose the frame of an Ecology of
Artificial Intelligence.</p>
      <sec id="sec-2-1">
        <title>1.1. Topics and Issues</title>
        <p>We invited reflection papers covering any aspect of AI including, but not limited to:
• The impact of AI on:
– non-normative bodies and identities
– oppressed and/or under-served populations
– communication and interaction between groups
– the development of communities and infrastructures
– the balance of ecosystems and resources
– the climate and planetary context
• Short-, medium- and long-term beneficiaries of AI
• AI Ethics: Framework, Principles, and Guidelines
• AI Ethics education and awareness
• Accountability, responsibility and liability of AI-based systems
• Explainable AI and interpretable AI
• Avoiding harm and negative side effects in AI-based systems
• Self-explanation, self-criticism and the transparency problem
• Ethical human-machine (AI) interaction
• Regulating AI-based systems: standards and certification
• Evaluation problems for AI bias and fairness
• Human-in-the-loop, bias, and the oversight problem
• Potential harm in AI-based systems, including in industrial processes, health, automotive
systems, robotics, critical infrastructures, war, among others.
• The potential of AI to shape or preserve existing power relationships
• Criticisms around the inevitability of AI</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2. Workshop Organisation</title>
      <sec id="sec-3-1">
        <title>2.1. Organisers</title>
        <p>Dr. Retno Larasati (Organiser) is a Postdoctoral Research Associate in Artificial Intelligence
at the Knowledge Media Institute (KMi) at The Open University. Her work spans AI, HCI,
and explainable AI, looking at how explanations are understood and trusted by laypeople.
Her research focuses on ethical and fair AI and AI for social good, examining their
methodologies, ethical considerations, and outputs/beneficiaries. She has served on the
programme committees of several workshops and international conferences.</p>
        <p>Dr. Venetia Brown (Organiser) is a Postdoctoral Research Associate in Qualitative Methods
at the Knowledge Media Institute (KMi) at The Open University. Her research focuses on
the pedagogical component of Ethics and Fairness in AI with interest in how AI researchers
learn about ethical principles and how they are applied in their work. Her additional
research interests include online pedagogical approaches with educational technologies and
exploring how they can facilitate learning, engagement and a sense of community in STEM courses.</p>
        <p>Dr. Tracie Farrell (Organiser) is a UKRI Future Leaders Fellow at the Knowledge Media
Institute of the Open University. Her fellowship explores the impacts of Artificial Intelligence
and its subfields on society, particularly through the lenses of queer, intersectional feminisms.
This includes the impacts of AI for non-normative bodies or identities, the consequences of
historical power asymmetries on the benefits and harms associated with AI, and the influence
of projected power relationships resulting from geopolitical and economic realities. As a social
scientist and former educator, she is also interested in how different stakeholders learn to
conceptualise and evaluate the future impacts of AI.</p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Program Committees</title>
        <p>• Soraya Kouadri Mostéfaoui - School of Computing and Communications, The Open
University, United Kingdom.
• Syed Mustafa Ali - School of Computing and Communications, The Open University,
United Kingdom.
• Aisling Third - Knowledge Media Institute, The Open University, United Kingdom.
• Pinelopi Troullinou - Trilateral Research Ethical AI, Ireland.
• Ana Tomicic - ARETE Institute for Sustainable Prosperity, North Macedonia.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Workshop Summary</title>
      <p>The primary objective of the EcAI 2023 workshop was to broaden the perspectives of artificial
intelligence (AI) researchers and delve into the profound and far-reaching impacts of their
own work. This endeavor was motivated by the recognition that prevailing notions of “ethical”
or “responsible” AI and “AI for Social Good” tend to be informed predominantly by Western
European moral frameworks. The workshop organizers aimed to underscore the extensive
ecosystem involved in AI implementation and the potential influence of AI on global systems.</p>
      <p>The workshop attendees, comprising researchers from diverse backgrounds and nations,
actively participated in both in-person and virtual capacities. They contributed through the
presentation of their own research findings and by engaging in substantive conversations. The
workshop fostered an environment conducive to collaboration and critical thinking, as the
participants explored ethical considerations, societal implications, and strategies for mitigating
possible adverse effects associated with AI.</p>
      <sec id="sec-4-1">
        <title>3.1. Submission</title>
        <p>We invited authors to submit a short reflection, preferably on their own previous or current
work in AI, that considers a wide scope of potential impacts of this work, direct and indirect
beneficiaries, projected into the future. We asked participants, to the best of their ability, to
consider longer-term impacts, and existing or projected power relationships that make certain
consequences more likely (good or bad, and everything in between). The purpose of this
statement was to help spark discussion and explore the questions that may assist AI researchers
to broaden their view of the potential impacts of their work. The target audience size was 15-18
participants; the workshop was attended by 10 participants. We received a total of four submissions. Each
paper was peer-reviewed by at least three Program Committee (PC) members.
The EcAI 2023 program was organised into one invited talk, one paper session, and one group
activity. The workshop program is shown below:
Opening
Opening Talk by Syed Mustafa Ali: “The Political Economy, Ecology, Theology of AI.”</p>
        <sec id="sec-4-1-1">
          <title>Paper Session</title>
          <p>Protected Characteristics and Abuse Detection - Tracie Farrell
Social Media Platform Structures and Their Implications - Vijay Keswani
The Broader Impacts of AI Development - Maria Elahi
AI in Healthcare - Reflection on Potential Harms and Impacts - Retno Larasati</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>Group Activity</title>
          <p>Reflection Guide Stage
Discussion on Power</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work was funded by a UKRI Future Leaders Fellowship (Round Six) MR/W011336/1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Kalluri</surname>
            <given-names>P.</given-names>
          </string-name>
          .
          <article-title>Don't ask if artificial intelligence is good or fair, ask how it shifts power</article-title>
          .
          <source>Nature</source>
          , 7 July
          <year>2020</year>
          . doi: https://doi.org/10.1038/d41586-020-02003-2
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Birhane</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruane</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laurent</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brown</surname>
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flowers</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ventresque</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dancy</surname>
            <given-names>C. L.</given-names>
          </string-name>
          .
          <article-title>The forgotten margins of AI ethics</article-title>
          .
          <source>In 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2022</year>
          (pp.
          <fpage>948</fpage>
          -
          <lpage>958</lpage>
          ). https://doi.org/10.1145/3531146.3533157
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>