<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ethical AI for the Governance of the Society: Challenges and Opportunities</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>VTT Technical Research Centre of Finland Ltd.</institution>
          ,
          <addr-line>Visiokatu 4, Tampere</addr-line>
        </aff>
      </contrib-group>
      <fpage>20</fpage>
      <lpage>26</lpage>
      <abstract>
<p>Artificial Intelligence (AI) technologies are expected to have numerous and diverse social implications that cut deep into our society. Due to AI’s specific nature as an emergent and constantly evolving generic technology, we need new approaches, methodologies, and processes to govern and steer the utilization of AI technologies in both the public and private sectors. This is both a multilevel and a multi-dimensional governance challenge. First, there has to be a shared and coordinated understanding across various social and administrative sectors of how AI is implemented and regulated. Second, good coordination between different levels of governance is crucial. Third, there is the challenge of finding a balance between soft and hard governance mechanisms in varying implementation and organizational contexts. This paper presents an overview of a new Strategic Research Council funded project entitled “Ethical AI for the Governance of the Society” (ETAIROS). The project focuses on studying and co-developing, together with stakeholders, practical governance approaches as well as design and technology solutions that help public, private and civil society actors enhance the ethical sustainability of their operations in the use of AI. To achieve its ambitious goals, this interdisciplinary endeavour integrates expertise in foresight, ethics, design, machine learning and governance.</p>
      </abstract>
      <kwd-group>
        <kwd>ethics</kwd>
        <kwd>artificial intelligence (AI)</kwd>
        <kwd>foresight</kwd>
        <kwd>design</kwd>
        <kwd>governance</kwd>
        <kwd>societal impacts</kwd>
        <kwd>responsibility</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>Artificial Intelligence (AI) technologies are expected to have numerous and diverse
implications that cut deep into our society. Various definitions of AI exist; what they
have in common is that they usually refer to the increasing capability of machines to
perform tasks that have traditionally been conducted by people. As humans, we have
certain limitations, and AI developers have often focused on them, presuming that AI
is capable of overcoming some of these physical, cognitive or other limitations and of
expanding human potential to new horizons. Still, AI technologies and application areas
are only emerging, and there is uncertainty related to many aspects of their design and
implementation.</p>
<p>Due to AI’s specific nature as a constantly evolving generic technology, we need
careful calibration of existing approaches, methodologies, processes and guiding
principles, or the development of completely novel ways to govern and steer the utilization
of AI technologies in both the public and private sectors. Essentially, this is a
multilevel governance challenge. First, there has to be a shared and coordinated
understanding across various social and administrative sectors of how AI is implemented and
regulated; second, coordination between different levels of governance is also
necessary. Third, there is the challenge of finding an optimal balance between soft and hard
governance mechanisms in varying implementation and organizational contexts.</p>
      <p>This paper presents an overview of a new project entitled “Ethical AI for the
Governance of the Society” (ETAIROS). The project integrates expertise in foresight,
ethics, design, machine learning and governance to study and co-develop together with
stakeholders practical governance processes and frameworks as well as design and
technology solutions that help public, private and civil society actors enhance the
ethical sustainability of their operations in the use of AI. The project is implemented by the
joint effort of six organizations: Tampere University, VTT Technical Research Centre
of Finland Ltd., University of Helsinki, University of Jyväskylä, University of Turku
and 4Front.</p>
      <p>The project activities are guided by four scientific objectives: 1) to develop a
theoretically and empirically grounded approach for steering the development and use of
AI and its societal impacts; 2) to produce a body of novel knowledge on context specific
challenges, opportunities and barriers to ethical and responsible use of AI; 3) to deliver
tested design and machine learning processes for ethical AI; and 4) to yield empirically
justified governance approaches and practices for the use of AI. These contributions
are expected to frame the keys to socially sustainable strategic planning, policy and
regulation of AI.
</p>
    </sec>
    <sec id="sec-2">
      <title>Background: Current transformation in society and business</title>
      <p>
        AI technologies are revolutionizing our world. AI entities are “digital computers or
computer-controlled robots that perform tasks commonly associated with intelligent
beings”, combining diverse abilities to learn, reason, solve problems, perceive and use
language
        <xref ref-type="bibr" rid="ref7">(Copeland 2018)</xref>
        . AI technologies vary by the scope and depth of “thinking”
they undertake. We already use narrow AI technologies, algorithmic expert systems,
big data, and deep learning in health care, banks, insurance companies, public policy
and governance, factory production, security and law enforcement, as well as
in autonomous vehicles, not to mention social media apps
        <xref ref-type="bibr" rid="ref24 ref25">(Grace et al. 2018; O’Neil 2016;
OECD 2018)</xref>
        . Artificial General Intelligence (AGI) systems that “possess a degree of
self-understanding and autonomous self-control”
        <xref ref-type="bibr" rid="ref12 ref27">(Goertzel &amp; Pennachin 2007,
Saariluoma 2015)</xref>
        are still beyond the horizon.
      </p>
      <p>
Even if some of the buzz around AI is hype, AI has staggering economic
potential
<xref ref-type="bibr" rid="ref3 ref4">(Brynjolfsson &amp; McAfee 2014, 2017)</xref>
        . Forecasts indicate that AI revenues will
surge in the coming years. For instance, Tractica research has estimated that the income
from AI applications will grow from $643.7 million in 2016 to $36.8 billion by 2025
        <xref ref-type="bibr" rid="ref11">(Faggella 2018)</xref>
        . A recent global survey of business executives reported that some 72%
of respondents expected AI to have a significant impact on businesses in the next five
years
        <xref ref-type="bibr" rid="ref14 ref26">(Ransbotham et al. 2017)</xref>
. Globally, there is competition over who will become the
world leader in AI. China, for example, has set the goal of becoming one by 2030
        <xref ref-type="bibr" rid="ref10">(Forbes, 2019)</xref>
        .
In Europe, spending on AI-based technologies increased 49% in 2019 over the
2018 figure, reaching USD 5.2 billion
        <xref ref-type="bibr" rid="ref15">(IDC, 2019)</xref>
        . Large corporations have also joined in: Microsoft, for example, recently
announced its “AI for Good” programme, which aims at “providing
technology, resources and expertise to empower those working to solve humanitarian
issues and create a more sustainable and accessible world.” (Microsoft 2019). This
initiative is planned to be run from the UK, and it aims to integrate technology and
expertise in artificial intelligence and data science with expertise in environmental science,
disability needs and humanitarian assistance.
      </p>
      <p>
Governments have clearly recognized AI’s potential, yet some are more vocal
forerunners than others. Nearly all developed nations have AI strategies and compete
with each other to support AI development and deployment
        <xref ref-type="bibr" rid="ref9">(Dutton 2018)</xref>
. In 2018, the EU
set up an AI expert group to prepare a union-wide AI strategy. Alongside its
economic promise, AI is likely to have a transformative social and cultural impact. The
technologies appear capable of disrupting existing social power structures, industries,
even our life as a species
        <xref ref-type="bibr" rid="ref2">(Bostrom, 2014)</xref>
        . As a result, recent surveys and studies have
tried to gauge the effects AI could have on a wide variety of contexts ranging from
politics
        <xref ref-type="bibr" rid="ref14">(Helbing et al, 2017)</xref>
and war
        <xref ref-type="bibr" rid="ref8">(Cummings, 2017)</xref>
        to wealth distribution and
employment
        <xref ref-type="bibr" rid="ref1 ref17 ref4">(e.g. Korinek &amp; Stiglitz 2017, Avent 2016)</xref>
        .
      </p>
<p>The United Nations (UN) has joined forces to ensure AI for Good: in 2019, the UN published a
report “UN Activities on Artificial Intelligence”, which outlines how AI is being used
to fight hunger, ensure food security, mitigate climate change, advance health, and
facilitate the transition to smart sustainable cities. It also offers insights into the
challenges associated with AI, addressing ethical and human rights implications, and
invites all stakeholders, including government, industry, academia and civil society, to
consider how best to work together to ensure that AI serves as a positive force for
humanity and the environment.</p>
    </sec>
    <sec id="sec-3">
      <title>Challenges and opportunities</title>
<p>Nowadays, AI is surrounded by intense hype. Leikas (2019) reminds us that we need to
look beyond the hype, because real-life examples and in-depth discussions of ethical
issues and potential impacts are still insufficient. It is unclear what we are even talking
about when we refer to the ethics of AI. The problem, as noted by Leikas (2019), is that we
easily fall into looking for ethical dilemmas related to AI, while we should be asking
how these emerging technologies should be designed and used for good and for improving
the quality of life. Asking “whom an autonomous car should be allowed to
run over” is therefore simply the wrong question. The important questions to ask are those that
focus on ensuring the peace, safety and security of citizens, trust in society, the equal
availability of services and the possibility to be heard, and on justifying technology
decisions from the perspectives of human dignity, welfare and sustainability.</p>
<p>Society can - and should - adopt AI in a way that maintains or improves the quality
of life of citizens. Currently, many expectations are pinned on AI in different fields
of everyday life, yet at the same time a number of ethical questions are associated
with it. Many of them concern the design of interactions between human and
non-human actors that foster trust. These include, e.g., human-machine co-working, the
ownership of the data used and distortions in those data, as well as privacy-preserving
and resilient AI. For example, collecting a wealth of personal data for health
maintenance while at the same time facing an increase in radical openness gives both citizens
and decision makers cause for concern. Likewise, the promise of AI in working life in terms
of autonomous systems as workmates, as well as illustrations of future smart cities with
autonomous maintenance and transportation, exercises many citizens’ minds.</p>
      <p>
        The design and use of AI are inevitably socially and culturally embedded
        <xref ref-type="bibr" rid="ref16">(Kitchin
2017, 18)</xref>
        . Research has shown that machine-learning methodologies often give rise to
social biases, which derive from the programming choices and data used to train the
systems. Such biases may relate e.g. to individuals’ gender and ethnic background, and
affect their equality of opportunity
        <xref ref-type="bibr" rid="ref28">(Weber 2018)</xref>
        . Due to system complexity, it is
extremely difficult to identify such biases and develop “debiasing” algorithms
<xref ref-type="bibr" rid="ref4">(Brynjolfsson &amp; McAfee 2017)</xref>
        .
      </p>
      <p>
Challenges have also been identified in the use of AI. Algorithmic technologies have
been utilized to destabilize democratic processes
<xref ref-type="bibr" rid="ref5">(Cadwalladr &amp; Graham-Harrison 2018)</xref>
        and for purposes of control, which may give rise to societies where panoptic
surveillance (Müller 2016) and pervasive scoring
        <xref ref-type="bibr" rid="ref6">(Citron &amp; Pasquale 2014)</xref>
        affect all aspects
of our lives. There are even darker dystopias of the end of humanity caused by the
“singularity”, or of the devastating consequences that criminal use of AI may have
        <xref ref-type="bibr" rid="ref23">(Naudé &amp; Nicola
2018)</xref>
        . We seem to be on the cusp of a “new era of widespread algorithmic governance,
wherein algorithms will play an ever-increasing role in the exercise of power”
        <xref ref-type="bibr" rid="ref16">(Kitchin
2017, 15)</xref>
. This challenge may be amplified by the fact that machine thinking is not, and is
unlikely to become, an imitation or extension of human reasoning
        <xref ref-type="bibr" rid="ref18">(Lake et al. 2017)</xref>
        .
      </p>
<p>Many organizations and public actors, such as governments, have been developing
and publishing guidelines for the ethical development and use of AI. These initiatives have
been triggered by the need to address the potential harms - deliberate or unintentional -
that AI systems can cause to individuals, society or the environment at every stage of
their lifecycle. Perhaps we need to rethink our concept of “lifecycle”, and even what
it means to be a human in the future. The main challenges associated with AI
systems have generally been described as relating to misuse, design that is not thoroughly
considered, and unintended negative consequences.</p>
<p>AI offers many opportunities to accelerate innovation and international cooperation
between industry and governments in tackling the key societal challenges of our time.
From that perspective, AI can be seen as an important emerging tool for catalyzing positive
social impact. Decisions taken and technological solutions designed today affect
current societies and the environment, and may also significantly influence
future generations. To benefit from these potential opportunities, we need futures thinking
and brave action that place humans at the center. Ethics is needed to provide a vocabulary
and an approach that equip AI system developers, implementers, and stakeholders
with the values, principles, and practical techniques to mitigate the potential current and
long-term harms associated with AI applications.</p>
    </sec>
    <sec id="sec-4">
      <title>Moving towards new horizons in developing ethical AI and societal governance</title>
      <p>
        The expected societal impacts of AI concern the public, private and third-sector actors’
ethical self-regulation and steering of the society. ETAIROS will advance knowledge
on relevant use contexts and specific challenges and opportunities of AI, develop
ethical design and assessment frameworks and tools
        <xref ref-type="bibr" rid="ref19 ref20">(Leikas et al., 2019)</xref>
        , and elaborate
general governance principles and practices. For public authorities and the private sector,
the project produces suggestions and practices for the use, design and governance of AI
from the perspective of sustainable, transparent, and inclusive societal development.
From the perspective of citizens and civil society, the project increases transparency of
the use of AI, general understanding of ethically acceptable AI systems and possibilities
for informed public debate and influence.
      </p>
<p>AI affects everyone, and AI applications and autonomous systems face
huge business expectations and hopes as means to make citizens and societies flourish.
To succeed in this, common action and discussion are needed, not only between research
and industry but also among citizens, decision makers and companies, so that AI can be
domesticated in a trustworthy manner into the everyday life of citizens and organisations. To ensure
societal impact, ETAIROS will collaborate productively across all sectors by actively
engaging all relevant key actors (public authorities, experts, citizens, the private sector)
in a transparent and well-informed co-innovation process for new practical governance
frameworks and tools, including regulation suggestions. Concrete use cases are
examined to support the formation of a shared understanding of the challenges and solutions.</p>
      <p>Research in ETAIROS will be executed in two phases: during Phase I (2019-2022)
we will study ethical AI development and use, its governance challenges, and develop
and pilot frameworks and practical instruments for ethical AI design, use and
governance in collaboration with the stakeholders. During Phase II (2022-2025), the
frameworks and practical tools will be refined and finalized on the basis of further
experiments, and scaled up to a wider use by public authorities, companies and the third
sector.</p>
      <p>Interaction activities in ETAIROS aim at co-creating design models and stimulating
concrete action for ethical adoption and utilization of AI. ETAIROS brings together
researchers, public agencies, policy makers, industry, business community, and civil
society actors in a co-creative research and innovation process. The core tools to
achieve this goal are the Co-Innovation Forum (CIF) and the Open Dialogue Forum
(ODF). The CIF is a forum where practical use case areas of AI are elaborated and
co-created together with co-innovation partners. The ODF is open to all relevant actors,
including civil society, and supports the ideas of open innovation and open science.</p>
<p>In summary, the ETAIROS project is expected to provide novel insights by combining
AI design challenges and societal concerns in a single empirical study; systematically
anticipating the societal impacts of AI development using established participatory
foresight methods; incorporating governance aspects into the inquiry to provide policy-
and business-relevant suggestions and practical solutions; co-innovating societally
acceptable and desirable solutions by integrating stakeholders and citizens; and
developing tools for screening and enhancing ethical aspects in applications utilizing AI.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
<p>The project “Ethical AI for the Governance of the Society” (ETAIROS) is funded by
the Strategic Research Council at the Academy of Finland.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Avent</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>The wealth of humans: Work, power, and status in the twenty-first century</article-title>
          .
          <source>NY: St. Martin's Press.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bostrom</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2014</year>
          )
          <article-title>Superintelligence: Paths, dangers, strategies</article-title>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Brynjolfsson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>McAfee</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2014</year>
          )
          <article-title>The second machine age: work, progress, and prosperity in a time of brilliant technologies</article-title>
          . New York: W.W. Norton &amp; Company.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Brynjolfsson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
<surname>McAfee</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
<article-title>The business of artificial intelligence. What it can - and cannot - do for your organization</article-title>
          .
          <source>Harvard Business Review</source>
          ,
          <volume>7</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
<surname>Cadwalladr</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Graham-Harrison</surname>
            <given-names>E</given-names>
          </string-name>
          . (
          <year>2018</year>
          )
          <article-title>How Cambridge Analytica turned Facebook 'likes' into a lucrative political tool</article-title>
          .
          <source>The Guardian, March</source>
          <volume>17</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Citron</surname>
            ,
            <given-names>D. K.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Pasquale</surname>
            ,
            <given-names>F. A.</given-names>
          </string-name>
          (
          <year>2014</year>
          )
          <article-title>The scored society: Due process for automated predictions</article-title>
          . Washington Law Review,
          <volume>89</volume>
          ,
          <year>2014</year>
          ; U of Maryland Legal Studies Research Paper No.
          <year>2014</year>
          -
          <volume>8</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Copeland</surname>
            ,
            <given-names>B.J.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>Artificial intelligence</article-title>
          .
          <source>Encyclopaedia Britannica.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Cummings</surname>
            ,
            <given-names>M. L.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Artificial intelligence and the future of warfare</article-title>
          . Research Paper. London: Chatham House.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dutton</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>An overview of national AI strategies</article-title>
          . https://medium.com/
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Forbes</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>Artificial intelligence, China and the U.S. - How the U.S. is losing the technology war</article-title>
. https://www.forbes.com/sites/steveandriole/2018/11/09/artificial-intelligencechina-and-the-us-how-the-us-is-losing-the-technologywar/#2dcafacd6195.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Faggella</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>Valuing the artificial intelligence market, graphs and predictions</article-title>
          . https://www.techemergence.com/
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Goertzel</surname>
            <given-names>B.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Pennachin</surname>
            ,
            <given-names>C</given-names>
          </string-name>
          . (Eds.) (
          <year>2007</year>
          )
          <article-title>Artificial general intelligence</article-title>
          . Springer-Verlag: Berlin Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Grace</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salvatier</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dafoe</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>When will AI exceed human performance? Evidence from AI experts</article-title>
          .
          <source>Journal of Artificial Intelligence Research (AI and Society Track).</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Helbing</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          et al. (
          <year>2017</year>
          )
          <article-title>Will democracy survive Big Data</article-title>
          and Artificial Intelligence? Scientific American,
          <source>February</source>
          <volume>25</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>IDC</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>Automation and Customer Experience Needs Will Drive AI Investment to $5 Billion by 2019 Across European Industries</article-title>
. https://www.idc.com/getdoc.jsp?containerId=prEMEA44978619
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Kitchin</surname>
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Thinking critically about and researching algorithms</article-title>
          .
          <source>Information, Communication &amp; Society</source>
          ,
          <volume>20</volume>
          (
          <issue>1</issue>
          ),
          <fpage>14</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Korinek</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Stiglitz</surname>
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Artificial Intelligence and its implications for income distribution and unemployment</article-title>
          . NBER Working Paper.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Lake</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tenenbaum</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Gershman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Building machines that learn and think like people</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          ,
          <volume>40</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Leikas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koivisto</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Gotcheva</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>Ethical framework for designing autonomous systems</article-title>
          .
          <source>Journal of Open Innovation: Technology, Market, and Complexity</source>
          <year>2019</year>
          ,
          <volume>5</volume>
          , 18; doi:10.3390/joitmc5010018
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Leikas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>The ethics of AI - what are we even talking about</article-title>
? https://vttblog.com/2019/01/16/the-ethics-of-ai-what-are-we-even-talking-about/
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          Microsoft
          (
          <year>2019</year>
          ) https://www.microsoft.com/en-gb/ai/ai-for-good [Accessed 1.10.2019
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (Ed.) (
          <year>2016</year>
          )
          <article-title>Risks of Artificial Intelligence</article-title>
          . CRC Press, Taylor &amp; Francis Group.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Naudé</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Dimitri</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>The race for an artificial general intelligence: Implications for public policy</article-title>
          .
          <source>UNU-MERIT Working Papers, Maastricht Economic and Social Research institute on Innovation and Technology.</source>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>O'Neil</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy</article-title>
          . Crown Publishing Group &amp; Penguin.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          OECD
          (
          <year>2018</year>
          )
          <article-title>AI: Intelligent machines, smart policies: Conference summary</article-title>
          .
          <source>OECD Digital Economy Papers, No. 270</source>
          ,
          OECD Publishing
          , Paris.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Ransbotham</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiron</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerbert</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Reeves</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Reshaping business with Artificial Intelligence. Closing the gap between ambition and action</article-title>
          . MIT Sloan Management Review.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Saariluoma</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2015</year>
          )
          <article-title>Four challenges in designing autonomous systems</article-title>
          . In:
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scharre</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arnold</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crootof</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Husniux</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <source>Autonomous Systems: Issues for Defence Policymakers</source>
          . NATO.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Weber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>Auto-management as governance? Predictive analytics in counter-insurgency and marketing</article-title>
          .
          <source>Conference presentation at EASST 2018</source>
          , July, Lancaster University.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>