<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Linking Artificial Intelligence Principles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yi Zeng</string-name>
          <email>yi.zeng@ia.ac.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enmeng Lu</string-name>
          <email>enmeng.lu@ia.ac.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cunqing Huangfu</string-name>
          <email>cunqing.huangfu@ia.ac.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Institute of Automation, Chinese Academy of Sciences</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Berggruen Institute China Center, Peking University</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
<institution/>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>School of Artificial Intelligence, University of Chinese Academy of Sciences</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>Artificial Intelligence principles define social and ethical considerations for developing future AI. They come from research institutes, government organizations, and industry. Each version of AI principles reflects different considerations, covers different perspectives, and makes different emphases. None of them can be considered complete, nor can any one of them cover the rest of the AI principle proposals. Here we introduce LAIP, an effort and platform for linking and analyzing different Artificial Intelligence Principles. We explicitly establish the common topics and links among the AI Principles proposed by different organizations and investigate their uniqueness. Based on these efforts, for the long-term future of AI, instead of directly adopting any one of the AI principles, we argue for the necessity of incorporating the various AI Principles into a comprehensive framework and focusing on how they can interact with and complete each other.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>AI ethics and social impacts have drawn serious attention, and many policy frameworks have been proposed by various organizations. We confine our study to AI principles (including guidelines, codes, and initiatives) pertaining to the general governance of AI. Typically, such principles are explicitly documented in an item-by-item style and announced as an effort to express the proposers&#8217; values and attitudes towards the understanding, development, and utilization of AI. Technically detailed discussions, including technique-oriented standards, are not included in this study, nor are traditional principles on robotics.</p>
<p>Based on these considerations, we collected 27 proposals of AI principles to date. For each of the collected principles, we extract the texts of direct relevance to the authors&#8217; points (in most cases, the title words of the principles). We also include the necessary comments from the raw text.</p>
      <p>* These authors contributed equally to this study.</p>
      <p>1 OpenAI identifies itself as “a non-profit AI research company”.</p>
<p><bold>Semantically Linking Various AI Principles</bold></p>
      <p>We aim to link various AI principles through the perspectives they consider in common. Common perspectives are not always expressed with exactly the same terms, so semantically equivalent and similar terms need to be considered as well.</p>
<p>We first identified a set of manually chosen keywords as the core terms, which belong to 10 general topics. We then use word2vec representations to find keywords with similar meanings, based on the Google word vectors trained on news text (https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit). The similarity between an original keyword and other words is calculated as the cosine similarity between their word vectors, producing a list of candidate extended keywords ranked by similarity. The first word on the list whose semantic meaning obviously deviates from the original keyword is selected as the threshold point, and all words with lower similarity are abandoned. Phrases with similar meanings are also added to the expanded keyword list. For example, for the term “collaboration”, the expanded list also includes collaborations, collaborative, collaboratively, collaborate, collaborates and collaborating, while for the term “fairness”, the expanded list also includes fair, fairer, unfair and unfairness.</p>
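<p>The expansion step can be sketched as follows. This is a minimal illustration only: the toy 3-dimensional vectors stand in for the pretrained Google News word2vec embeddings, and the function name, vector values, and threshold are our own illustrative choices, not part of the LAIP implementation.</p>

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def expand_keyword(keyword, vectors, threshold):
    # Rank every other vocabulary word by similarity to the core term and
    # keep those at or above the manually chosen cut-off point.
    target = vectors[keyword]
    ranked = sorted(
        ((w, cosine(target, v)) for w, v in vectors.items() if w != keyword),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [w for w, sim in ranked if sim >= threshold]

# Toy vectors standing in for the pretrained Google News embeddings.
vectors = {
    "fairness": [0.90, 0.10, 0.00],
    "fair":     [0.85, 0.15, 0.05],
    "unfair":   [0.80, 0.05, 0.20],
    "banana":   [0.00, 0.10, 0.95],
}
print(expand_keyword("fairness", vectors, threshold=0.9))  # ['fair', 'unfair']
```

<p>With real embeddings, the threshold is not fixed in advance: it is read off the ranked list at the first obviously deviating word, as described above.</p>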
<p>Table 1 presents the 10 general topics and related terms for AI Principles. Term-expansion efforts based on semantic similarities extend each list for more comprehensive coverage.</p>
      <table-wrap id="table-1">
        <label>Table 1</label>
        <caption>
          <p>The 10 general topics and their related keyword terms.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Topic</th><th>Keywords</th></tr>
          </thead>
          <tbody>
            <tr><td>Humanity</td><td>humanity, beneficial, well-being, human value, human right, dignity, freedom, education, common good, human-centered, human-friendly</td></tr>
            <tr><td>Collaboration</td><td>collaboration, partnership, cooperation, dialogue</td></tr>
            <tr><td>Share</td><td>share, equal, equity, inequity, inequality</td></tr>
            <tr><td>Fairness</td><td>fairness, justice, bias, discrimination, prejudice</td></tr>
            <tr><td>Transparency</td><td>transparency, explainable, predictable, intelligible, audit, trace, opaque</td></tr>
            <tr><td>Privacy</td><td>privacy, personal information, data protection, informed, explicit confirmation, control the data, notice and consent</td></tr>
            <tr><td>Security</td><td>security, cybersecurity, cyberattack, hacks, confidential</td></tr>
            <tr><td>Safety</td><td>safety, validation, verification, test, controllability, under control, control the risks, human control</td></tr>
            <tr><td>Accountability</td><td>accountability, responsibility</td></tr>
            <tr><td>AGI/ASI</td><td>AGI, superintelligence, super intelligence</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>We define the topic coverage of a principle proposal as the percentage of the topics mentioned in the proposal. If any core term or expanded keyword term appears in a proposal, we mark the proposal as having covered the related topic.</p>
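<p>The coverage computation can be illustrated with a short sketch. The keyword lists below are an abridged, hypothetical excerpt of Table 1, and the simple substring matching deliberately lets, e.g., “fair” also match “unfairness”, mirroring the expanded-term matching described above.</p>

```python
# Abridged, hypothetical excerpt of the topic/keyword lists in Table 1.
TOPICS = {
    "fairness": ["fairness", "fair", "justice", "bias", "discrimination"],
    "privacy": ["privacy", "personal information", "data protection"],
    "safety": ["safety", "verification", "human control"],
}

def topic_coverage(proposal_text, topics=TOPICS):
    # A topic counts as covered if any of its (expanded) keywords appears
    # anywhere in the proposal text.
    text = proposal_text.lower()
    covered = sorted(t for t, kws in topics.items() if any(k in text for k in kws))
    return covered, len(covered) / len(topics)

covered, score = topic_coverage("AI must avoid bias and respect privacy.")
print(covered, score)  # two of the three example topics are covered
```
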
<p>Figure 1 shows the coverage of the 10 topics by the different principle proposals; the colors indicate how many times the related terms appear in each proposal. As can be observed, expanding the keywords by semantic similarity significantly increases the number of topics found in the principles, making the semantic analysis more accurate and more robust against different uses of similar word terms and expressions. The linkages among the different AI principles are represented using Semantic Web standards (RDF/OWL) on the LAIP platform.</p>
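<p>The idea behind the RDF representation can be shown with a plain-Python stand-in: each link is a subject&#8211;predicate&#8211;object triple. The namespace URI and the <monospace>coversTopic</monospace> predicate below are hypothetical illustrations, not the actual LAIP vocabulary.</p>

```python
# Hypothetical namespace for illustration; LAIP publishes real RDF/OWL.
LAIP = "http://example.org/laip#"

def triple(subject, predicate, obj):
    # An RDF-style (subject, predicate, object) triple as a plain tuple.
    return (LAIP + subject, LAIP + predicate, LAIP + obj)

graph = [
    triple("AsilomarAIPrinciples", "coversTopic", "Safety"),
    triple("OpenAICharter", "coversTopic", "Safety"),
    triple("OpenAICharter", "coversTopic", "AGI"),
]

def shared_topics(a, b, triples):
    # Topics linked to both proposals, i.e. their common ground.
    def topics(s):
        return {o for s2, p, o in triples
                if s2 == LAIP + s and p == LAIP + "coversTopic"}
    return topics(a).intersection(topics(b))

print(sorted(shared_topics("AsilomarAIPrinciples", "OpenAICharter", graph)))
```

<p>In a real deployment these tuples would be built and serialized with an RDF library so the links are queryable with standard Semantic Web tooling.</p>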
<p><bold>Complementary Considerations from Different Organizations and Different AI Principles</bold></p>
      <p>
        Different principle proposals are compared by calculating their coverage of topics and keywords, as shown in Figure 2. We can observe that one of the principle proposals covers all the major topics. Among the top 10 proposals ranked by keyword coverage, 8 also rank in the top 10 on topic coverage. However,
        <xref ref-type="bibr" rid="ref19">SAP 2018</xref>
        ranks relatively high on keyword coverage (10th) but comparatively low on topic coverage (tied for 14th), since it discusses collaboration, fairness, privacy, and safety extensively while missing the topics of share, accountability, and AGI/ASI.
        <xref ref-type="bibr" rid="ref6">HAIP 2018</xref>
        covers 8 of the 10 major topics (tied for 7th) without going into much detail, and hence ranks lower on keyword coverage (16th). We should emphasize that the coverage of a proposal does not necessarily reflect a lack of consideration of certain topics; it may simply reflect a different choice of emphasis. On the other hand, different considerations may interact to complement each other.
      </p>
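<p>The two rankings can be reproduced schematically. The proposals and hit counts below are invented for illustration; they merely show how a proposal can rank high on keyword coverage yet lower on topic coverage, as with SAP 2018 above.</p>

```python
# Hypothetical keyword hit counts per proposal (proposal -> topic -> hits).
hits = {
    "Proposal A": {"fairness": 6, "privacy": 0, "safety": 0},
    "Proposal B": {"fairness": 1, "privacy": 1, "safety": 1},
}

def topic_coverage_rank(hits):
    # Rank by the number of topics with at least one keyword hit.
    return sorted(hits, key=lambda p: sum(1 for c in hits[p].values() if c),
                  reverse=True)

def keyword_coverage_rank(hits):
    # Rank by the total number of keyword occurrences.
    return sorted(hits, key=lambda p: sum(hits[p].values()), reverse=True)

print(topic_coverage_rank(hits))    # B leads: it touches every topic
print(keyword_coverage_rank(hits))  # A leads: it repeats its keywords most
```
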
<p>[Figure 2: (A) Topic Coverage Ranking; (B) Keywords Coverage Ranking.]</p>
<p>Grouping the proposals into different schools of thought according to the type of publisher, Figure 3 shows the comparative frequency of the topics mentioned in the three types of AI principle proposals.</p>
<p>We can observe from Figure 3 that corporations tend to mention collaboration more, but security and privacy less, while governments mention security more but rarely mention accountability. Corporations can benefit from collaboration, yet their atmosphere of collaboration may not be as good as that of academia, which may be why they emphasize it. Privacy and security are sensitive issues for corporations, which may be why they prefer not to mention them. Governments, in turn, mention the topic of accountability significantly less than academia does.</p>
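<p>The comparison behind Figure 3 reduces to per-group relative topic frequencies. The mention lists below are invented for illustration (the real frequencies come from the 27 collected proposals):</p>

```python
from collections import Counter

# Hypothetical topic mentions grouped by publisher type.
mentions = {
    "corporation": ["collaboration", "collaboration", "fairness"],
    "government":  ["security", "security", "safety"],
    "academia":    ["accountability", "fairness", "safety"],
}

def comparative_frequency(mentions):
    # Relative frequency of each topic within each publisher type, so
    # groups of different sizes can be compared side by side.
    out = {}
    for group, topics in mentions.items():
        counts = Counter(topics)
        total = sum(counts.values())
        out[group] = {t: c / total for t, c in counts.items()}
    return out

freq = comparative_frequency(mentions)
print(freq["corporation"])  # collaboration dominates the corporate group
```
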
<p>
        Although principles from different organizations usually share a common vocabulary, ambiguities remain in the analysis of the texts. The ambiguities may come from the polysemy of words and from context. For example, “race” is used in the contexts of “arms race” and “race avoiding”
        <xref ref-type="bibr" rid="ref4">(FLI 2017)</xref>
        to denote the competition among researchers and nations (thus referring to the topic of “collaboration”), but it is also used in the context of “gender, race, sexual orientation”
        <xref ref-type="bibr" rid="ref25 ref29">(UNI Global Union 2017)</xref>
        to discuss possible biases of AI systems (thus referring to the topic of “fairness”). Meanwhile, the “self-improvement” of an advanced AI system is a trait we should be very cautious about
        <xref ref-type="bibr" rid="ref4">(FLI 2017)</xref>
        , yet such “self-improvement” of AI researchers is what we ask for
        <xref ref-type="bibr" rid="ref26">(JSAI 2017)</xref>
        .
      </p>
<p>
        Such ambiguities also appear within a topic. For instance, we may ask for “transparency” in the decision-making process of a system out of fairness concerns; we may also ask for “transparency” of the system to make it safer, traceable, and controllable. The Asilomar AI Principles make this distinction explicit (see “Judicial Transparency” and “Failure Transparency” in
        <xref ref-type="bibr" rid="ref4">(FLI 2017)</xref>
        ), while others usually take one side of the concept or mix the two up. The ambiguities in these cases derive from the high-level abstraction of the concept itself and also reflect the inner linkage between the various topics.
      </p>
<p>
        Besides the general topics that the AI Principle proposals share in common, many principles also reflect the unique perspectives of their organizations. For example, the Montreal Declaration suggests promoting the well-being of “all sentient creatures”, which, according to its definition, includes “any being able to feel pleasure, pain, emotions; basically, to feel”
        <xref ref-type="bibr" rid="ref30">(Montreal 2017)</xref>
        . The JSAI Ethical Guidelines state that AI must abide by these guidelines “in the same manner as the members of the JSAI in order to become a member or a quasi-member of society”
        <xref ref-type="bibr" rid="ref26">(JSAI 2017)</xref>
        . The General Principles of IEEE&#8217;s report recommend that “For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control”
        <xref ref-type="bibr" rid="ref25">(IEEE 2017)</xref>
        . IBM takes the view that “Cognitive systems will not realistically attain consciousness or independent agency” and thus lays its stress on promoting AI and cognitive systems that “augment human intelligence”
        <xref ref-type="bibr" rid="ref8">(IBM 2017)</xref>
        . These different perspectives reflect the diversity of the whole AI community, and it turns out to be necessary to identify and incorporate such various considerations into a more comprehensive framework.
      </p>
<p>Based on the analysis, we have the following suggestions for future research and proposals for AI Principles:</p>
      <list list-type="bullet">
        <list-item>
          <p>Strengthening safety-related considerations in academia and industry. Safety issues are at the core of AI governance and have been recognized by different government organizations, but many AI companies have not taken them seriously, even though it is their AI products that will directly bring potential risks to society.</p>
        </list-item>
        <list-item>
          <p>Long-term strategic design for AGI and ASI. Most AI principles investigated here do not cover considerations for AGI and ASI, although most of them are meant as relatively long-term designs for AI. Long-term planning for AGI and ASI gives a clearer view of the strategic future and allows arrangements for potential risks to be made in advance.</p>
        </list-item>
        <list-item>
          <p>From human-centered to harmonious principle design. Current AI principle proposals mainly focus on beneficial, human-centered design, while lacking the consideration that human society itself is undergoing transformation. More harmonious designs that regard both humans and future AI as cognitive living systems should be considered.</p>
        </list-item>
      </list>
    </sec>
    <sec id="sec-2">
      <title>Conclusions</title>
<p>Different AI Principles have their own perspectives on, and coverage of, current and future strategies for AI. Instead of directly adopting any one of the AI principles, we argue for the necessity of linking and incorporating the various AI Principles into a comprehensive framework and focusing on how they can interact and complement each other. The Linking Artificial Intelligence Principles (LAIP) platform is available as an online service at http://www.linking-ai-principles.org. It supports semantic search by keyword terms as well as paragraph search, in which semantically similar principles are listed for exploration.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgement</title>
<p>This study is supported by the New Generation of Artificial Intelligence Development Research Center, Ministry of Science and Technology of China, under the project “Key Issues of Social Ethics for Artificial Intelligence” from ISTIC.</p>
    </sec>
  </body>
  <back>
    <ref-list>
<ref id="ref1">
        <mixed-citation>DeepMind. 2017. DeepMind Ethics &amp; Society Principles.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>Etzioni, O. 2017. How to Regulate Artificial Intelligence. https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>European Group on Ethics in Science and New Technologies (EGE). 2018. Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems. http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>Future of Life Institute (FLI). 2017. Asilomar AI Principles. https://futureoflife.org/ai-principles/.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>Google. 2018. AI at Google: Our Principles. https://ai.google/principles.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>HAIP Initiative. 2018. Harmonious Artificial Intelligence Principles (HAIP). http://bii.ia.ac.cn/hai/index.php.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>House of Lords, UK. 2018. AI in the UK: ready, willing and able? https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>IBM. 2017. Principles for the Cognitive Era. https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>IBM. 2018. Principles for Trust and Transparency. https://www.ibm.com/blogs/policy/trust-principles/.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>Information Technology Industry Council (ITI). 2017. AI Policy Principles. https://www.itic.org/public-policy/ITIAIPolicyPrinciplesFINAL.pdf.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>Microsoft. 2018. Microsoft AI Principles. https://www.microsoft.com/en-us/ai/our-approach-to-ai.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>Ministry of Internal Affairs and Communications (MIC), the Government of Japan. 2017. AI R&amp;D Principles. http://www.soumu.go.jp/main_content/000507517.pdf.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>Ministry of Internal Affairs and Communications (MIC), the Government of Japan. 2018. Draft AI Utilization Principles. http://www.soumu.go.jp/main_content/000581310.pdf.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>Nadella, S. 2016. The Partnership of the Future: Microsoft's CEO explores how humans and A.I. can work together to solve society's greatest challenges. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>OpenAI. 2018. OpenAI Charter. https://blog.openai.com/openai-charter/.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>Partnership on AI (PAI). 2016. Tenets. https://www.partnershiponai.org/tenets/.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>Sage. 2017. The Ethics of Code: Developing AI for Business with Five Core Principles. https://www.sage.com/ca/our-news/press-releases/2017/06/designing-AI-for-business.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>SAP. 2018. SAP's Guiding Principles for Artificial Intelligence. https://news.sap.com/2018/09/sap-guiding-principles-for-artificial-intelligence/.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>Sony. 2018. Sony Group AI Ethics Guidelines. https://www.sony.net/SonyInfo/csr_report/humanrights/hkrfmg0000007rtj-att/AI_Engagement_within_Sony_Group.pdf.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>Stanford University. 2018. The Stanford Human-Centered AI Initiative (HAI). http://hai.stanford.edu/news/introducing_stanfords_human_centered_ai_initiative/.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>The Future Society. 2017. Principles for the Governance of AI. http://www.thefuturesociety.org/science-law-society-sls-initiative/#1516790384127-3ea0ef44-2aae.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2017. Ethically Aligned Design, Version 2.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>The Japanese Society for Artificial Intelligence (JSAI). 2017. The Japanese Society for Artificial Intelligence Ethical Guidelines. http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-EthicalGuidelines-1.pdf.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>The Public Voice. 2018. Universal Guidelines for Artificial Intelligence. https://thepublicvoice.org/ai-universal-guidelines/.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>UNI Global Union. 2017. Top 10 Principles For Ethical Artificial Intelligence. http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>University of Montreal. 2017. The Montreal Declaration for a Responsible Development of Artificial Intelligence. https://www.montrealdeclaration-responsibleai.com/the-declaration.</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>US Public Policy Council, Association for Computing Machinery (USACM). 2017. Principles for Algorithmic Transparency and Accountability. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>