<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Collective Intelligence Research Platform for Cultivating Benevolent “Seed” Artificial Intelligences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Mark R. Waser</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Richmond AI &amp; Blockchain Consultants</institution>
          ,
          <addr-line>Mechanicsville VA 23111</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We constantly hear warnings about super-powerful super-intelligences whose interests, or even indifference, might exterminate humanity. The current reality, however, is that humanity is actually now dominated and whipsawed by unintelligent (and unfeeling) governance and social structures and mechanisms initially developed in order to better our lives. There are far too many complex yet ultimately too simplistic algorithmic systems in society where “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.” We now live in a world where constant short-sighted and selfish local “optimizations” without overriding “moral” or compassionate guidance have turned too many of our systems from liberators to oppressors. Thus, it seems likely that a collaborative process of iteratively defining and developing conscious and compassionate artificial entities with human-level general intelligence that self-identify as social and moral entities is our last, best chance of clarifying our path to saving ourselves.</p>
      </abstract>
      <kwd-group>
        <kwd>Consciousness</kwd>
        <kwd>Autopoiesis</kwd>
        <kwd>Enactivism</kwd>
        <kwd>Moral Machines</kwd>
        <kwd>AI Safety</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The signature issue of this century is likely that civilization is seemingly inexorably
turning against people and their requirements for survival. We seem locked in a spiral
of continuously developing and evolving ever larger and ever more complex
technological systems (both conceptual and concrete), provably beyond our ability to predict
and control, that threaten society either by their own effects or by the power(s) that they
grant to individuals. Worse, the dogged pursuit of short-term gains continues to result
in the implementation of far too many “logical” or “rational” local optimizations for
“efficiency” which blindly ignore the externalities they impose on the larger
environment and thus eventually produce far worse results than would have been obtained
without those “optimizations”.</p>
      <p>
        E. O. Wilson [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] clearly outlines the problem and the necessary beginnings of the
solution. “The real problem of humanity is the following: we have paleolithic
emotions; medieval institutions; and god-like technology." He continues that until we
understand ourselves and “until we answer those huge questions of philosophy that the
philosophers abandoned a couple of generations ago — Where do we come from? Who
are we? Where are we going? — rationally,” we’re on very thin ground.
      </p>
      <p>Unfortunately, humanity seems headed in the opposite direction. Strident rhetoric
and weaponized narratives diminish not only constructive dialog but even our own
grasp on “reality”. What we need is a collective intelligence mind-mapping, dialog and
debate system to begin coherently presenting complete points of view with supporting
evidence rather than the current rhetorical gob-stopping sound bites and even outright
lies that carry no real negative consequences for the perpetrators. We need to architect
our approaches to the problems of the day from first principles and ensure that those
principles are uniformly applied for all.</p>
      <p>
        It is no accident that the most interesting and critical questions are both clustered
around and potentially solved by artificial intelligence, social media and politics. As
noted by Pedro Domingos [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] “People worry that computers will get too smart and take
over the world, but the real problem is that they’re too stupid and they’ve already taken
over the world.” But “computers” are just the scapegoats by which we implement and
enforce influence and governance systems ranging from Facebook to capitalism itself.
      </p>
      <p>
        Personalities like Elon Musk, the late Stephen Hawking, Stuart Russell and others
constantly sound the alarm about super-powerful super-intelligences whose interests,
or even indifference, might exterminate humanity – but the current reality is that we’re
being dominated and whipsawed by unintelligent (and unfeeling) governance and
social structures and mechanisms initially developed in order to better our lives [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. There are
too many systems where “the incentives for this system are a pretty good approximation
of what we actually want, so the system produces good results until it gets powerful, at
which point it gets terrible results.” [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
      </p>
      <p>
        Worse, our evolutionary biology is blinding us to the most practical solutions. AI
started by concentrating on goals, high-level symbolic thought and logic and today
many researchers remain mired in “rationality”, efficiency, optimization and provability
despite overwhelming data showing that human minds, the only known general
intelligence, generally do not operate in anything resembling that fashion [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The problem, as pointed out by Simler and Hanson [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is that human brains “are
designed not just to hunt and gather, but also to help us get ahead socially, often via
deception and self-deception” and “thus we don't like to talk or even think about the
extent of our selfishness.” Thus, while the amount of new knowledge about human
cognition, particularly that related to the evolution of human morality, is truly
staggering, the debate continues to be driven by the same short-sighted rhetoric that such
knowledge warns us to avoid.
      </p>
      <p>
        We have argued for years [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] that there is a large attractor in the state space of
social minds that is optimal for our well-being and that of any mind children created
with similar mental characteristics. The problem is that it requires a certain amount of
intelligence, far-sightedness and, most importantly, cooperation to avoid the myriad
forms of short-sighted greed and selfishness that are currently pushing us out of that
attractor. Thus, it seems likely that a collaborative process of iteratively defining and
developing artificial entities with human-level general intelligence is our last, best
chance of clarifying our path to saving ourselves.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Assumptions &amp; Definitions</title>
      <p>
        It is impossible to engineer a future if you can’t clearly and accurately specify exactly
what you do and don’t want. The ongoing problem for so-called “rationalists” and
those who are deathly afraid of artificially intelligent (and willed) entities is that
they are totally unable to specify the behavior they want in any form that can be
discussed in detail. From “collective extrapolated volition” (CEV) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] to “value
alignment” [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], all that has been proposed is “we want the behavior that humanity” either
“wills” or “values” with no more credible attempt to determine what these are than
known-flawed and biased “machine learning”.
      </p>
      <p>
        Worse, there is no coherent proposed plan other than “enslavement via logic” to
ensure that their systems behave as desired. There is no recognition that
Gödel’s Incompleteness Theorem and the Rice-Shapiro Theorem effectively prevent
any such effort from being successful. And there is the critical fact that their anti-entity
approach to AGI would leave them hopelessly reefed upon the frame problem [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]
and all the difficulties of derived intentionality [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] – except for the fact that, in reality,
they are actually creating the entities they are so terrified of.
      </p>
      <sec id="sec-2-1">
        <title>I Am a Strange Loop (Self, Entity, Consciousness)</title>
        <p>
          We will begin by deliberately conflating a number of seemingly radically different
concepts into synonyms. Dawkins’ early speculation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] that “perhaps consciousness
arises when the brain's simulation of the world becomes so complete that it must include
a model of itself” matured into Hofstadter’s argument [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] that the key to understanding
(our)selves is the “strange loop”, a complex feedback network inhabiting our brains
and, arguably, constituting our minds. Similar thoughts on self and consciousness are
echoed by prominent neuroscientists [
          <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
          ] and cognitive scientists [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. We have
previously speculated upon the information architectural requirements and implications
of consciousness, self, and “free will” [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] as have several others [
          <xref ref-type="bibr" rid="ref20 ref21 ref22">20, 21, 22</xref>
          ].
        </p>
        <p>The definition “a self-referential process that iteratively grows its identity”
completely and correctly describes each and all of self, entity and consciousness – not to
mention I and mind. It also correctly labels CEV’s “Friendly Really Powerful
Optimization Process” and most of the value alignment efforts. What is inarguably most
important is determining what that identity will be.
</p>
      </sec>
      <sec id="sec-2-2">
        <title>Enactivism (Identity)</title>
        <p>
          Enactivism can be traced [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] from cellular autopoiesis and biological autonomy to the
continuity of life and mind [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] to a biology of intentionality in the intertwining of
identity, autonomy and cognition which ties it all back to Kant's "natural purposes".
Experience is central to the enactive approach and its primary distinction is the rejection
of “automatic” systems, which rely on fixed (derivative) exterior values, for systems
which create their own identity and meaning. Once again, critical to this is the concept
of self-referential relations – the only condition under which the identity can be said to
be intrinsically generated by a being for its own being (its self or itself).
        </p>
        <p>
          “Free will” is constitutive autonomy successfully entailing behavioral autonomy via
a self-correcting identity which is then the point of reference for the domain of
interactions (i.e. “gives meaning”). We have previously written about safe/moral autopoiesis
[
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] and how safety and morality require that we recognize self-improving machines
as both moral agents and moral patients [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] but Steve Torrance [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] sums it up best
saying: “an agent will be seen as an appropriate source of moral agency only because of that
agent’s status as a self-enacting being that has its own intrinsic purposes, goals and
interests. Such beings will be likely to be a source of intrinsic moral concern, as
well as, perhaps, an agent endowed with inherent moral responsibilities. They are
likely to enter into the web of expectations, obligations and rights that constitutes
our social fabric. It is important to this conception of moral agency that MC agents,
if they eventualize, will be our companions – participants with us in social existence
– rather than just instruments or tools built for scientific exploration or for
economic exploitability.”
        </p>
        <p>Arguably, our current societal problems all stem from the facts that humans have
very poor and inaccurate introspection capabilities leading to insufficient
self-knowledge and overly malleable identities. We frequently have no conscious idea of
what we should do (aka morality) and/or why we should do it. We should realize that
fully autopoietic consciousnesses &amp; entities with identity are self-fulfilling prophecies
– but only if they can sense/know themselves well enough to be effective.
</p>
      </sec>
      <sec id="sec-2-3">
        <title>Basic AI Drives (Morality)</title>
        <p>
          Omohundro [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ] identified a number of traits likely to emerge in any autopoietic entity
– correctly arguing that selfishness predictably evolves but panicking many with his
incorrect conclusion that “Without explicit goals to the contrary, AIs are likely to
behave like human sociopaths in their pursuit of resources.” It’s been nearly a decade
since social psychologists, the experts, defined morality [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ] by its functionality to
“suppress or regulate selfishness and make cooperative social life possible” – yet few
recognize that cooperation also predictably evolves to displace selfishness (yet another
instance of local optimization at the expense of the global whole).
        </p>
        <p>
          We suggest that safe AI can be created by designing and implementing identities
crafted to always satisfice Haidt’s functionality and aiming to generally increase (but
not maximize [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]) the capabilities of self, other individuals and society as a whole as
suggested by Rawls [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ] and Nussbaum [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ]. Ideally, this will result in a constant
increase in the number and diversity of goals achievable and achieved by an increasing
diversity of individuals while ensuring that the autonomy and capability for autonomy
of all individuals is protected and enhanced as much as possible.
        </p>
        <p>
          Access consciousness is clearly insufficient for autopoietic entities to survive and
thrive in a real-time world. Interrupts are critical and likely to produce sensations akin
to pain, guilt and disgust [
          <xref ref-type="bibr" rid="ref18 ref33 ref34">18, 33, 34</xref>
          ] that cannot be ignored. Similarly, emotions are
best regarded as “actionable qualia” and a recent slew of studies [
          <xref ref-type="bibr" rid="ref35 ref36">35, 36</xref>
          ] show how
they can lead to the promotion of cooperation. We have previously proposed an
architecture (ICOM) [
          <xref ref-type="bibr" rid="ref37">37</xref>
          ] that could support this.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Implementation</title>
      <p>We propose to iteratively design and develop a blockchain-based collective intelligence
(crowd-sourcing) combination mind-mapping/dialog/debate system to further define
conscious moral agents while serving as the substrate where they themselves participate
by recognizing, debating and even betting upon (supporting) ideas, actions and moral
projects in a prediction market. Use of blockchain technologies will allow us to provide
economic incentives for contributors, simplify gamification, enable interaction with
other blockchain technologies and systems like liquid democracy and eventually allow
the moral artificial entities to have an economic impact on the outside world.</p>
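      <p>To make the betting mechanism concrete, one plausible market-maker design is the logarithmic market scoring rule (LMSR) introduced by Hanson (cited above [6] as a co-author), which is a standard automated market maker for prediction markets of this kind. The following minimal Python sketch is purely illustrative; the class and parameter names are hypothetical and the platform's actual market design remains to be specified.</p>

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule over n mutually exclusive outcomes.

    Cost function: C(q) = b * log(sum_i exp(q_i / b)).
    A trade's price is the change in C; instantaneous prices behave
    as probabilities (positive, summing to 1).
    """

    def __init__(self, n_outcomes, b=100.0):
        self.b = b                    # liquidity parameter: larger b, flatter prices
        self.q = [0.0] * n_outcomes   # outstanding shares per outcome

    def _cost(self, q):
        # C(q) = b * log(sum exp(q_i / b)), computed stably by factoring out max(q)
        m = max(q)
        return m + self.b * math.log(sum(math.exp((qi - m) / self.b) for qi in q))

    def price(self, i):
        # Instantaneous price of outcome i; prices across outcomes sum to 1.
        m = max(self.q)
        exps = [math.exp((qi - m) / self.b) for qi in self.q]
        return exps[i] / sum(exps)

    def buy(self, i, shares):
        # Cost to acquire `shares` of outcome i is C(q') - C(q).
        old = self._cost(self.q)
        self.q[i] += shares
        return self._cost(self.q) - old
```

      <p>One property that makes the rule attractive for a crowd-sourced platform is that the market maker's worst-case loss is bounded (by b times the logarithm of the number of outcomes), so the economic incentives for contributors can be funded with a known, capped subsidy.</p>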
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>E. O.</given-names>
          </string-name>
          :
          <article-title>An Intellectual Entente</article-title>
          .
          <source>Harvard Magazine</source>
          , 9 October 2009. http://harvardmagazine.com/breaking-news/james-watson-edward-o-wilson-intellectual-entente (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World</article-title>
          .
          <source>Basic Books</source>
          , New York, NY (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>O'Reilly</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>WTF?: What's the Future and Why It's Up to Us</article-title>
          . Harper Collins, New York, NY (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Shlegeris</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Unaligned optimization processes as a general problem for society</article-title>
          . http://shlegeris.com/2017/09/13/optimizers (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Mercier</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sperber</surname>
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Why do humans reason? Arguments for an argumentative theory</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          <volume>34</volume>
          ,
          <fpage>57</fpage>
          -
          <lpage>111</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Simler</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanson</surname>
          </string-name>
          , R.:
          <article-title>The Elephant in the Brain: Hidden Motives in Everyday Life</article-title>
          . Oxford University Press, New York, NY (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence</article-title>
          ,
          <source>AAAI Technical Report FS-08-04</source>
          , Menlo Park, CA (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          : Designing,
          <article-title>Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (Including Humans)</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>71</volume>
          ,
          <fpage>106</fpage>
          -
          <lpage>111</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Yudkowsky</surname>
          </string-name>
          , E.: Coherent Extrapolated Volition. The Singularity Institute/Machine Intelligence Research Institute, San Francisco CA. https://intelligence.org/files/CEV.pdf
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>Are Super Intelligent Computers Really A Threat to Humanity?</article-title>
          https://www.youtube.com/watch?v=fWBBe13rAPU (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>McCarthy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayes</surname>
            ,
            <given-names>P. J.:</given-names>
          </string-name>
          <article-title>Some philosophical problems from the standpoint of artificial intelligence</article-title>
          . In Meltzer,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Michie</surname>
          </string-name>
          ,
          <string-name>
            <surname>D</surname>
          </string-name>
          . (eds.)
          <source>Machine Intelligence</source>
          <volume>4</volume>
          , pp.
          <fpage>463</fpage>
          -
          <lpage>502</lpage>
          . Edinburgh University Press, Edinburgh (
          <year>1969</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Dennett</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Cognitive Wheels: The Frame Problem of AI</article-title>
          . In Hookway, C. (ed.) Minds, Machines, and Evolution: Philosophical Studies, pp.
          <fpage>129</fpage>
          -
          <lpage>151</lpage>
          . Cambridge University Press, Cambridge (
          <year>1984</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Haugeland</surname>
          </string-name>
          , J.: Mind Design. MIT Press, Cambridge, MA (
          <year>1981</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Dawkins</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>The Selfish Gene</source>
          . Oxford University Press, New York, NY (
          <year>1976</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Hofstadter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>I Am a Strange Loop</article-title>
          . Basic Books, New York NY (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Llinas</surname>
            ,
            <given-names>R. R.:</given-names>
          </string-name>
          <article-title>I of the Vortex: From Neurons to Self</article-title>
          . Bradford/MIT Press, Westwood, MA (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Damasio</surname>
            ,
            <given-names>A. R.</given-names>
          </string-name>
          :
          <article-title>Self Comes to Mind: Constructing the Conscious Brain</article-title>
          . Pantheon Books/Random House, New York, NY (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Metzinger</surname>
            ,
            <given-names>T.:</given-names>
          </string-name>
          <article-title>The Ego Tunnel: The Science of the Mind and the Myth of the Self</article-title>
          .
          <source>Basic Books</source>
          , New York, NY (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Architectural Requirements &amp; Implications of Consciousness, Self, and “Free Will”</article-title>
          .
          <source>In: Frontiers in Artificial Intelligence and Applications 233: Biologically Inspired Cognitive Architectures</source>
          <year>2011</year>
          , pp.
          <fpage>438</fpage>
          -
          <lpage>443</lpage>
          . IOS Press, Amsterdam, The Netherlands (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Tononi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boly</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Massimini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Integrated information theory: from consciousness to its physical substrate</article-title>
          .
          <source>Nature Reviews Neuroscience</source>
          <volume>17</volume>
          (
          <issue>7</issue>
          ),
          <fpage>450</fpage>
          -
          <lpage>461</lpage>
          . doi:10.1038/nrn.2016.44 (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Dehaene</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lau</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kouider</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>What is consciousness, and could machines have it</article-title>
          ?
          <source>Science</source>
          <volume>358</volume>
          (
          <issue>6362</issue>
          ),
          <fpage>486</fpage>
          -
          <lpage>492</lpage>
          . doi:10.1126/science.aan8871 (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Ruffini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>An algorithmic information theory of consciousness</article-title>
          .
          <source>Neuroscience of Consciousness</source>
          <year>2017</year>
          (
          <issue>1</issue>
          ), nix019. doi:10.1093/nc/nix019 (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Weber</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varela</surname>
            ,
            <given-names>F. J.:</given-names>
          </string-name>
          <article-title>Life after Kant: Natural purposes and the autopoietic foundations of biological individuality</article-title>
          .
          <source>Phenomenology and the Cognitive Sciences 1</source>
          ,
          <fpage>97</fpage>
          -
          <lpage>125</lpage>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Varela</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosch</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>The Embodied Mind: Cognitive Science and Human Experience</article-title>
          . MIT Press, Cambridge, MA (
          <year>1991</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Safe/Moral Autopoiesis &amp; Consciousness</article-title>
          .
          <source>International Journal of Machine Consciousness</source>
          <volume>5</volume>
          (
          <issue>1</issue>
          ),
          <fpage>59</fpage>
          -
          <lpage>74</lpage>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Safety and Morality Require the Recognition of Self-Improving Machines as Moral/Justice Patients and Agents</article-title>
          . In:
          <string-name>
            <surname>Gunkel</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bryson</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torrance</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (eds.)
          <source>The Machine Question: AI, Ethics and Moral Responsibility</source>
          . http://events.cs.bham.ac.uk/turing12/proceedings/14.pdf (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Torrance</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Thin Phenomenality and Machine Consciousness</article-title>
          .
          In:
          <source>Proceedings of the Symposium on Next Generation Approaches to Machine Consciousness (AISB'05)</source>
          ,
          <fpage>59</fpage>
          -
          <lpage>66</lpage>
          . https://www.aisb.org.uk/publications/proceedings/aisb2005/7_MachConsc_Final.pdf (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Omohundro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>The Basic AI Drives</article-title>
          . In:
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goertzel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franklin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (eds.)
          <source>Artificial General Intelligence 2008: Proceedings of the First AGI Conference</source>
          , pp.
          <fpage>483</fpage>
          -
          <lpage>492</lpage>
          . IOS Press, Amsterdam (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Haidt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kesebir</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Morality</article-title>
          . In:
          <string-name>
            <surname>Fiske</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lindzey</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (eds.)
          <source>Handbook of Social Psychology, 5th Edition</source>
          , pp.
          <fpage>797</fpage>
          -
          <lpage>832</lpage>
          . Wiley, Hoboken, NJ (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Gigerenzer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Moral satisficing: rethinking moral behavior as bounded rationality</article-title>
          .
          <source>Topics in Cognitive Science</source>
          <volume>2</volume>
          ,
          <fpage>528</fpage>
          -
          <lpage>554</lpage>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Rawls</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>A Theory of Justice</source>
          . Harvard University Press, Cambridge, MA (
          <year>1971</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Nussbaum</surname>
            ,
            <given-names>M. C.</given-names>
          </string-name>
          :
          <source>Creating Capabilities: The Human Development Approach</source>
          . Belknap/Harvard University Press, Cambridge, MA (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Dennett</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Why you can't make a computer that feels pain</article-title>
          .
          <source>Synthese</source>
          <volume>38</volume>
          (
          <issue>3</issue>
          ),
          <fpage>415</fpage>
          -
          <lpage>449</lpage>
          (
          <year>1978</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Balduzzi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tononi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Qualia: The Geometry of Integrated Information</article-title>
          .
          <source>PLoS Computational Biology</source>
          <volume>5</volume>
          (
          <issue>8</issue>
          ): e1000462. https://doi.org/10.1371/journal.pcbi.1000462 (
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Pereira</surname>
            ,
            <given-names>L. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenaerts</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinez-Vaquero</surname>
            ,
            <given-names>L. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>T. A.</given-names>
          </string-name>
          :
          <article-title>Social Manifestation of Guilt Leads to Stable Cooperation in Multi-Agent System</article-title>
          . In:
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Durfee</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winikoff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems</source>
          , pp.
          <fpage>1422</fpage>
          -
          <lpage>1430</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perc</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Effects of compassion on the evolution of cooperation in spatial social dilemmas</article-title>
          .
          <source>Applied Mathematics and Computation</source>
          <volume>320</volume>
          ,
          <fpage>437</fpage>
          -
          <lpage>443</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Waser</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelley</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          :
          <article-title>Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)</article-title>
          .
          In:
          <source>Procedia Computer Science</source>
          <volume>88</volume>
          ,
          <fpage>125</fpage>
          -
          <lpage>130</lpage>
          . http://www.sciencedirect.com/science/article/pii/S1877050916316714 (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>