<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Artificial Phenomenology for Human-Level Artificial Intelligence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lorijn Zaadnoordijk</string-name>
          <email>L.Zaadnoordijk@donders.ru.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tarek R. Besold</string-name>
          <email>Tarek.Besold@telefonica.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alpha Health AI Lab, Telefonica Innovation Alpha</institution>
          ,
          <addr-line>Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Radboud University, Donders Institute for Brain</institution>
          ,
          <addr-line>Cognition, and Behaviour, Nijmegen</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>For human cognizers, phenomenal experiences take up a central role in the daily interaction with the world. In this paper, we argue in favor of shifting phenomenal experiences into the focus of human-level AI (HLAI) research and development. Instead of aiming to make artificial systems feel in the same way humans do, we focus on the possibilities of engineering capacities that are functionally equivalent to phenomenal experiences. These capacities can provide a different quality of input, enabling a cognitive system to self-evaluate its state in the world more efficiently and with more generality than current methods allow. We ground our general argument using the example of the sense of agency. At the same time, we reflect on the broader possibilities and benefits of artificial counterparts to human phenomenal experiences and provide suggestions regarding the implementation of functionally equivalent mechanisms.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-Level Artificial Intelligence</kwd>
        <kwd>Phenomenology</kwd>
        <kwd>Sense of Agency</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Phenomenal experiences are a defining element of many interactions with our surrounding world. While to us the presence of phenomenal qualities in our everyday cognition does not always deserve active attention, the disappearance of the experiential dimension would have far-reaching ramifications, for instance, for learning, social interaction, and ethical behavior. Phenomenology has, therefore, been a popular topic of theoretical and empirical investigation across different disciplines [
        <xref ref-type="bibr" rid="ref19 ref3 ref7">3, 7, 19</xref>
        ] but, bar a few laudable exceptions such as [
        <xref ref-type="bibr" rid="ref17 ref4">17, 4</xref>
        ], has been widely ignored in AI. We argue in favor of shifting phenomenology also into the focus of human-level AI (HLAI) research and development. Phenomenal experiences provide a different quality of input to cognition as compared to non-phenomenal perception (i.e., abstract registration of stimuli from the environment). Among others, phenomenology can facilitate the self-evaluation of an artificial cognitive system's state in the world, supporting learning about and interaction with the physical world and other agents.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Phenomenology in Human-Level Artificial Intelligence</title>
      <p>HLAI aims at developing machines that can meaningfully be considered to be on par with humans in that they are similarly able to reason, to pursue and achieve goals, to perceive and respond to different types of stimuli from their environment, to process information, or to engage in scientific and creative activities. Our view on HLAI is functionalist: Any technologically realizable means of (re)creating human-level intelligence in an artificial system are considered valid.</p>
      <p>One of the core challenges any kind of cognitive system needs to solve is how to best interact with the world (i.e., the environment in which it is situated) and the associated (self-)evaluation of its state in the world. For humans, at least two possible ways of solving these interconnected problems come to mind: one route draws upon high-level reasoning capacities, and another one relies on phenomenal experiences. The former route likely draws on a process requiring all of perception, representation, reasoning, and evaluation. Phenomenal experiences, on the other hand, often take over the function of providing immediate (and in comparison much more unmediated) access and evaluation, allowing to go from perception to evaluation via a route not involving high-level reasoning:
1. Perceive sensory input(s) {Y}.
2. Represent the perceived inputs: R({Y}).
3. Map from R({Y}), together with system-internal information S, to an evaluation of the experiential category, quality, and valence in terms of, e.g., pain or pleasure, weak or strong, attractive or aversive: E(R({Y}), S) ↦ {{pain, pleasure, ...} × {weak, strong, ...} × {attractive, aversive, ...}, ∅}. (The codomain of the mapping includes ∅ to account for cases where perception does not yield a phenomenal experience, as is the case, e.g., in subliminal priming.)</p>
      <p>Comparing both approaches, three advantages of the second route involving phenomenal experiences can be explicated: (i) increased efficiency and tractability, (ii) reduced requirements regarding additional information, and (iii) increased generality. Mapping directly from perceptual representations to evaluations removes the reasoning process from representation to category label, which otherwise is likely to involve the exploration of a significantly-sized state space or the execution of a lengthy chain of individual reasoning steps. Moreover, the successful performance of the high-level reasoning mechanism in many cases requires further knowledge, which might not be available to the cognizer at the relevant point in time. Phenomenal experiences, by contrast, are assumed to be mostly independent from a person's knowledge (although they might be influenced by prior experiences and familiarity with a percept). Finally, in 'standard' approaches the interface between system and environment is commonly conceived of in terms of evaluative functions taking two sets of input: a set of current system and world states, often together with representations of potential actions of the system, and a set of goals (i.e., desired system or world states). The function output is an evaluation of the system and world states relative to the system's goals. Generating these functions is far from trivial and hitherto lacks a general answer or methodology. In most cases this hinders generalizability, as evaluation functions have to be grounded in a certain domain or action space to be definable in a comprehensive way. Moreover, they rely on the presence (or absence) of certain defined domain elements or action possibilities, which imposes further limitations regarding the generality of application domains.</p>
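      <p>As a purely illustrative sketch (all sensor names, thresholds, and category boundaries below are invented assumptions, not part of the proposal), the three-step route from perception to evaluation could be prototyped as a direct mapping:</p>

```python
from typing import Optional, Tuple

def represent(sensory_inputs: dict) -> dict:
    """R({Y}): a minimal representation step, here just clamping values to [0, 1]."""
    return {k: min(max(v, 0.0), 1.0) for k, v in sensory_inputs.items()}

def evaluate(representation: dict, internal_state: dict) -> Optional[Tuple[str, str, str]]:
    """E(R({Y}), S): map directly to (category, quality, valence).

    Returns None (the empty set in the text) when the percept does not give
    rise to an experience, e.g. below-threshold (subliminal) input.
    """
    intensity = representation.get("nociception", 0.0)
    if intensity < internal_state.get("awareness_threshold", 0.1):
        return None  # no phenomenal experience
    category = "pain" if intensity > 0.5 else "pleasure"
    quality = "strong" if intensity > 0.8 else "weak"
    valence = "aversive" if category == "pain" else "attractive"
    return (category, quality, valence)

# Perceive -> represent -> evaluate, with no high-level reasoning step.
percept = represent({"nociception": 0.9})
print(evaluate(percept, {"awareness_threshold": 0.1}))  # ('pain', 'strong', 'aversive')
```

      <p>Note that the mapping consumes only the percept representation and system-internal information; no goal states or domain knowledge enter the evaluation.</p>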
    </sec>
    <sec id="sec-3">
      <title>Functional Equivalence in Artificial Phenomenology</title>
      <p>
        We argue for engineering artificial phenomenology (i.e., a functional equivalent of phenomenal experiences) rather than human-like phenomenal experiences. Even if we knew how to reproduce human phenomenology in artificial systems, assuming similarity of the phenomenal experience a priori is unwarranted due to a lack of kinship between AI/robotic systems and humans: it might well be the case that the precise phenomenal qualities are an epiphenomenon resulting from the particular forms of representation and/or processing in humans [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Still, we believe that identity of phenomenal experiences is not required, but that a functional equivalent on the side of the machine suffices for the purposes of creating HLAI. The challenge, thus, becomes one of engineering a capacity that fulfills the same functions as phenomenal experiences do within cognitive processes, while remaining agnostic regarding the actual qualitative dimension.
      </p>
      <p>
        In considering ways of implementing artificial phenomenology, we take a representationalist approach [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], as is often applied both to cognitive capacities and to phenomenal experiences. Representationalist accounts of phenomenology posit that experiential states are characterized by their representational content [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Representationalism offers a natural interface to approaches in HLAI, which build upon the computational cognition maxim (i.e., assuming that computation is in principle capable of giving rise to mental capacities) and, therefore, among others, introduce representations as an important part of computation [
        <xref ref-type="bibr" rid="ref14 ref16">14, 16</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Implementing Phenomenology: The Sense of Agency</title>
      <p>
        In typically-developed human adults, the "sense of agency" (SoA; i.e., the feeling of causing one's actions and their consequences [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) contributes to important aspects of cognition, such as learning through intervention [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], social and moral interaction [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and self-other distinction [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. At least two different phenomena are considered under the banner of the "sense of agency": the "judgment of agency" and the "feeling of agency" [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. In the case of the judgment, a reasoning step gives rise to the assumed status as agent in the world: considering oneself as agent provides the best explanation for the observations from the environment, and thus agency is assumed in a post-hoc fashion. This results in a belief state ascribing agency to the reasoner. In the case of the feeling of agency, agency is neither directly perceived nor concluded as the outcome of an active reasoning process, but is experienced as a phenomenal quality based on a representation of what the world is like. In contrast to the judgment, the feeling of agency is, thus, more akin to a perceptual state than to a belief state.
      </p>
      <p>
        From an HLAI perspective, both concepts pose different challenges when considering an implementation. The judgment of agency requires a reasoning process determining oneself as the most likely cause for the observed changes in the state of the environment. Implementing this reasoning in a cognitive system returns to several facets of the Frame Problem [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. If the judgment of agency is demanded to be infallible, the system must be able to rule out all possible alternative causes (observed and unobserved) for the respective change in the world. Alas, already deciding which aspects of the perceptual input are relevant for performing the judgment of agency carries the danger of computational intractability, as does the subsequent reasoning process. Luckily, infallibility imposes an unreasonable standard, not least because humans, too, can err when asked to judge their agency in settings where an immediate observation is not possible [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. In practical terms, implementing the judgment of agency becomes equivalent to a form of inference to the best explanation and, thus, to abductive reasoning [
        <xref ref-type="bibr" rid="ref13 ref8">13, 8</xref>
        ]: the system must decide if a change in its environment is most likely due to its own actions, making it an agent within the corresponding situation (or not).
      </p>
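      <p>In its simplest form, such an inference to the best explanation can be sketched as selecting the most probable cause among a hypothetical set of candidates (the cause labels and probabilities below are invented for illustration; a full abductive reasoner would, of course, have to generate and score these candidates itself):</p>

```python
# Toy sketch of the judgment of agency as inference to the best explanation:
# among candidate causes for an observed change, select the most probable one
# and judge oneself an agent only if that cause is one's own action.

def judge_agency(observed_change: str, candidate_causes: dict) -> bool:
    """Return True iff 'own_action' best explains the observed change.

    candidate_causes maps cause labels to P(observed_change | cause),
    standing in for the outcome of a full abductive reasoning process.
    """
    best_cause = max(candidate_causes, key=candidate_causes.get)
    return best_cause == "own_action"

causes = {
    "own_action": 0.7,    # the system just pressed the switch
    "other_agent": 0.2,   # someone else might have done it
    "random_event": 0.1,  # spontaneous change in the environment
}
print(judge_agency("light_turned_on", causes))  # True
```

      <p>The intractability discussed above hides in constructing the candidate set and its scores, not in the final comparison.</p>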
      <p>
        The feeling of agency as a perceptual state is often thought to arise from a comparison between the predicted state of the world following one's action on the one hand, and the observed state of the world on the other hand [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Motivated by its contribution to other cognitive capacities, several groups of researchers have engaged with the question of how to equip artificial systems with a SoA or related capacities [
        <xref ref-type="bibr" rid="ref12 ref15">15, 12</xref>
        ]. A commonality of these and similar projects is their primary focus on contingency detection. However, while contingency detection plays a major role in human SoA, by itself it is not sufficient [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Instead, it likely is the case that the detection of a sensorimotor contingency serves as a cue for the causal inference that the action was caused by oneself, the output of which is a mental causal representation that in part characterizes the SoA. Returning to the context of artificial cognitive systems, obtaining the corresponding evaluations also necessitates one further step beyond the detection of the contingency between predicted and observed world state. Still, while inferential in nature, this step does not have to involve forms of complex high-level reasoning as would be the case for the judgment of agency. It could be carried out following the general pattern for phenomenal experiences laid out above: provided with the perceived world state as sensory input, and the predicted world state within the system-internal information at the current point in time, the detection of an equality relation between both causes a mapping to 'sense of agency' as experiential category. This, of course, unavoidably triggers the question of the genesis of the required mapping function. Different approaches are imaginable, including a priori hardcoding by the system architect, learning from observed statistical regularities, or an explicit 'teaching' effort by the designer or the user.
      </p>
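      <p>A minimal sketch of this two-step route, under the illustrative assumptions that world states are numeric vectors and that the mapping function has been hardcoded by the system architect (the tolerance value is likewise an invented parameter):</p>

```python
# Contingency detection between predicted and observed world state serves as
# a cue, and a direct, architect-defined mapping turns that cue into the
# experiential category 'sense of agency'.

def detect_contingency(predicted: tuple, observed: tuple, tolerance: float = 0.05) -> bool:
    """Check the (near-)equality relation between predicted and observed state."""
    return all(abs(p - o) <= tolerance for p, o in zip(predicted, observed))

def experiential_category(predicted: tuple, observed: tuple) -> str:
    """Map the detected contingency directly to an experiential category."""
    if detect_contingency(predicted, observed):
        return "sense_of_agency"
    return "no_agency"

# Predicted effect of the system's motor command vs. what its sensors report.
print(experiential_category((0.5, 0.2), (0.52, 0.19)))  # sense_of_agency
print(experiential_category((0.5, 0.2), (0.9, 0.1)))    # no_agency
```

      <p>Replacing the hardcoded tolerance and mapping with learned values would correspond to the statistical-regularity or teaching routes mentioned above.</p>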
      <p>Generally, the challenge in engineering a functional equivalent of the human feeling of agency resides in leaving out the actual qualitative dimension of human phenomenal experiences (cf. the corresponding discussion in Section 3) without also stripping away the benefits of having phenomenal experiences. A possible solution is a direct mapping of certain sensory ranges, combined with a snapshot of the internal system state, onto immediate "phenomenal values". Given the system state at one particular point in time, certain sensory inputs ought to give rise to artificial phenomenology. Artificial counterparts of phenomenal experiences and their rich qualitative properties can be defined as immediate mappings from the output ranges of the available sensors of the system, combined with specific information regarding the internal state of the system. At this point, the important property is the finite and known range of both the sensors and the internal representational mechanisms of the system. By cutting out the reasoning step, the phenomenally-inspired approach neither requires an exhaustive enumeration and interpretation (and, thus, in practice a restriction) of the space of possible percepts and their representations, nor does it involve an oftentimes computationally costly evaluation of the current system and world state relative to any goal state(s). Reducing the relevant information to the percept representations together with system-internal properties, and applying a direct mapping to qualitative categories with associated evaluation values, therefore increases the tractability of the computational process and the generality of the approach. The output values can then serve as direct functional counterparts of human phenomenal experiences, for example triggering evasive reactions if "pain" is encountered or providing positive reward, and consequently motivation to continue an action, if "pleasure" arises.</p>
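      <p>Such a direct mapping could be sketched as follows; the sensor name and range, the battery-dependent threshold, and the reaction policy are all invented assumptions serving only to illustrate the finite-known-range property:</p>

```python
# Map a finite, known sensor range plus a snapshot of internal state onto
# immediate "phenomenal values", which then trigger behaviour directly
# (evasion on "pain", continuation on "pleasure").

SENSOR_RANGE = (0, 1023)  # finite, known output range of the (hypothetical) sensor

def phenomenal_value(pressure_reading: int, battery_level: float) -> str:
    """Directly map a sensor reading plus internal state to a phenomenal value."""
    lo, hi = SENSOR_RANGE
    normalised = (pressure_reading - lo) / (hi - lo)
    # Internal state modulates the mapping: a low battery lowers the
    # threshold at which input is categorised as "pain".
    pain_threshold = 0.8 if battery_level > 0.5 else 0.6
    if normalised >= pain_threshold:
        return "pain"
    if normalised <= 0.2:
        return "pleasure"
    return "neutral"

def react(value: str) -> str:
    """Use the phenomenal value as a direct behavioural trigger."""
    return {"pain": "evade", "pleasure": "continue", "neutral": "idle"}[value]

print(react(phenomenal_value(900, battery_level=0.9)))  # evade
```

      <p>Because both the sensor range and the internal-state snapshot are finite and known, the mapping can be specified exhaustively without enumerating the space of possible percepts or evaluating against goal states.</p>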
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>We have argued that imbuing HLAI systems with capacities paralleling the role of phenomenal experiences in humans facilitates learning and acting in the world in a more general and tractable way than currently possible. Returning to our comparison in Section 2 between a process involving perception, representation, reasoning, and evaluation versus the shorter perception-representation-evaluation cycle of artificial phenomenology, the latter promises to enable the system to self-evaluate its state in the world without the use of knowledge-rich, domain-specific evaluation functions or intractable reasoning processes. This could in turn facilitate learning and acting in the world in terms of selecting actions based on their predicted outcomes and assessing actual action outcomes.</p>
      <p>In terms of applications, beyond the already mentioned obvious advantages regarding the progress towards creating HLAI as a research endeavor, artificial phenomenology promises to unlock a new qualitative dimension in human-computer interaction (HCI) settings. Artificial phenomenology would greatly contribute to system behaviour more closely resembling human agents, as well as to complex user-modelling capacities providing more immediate (and likely generally better-informed) accounts of a user's cognitive state(s) as a basis of interaction and collaboration. As such, several aspects motivate the need for artificial phenomenology and, therefore, the need for research into its possibilities. In this paper, we have outlined a starting position for this enterprise.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Blakemore</surname>
            ,
            <given-names>S.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolpert</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frith</surname>
          </string-name>
          , C.D.:
          <article-title>Central cancellation of self-produced tickle sensation</article-title>
          .
          <source>Nature Neuroscience</source>
          <volume>1</volume>
          (
          <issue>7</issue>
          ),
          <volume>635</volume>
          (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Caspar</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cleeremans</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haggard</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Only giving orders? An experimental study of the sense of agency when giving or receiving commands</article-title>
          .
          <source>PloS ONE</source>
          <volume>13</volume>
          (
          <issue>9</issue>
          ),
          <year>e0204027</year>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chalmers</surname>
            ,
            <given-names>D.J.:</given-names>
          </string-name>
          <article-title>The representational character of experience</article-title>
          . In: The Future for Philosophy, pp.
          <fpage>153</fpage>
          –
          <lpage>181</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Chella</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manzotti</surname>
          </string-name>
          , R.:
          <article-title>Artificial consciousness</article-title>
          . In: Cutsuridis, V., Hussain, A., Taylor, J.G. (eds.)
          <source>Perception-Action Cycle: Models, Architectures, and Hardware</source>
          , pp.
          <fpage>637</fpage>
          –
          <lpage>671</lpage>
          . Springer New York, New York, NY (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cummins</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>Meaning and Mental Representation</source>
          . MIT Press (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Dehaene</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lau</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kouider</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Response to commentaries on what is consciousness, and could machines have it?</article-title>
          .
          <source>Science</source>
          <volume>359</volume>
          (
          <issue>6374</issue>
          ),
          <volume>400</volume>
          –
          <fpage>402</fpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dehaene</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Naccache</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework</article-title>
          .
          <source>Cognition</source>
          <volume>79</volume>
          (
          <issue>1-2</issue>
          ),
          <volume>1</volume>
          –
          <fpage>37</fpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Denecker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kakas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Abduction in logic programming</article-title>
          . In: Kakas,
          <string-name>
            <given-names>A.C.</given-names>
            ,
            <surname>Sadri</surname>
          </string-name>
          ,
          <string-name>
            <surname>F</surname>
          </string-name>
          . (eds.)
          <article-title>Computational Logic: Logic Programming and Beyond: Essays in Honour of Robert A</article-title>
          .
          <string-name>
            <surname>Kowalski Part</surname>
            <given-names>I</given-names>
          </string-name>
          , pp.
          <volume>402</volume>
          –
          <fpage>436</fpage>
          . Springer, Berlin/Heidelberg (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dennett</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>The frame problem of AI</article-title>
          .
          <source>Philosophy of Psychology: Contemporary Readings</source>
          <volume>433</volume>
          ,
          <issue>67</issue>
          –
          <fpage>83</fpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Haggard</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chambon</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Sense of agency</article-title>
          .
          <source>Current Biology</source>
          <volume>22</volume>
          (
          <issue>10</issue>
          ),
          <fpage>R390</fpage>
          –
          <lpage>R392</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lagnado</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sloman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Learning causal structure</article-title>
          .
          <source>In: Proceedings of the Annual Meeting of the Cognitive Science Society</source>
          . vol.
          <volume>24</volume>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lara</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hafner</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ritter</surname>
            ,
            <given-names>C.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schillaci</surname>
          </string-name>
          , G.:
          <article-title>Body representations for robot ego-noise modelling and prediction. Towards the development of a sense of agency in artificial agents</article-title>
          .
          <source>In: Proceedings of the Artificial Life Conference 2016</source>
          . pp.
          <fpage>390</fpage>
          –
          <lpage>397</lpage>
          . MIT Press (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Mooney</surname>
          </string-name>
          , R.J.:
          <article-title>Integrating abduction and induction in machine learning</article-title>
          . In: Flach,
          <string-name>
            <given-names>P.A.</given-names>
            ,
            <surname>Kakas</surname>
          </string-name>
          , A.C. (eds.)
          <source>Abduction and Induction: Essays on their Relation and Integration</source>
          , pp.
          <volume>181</volume>
          –
          <fpage>191</fpage>
          . Springer Netherlands, Dordrecht (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>O'Brien</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Opie</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>The role of representation in computation</article-title>
          .
          <source>Cognitive Processing</source>
          <volume>10</volume>
          (
          <issue>1</issue>
          ),
          <volume>53</volume>
          –
          <fpage>62</fpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Pitti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mori</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kouzuma</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuniyoshi</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Contingency perception and agency measure in visuo-motor spiking neural networks</article-title>
          .
          <source>IEEE Transactions on Autonomous Mental Development</source>
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <volume>86</volume>
          –
          <fpage>97</fpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Rescorla</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Computational modeling of the mind: what role for mental representation</article-title>
          ?
          <source>Wiley Interdisciplinary Reviews: Cognitive Science</source>
          <volume>6</volume>
          (
          <issue>1</issue>
          ),
          <volume>65</volume>
          –
          <fpage>73</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Sloman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chrisley</surname>
          </string-name>
          , R.:
          <article-title>Virtual machines and consciousness</article-title>
          .
          <source>Journal of Consciousness Studies</source>
          <volume>10</volume>
          (
          <issue>4-5</issue>
          ),
          <volume>133</volume>
          –
          <fpage>172</fpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Synofzik</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vosgerau</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Newen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Beyond the comparator model: A multifactorial two-step account of agency</article-title>
          .
          <source>Consciousness and Cognition</source>
          <volume>17</volume>
          (
          <issue>1</issue>
          ),
          <volume>219</volume>
          –
          <fpage>239</fpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Tsakiris</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schutz-Bosbach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gallagher</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>On agency and body-ownership: Phenomenological and neurocognitive reflections</article-title>
          .
          <source>Consciousness and Cognition</source>
          <volume>16</volume>
          (
          <issue>3</issue>
          ),
          <volume>645</volume>
          –
          <fpage>660</fpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Wegner</surname>
            ,
            <given-names>D.M.:</given-names>
          </string-name>
          <article-title>The Illusion of Conscious Will</article-title>
          . Bradford Books/MIT Press (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Zaadnoordijk</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Besold</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hunnius</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A match does not make a sense: On the sufficiency of the comparator model for explaining the sense of agency (submitted)</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>