<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Operationalizing Consciousness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Don Perlis</string-name>
          <email>perlis@cs.umd.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Justin Brody</string-name>
          <email>jdbrody@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Goucher College</institution>
          ,
          <addr-line>Towson MD 21204</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Maryland</institution>
          ,
          <addr-line>College Park MD 20742</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>David Chalmers (among others) is fond of saying that consciousness has no function; it can be there or not, and it makes no difference to behavior. In that sense, it supposedly is not like a pumping heart that helps keep one alive. Here we argue to the contrary: that consciousness has a critical function, and one that AI will be forced to deal with as a practical matter, as we probe more deeply into realtime commonsense reasoning. We will draw on a broad range of work, philosophical and otherwise, in making our argument.</p>
      </abstract>
      <kwd-group>
        <kwd>Consciousness</kwd>
        <kwd>Intentionality</kwd>
        <kwd>Self</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The topic of consciousness tends to lead to two kinds of claims: positive claims
about what it is, and negative claims about what it isn't. The latter include
Chalmers' claim that consciousness has no function, no physical consequences;
it is an epiphenomenon [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]; and Searle's claim that subjective experience (in
the form of intentionality) cannot be achieved simply in virtue of a system's
executing a (formal) program [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Part of our purpose here is to examine, and
disagree with, both of these negative claims.
      </p>
      <p>
        Positive claims attempt to characterize the nature of consciousness. These
include Brentano's notion of intentionality [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]: that the mental is characterized
by its directedness toward objects of thought, so that to be conscious (i.e., in a mental
state) is to have thoughts (or feelings or attitudes) about something. Another
positive claim is due to Nagel: a conscious being is an entity that it is like
something to be [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. This latter notion essentially characterizes consciousness
as having a qualitative subjective experience, something happening to and in
oneself.
      </p>
      <p>So Nagel and Searle address similar notions of consciousness but with
different aims: Nagel says what it is; Searle says that it cannot occur via formal
computational processes alone (and in part bases his argument on a Nagel-like
experiential character). And Brentano provides a functional role for mind (the
relation of aboutness between a thought and its meaning), whereas Chalmers
denies any such functional role.</p>
      <p>
        We too seek to characterize consciousness positively, in terms of particular
processes. Nagel's characterization is less useful here than Brentano's. But the
two can be regarded as taking similar positions: being conscious amounts to
having (internal, qualitative, subjective) thoughts and feelings, and thoughts
and feelings are necessarily about something. Further, it is a common view in
Buddhism that consciousness is "that which is aware of objects", seemingly
combining the two (Nagel's awareness and Brentano's aboutness) [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. So it
is tempting to consider whether a suitable subjective form of aboutness is an
essential ingredient of consciousness. The aboutness relation, as argued in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ],
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], not only connects symbols "in the head" to
(usually) external meanings but also is a key role of the self: the self is what
"intends" to refer to Joe Smith in employing the symbol "Joe". This will take
us on a somewhat meandering tour of various issues, from zombies to language
to robots to knowledge. Far from being a strict epiphenomenon, consciousness
seems to tie into a wide range of behaviors.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Zombies and Reflexivity</title>
      <p>
        A philosophical zombie (or just "zombie" if there is no confusion) is a
molecule-for-molecule identical copy of a normal human, subject to exactly the same
physical laws and thus producing indistinguishable physical behaviors; but (by
definition) a zombie has no subjective experience: it is not like anything to be a
zombie [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The question then is: are zombies impossible? This is equivalent to
the question: is consciousness a physical process (i.e., something that performs
a physical function)? Chalmers has argued that zombies are possible; we shall
not repeat his complex argument here (it is essentially based on the idea that
we seem to be able to imagine zombies), but rather present a counter-argument.
Suppose you have a zombie twin and each of you suddenly says "wow, I've got
a painful toothache and it's getting worse." In your case, this is because you in
fact feel that toothache; but the zombie cannot feel pain (or anything else). Yet
identical brain processes are occurring in both brains (by definition of zombie).
That is, whatever physical process led to your utterance also led to the zombie's.
Thus your utterance cannot have been based on (caused by) your feeling the pain
after all. This is a contradiction, so the possibility of such a zombie is ruled out.
The hidden premise here is that when we make a decision to honestly report on
a pain (or other subjective experience), that decision is in part dependent on there really
being such an experience. To deny our argument is to reject this highly intuitive
premise.
      </p>
      <p>But maybe subjectivity is an after-the-fact event: for instance, one makes
an utterance and then comes to feel whatever the utterance was about. In that
case, the zombie-twin might simply lack whatever (non-physical) competence
is involved in coming to have a feeling. This of course still flies in the face of
our intuitive premise and so does not seem much of an argument. But it brings
us to an important distinction, between reflexive and reflective notions of self.
Looking back on an event and then forming a conclusion about it is a process of
reflection. Thus, I can reflect on the toothache I had yesterday. But my toothache
today is even worse. I am not reflecting on this but rather am reflexively knowing
the immediate pain itself. The two are very easy to confuse. One knows one's pain
reflexively, simply in virtue of there being pain (in oneself); it isn't first there and
later (reflectively) known. More generally, subjective experience is experienced
(known) in and of itself, directly and immediately, as part of being an experience;
there is no additional process that turns it into knowledge. The experience is the
experiencing of it: to have an experience is to know that experience.</p>
      <p>
        But this sounds very strange. How can something be its own experience?³
And yet this is close to the mystery that seems to lie at the heart of consciousness
studies. There is a "sense of agency" (discussed more below) that is part and
parcel of being an agent. When performing a voluntary ("conscious") action, one
knows one is so doing; that knowledge does not arrive later on⁴. But an action
that is known simply in virtue of its being performed will be complex enough
that it already constitutes a kind of (self-knowing) agent. While space will not
allow a thorough treatment of reflexivity, [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] offers a fuller discussion.
      </p>
      <p>This may seem to be going in circles. But we are edging toward an unearthing
of less mystery and more practicality.</p>
      <sec id="sec-2-1">
        <title>Operationalization</title>
        <p>As noted above, an intentional agent will have its intentionality grounded in a
reflexive model of a self. Our approach to operationalizing consciousness is via
operationalizing intentionality, and this will mean giving precise enough definitions
of these terms so that they can be implemented. In this section, we will
report on preliminary work in both defining and implementing these concepts.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Enactive Minimal Self Models</title>
      <p>
        A minimal self is roughly a minimal process which could be said to constitute an
aware subject. The details vary on what precisely this entails; see (e.g.) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] and
[
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] for two different but overlapping approaches to this idea. Our intention is
to model subjectivity at very short time-scales and ask what phenomena might
be constitutive of such; we explicitly leave out phenomena associated with more
reflective notions such as a narrative self [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
      </p>
      <p>
        Enactive cognitive science views cognition as something that occurs in embodied
agents which act on their environments; further, this action is fundamental
to the extent that perception is itself an act, and we perceive our world according
to our ability to act on it. Drawing on the work of Varela and Maturana [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ],
some variants of the tradition further view the determined boundary between
an agent and its environment as the ground of meaning. As such, the tradition
offers a number of useful insights which we draw upon to develop an operational
notion of self.
³ It may be worth noting that the Mahayana Buddhist tradition has debated this
question vigorously, with one party emphasizing the paradoxical nature of any notion
of self-knowledge and the other emphasizing that this is precisely what constitutes
mental life [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
⁴ This is of course not to say that one is only aware of what one is doing during
voluntary action; we thank the anonymous reviewer for pointing this out.
      </p>
      <p>We will focus our discussion on endowing bodily selves with senses of agency
and ownership and reflexive self-awareness. We have identified and worked on a
number of other essential features of computational selves, but omit discussion
of these in the interest of space⁵.</p>
      <p>
        In accordance with the enactive tradition, we view selves as agents that act
in a world and have knowledge of themselves as such. Two fundamental forms of
knowledge will then be what Gallagher has termed a sense of ownership and a
sense of agency [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. These refer, respectively, to agents' awareness that their body is theirs
and that their actions are done by them. For example, when I move my arm I know
that I caused it to move and that it is my arm that is moving. As our discussion
of Alice the robot below will illustrate, such knowledge is essential not just for
theoretical reasons but for basic functioning in the real world.
      </p>
      <p>We would like to give these notions a formal treatment; this is useful because
it grounds the philosophical concepts. By specifying precisely what we mean by,
say, sense of agency, we enable an analysis of the concept and an exploration
of the role it plays in a computational model of the self. It also provides a set
of criteria against which we can test an implementation: if we argue that an
agent endowed with a sense of agency has particular properties, then it will be
critical that any implementation meet our definition if it is to similarly possess
said properties.</p>
      <p>
        We ground our definitions of ownership and agency in the neuroscientific
concept of an efference copy. This is a copy of a motor command that is thought
to be kept by an agent so that a prediction of the command's effect on the world
can be compared to observed effects. For example, before moving my hand a
half inch left, my nervous system might make a prediction about what my hand
should look like after the action is complete. Such forward modeling is thought
to be the neurological basis of a sense of agency⁶ [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Thus when a change in
my hand's position corresponds to the expected effect of a self-initiated motor
command, I will have a sense that I moved my hand intentionally. Conversely,
a change which does not correspond to such a motor command will have me
looking for an external cause for my hand's movement.
      </p>
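The forward-modeling story above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation; all names (predict_effect, TOLERANCE, sense_of_agency) are hypothetical, and hand position is reduced to a single coordinate.

```python
# Sketch: attributing agency by comparing an efference-copy prediction
# against observed sensory change. Purely illustrative.

TOLERANCE = 0.05  # max prediction error still counted as "my action"

def predict_effect(hand_pos, command):
    """Forward model: predicted hand position after a motor command."""
    return hand_pos + command  # e.g. command = -0.5 (move half inch left)

def sense_of_agency(hand_before, hand_after, command):
    """True iff the observed change matches the self-initiated command."""
    predicted = predict_effect(hand_before, command)
    return abs(hand_after - predicted) <= TOLERANCE

# A matching change is experienced as one's own intentional movement.
print(sense_of_agency(10.0, 9.5, -0.5))   # True
# An unexplained change prompts a search for an external cause.
print(sense_of_agency(10.0, 11.3, -0.5))  # False
```

The point of the comparison loop is that agency is not sensed from the movement alone but from the covariance between command and outcome.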
      <p>
        We generalize this story somewhat by allowing for a sense of agency to arise
from any kind of "full" representation of an agent's actions with respect to its
body. Ideas in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] suggest a representation of an action as a mapping
of the environment (as reflected in the agent's sensory state) onto some internal
state, so that when the environment changes the internal state will change
accordingly. A straightforward example would be an internal image of the agent's
body that shifts according to actions taken. If a single action (say, rotating left
1 radian) has a consistent effect on the representation (even if that effect is
rotating right one radian), then it will be a representation of the action. However,
notice that mapping all of our sensory information onto a single point will work
as well; our actions will end up being represented by that single point, but this
representation is still consistent (if trivial). It is not particularly useful, however,
so we also insist that as much information as possible be preserved; this can
be made formal by invoking either set-theoretic notions like bijectivity or the
concept of mutual information. Employing the latter concept gives a differentiable
notion that can be deployed in machine learning algorithms. It is worth
noting that some philosophers of mind (especially in the analytic tradition) take
representation as constitutive of intentionality [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
⁵ Some of these are that a self should be: cognitively situated with a first-person
perspective; reflexively self-aware; immediately self-aware in an essentially temporal way;
and synchronically and diachronically unified.
⁶ This need not be a literal copy of the command, but could (and arguably is) rather
some sparse representation of the command that allows for some kind of forward
modelling of the command's effect.
      </p>
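The contrast between a faithful representation and the trivial "single point" one can be made concrete with a toy mutual-information calculation. This is an illustrative sketch, not code from the cited projects; the discrete formulation stands in for the differentiable estimators one would use in practice.

```python
# Sketch: measuring how much a representation preserves about actions,
# via mutual information over a tiny discrete example.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two paired sequences of discrete symbols."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

actions = ["left", "right", "left", "right", "left", "right"]
# Faithful representation: each action has a consistent, distinct effect
# (even if the effect is "inverted", e.g. rotating the image the other way).
faithful = ["rot+1", "rot-1", "rot+1", "rot-1", "rot+1", "rot-1"]
# Trivial representation: everything collapses to a single point.
trivial = ["p"] * 6

print(mutual_information(actions, faithful))  # 1.0 bit: actions recoverable
print(mutual_information(actions, trivial))   # 0.0 bits: nothing preserved
```

Both mappings are consistent in the sense of the text, but only the first preserves information; maximizing this quantity is what rules out the degenerate solution.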
      <p>
        We have implemented such senses of agency and ownership in two different
projects. The first of these used an analogue of efference copy to allow our robot
Alice to recognize when she (as opposed to another agent) is making a particular
utterance [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. When Alice initiates a speech act A, that fact is recorded
in her knowledge base, and her perceptual apparatus monitors what happens
for comparison with expected results from the success of A. Furthermore, the
monitoring and the performance of A are iterated in parallel over tiny time-steps,
so that ideally there is strong covariance between the two. Thus as Alice speaks,
she starts to speak and hears her voice, continues to speak and hear, and then
hears her voice stop as she finishes speaking; and in all this she simultaneously
knows she is so engaged. Such behavior is not a formal nicety, but rather is
central to intelligence. Imagine that a robot hears the utterance "Can you help
me?": it will be crucial to its proper understanding and subsequent behavior
whether it takes this to be an assertion made to it, or by it. Note that [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] takes
a very different approach, in having a robot infer that it is speaking based on
recognizing the sound of its voice, not on direct knowledge of ongoing voluntary
activity.
      </p>
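The iterated speak-and-monitor loop can be sketched as follows. This is a schematic reconstruction in the spirit of the Alice experiments, not the actual system; the function and variable names are hypothetical, and each "time-step" is reduced to one word of audio.

```python
# Sketch: step-by-step self-monitoring of an ongoing utterance.
# The utterance is attributed to the agent itself only if what is heard
# covaries with what is being produced at every tiny time-step.

def attribute_utterance(planned_audio, heard_audio):
    """Compare expected and perceived audio, step by step in parallel."""
    for step, (expected, heard) in enumerate(zip(planned_audio, heard_audio)):
        if expected != heard:
            return f"other agent (mismatch at step {step})"
    return "self"

plan = ["Can", "you", "help", "me?"]
print(attribute_utterance(plan, ["Can", "you", "help", "me?"]))
# -> self
print(attribute_utterance(plan, ["Can", "I", "help", "you?"]))
# -> other agent (mismatch at step 1)
```

The same utterance thus gets opposite interpretations, made to the agent versus by it, depending on whether the ongoing covariance holds.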
      <p>
        Another implementation of agency and ownership used the ideas about
representations of agency outlined previously to force a deep neural network to
represent its own agency and body while learning to play Atari games [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This
resulted in qualitatively sparse representations (over all representations, not just
the feature trained to recognize the agent's body) and improved game-play.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Self-Awareness and Self-Modifying Utterances</title>
      <p>
        Agency, ownership and situatedness are fundamental properties of enactive
minimal self-models which are easily thought of in terms of lower-level, sub-symbolic
processing. Phenomenologically, subjects are also essentially characterized by
their self-awareness, and this seems better characterized in terms of symbolic
processing. Following Husserl (as relayed by Zahavi [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]), we take this self-awareness
to be grounded, reflexive and occurring in "thick time" (see below).
      </p>
      <p>
        The temporal nature of self-awareness was analyzed extensively by Husserl,
who argued that awareness is not a phenomenon that unfolds instant-by-instant;
rather it is an extended but unified whole that consists of "retention", "protention"
and "primal impression". Consistent with this view, and to avoid paradox, we
posit that a moment (of awareness) is not the durationless instant of physics,
but is rather an interval with small positive duration. This will allow actual
processing to occur, and corresponds to what Humphrey [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] calls "thick time"
and William James refers to as the "specious present" [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. By allowing moments
to have duration, we are given the opportunity to have something like first-order
cognition and something like meta-cognition interact with sufficient resolution
to be interdependent; we are developing a "diasynchronic logic" mechanism for
this based on the Active Logic formalism with a built-in Now(t) predicate which
gives agents an evolving representation of the current time [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
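The flavor of step-wise inference with an evolving Now(t) can be conveyed with a toy interpreter. This is a minimal sketch, not the Active Logic implementation; formulas are encoded as strings, and the single inference rule (a string-level modus ponens) is purely illustrative.

```python
# Sketch: inference unfolds in discrete steps, and the belief set at
# step t+1 contains Now(t+1), so the agent's representation of the
# current time evolves along with its reasoning.

def active_logic_step(t, beliefs, rules):
    """One time step: apply one round of rules, then advance Now."""
    derived = set()
    for rule in rules:
        derived |= rule(beliefs)
    new_beliefs = (beliefs - {f"Now({t})"}) | derived | {f"Now({t + 1})"}
    return t + 1, new_beliefs

def modus_ponens(beliefs):
    """Toy rule over string-coded formulas of the form 'if A then B'."""
    out = set()
    for b in beliefs:
        if b.startswith("if ") and " then " in b:
            ante, cons = b[3:].split(" then ", 1)
            if ante in beliefs:
                out.add(cons)
    return out

t, beliefs = 0, {"Now(0)", "rain", "if rain then wet"}
t, beliefs = active_logic_step(t, beliefs, [modus_ponens])
print("Now(1)" in beliefs, "wet" in beliefs)  # True True
```

Because each conclusion lands one step later than its premises, first-order inference and reasoning about that inference ("at step 0 I had not yet derived wet") can coexist in the same evolving theory.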
      <p>We are exploiting the features of Active Logic to model an agent's capacity to
reason about their own and others' ongoing inferences in real time while unifying
these into discrete statements. Such agents will be able to reason with changing
circumstances and the logical consequences of their thoughts and utterances,
knowingly speak truly or falsely, and reason with "benignly self-referential"
sentences. In particular, such an agent can utter sentences which self-modify as they
unfold, potentially modeling the thought process of a person who is speaking in
Spanish, notices her audience seems not to be following, and switches to English,
saying (truly, and simultaneously knowing it) "I'm now switching to English."</p>
      <p>
        The basic mechanisms of diasynchronic logic are intended to model sentences
as 1) unfolding over time and 2) demarcated by a self-determined end point. The
latter property (modeled on Maturana and Varela's notion of autopoiesis [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ])
allows logical sentences to be self-unifying: the sentence itself can specify
where it stops. The former property allows us to take some of the mystery
out of self-reference and ground sentences in their own logical values. And this
hints at a resolution between our view and Searle's: special reflexive processing
at multiple and overlapping timescales may be the juice that pulls action and
perception, semantics and syntax all together into one self-interacting cognitive
whole. Of course, almost everyone who suggests an approach to understanding
consciousness seems to arrive at a point where some "magic" is appealed to. But
we claim that our approach can be pursued at a practical, even computational,
level.
      </p>
      <sec id="sec-4-1">
        <title>Conclusion</title>
        <p>
          We are in agreement with [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] that consciousness will become more and more
central to AI as the latter pushes deeper into the nature of intelligence. This
is especially the case regarding recognition of and recovery from errors, which in
turn require a detailed and real-time representation of self. Thus, far from being
an epiphenomenon, consciousness is part and parcel of what it is to be
intelligent: reflexively knowing oneself to be engaged in ongoing processes (that same
knowing being among those same processes). And the nature of knowing will be
revealed as central and complex, well beyond a mere collection of data.
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomaa</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grant</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Active logic semantics for a single agent in a static world</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>172</volume>
          (
          <issue>8-9</issue>
          ),
          <volume>1045</volume>
          -
          <fpage>1063</fpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Brentano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Psychology from an empirical standpoint</article-title>
          (
          <year>1973</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bringsjord</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Licato</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Govindarajulu</surname>
            ,
            <given-names>N.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghosh</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Real robots that pass human tests of self-consciousness</article-title>
          .
          <source>In: Robot and Human Interactive Communication (RO-MAN)</source>
          ,
          <year>2015</year>
          24th IEEE International Symposium on. pp.
          <volume>498</volume>
          -
          <fpage>504</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>An enactive self-model for sparse representations and improved performance</article-title>
          .
          <source>In: Brazilian Conference on Intelligent Systems</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>An enactive self-model for sparse representations and improved performance</article-title>
          .
          <source>In: Intelligent Systems (BRACIS)</source>
          ,
          <source>2017 Brazilian Conference on</source>
          . pp.
          <volume>73</volume>
          -
          <fpage>78</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shamwell</surname>
          </string-name>
          , J.:
          <article-title>Who's talking? Efference copy and a robot's sense of agency</article-title>
          .
          <source>In: 2015 AAAI Fall Symposium Series</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shamwell</surname>
          </string-name>
          , J.:
          <article-title>Who's talking? Efference copy and a robot's sense of agency</article-title>
          .
          <source>In: 2015 AAAI Fall Symposium Series</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chalmers</surname>
            ,
            <given-names>D.J.:</given-names>
          </string-name>
          <article-title>The conscious mind: In search of a fundamental theory</article-title>
          . Oxford university press (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gallagher</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>How the body shapes the mind</article-title>
          . Cambridge Univ Press (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Humphrey</surname>
          </string-name>
          , N.:
          <article-title>Seeing red</article-title>
          . Harvard University Press (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>James</surname>
            ,
            <given-names>W.:</given-names>
          </string-name>
          <article-title>The perception of time</article-title>
          .
          <source>The Journal of speculative philosophy 20(4)</source>
          ,
          <volume>374</volume>
          -
          <fpage>407</fpage>
          (
          <year>1886</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Janzen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>The reflexive nature of consciousness</article-title>
          , vol.
          <volume>72</volume>
          . John Benjamins Publishing (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Maturana</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varela</surname>
            ,
            <given-names>F.J.</given-names>
          </string-name>
          :
          <article-title>Autopoiesis and cognition: The realization of the living</article-title>
          , vol.
          <volume>42</volume>
          . Springer Science &amp; Business
          <string-name>
            <surname>Media</surname>
          </string-name>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Nagao</surname>
            ,
            <given-names>G.M.</given-names>
          </string-name>
          :
          <article-title>Madhyamika and yogacara: a study of Mahayana philosophies</article-title>
          . SUNY Press (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Nagel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>What is it like to be a bat?</article-title>
          .
          <source>The Philosophical Review 83(4)</source>
          ,
          <volume>435</volume>
          -
          <fpage>450</fpage>
          (
          <year>1974</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>I am, therefore I think</article-title>
          .
          <source>In: APA Newsletter on Philosophy and Computers</source>
          . The American Philosophical Association (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Five dimensions of reasoning in the wild</article-title>
          .
          <source>In: AAAI</source>
          . pp.
          <fpage>4152</fpage>
          –
          <lpage>4156</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The internal reasoning of robots</article-title>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Putting one's foot in one's head – part I: Why</article-title>
          .
          <source>Noûs</source>
          pp.
          <fpage>435</fpage>
          –
          <lpage>455</lpage>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Putting one's foot in one's head – part II: How?</article-title>
          .
          <source>In: Thinking Computers and Virtual Persons</source>
          , pp.
          <fpage>197</fpage>
          –
          <lpage>224</lpage>
          . Elsevier
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Consciousness and complexity: the cognitive quest</article-title>
          .
          <source>Annals of Mathematics and Artificial Intelligence</source>
          <volume>14</volume>
          (
          <issue>2-4</issue>
          ),
          <fpage>309</fpage>
          –
          <lpage>321</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Consciousness as self-function</article-title>
          .
          <source>Journal of Consciousness Studies</source>
          <volume>4</volume>
          (
          <issue>5-6</issue>
          ),
          <fpage>509</fpage>
          –
          <lpage>525</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Purang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Alma/carne: implementation of a time-situated meta-reasoner</article-title>
          .
          <source>In: Tools with Artificial Intelligence</source>
          ,
          <source>Proceedings of the 13th International Conference on</source>
          . pp.
          <fpage>103</fpage>
          –
          <lpage>110</lpage>
          . IEEE
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Reggia</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>Conscious machines: The AI perspective</article-title>
          .
          <source>In: AAAI Fall Symposium Series</source>
          , North America, September
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Schechtman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The narrative self</article-title>
          . In: Gallagher, S. (ed.)
          <source>The Oxford handbook of the self</source>
          . Oxford University Press (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Searle</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          :
          <article-title>Minds, brains, and programs</article-title>
          .
          <source>Behavioral and brain sciences</source>
          <volume>3</volume>
          (
          <issue>03</issue>
          ),
          <fpage>417</fpage>
          –
          <lpage>424</lpage>
          (
          <year>1980</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Siewert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Consciousness and intentionality</article-title>
          . In: Zalta, E.N. (ed.)
          <source>The Stanford Encyclopedia of Philosophy</source>
          . Metaphysics Research Lab, Stanford University, spring
          <year>2017</year>
          edn. (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Sopa</surname>
            ,
            <given-names>G.L.</given-names>
          </string-name>
          :
          <article-title>Cutting Through Appearances: Practice and Theory of Tibetan Buddhism</article-title>
          .
          <source>Shambhala</source>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Strawson</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>The minimal subject</article-title>
          . In: Gallagher, S. (ed.)
          <source>The Oxford handbook of the self</source>
          . Oxford University Press (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Varela</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosch</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>The Embodied Mind</article-title>
          . MIT press (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Zahavi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Subjectivity and selfhood: Investigating the first-person perspective</article-title>
          . MIT press (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>