<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Operationalizing Consciousness</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Don</forename><surname>Perlis</surname></persName>
							<email>perlis@cs.umd.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Maryland</orgName>
								<address>
									<postCode>20742</postCode>
									<settlement>College Park</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Justin</forename><surname>Brody</surname></persName>
							<email>jdbrody@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Goucher College</orgName>
								<address>
									<postCode>21204</postCode>
									<settlement>Towson</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Operationalizing Consciousness</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">281272CD6C12C1018DC5AAA89A4762CC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Consciousness</term>
					<term>Intentionality</term>
					<term>Self</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>David Chalmers (among others) is fond of saying that consciousness has no function; it can be there or not, and it makes no difference to behavior. In that sense, it supposedly is not like a pumping heart that helps keep one alive. Here we argue to the contrary: that consciousness has a critical function, and one that AI will be forced to deal with as a practical matter as we probe more deeply into real-time commonsense reasoning. We will draw on a broad range of work, philosophical and otherwise, in making our argument.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">AI and Consciousness</head><p>The topic of consciousness tends to lead to two kinds of claims: positive claims about what it is, and negative claims about what it is not. The latter include Chalmers' claim that consciousness has no function and no physical consequences, i.e., that it is an epiphenomenon <ref type="bibr" target="#b7">[8]</ref>; and Searle's claim that subjective experience (in the form of intentionality) cannot be achieved simply in virtue of a system's executing a (formal) program <ref type="bibr" target="#b25">[26]</ref>. Part of our purpose here is to examine, and disagree with, both of these negative claims.</p><p>Positive claims attempt to characterize the nature of consciousness. These include Brentano's notion of intentionality <ref type="bibr" target="#b1">[2]</ref>: that the mental is characterized by its directedness toward objects of thought, so that to be conscious (i.e., in a mental state) is to have thoughts (or feelings or attitudes) about something. Another positive claim is due to Nagel: a conscious being is an entity that it is like something to be <ref type="bibr" target="#b14">[15]</ref>. This latter notion essentially characterizes consciousness as having a qualitative subjective experience, something happening to and in oneself.</p><p>So Nagel and Searle address similar notions of consciousness but with different aims: Nagel says what it is; Searle says that it cannot occur via formal computational processes alone (and in part bases his argument on a Nagel-like experiential character). And Brentano provides a functional role for mind (the relation of aboutness between a thought and its meaning), whereas Chalmers denies any such functional role.</p><p>We too seek to characterize consciousness positively, in terms of particular processes. Nagel's characterization is less useful here than Brentano's. But the two can be regarded as taking similar positions: being conscious amounts to having (internal, qualitative, subjective) thoughts and feelings, and thoughts and feelings are necessarily about something. Further, it is a common view in Buddhism that consciousness is "that which is aware of objects", seemingly combining the two (Nagel's awareness and Brentano's aboutness) <ref type="bibr" target="#b27">[28]</ref>. So it is tempting to consider whether a suitable subjective form of aboutness is an essential ingredient of consciousness. The aboutness relation, as argued in <ref type="bibr" target="#b18">[19]</ref>, <ref type="bibr" target="#b19">[20]</ref>, <ref type="bibr" target="#b20">[21]</ref>, <ref type="bibr" target="#b21">[22]</ref>, <ref type="bibr" target="#b16">[17]</ref>, <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b17">[18]</ref>, not only connects symbols "in the head" to (usually) external meanings but also is a key role of the self: the self is what "intends" to refer to Joe Smith in employing the symbol "Joe". This will take us on a somewhat meandering tour of various issues, from zombies to language to robots to knowledge. Far from being a strict epiphenomenon, consciousness seems to tie into a wide range of behaviors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Zombies and Reflexivity</head><p>A philosophical zombie (or just "zombie" if there is no confusion) is a molecule-for-molecule identical copy of a normal human, subject to exactly the same physical laws and thus producing indistinguishable physical behaviors; but (by definition) a zombie has no subjective experience: it is not like anything to be a zombie <ref type="bibr" target="#b7">[8]</ref>. The question then is: are zombies impossible? This is equivalent to the question: is consciousness a physical process (i.e., something that performs a physical function)? Chalmers has argued that zombies are possible; we shall not repeat his complex argument here (it is essentially based on the idea that we seem to be able to imagine zombies), but rather present a counter-argument: Suppose you have a zombie twin and each of you suddenly says "wow, I've got a painful toothache and it's getting worse." In your case, this is because you in fact feel that toothache; but the zombie cannot feel pain (or anything else). Yet identical brain processes are occurring in both brains (by definition of zombie). That is, whatever physical process led to your utterance also led to the zombie's. Thus your utterance cannot have been based on (caused by) your feeling the pain after all. This is a contradiction, so the possibility of such a zombie is ruled out. The hidden premise here is that when we make a decision to honestly report on a pain (or other subjective experience), that decision is in part dependent on there really being such an experience. To deny our argument is to reject this highly intuitive premise.</p><p>But maybe subjectivity is an after-the-fact event: for instance, one makes an utterance and then comes to feel whatever the utterance was about. In that case, the zombie twin might simply lack whatever (non-physical) competence is involved in coming to have a feeling. This of course still flies in the face of our intuitive premise and so does not seem much of an argument. But it brings us to an important distinction, between reflexive and reflective notions of self. Looking back on an event and then forming a conclusion about it is a process of reflection. Thus, I can reflect on the toothache I had yesterday. But my toothache today is even worse: I am not reflecting on this but rather am reflexively knowing the immediate pain itself. The two are easy to confuse. One knows one's pain reflexively, simply in virtue of there being pain (in oneself); it isn't first there and only later (reflectively) known. More generally, subjective experience is experienced (known) in and of itself, directly and immediately, as part of being an experience; there is no additional process that turns it into knowledge. The experience is the experiencing of it: to have an experience is to know that experience.</p><p>But this sounds very strange. How can something be its own experience?<ref type="foot" target="#foot_0">3</ref> And yet this is close to the mystery that seems to lie at the heart of consciousness studies. There is a "sense of agency" (discussed more below) that is part and parcel of being an agent. When performing a voluntary ("conscious") action, one knows one is so doing; that knowledge does not arrive later on<ref type="foot" target="#foot_1">4</ref>. But an action that is known simply in virtue of its being performed will be complex enough that it already constitutes a kind of (self-knowing) agent. While space will not allow a thorough treatment of reflexivity, <ref type="bibr" target="#b11">[12]</ref> offers a fuller discussion.</p><p>This may seem to be going in circles. But we are edging toward an unearthing of less mystery and more practicality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Operationalization</head><p>As noted above, an intentional agent will have its intentionality grounded in a reflexive model of a self. Our approach to operationalizing consciousness is via operationalizing intentionality, and this will mean giving precise enough definitions of these terms so that they can be implemented. In this section, we will report on preliminary work in both defining and implementing these concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Enactive Minimal Self Models</head><p>A minimal self is roughly a minimal process which could be said to constitute an aware subject. The details of precisely what this entails vary; see (e.g.) <ref type="bibr" target="#b21">[22]</ref> and <ref type="bibr" target="#b28">[29]</ref> for two different but overlapping approaches to this idea. Our intention is to model subjectivity at very short time-scales and ask what phenomena might be constitutive of it; we explicitly leave out phenomena associated with more reflective notions such as a narrative self <ref type="bibr" target="#b24">[25]</ref>.</p><p>Enactive cognitive science views cognition as something that occurs in embodied agents which act on their environments; further, this action is fundamental to the extent that perception is itself an act, and we perceive our world according to our ability to act on it. Drawing on the work of Varela and Maturana <ref type="bibr" target="#b29">[30]</ref>, some variants of the tradition further view the self-determined boundary between an agent and its environment as the ground of meaning. As such, the tradition offers a number of useful insights which we draw upon to develop an operational notion of self.</p><p>We will focus our discussion on endowing bodily selves with senses of agency and ownership and with reflexive self-awareness. We have identified and worked on a number of other essential features of computational selves, but omit discussion of these in the interest of space<ref type="foot" target="#foot_2">5</ref>.</p><p>In accordance with the enactive tradition, we view selves as agents that act in a world and have knowledge of themselves as such. Two fundamental forms of knowledge will then be what Gallagher has termed a sense of ownership and a sense of agency <ref type="bibr" target="#b8">[9]</ref>. These refer to agents' awareness that their actions are done by them and that their body is theirs. 
For example, when I move my arm I know that I caused it to move and that it is my arm that is moving. As our discussion of Alice the robot below will illustrate, such knowledge is essential not just for theoretical reasons but for basic functioning in the real world.</p><p>We would like to give these notions a formal treatment; this is useful because it grounds the philosophical concepts. By specifying precisely what we mean by, say, a sense of agency, we enable an analysis of the concept and an exploration of the role it plays in a computational model of the self. It also provides a set of criteria against which we can test an implementation; if we argue that an agent endowed with a sense of agency has particular properties, then it will be critical that any implementation meet our definition if it is to similarly possess said properties.</p><p>We ground our definitions of ownership and agency in the neuroscientific concept of an efference copy. This is a copy of a motor command that is thought to be kept by an agent so that a prediction of the command's effect on the world can be compared to observed effects. For example, before moving my hand a half inch left, my nervous system might make a prediction about what my hand should look like after the action is complete. Such forward modeling is thought to be the neurological basis of a sense of agency<ref type="foot" target="#foot_3">6</ref> <ref type="bibr" target="#b8">[9]</ref>. Thus when a change in my hand's position corresponds to the expected effect of a self-initiated motor command, I will have a sense that I moved my hand intentionally. Conversely, a change which does not correspond to such a motor command will have me looking for an external cause for my hand's movement.</p><p>We generalize this story somewhat by allowing for a sense of agency to arise from any kind of "full" representation of an agent's actions with respect to its body. 
Ideas in <ref type="bibr" target="#b3">[4]</ref> and <ref type="bibr" target="#b16">[17]</ref> suggest a representation of an action as a mapping of the environment (as reflected in the agent's sensory state) onto some internal state, so that when the environment changes the internal state will change accordingly. A straightforward example would be an internal image of the agent's body that shifts according to actions taken. If a single action (say, rotating left one radian) has a consistent effect on the representation (even if that effect is rotating right one radian), then it will be a representation of the action. However, notice that mapping all of our sensory information onto a single point will work as well; our actions will end up being represented by that single point, and this representation is still consistent (if trivial). It is not particularly useful, however, so we also insist that as much information as possible be preserved; this can be made formal either by invoking set-theoretic notions like bijectivity or by using the concept of mutual information. Employing the latter gives a differentiable notion that can be deployed in machine learning algorithms. It is worth noting that some philosophers of mind (especially in the analytic tradition) take representation as constitutive of intentionality <ref type="bibr" target="#b26">[27]</ref>.</p><p>We have implemented such senses of agency and ownership in two different projects. The first of these used an analogue of efference copy to allow our robot Alice to recognize when she (as opposed to another agent) is making a particular utterance <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b5">[6]</ref>. When Alice initiates a speech act A, that fact is recorded in her knowledge base, and her perceptual apparatus monitors what happens for comparison with expected results from the success of A. 
Furthermore, the monitoring and the performance of A are iterated in parallel over tiny time-steps so that, ideally, there is strong covariance between the two. Thus as Alice speaks, she starts to speak and hears her voice, continues to speak and hear, and then hears her voice stop as she finishes speaking; and in all this she simultaneously knows she is so engaged. Such behavior is not a formal nicety, but rather is central to intelligence. Imagine that a robot hears the utterance "Can you help me?": it will be crucial to its proper understanding and subsequent behavior whether it takes this to be an assertion made to it, or by it. Note that <ref type="bibr" target="#b2">[3]</ref> takes a very different approach, having a robot infer that it is speaking based on recognizing the sound of its voice, not on direct knowledge of ongoing voluntary activity.</p><p>Another implementation of agency and ownership used the ideas about representations of agency outlined previously to force a deep neural network to represent its own agency and body while learning to play Atari games <ref type="bibr" target="#b4">[5]</ref>. This resulted in qualitatively sparser representations (across all representations, not just the feature trained to recognize the agent's body) and improved game-play.</p></div>
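The efference-copy story above can be made concrete in a few lines. The following is a minimal sketch under simplifying assumptions (a one-dimensional hand position, an idealized forward model); the names ForwardModel and attribute_agency are ours for illustration and do not come from the cited implementations.

```python
from typing import Optional

class ForwardModel:
    """Predicts the sensory effect of a motor command (here: 1-D hand position)."""
    def predict(self, position: float, command: float) -> float:
        # Idealized body dynamics: the command is a displacement.
        return position + command

def attribute_agency(prev_pos: float, command: Optional[float],
                     observed_pos: float, tol: float = 1e-3) -> str:
    """Compare the observed change with the efference-copy prediction.

    If a self-initiated command exists and its predicted effect matches
    what was observed, the change is attributed to the self; otherwise
    the agent should look for an external cause.
    """
    model = ForwardModel()
    if command is not None and abs(model.predict(prev_pos, command) - observed_pos) <= tol:
        return "self"
    return "external"

# A self-initiated move of 0.5 left (negative) that lands where predicted:
print(attribute_agency(0.0, -0.5, -0.5))   # "self"
# The hand moved with no corresponding motor command:
print(attribute_agency(0.0, None, 0.3))    # "external"
```

The comparison loop here runs once per action; the paper's Alice implementation iterates monitoring and performance in parallel over tiny time-steps, which would correspond to calling such a check at each step of an ongoing action.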
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Self-Awareness and Self-Modifying Utterances</head><p>Agency, ownership and situatedness are fundamental properties of enactive minimal self-models which are easily thought of in terms of lower-level, sub-symbolic processing. Phenomenologically, subjects are also essentially characterized by their self-awareness, and this seems better characterized in terms of symbolic processing. Following Husserl (as relayed by Zahavi <ref type="bibr" target="#b30">[31]</ref>), we take this self-awareness to be grounded, reflexive, and occurring in "thick time" (see below).</p><p>The temporal nature of self-awareness was analyzed extensively by Husserl, who argued that awareness is not a phenomenon that unfolds instant-by-instant; rather, it is an extended but unified whole that consists of "retention", "protention", and "primal impression". Consistent with this view, and to avoid paradox, we posit that a moment (of awareness) is not the durationless instant of physics, but rather an interval with small positive duration. This allows actual processing to occur, and corresponds to what Humphrey <ref type="bibr" target="#b9">[10]</ref> calls "thick time" and what William James refers to as the "specious present" <ref type="bibr" target="#b10">[11]</ref>. 
By allowing moments to have duration, we gain the opportunity to have something like first-order cognition and something like meta-cognition interact at sufficient resolution to be interdependent; we are developing a "diasynchronic logic" mechanism for this based on the Active Logic formalism, with a built-in Now(t) predicate which gives agents an evolving representation of the current time <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b22">[23]</ref>, <ref type="bibr" target="#b17">[18]</ref>.</p><p>We are exploiting the features of Active Logic to model an agent's capacity to reason about its own and others' ongoing inferences in real time while unifying these into discrete statements. Such agents will be able to reason about changing circumstances and the logical consequences of their thoughts and utterances, knowingly speak truly or falsely, and reason with "benignly self-referential" sentences. In particular, such an agent can utter sentences which self-modify as they unfold, potentially modeling the thought process of a person who is speaking in Spanish, notices her audience seems not to be following, and switches to English, saying (truly, and simultaneously knowing it) "I'm now switching to English."</p><p>The basic mechanisms of diasynchronic logic are intended to model sentences as 1) unfolding over time and 2) demarcated by a self-determined end point. The latter property (modeled on Maturana and Varela's notion of autopoiesis <ref type="bibr" target="#b12">[13]</ref>) allows logical sentences to be self-unifying: the sentence itself can specify where it stops. The former property allows us to take some of the mystery out of self-reference and ground sentences in their own logical values. 
And this hints at a resolution between our view and Searle's: special reflexive processing at multiple and overlapping timescales may be the juice that pulls action and perception, semantics and syntax all together into one self-interacting cognitive whole. Of course, almost everyone who suggests an approach to understanding consciousness seems to arrive at a point where some "magic" is appealed to. But we claim that our approach can be pursued at a practical -even computational -level.</p></div>
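The evolving Now(t) representation described above can be sketched as a toy stepper. The step rule shown (carry non-temporal beliefs forward, advance Now, let time-indexed rules fire mid-stream) is our simplification of the Active Logic formalism, not its full inheritance machinery; the belief strings are illustrative.

```python
def step(beliefs: set, t: int) -> set:
    """One Active-Logic-style step: advance Now(t), inherit the other beliefs."""
    nxt = {b for b in beliefs if not b.startswith("Now(")}
    nxt.add(f"Now({t + 1})")
    return nxt

beliefs = {"Now(0)", "speaking(spanish)"}
for t in range(3):                 # three steps of "thick time"
    beliefs = step(beliefs, t)
    if "Now(2)" in beliefs:        # a time-indexed rule fires while speech unfolds,
        beliefs.discard("speaking(spanish)")   # e.g. "I'm now switching to English"
        beliefs.add("speaking(english)")
# Final state: {"Now(3)", "speaking(english)"}
```

The point of the sketch is that the agent's belief about the current time is itself a belief that changes at every inference step, so reasoning can refer to, and modify, its own unfolding in time.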
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Conclusion</head><p>We are in agreement with <ref type="bibr" target="#b23">[24]</ref> that consciousness will become more and more central to AI as the latter pushes deeper into the nature of intelligence. This is especially the case regarding recognition of and recovery from errors, which in turn require a detailed and real-time representation of self. Thus, far from being an epiphenomenon, consciousness is part and parcel of what it is to be intelligent: reflexively knowing oneself to be engaged in ongoing processes (that same knowing being among those same processes). And the nature of knowing will be revealed as central and complex, well beyond a mere collection of data.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">It may be worth noting that the Mahayana Buddhist tradition has debated this question vigorously, with one party emphasizing the paradoxical nature of any notion of self-knowledge and the other emphasizing that this is precisely what constitutes mental life <ref type="bibr" target="#b13">[14]</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">This is of course not to say that one is only aware of what one is doing during voluntary action; we thank the anonymous reviewer for pointing this out.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2">Some of these are that a self should be: cognitively situated with a first-person perspective; reflexively self-aware; immediately self-aware in an essentially temporal way; and synchronically and diachronically unified.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">This need not be a literal copy of the command, but could be (and arguably is) some sparser representation of the command that allows for some kind of forward modelling of the command's effect.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Active logic semantics for a single agent in a static world</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Gomaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">172</biblScope>
			<biblScope unit="issue">8-9</biblScope>
			<biblScope unit="page" from="1045" to="1063" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Brentano</surname></persName>
		</author>
		<title level="m">Psychology from an empirical standpoint</title>
				<imprint>
			<date type="published" when="1973">1973</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Real robots that pass human tests of self-consciousness</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bringsjord</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Licato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">S</forename><surname>Govindarajulu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Robot and Human Interactive Communication (RO-MAN)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="page" from="498" to="504" />
		</imprint>
	</monogr>
	<note>24th IEEE International Symposium on</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An enactive self-model for sparse representations and improved performance</title>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Brazilian Conference on Intelligent Systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An enactive self-model for sparse representations and improved performance</title>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Systems (BRACIS), 2017 Brazilian Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="73" to="78" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Who&apos;s talking? efference copy and a robot&apos;s sense of agency</title>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shamwell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium Series</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Who&apos;s talking? Efference copy and a robot&apos;s sense of agency</title>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shamwell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium Series</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Chalmers</surname></persName>
		</author>
		<title level="m">The conscious mind: In search of a fundamental theory</title>
				<imprint>
			<publisher>Oxford university press</publisher>
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Gallagher</surname></persName>
		</author>
		<title level="m">How the body shapes the mind</title>
				<imprint>
			<publisher>Cambridge Univ Press</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Seeing red</title>
		<author>
			<persName><forename type="first">N</forename><surname>Humphrey</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>Harvard University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The perception of time</title>
		<author>
			<persName><forename type="first">W</forename><surname>James</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of speculative philosophy</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="374" to="407" />
			<date type="published" when="1886">1886</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">The reflexive nature of consciousness</title>
		<author>
			<persName><forename type="first">G</forename><surname>Janzen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>John Benjamins Publishing</publisher>
			<biblScope unit="volume">72</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">R</forename><surname>Maturana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Varela</surname></persName>
		</author>
		<title level="m">Autopoiesis and cognition: The realization of the living</title>
				<imprint>
			<publisher>Springer Science &amp; Business Media</publisher>
			<date type="published" when="1991">1991</date>
			<biblScope unit="volume">42</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Madhyamika and yogacara: a study of Mahayana philosophies</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Nagao</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1991">1991</date>
			<publisher>SUNY Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">What is it like to be a bat?</title>
		<author>
			<persName><forename type="first">T</forename><surname>Nagel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Philosophical Review</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="435" to="450" />
			<date type="published" when="1974">1974</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">I am, therefore I think</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">APA Newsletter on Phil and Computers. The American Philosophical Association</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Five dimensions of reasoning in the wild</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="4152" to="4156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Miller</surname></persName>
		</author>
		<title level="m">The internal reasoning of robots</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Putting one&apos;s foot in one&apos;s head, part I: Why?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Noûs</title>
		<imprint>
			<biblScope unit="page" from="435" to="455" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Putting one&apos;s foot in one&apos;s head, part II: How?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Thinking Computers and Virtual Persons</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="197" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Consciousness and complexity: the cognitive quest</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2-4</biblScope>
			<biblScope unit="page" from="309" to="321" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Consciousness as self-function</title>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Consciousness Studies</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">5-6</biblScope>
			<biblScope unit="page" from="509" to="525" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Alma/carne: implementation of a time-situated meta-reasoner</title>
		<author>
			<persName><forename type="first">K</forename><surname>Purang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International Conference on Tools with Artificial Intelligence</title>
		<meeting>the 13th International Conference on Tools with Artificial Intelligence</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="103" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Conscious machines: The AI perspective</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Reggia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium Series</title>
				<meeting><address><addrLine>North America</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014-09">September 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">The narrative self</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schechtman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Oxford Handbook of the Self</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Gallagher</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Minds, brains, and programs</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Searle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="417" to="424" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Consciousness and intentionality</title>
		<author>
			<persName><forename type="first">C</forename><surname>Siewert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Stanford Encyclopedia of Philosophy</title>
				<editor>
			<persName><forename type="first">E</forename><forename type="middle">N</forename><surname>Zalta</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2017">Spring 2017</date>
		</imprint>
		<respStmt>
			<orgName>Metaphysics Research Lab, Stanford University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<title level="m" type="main">Cutting Through Appearances: Practice and Theory of Tibetan Buddhism</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">L</forename><surname>Sopa</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1989">1989</date>
			<publisher>Shambhala</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">The minimal subject</title>
		<author>
			<persName><forename type="first">G</forename><surname>Strawson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Oxford Handbook of the Self</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Gallagher</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">The Embodied Mind</title>
		<author>
			<persName><forename type="first">F</forename><surname>Varela</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Thompson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rosch</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1991">1991</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Subjectivity and selfhood: Investigating the first-person perspective</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zahavi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
