<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Behaviorism Revisited: Linking Perception and Action Through Symbolic Models of Cognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Pierre</forename><surname>Bonzon</surname></persName>
							<email>pierre.bonzon@unil.ch</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Dept of Information Systems</orgName>
								<orgName type="department" key="dep2">Faculty of HEC</orgName>
								<orgName type="institution">University of Lausanne</orgName>
								<address>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Behaviorism Revisited: Linking Perception and Action Through Symbolic Models of Cognition</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F0DCFF26845B2BA7AB15B31DBEC19D72</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T01:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>brain simulation</term>
					<term>virtual machine</term>
					<term>neural dynamics</term>
					<term>cognitive process</term>
					<term>analogical reasoning</term>
					<term>meta-cognition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Brain simulations as performed today in computational neuroscience rely on analytical methods, i.e., mainly differential equations, to model physical processes. The question then arises: will the cognitive abilities of real brains spontaneously emerge from these simulations, or should they be encoded and executed on top of them, similarly to the way computer software runs on hardware? Towards this latter goal, a new framework linking neural dynamics to behaviors through a virtual machine has been reported; it is used here to model brain functionalities in two domains, namely evolutive cases of analogical reasoning and a simple case of meta-cognition. It is argued that this approach to brain modeling could lead to an actualization of the concept of behaviorism as a model for the development of cognition.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Since the advent of the cognitive revolution <ref type="bibr" target="#b0">[1]</ref>, the psychological study of behavior has moved away from behaviorist theories based on associative explanations <ref type="bibr" target="#b1">[2]</ref> towards the description of mental states and cognitive processes <ref type="bibr" target="#b2">[3]</ref>. Early models of cognitive processes (see e.g., <ref type="bibr">Anderson et al 2004 [4]</ref>) were symbolic information processors originating from research in Artificial Intelligence. These top-down approaches allow for performing behavior simulations regardless of any grounding issues, i.e., they do not address the problem of binding cognitive processes to underlying brain processes. In parallel, another field of research focuses on the actual support of cognition, i.e., on brain structures. Computational platforms for simulating networks of neurons range from artificial neural networks (McCulloch &amp; Pitts 1943 <ref type="bibr" target="#b4">[5]</ref>) to enhanced neural network simulators based on differential equations <ref type="bibr">(Hodgkin &amp; Huxley 1952 [6]</ref>). These bottom-up approaches detailing the ground-level constituents of the brain (i.e., the neuronal cells and their connections) are not explicitly related to the cognitive tasks they support, and as such are not concerned with the definition of processes. Attempts have been made to turn cognitive models into so-called "neurally plausible cognitive architectures" associated with some form of brain modeling. 
As an example, symbolic connectionist models (see e.g., <ref type="bibr">Shastri &amp; Ajjanagadde 1993</ref> <ref type="bibr" target="#b6">[7]</ref>; <ref type="bibr">Hummel &amp; Holyoak 2003 [8]</ref>) constitute an attempt to reconcile the benefits of artificial neural networks with the versatility of the symbolic models of Artificial Intelligence. According to a notable attempt in this direction <ref type="bibr" target="#b8">(Jilk et al. 2008</ref>  <ref type="bibr" target="#b8">[9]</ref>), "the incommensurable categories at the various levels of description will remain necessary to explain the full range of phenomena".</p><p>Another recurrent theme is illustrated by the progressive shift from the traditional "sense-think-act" paradigm of cognitivist systems <ref type="bibr" target="#b9">[10]</ref> towards the simplified "sense-act" paradigm of embodied cognition <ref type="bibr" target="#b10">[11]</ref>. According to this new approach, cognitive processes are executed in connection structures that link sensory circuits (i.e., perception) with motors (i.e., action) <ref type="bibr" target="#b11">[12]</ref>. In this perspective, bottom-up approaches based on artificial neural networks, as well as top-down approaches via analytical and/or abstract mathematical tools such as Bayesian inference rules, are well suited for describing "computations" according to Marr's classical tri-level hypothesis <ref type="bibr" target="#b12">[13]</ref>, which distinguishes computational, algorithmic and implementation levels. They fail however to "identify algorithms and underlying circuits" <ref type="bibr" target="#b13">[14]</ref>. In other words, the mechanisms for structure learning remain to be fully understood. 
What is then needed is a "middle-out" approach that can lead to plausible structures linking biology and cognition.</p><p>As a possible solution, we recently proposed <ref type="bibr" target="#b14">[15]</ref> to resort to computer science methods that allow for the delineation and implementation of successive levels of complexity. Among the concepts that could be applied towards this goal, two are of particular relevance, namely concurrent communicating systems, on one hand, and virtual machines, on the other. The notion of a concurrent communicating system, which can be used to model the interaction of objects obeying various communication protocols, reflects a high-level view of a network of interactive neurons. The concept of a virtual machine interpreting a compiled code that differs from a processor's native code constitutes the key mechanism that allows for interfacing high-level abstract objects, i.e., software, with their low-level physical support, i.e., hardware. In the context of a multi-level model of brain structures and processes, this means that low-level physiological details can be put aside, and grounded models of cognition can be formulated by relating input and output (i.e., perception and behavior) at a symbolic level. The corresponding formal framework distinguishes itself from the usual methods by focusing on processes. It relies for this on a fundamental concept of mathematical logic, i.e., the formal notion of an object in context, represented by symbolic expressions in a logical language <ref type="bibr" target="#b15">[16]</ref>. In other words, our virtual machine, which functions as an interface between the neural and cognitive levels, actually performs contextual deductions. This formalism will be reviewed in the Material and methods section below. 
An assessment of brain functionalities in two domains, namely evolutive analogical reasoning, on one hand, and a simple case of meta-cognition, on the other, is proposed in the Results section in the form of virtual circuits. It is argued in the Conclusion that this overall approach to brain modeling could lead to a revival of the old concept of behaviorism as a model for the development of cognition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Material and methods</head><p>Our overall methodology can be described as follows (see <ref type="bibr" target="#b14">[15]</ref> for more details):
a) micro scale virtual circuits implementing synaptic plasticity through asynchronous communicating processes are first defined;
b) meso scale virtual circuits corresponding to basic cognitive processes are then composed out of these micro scale circuits;
c) both types of virtual circuits are finally compiled into virtual code to be interpreted by a virtual machine.</p><p>The concept of a virtual machine that we use here basically allows for emulating the execution of a program in a language S on a system having its own language L. Such a machine constitutes an interface that allows for defining mesoscale circuits independently of the way the microcircuits are actually implemented. Mesoscale circuits thus correspond, in a sense, to cognitive software running on top of a biological substrate. More precisely, these circuits define source code expressed in S, which is then compiled into virtual object code expressed in L to be interpreted by the virtual machine. We follow a bidirectional approach and present in turn the bottom-up design of virtual circuits followed by the top-down construction of a virtual machine.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Bottom up design of virtual circuits</head><p>The basic entities of the proposed formalism for representing virtual circuits are threads. In computer science, a thread is a sequence of instructions that executes concurrently with other threads; threads may coexist within a process and share resources such as memory. In the present context, a thread corresponds to a single neuron or a group of neurons and is represented by a symbolic expression enclosing an instruction tree (see below for the definition of the corresponding formal language S). Threads are communicating entities. Each communication involves a pair of threads and entails, on one side, the signal transmitted by a pre-synaptic (source) thread and, on the other side, its reception, via a given synapse, by a post-synaptic (recipient) thread. Similarly to a neuron, a thread can be both a source and a recipient and functions as a gate receiving incoming signals from different sources and sending an outgoing signal to possibly many recipients. There are however two essential differences between threads and neurons that allow a single thread to represent a group of neurons:
- contrary to a neuron, which alternates roles, a thread can be simultaneously a source and a recipient by maintaining parallel communications;
- contrary to traditional neuron models, in which incoming signals are summed into an integrated value, thread inputs can be processed individually.</p><p>Threads can be grouped into disjoint sets, or fibers, to model neural assemblies <ref type="bibr">(Palm 1982 [17])</ref>, and discrete weights can be attached to pairs of communicating threads belonging to the same fiber. The interaction of threads within a given fiber obeys various communication protocols. To illustrate this, we reproduce below from <ref type="bibr" target="#b14">[15]</ref> circuits modeling two cases of simple animal behaviors.</p></div>
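As an illustration, the thread/fiber organization just described can be sketched in a few lines of Python. This is a toy rendering under our own naming conventions (`Fiber`, `connect`, `send`); it is not the implementation of [15].

```python
# Illustrative sketch of threads grouped into fibers, with discrete
# weights attached to pairs of communicating threads (hypothetical names).

class Fiber:
    """A disjoint set of threads modeling a neural assembly."""
    def __init__(self, name):
        self.name = name
        self.threads = {}      # thread id -> list of recipient threads
        self.weights = {}      # (source, recipient) -> discrete weight

    def add_thread(self, tid):
        self.threads[tid] = []

    def connect(self, source, recipient, weight=0):
        # a synapse-like pathway inside the fiber, with a discrete weight
        self.threads[source].append(recipient)
        self.weights[(source, recipient)] = weight

    def send(self, source, signal, threshold=1):
        # unlike a summing neuron model, each input is handled
        # individually: a signal is delivered only on an open pathway
        delivered = []
        for recipient in self.threads[source]:
            if self.weights[(source, recipient)] >= threshold:
                delivered.append((recipient, signal))
        return delivered

fiber = Fiber("withdrawal_reflex")
for t in ["sense(cs)", "motor(cs)", "sense(us)", "motor(us)"]:
    fiber.add_thread(t)
fiber.connect("sense(cs)", "motor(cs)", weight=0)   # weak pathway
fiber.connect("sense(us)", "motor(us)", weight=1)   # strong reflex
```

Under this sketch, a signal on the us pathway is delivered while the weak cs pathway stays silent until its weight is raised, which is exactly the situation the conditioning circuits below start from.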
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proceedings of the KI 2017 Workshop on Formal and Cognitive Reasoning</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.1">A case of classical conditioning</head><p>As a general evolution principle, organisms devise and use "tricks" for their survival. The ability to evaluate a threat by learning predictive relationships, e.g., by associating a noise with the presence of a predator, is an example of such a trick, realized by classical conditioning. Let us consider classical conditioning in the defensive siphon and gill withdrawal reflex of Aplysia californica <ref type="bibr" target="#b17">[18]</ref>. In this experiment, a light tactile conditioned stimulus cs elicits a weak defensive reflex, and a strong noxious unconditioned stimulus us produces a massive withdrawal reflex. After a few pairings of stimuli cs and us, where cs slightly precedes us, a stimulus cs alone triggers a significantly enhanced withdrawal reflex, i.e., the organism has learned a new behavior. This can be represented by the virtual circuit given in Fig. <ref type="figure">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>sense(cs)-*-&gt;=&gt;-motor(cs)
              /|\
           ltp |
sense(us)-+-&gt;=&gt;-motor(us)</head><p>Fig. <ref type="figure">1</ref>. A mesoscale virtual circuit implementing classical conditioning.</p><p>In Figure <ref type="figure">1</ref>, the threads sense(us) and sense(cs) are coupled with sensors capturing external stimuli us and cs and correspond to sensory neurons. The threads motor(us) and motor(cs) are coupled with effectors and correspond to motor neurons. Finally, the thread ltp (for long term potentiation) acts as a facilitatory interneuron reinforcing the pathway (i.e., augmenting its weight) between sense(cs) and motor(cs). The communication protocols depicted by the symbols -&gt;=&gt;- and /|\ represent a synaptic transmission (i.e., the symbol -&gt;=&gt;- stands for a synapse) and the modulation of a synapse, respectively. The symbols * and + stand for conjunctive and disjunctive operators, i.e., they are used to represent the convergence of incoming signals and the dissemination of an outgoing signal, respectively. Classical conditioning then follows from the application of Hebbian learning <ref type="bibr" target="#b18">[19]</ref>, i.e., "neurons that fire together wire together". Though it is admitted today that classical conditioning in Aplysia is mediated by multiple neuronal mechanisms, including a postsynaptic retroaction on a presynaptic site, the important point is that the learning of a new behavior requires a conjoint activity of multiple neurons. This activity in turn depends on the temporal pairing of the conditioned and unconditioned stimuli, which leads to implementing the thread ltp as a coincidence detector.</p></div>
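The role of the ltp thread as a coincidence detector can be caricatured in Python. This is a hedged toy model: the discrete weight, the fixed pairing window and the firing threshold are our own simplifying assumptions, not the circuit semantics of [15].

```python
# Toy Hebbian coincidence detector: pairing cs shortly before us
# potentiates the cs -> motor(cs) pathway (illustrative only).

WINDOW = 1          # max clock-time gap counting as "firing together"
THRESHOLD = 2       # weight at which cs alone triggers withdrawal

def ltp(weight, t_cs, t_us):
    """Potentiate when cs slightly precedes us within the window."""
    if t_cs is not None and t_us is not None and 0 <= t_us - t_cs <= WINDOW:
        return weight + 1
    return weight

def motor_cs(weight, cs_fired):
    """cs alone elicits withdrawal only once the pathway is potentiated."""
    return cs_fired and weight >= THRESHOLD

weight = 0
for trial in range(3):                    # three cs/us pairings
    weight = ltp(weight, t_cs=trial, t_us=trial + 1)

# after conditioning, cs alone elicits the enhanced reflex
```

After the three pairings the weight exceeds the threshold, so `motor_cs(weight, True)` holds, whereas it fails for an unconditioned weight of 0.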
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.2">A simple case of operant conditioning</head><p>The ability to assess and to remember the consequences of one's own actions is another example of associative learning providing survival advantages. In this case, operant conditioning <ref type="bibr" target="#b1">[2]</ref> associates an action with its result, which can be positive or negative. Toward this goal, the organism receives either an excite or an inhibit internal stimulus (corresponding, for instance, to a reward or a punishment) that leads in turn to a reinforcement or a rejection of the action.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Let us consider a simple thought experiment where a pigeon is learning to discriminate between grains and pebbles, corresponding to two possible vectors I of external visual stimuli. The circuit given in Fig. <ref type="figure">2</ref> represents the interaction of four threads sense(I), learn(accept(I)), accept(I) and reject(I), together with two threads ltp and two opposite threads ltd (for long term depression). In addition to the external stimuli, this circuit incorporates two internal stimuli excite(accept(I)) and inhibit(accept(I)) that correspond to feedback from probing the food according to a predefined set of accepted elements.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 2. A mesoscale virtual circuit implementing simple operant conditioning</head><p>At the beginning of the simulation, the pathway from sense(I) to learn(accept(I)) is open, while the pathways to both accept(I) and reject(I) are closed. After a few trials, the pigeon will have learned to close learn(accept(I)) and to open either accept(I) or reject(I). This process matches a fundamental principle in circuit neuroscience according to which inhibition in neuronal networks during baseline conditions allows in turn for disinhibition, which stands as a key mechanism for circuit plasticity, learning, and memory retrieval <ref type="bibr" target="#b19">[20]</ref>.</p></div>
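A minimal sketch of the excite/inhibit feedback at work in Fig. 2 follows. The weight scheme and function names are illustrative assumptions, not the circuit's actual semantics: the learn pathway starts open, and the internal feedback from probing closes it while opening the appropriate alternative.

```python
# Illustrative operant-conditioning update: internal excite/inhibit
# stimuli open or close pathways (disinhibition), hypothetical weights.

OPEN, CLOSED = 1, 0

def trial(weights, stimulus, accepted_items):
    """One encounter with item `stimulus`; returns the action taken."""
    if weights["learn"] == OPEN:
        # probing the food yields an internal feedback stimulus
        if stimulus in accepted_items:        # excite(accept(I))
            weights["accept"] = OPEN
        else:                                 # inhibit(accept(I))
            weights["reject"] = OPEN
        weights["learn"] = CLOSED             # learning pathway closes
        return "probe"
    return "accept" if weights["accept"] == OPEN else "reject"

accepted = {"grain"}
w_grain = {"learn": OPEN, "accept": CLOSED, "reject": CLOSED}
trial(w_grain, "grain", accepted)            # first trial: probe and learn
action = trial(w_grain, "grain", accepted)   # subsequent trials accept directly
```

The same run with a pebble first probes, then settles on rejection, mirroring the disinhibition principle cited above.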
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proceedings of the KI 2017 Workshop on Formal and Cognitive Reasoning</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.3">Representing virtual circuits by symbolic expressions</head><p>The circuit given in Fig. <ref type="figure">1</ref> gives rise to a symbolic expression enclosing, for each thread, its instruction tree. In this example, the instruction tree of each thread reduces to a sequence of virtual instructions such as fire, send, merge, etc., but more generally an instruction tree can contain alternatives commanded by guards. Formally, expressions for instruction trees belong to a language S defined by a BNF syntax in which the non-terminal symbol &lt;guard&gt; represents conditions derived from internal stimuli and &lt;instruction&gt; stands for virtual machine instructions. This language S of instruction trees is not to be confused with the logical language L that will be used to define the virtual code implications into which instruction trees are then compiled.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.4">Compiling instruction trees into virtual code implications</head><p>Virtual code implications are compiled from thread expressions and have the form</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Guard =&gt; T:Instruction</head><p>where Instruction is a virtual machine instruction and T its clock time. As an example, the virtual code implications compiled from the above thread sense(us) are
true =&gt; 1:fire(ltp(sense(cs),motor(cs)))
true =&gt; 2:send(motor(us))
true =&gt; 3:end</p><p>In this simple example, successive clock time values (i.e., 1, 2, 3) correspond to a linear list traversal with no guards. More generally, compilation gives rise to a descent into trees containing guards.</p></div>
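The compilation step can be mimicked as follows, under the assumption of a thread body that is a plain instruction sequence; the tree descent with guards mentioned above is omitted, and the function name is our own.

```python
# Sketch: compile a linear thread body into virtual code implications
# of the form Guard => T:Instruction (guards default to "true" here).

def compile_thread(instructions):
    """Assign successive clock times 1..n and a trivial guard."""
    return [("true", t, instr)
            for t, instr in enumerate(instructions, start=1)]

# the thread body of sense(us) in Fig. 1, as listed in the text
sense_us = ["fire(ltp(sense(cs),motor(cs)))", "send(motor(us))", "end"]
code = compile_thread(sense_us)
# code == [("true", 1, "fire(ltp(sense(cs),motor(cs)))"),
#          ("true", 2, "send(motor(us))"),
#          ("true", 3, "end")]
```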
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.5">Microcircuits implementing synaptic plasticity</head><p>Virtual circuits rely on communication protocols that are pictured in thread diagrams by iconic symbols representing microcircuits. These protocols can be defined by means of procedures that operate in pairs:</p><p>send/receive, denoted by -&gt;=&gt;- or -&lt;=&lt;-, represents a synaptic transmission
join/merge, denoted by /|\ or \|/, implements long term potentiation/depression (ltp/ltd)
push/pull, denoted by -&lt;A&gt;-, models a short term cache memory
store/retrieve, denoted by -{P}-, models an associative memory based on long term storage and retrieval (lts/ltr)</p><p>These protocols are detailed in <ref type="bibr" target="#b14">[15]</ref> together with their corresponding microcircuits.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Top down construction of a virtual machine</head><p>Let us consider a set of fibers together with sets of initial weights for pairs of communicating threads within fibers and sets of elements accepted by fibers, and let Model designate the state of the virtual machine associated with these sets. The machine itself functions as a non-deterministic learning automaton that operates on the constrained neural substrate represented by Model. Formally, it consists in repeating a sense-react cycle of embodied cognition defined by the following run procedure:</p><formula xml:id="formula_0">run(Model)
  loop
    sense(Model)
    react(Model)</formula><p>At the next level below, the sense procedure reflects the triggering of spike trains directed to sensory neurons. After possibly capturing an interrupt from sensors directed to a given active fiber, or stream, it updates Model using a transition function input:</p><formula xml:id="formula_1">sense(Model)
  if interrupt(Stream(Input))
  then Model &lt;- input(Model(Stream),Input)</formula><p>The function input first terminates the interrupted stream by clearing all its registers and queues and then resets the clocks of the sensory threads associated with sensors.</p><p>The react procedure consists of a loop calling on each active thread in any stream to first deduce a virtual machine instruction and then update Model. The transition function output corresponds to the execution of virtual machine instructions implementing communication protocols. The ist predicate (standing for "is true") implements contextual deduction <ref type="bibr" target="#b15">[16]</ref>. Clock register values T are used to deduce, for each active thread, the next instruction satisfying the guard. 
Whenever a transition initiated by a thread succeeds, the thread clock is advanced and the next instruction is deduced and executed; whenever it fails, the current instruction is executed again, i.e., the transition is attempted until it eventually succeeds. Altogether, this amounts to descending into an instruction tree, with the local clock time corresponding to the currently reached depth. As postulated by Zeki 2015 <ref type="bibr" target="#b20">[21]</ref>, there is no central clock, leading thus to the modeling of the brain as a massively asynchronous, parallel organ. The complete specifications of this machine are given in open access in <ref type="bibr" target="#b14">[15]</ref>.</p></div>
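The clock-advance-on-success mechanism can be sketched in Python. All names here are hypothetical: the ist predicate is reduced to a trivial guard lookup, and a transient failure of send is simulated to show the retry behavior.

```python
# Toy react step: deduce the instruction at the thread's current clock
# time, execute it, and advance the clock only when the transition
# succeeds; on failure the same instruction is retried next cycle.

def react(thread, model):
    t = model["clocks"][thread]
    for guard, time, instr in model["code"][thread]:
        if time == t and model["ist"](guard):   # contextual deduction
            if model["execute"](instr):         # transition succeeded:
                model["clocks"][thread] = t + 1 # descend into the tree
            return instr                        # on failure: retried later
    return None                                 # instruction tree exhausted

blocked = {"send(motor(us))": 1}                # simulate one failed send

def execute(instr):
    if blocked.get(instr, 0) > 0:
        blocked[instr] -= 1
        return False                            # transition fails this cycle
    return True

model = {
    "code": {"sense(us)": [("true", 1, "fire(ltp(sense(cs),motor(cs)))"),
                           ("true", 2, "send(motor(us))"),
                           ("true", 3, "end")]},
    "clocks": {"sense(us)": 1},
    "ist": lambda guard: guard == "true",
    "execute": execute,
}
trace = [react("sense(us)", model) for _ in range(5)]
# the failed send is retried once before the thread reaches end
```

Each thread carries its own clock, so running several such threads with interleaved react calls needs no central clock, in line with the asynchronous view cited above.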
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>Previous application results <ref type="bibr" target="#b14">[15]</ref> include a simulation of successive models of animal awareness up to its third level according to Pepperberg &amp; Lynn 2001 <ref type="bibr" target="#b21">[22]</ref>. Briefly, the first level of animal awareness corresponds to the ability to follow a simple rule involving the perception of a specific item or event and then either its acceptance or its rejection (i.e., a case of matching/oddity to sample). Whereas this first level does not allow for an immediate transfer to a similar task, an organism with the second level is aware enough of a rule to transfer it across situations and thus to adopt, for example, a win/stay lose/shift rule. The third level of animal awareness provides an organism with the additional capacity to integrate two different sets of stored information in order to make a categorical judgment (e.g., to sort items). The results presented below lie outside of this hierarchy and propose evolutive simulations of analogical reasoning, on one hand, and of memory awareness, on the other.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Modeling evolutive cases of analogical reasoning</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1">Learning a simple analogical inference schema</head><p>Let us first consider a simple analogical inference schema involving two predicates p and q applied to objects x1 and x2, i.e.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>{p(x1)}            {big(dog)}
{p(x2)}   e.g.,    {big(bear)}
 q(x1)              strong(dog)
--------           ------------
 q(x2)              strong(bear)</p><p>where {F} represents a fact F, or proposition, that has been previously memorized.</p><p>This inference schema can be viewed as first inducing an implication i.e.,</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>p(X) -&gt; q(X)</head><p>where X is a variable, and then applying modus ponens i.e.,</p><formula xml:id="formula_2">p(X) -&gt; q(X)    p(X)
--------------------
        q(X)</formula><p>The corresponding circuits (where A, B are parameters defining a context e.g., left, right, and I, J vectors of percepts representing properties p, q) are given in Fig. <ref type="figure" target="#fig_5">3 and 4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 3. Virtual circuit for memorizing perceptions</head><p>In Fig. <ref type="figure">4</ref>, a circuit pattern representing an implication is first built in the upper half and then applied in the lower half to make an inference. More precisely, fact q(x1), as perceived by the thread sense(A(X1(J))), is first learned through a simple case of operant conditioning and accepted by thread see(A(X1(J))). In parallel, fact p(x1) is recalled from its trace {see(A(X1(I)))} via an ltr process and classically conditioned (see section 2.1.1) to infer q(x1) in thread infer(A(X1(J))). Simultaneously (or rather, slightly after), fact q(x2) is perceived by thread sense(B(X2(_))), which carries a non-instantiated property and is not accepted. In parallel, fact p(x2) is recalled from its trace {see(B(X2(I)))} into thread recall(B(X2(I))), which is conditioned to be eventually matched with p(x1) and then to infer q(x2) in thread infer(B(X2(J))). Finally, thread sense(B(X2(_))) is conditioned to be accepted in thread see(B(X2(J))), thus representing a contextual and/or constrained reactive process relating a perception to an action.</p><p>This inference consists in first inducing a second order implication representing a generic transitive relation, where P, X, Y, Z are variables</p></div>
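The property transfer performed by the circuits of Figs. 3 and 4 can be caricatured in a few lines of Python, with pattern matching standing in for the conditioning and recall threads. This is an illustrative abstraction only, not the circuit semantics.

```python
# Toy analogical transfer of a property: from memorized {p(x1)}, the
# observed q(x1) and memorized {p(x2)}, induce p(X) -> q(X) and apply
# modus ponens to infer q(x2) (illustrative names and encoding).

def induce(memorized, observed):
    """Induce p -> q from a memorized fact p(x1) and an observed q(x1)."""
    (p, x1), (q, x1b) = memorized, observed
    if x1 == x1b:                 # facts concern the same object
        return (p, q)             # the implication p(X) -> q(X)
    return None

def apply_rule(rule, fact):
    """Modus ponens: from p(X) -> q(X) and p(x2), infer q(x2)."""
    p, q = rule
    pred, obj = fact
    if pred == p:
        return (q, obj)
    return None

memory = [("big", "dog"), ("big", "bear")]   # {p(x1)}, {p(x2)}
rule = induce(memory[0], ("strong", "dog"))  # big(X) -> strong(X)
inferred = apply_rule(rule, memory[1])       # transfer to the bear
```

Running this yields the rule `("big", "strong")` and the inferred fact `("strong", "bear")`, i.e., the strong(bear) conclusion of the schema in section 3.1.1.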
<div xmlns="http://www.tei-c.org/ns/1.0"><head>P(X,Y), P(Y,Z) -&gt; P(X,Z)</head><p>and then applying modus ponens</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula>{P(X,Y), P(Y,Z) -&gt; P(X,Z)}    P(X,Y)    P(Y,Z)
----------------------------------------------
                   P(X,Z)</formula><p>Let us extend the definition of the perceptual context by adding two parameters C, D e.g., front, rear. The circuit implementing this inference schema is given in Fig. <ref type="figure">5</ref>. As a distinctive difference from the previous circuit, where the properties in the remembered facts {p(x1)} and {p(x2)} need to match, the circuit in Fig. <ref type="figure">5</ref> relies on facts {p(x1,y1),p(y1,z1)} and {q(x2,y2),q(y2,z2)} whose properties don't need to match. Whereas the analogical inference implemented in Fig. <ref type="figure">4</ref> allows for the cognitive transfer of a property, the circuit of Fig. <ref type="figure">5</ref> allows for the transfer of an object. Consequently, the conditioning step leading to infer(D(_(K),Z2(L))) in Fig. <ref type="figure">5</ref> still carries a non-instantiated object that can be instantiated only in the final step relating sense(D(X2(_),Z2(_))) to see(D(X2(K),Z2(L))).</p></div>
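Similarly, the object transfer of the transitive schema can be sketched as one application step of the induced second-order rule. The code is illustrative; relation names play the role of the variable P.

```python
# Toy second-order transitive inference: apply P(X,Y), P(Y,Z) -> P(X,Z)
# once, for every relation name P, over a set of ground facts.

def transitive_step(facts):
    """One round of P(X,Y), P(Y,Z) -> P(X,Z) for each relation P."""
    inferred = set(facts)
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 and y == y2:      # chain within the same relation
                inferred.add((p1, x, z))
    return inferred

# facts for two different relations: the same schema transfers across them
facts = {("p", "x1", "y1"), ("p", "y1", "z1"),
         ("q", "x2", "y2"), ("q", "y2", "z2")}
closed = transitive_step(facts)
# both p(x1,z1) and q(x2,z2) are now inferred
```

Because the rule is stated over the relation name itself, a single schema serves both p and q, which is the point of the second-order formulation above.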
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.3">Combining analogical inference schemas</head><p>Let us finally turn to an example of relational inference of the following type</p><formula>{p(x1,y1),q(y1,z1)}            {father(bill,mary),mother(mary,sam)}
{p(x2,y2),q(y2,z2)}   e.g.,    {father(tom,cathy),mother(cathy,jack)}
 r(x1,z1)                       grandfather(bill,sam)
---------                      ----------------------
 r(x2,z2)                       grandfather(tom,jack)</formula><p>This extension involves the two types of cognitive transfers just considered. A combination of the two corresponding circuits is however far from straightforward, the difficulty being here the parallel matching of multiple interleaved properties. Whereas behaviors relying on simple analogical reasoning and transitive inference, as modeled by the circuits of Fig. <ref type="figure">4 and 5</ref>, have been observed in animals, this more complex example is unarguably out of their reach. It is interesting to note that previous modeling approaches relying on substitutions <ref type="bibr" target="#b7">[8]</ref> have led to similar conclusions <ref type="bibr" target="#b22">[23]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 5. Virtual circuit implementing transitive inferences</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Modeling a simple case of meta-cognition</head><p>The additional simulations presented below pertain to the domain of meta-cognition.</p><p>In broad terms, meta-cognition refers to "monitoring and control processes that allow for maintaining a self-referential check and balance" <ref type="bibr">(Fleming et al. 2012 [24])</ref>. A simple case of meta-cognition is constituted by memory awareness. When applied to humans, this allows them in particular to feel certain or uncertain and, ultimately, "to know when they do not know" <ref type="bibr">(Smith et al. 2012 [25])</ref>. Some non-human animals (hereafter, animals), which selectively decline tests when their memory is weak, seem to exhibit such capabilities. In order to validate this hypothesis, Templer and Hampton 2012 <ref type="bibr" target="#b25">[26]</ref> designed a two-phase experiment. In the first phase, monkeys are trained to take a "memory test". Each trial in this phase consists of an information stage, in which a randomly chosen cup is openly baited in front of the subject and then momentarily removed from its view, followed by a choice stage, in which the subject is asked to retrieve the food. In the second phase of the experiment, a decline option (also commonly referred to as the "uncertainty response") is introduced. This option consists in a fixed cup offering a lesser, but guaranteed, reward.</p><p>Actually, much of the debate around meta-cognition focuses on the following question: can animals' uncertainty responses be interpreted within an associative framework, or should they be considered as the expression of a higher-level cognitive mechanism? An assessment of the research in this field has been proposed by Dickinson 2012 <ref type="bibr" target="#b26">[27]</ref>, who argues that mechanisms based on constrained associative processes can implement psychological rationality. 
By pointing out to an issue originally raised by Holland 1990 <ref type="bibr" target="#b27">[28]</ref>, he raised the possibility of using retrospective revaluation <ref type="bibr">(Dickinson &amp; Burke 1996 [29]</ref>): whereas simple associative theory assumes that learning occurs when a stimulus is present, retrospective revaluation allows for learning about associatively retrieved representations (see below in section 3.2.2).</p></div>
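<div xmlns="http://www.tei-c.org/ns/1.0"><p>The two-phase protocol just described can be sketched in plain Python (this is not part of the paper's virtual-circuit formalism; the reward values big and small, and the 0.5 weakness threshold, are hypothetical stand-ins):</p><p>
```python
import random

CUPS = ["A", "B"]

def run_trial(memory_strength, decline_available=False, big=2, small=1):
    """One trial of the simplified two-phase test. memory_strength is the
    probability that the baited cup is recalled correctly; in phase 2
    (decline_available=True) a fixed cup pays a smaller, guaranteed reward."""
    baited = random.choice(CUPS)                  # information stage
    if random.random() > memory_strength:         # noisy recall
        remembered = random.choice(CUPS)
    else:
        remembered = baited
    # uncertainty response: decline when memory is weak
    if decline_available and 0.5 > memory_strength:
        return small
    return big if remembered == baited else 0     # choice stage
```
</p><p>In phase 1 the subject always guesses from its (noisy) recall; in phase 2 the decline cup is chosen whenever memory is weak, mirroring the "uncertainty response".</p></div>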
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">Modeling a reliable memory.</head><p>Let us consider Templer and Hampton's memory test <ref type="bibr" target="#b25">[26]</ref>, and let us first assume a perfectly reliable memory without a decline option. Without loss of generality, only two possible cups, A and B, are taken into account here. This memory test can be implemented (see the circuit in Fig. <ref type="figure" target="#fig_10">6</ref>) as an extended case of operant conditioning, with a choice between two alternatives A and B captured by the random internal stimuli fetch(A) and fetch(B) and coupled with a recall from a short-term memory &lt;A&gt; and &lt;B&gt;. It is interesting to note that this circuit matches the one used in <ref type="bibr" target="#b14">[15]</ref> to implement the second level of animal awareness: in both cases, subjects are trained to solve a task by learning a win/stay lose/shift rule.</p></div>
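<div xmlns="http://www.tei-c.org/ns/1.0"><p>The win/stay lose/shift rule mentioned above can be sketched as follows (a plain-Python illustration, not the paper's circuit notation; the function name is an assumption):</p><p>
```python
import random

def win_stay_lose_shift(options, last_choice, last_reward):
    """Repeat a rewarded choice; otherwise shift to one of the
    remaining options, picked at random."""
    if last_choice is not None and last_reward > 0:
        return last_choice                  # win: stay
    others = [o for o in options if o != last_choice]
    return random.choice(others)            # lose (or first trial): shift
```
</p></div>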
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">Modeling an unreliable memory</head><p>According to Templer and Hampton 2012 <ref type="bibr" target="#b25">[26]</ref>, memory awareness provides subjects with the capacity to make "judgments" about the quality of their memory, e.g., to discriminate between strong and weak memories. Let us further quote Templer and Hampton's literal description of how monkeys possibly use their memory awareness when confronted with a decline option: "When memory is weak, monkeys with memory awareness should opt for the decline response rather than running the risk of receiving no raisins by guessing where the two raisins were located. When memory is strong, and they have a high probability of choosing the correct choice cup, they should select the remembered cup". This description, which distinguishes the correct cup (i.e., the one that actually holds the preferred reward) from the remembered cup (i.e., the one the memory has, correctly or incorrectly, associated with the preferred reward), can be rewritten in the form of two rules: if memory is weak -&gt; there is a low chance of choosing the correct cup -&gt; opt for the decline response; if memory is strong -&gt; there is a good chance of choosing the correct cup -&gt; select the remembered cup.</p><p>These two rules rely on the definition of, and access to, a mental state representing in this case a memory that, besides holding an objective content (i.e., the remembered cup), is subjectively perceived as either weak or strong. Obviously, this perception involves a wide range of cumulative phenomena (such as distraction, fatigue, or other decay mechanisms), and thus cannot be reduced to a single deterministic value, e.g., "weak" or "strong". 
A possible way to represent and apply these rules is to use a probability distribution as follows:</p><p>in order to represent a weak memory -&gt; assign a low probability to the correct location and complementary higher probabilities to the other options -&gt; perform a weighted random choice.</p><p>A strong memory can be similarly represented by assigning a high probability to choosing the correct location. Retrieving from the memory then reduces to weighted random choices. For this scheme to be consistent, the "remembered location" is an elusive concept that needs to be replaced by the association of the correct location with its probability. In other words, the physiological mechanism that creates the short-term memory (i.e., -&lt;A&gt;- in our wiring model) is supposed to be reliable, but the process of retrieving from this memory is degraded by cumulative factors resulting in an unreliable, noisy transmission.</p></div>
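<div xmlns="http://www.tei-c.org/ns/1.0"><p>The weighted-random-choice scheme just described can be sketched as follows (a plain-Python illustration; the name recall echoes the circuit's recall neurons but is otherwise an assumption):</p><p>
```python
import random

def recall(correct, options, strength):
    """Weighted random retrieval: the correct location gets probability
    `strength`; the complementary mass is spread over the other options."""
    n_others = len(options) - 1
    weights = [strength if o == correct else (1.0 - strength) / n_others
               for o in options]
    return random.choices(options, weights=weights, k=1)[0]
```
</p><p>A strong memory corresponds to strength near 1, a weak one to strength near 1/len(options); retrieval is then a single weighted draw.</p></div>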
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Model of an unreliable memory without a decline option</head><p>Let us now consider a model of an unreliable memory, still without a decline option, as given in Fig. <ref type="figure" target="#fig_12">7</ref>, where the additional wiring is in italics. In neural terms, two interneurons, noise(A|[A,B]) and noise(B|[A,B]), open a new pathway between the memory neurons recall(A|[A,B]) and recall(B|[A,B]) and two new transmitter neurons filter(A|[A,B]) and filter(B|[A,B]).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Model of an unreliable memory with a decline option</head><p>Memorizing a decline option at location C constitutes another example of an associative long-term memory given by a memory trace (see Fig. <ref type="figure" target="#fig_5">3</ref> of section 3.1.1), in this case {accept(C)}. Introducing this option as an additional choice gives rise to a new layer of noisy transmission. This scheme (see Fig. <ref type="figure" target="#fig_13">8</ref>), which allows animals to learn about directly perceived stimuli as well as about retrieved representations such as the decline option, constitutes a particular case of the retrospective revaluation <ref type="bibr" target="#b28">[29]</ref> alluded to above. Looking like an extension of the second and third levels of animal awareness combined, it shows how a simple case of meta-cognition can be reduced to successive layers of associative memories.</p><p>Various proposals have been made to close the gap between the level of individual neurons and the higher levels supporting behavior. A possible solution is to consider groups of neurons, or neural assemblies. It is proposed here to model neural assemblies in a simulation framework driven by a virtual machine linking neural dynamics to behaviors. While the usual approach to simulating neural dynamics starts with current flows represented by differential equations, we opted for a conceptual abstraction of synaptic plasticity represented by communicating processes. As a consequence, there is no reference to any specific neural network model. While these simulated neural structures are general-purpose entities, symbolic models of cognition constrain them into specific-purpose processes driven by a virtual machine. 
As reflected by the terms in the title of <ref type="bibr" target="#b14">[15]</ref> ("linking neural dynamics to behaviors through asynchronous communications") and of this contribution (linking perception and action through symbolic models of cognition), this virtual machine clearly stands as an interface between the neural and cognitive levels.</p></div>
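<div xmlns="http://www.tei-c.org/ns/1.0"><p>Under hypothetical payoff values, the weak/strong rules of section 3.2.2 amount to comparing the expected payoff of guessing with the guaranteed payoff of declining; a minimal plain-Python sketch (all names and values are assumptions, not the paper's formalism):</p><p>
```python
def choose(strength, big=2, small=1, n_cups=2):
    """Compare the expected payoff of guessing against the guaranteed
    decline payoff (hypothetical reward values). A missed recall still
    hits the correct cup by chance 1/n_cups of the time."""
    p_correct = strength + (1.0 - strength) / n_cups
    if p_correct * big > small:
        return "remembered cup"
    return "decline"
```
</p></div>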
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Following previous results on modeling animal cognition up to the third level of animal awareness, this formalism has been applied in section 3.1 to simulate the learning of evolutive analogical inference schemas involving two kinds of cognitive transfer, namely that of an object and that of a property. Whereas schemas involving one kind of transfer correspond to cognitive abilities observed in animals, schemas involving a combination of these two kinds correspond to abilities presumably enjoyed by humans only. As these circuits show, higher-level cognitive facilities can emerge through learning schemas involving in turn both classical and operant conditioning. If indeed classical and operant conditioning may be combined into hybrid models, then (as speculated by Carew 2002 <ref type="bibr" target="#b29">[30]</ref>) "an exciting principle might emerge: evolution may have come up with a neural 'associative cassette' that can be used in either type of conditioning, depending on the neural circuit in which it is embedded". Altogether, this approach to brain modeling could thus lead to an actualization of the old concept of behaviorism <ref type="bibr" target="#b1">[2]</ref>, which was founded on the experimental exploration of these two types of conditioning.</p><p>The same formalism has been applied in section 3.2 to simulate memory awareness as a case of retrospective revaluation. This result shows that simple cases of meta-cognition can be implemented by successive layers of associative processes reflecting noisy transmissions.</p><p>In conclusion, it is argued that these results could lead to a reconsideration of the whole concept of a "neural code" relating perception and behavior. 
Such a neural code may well reside in the spatial arrangement of mesoscale circuit patterns (i.e., a kind of population or sparse coding, as opposed to the more traditional rate or temporal coding associated with spike trains). One might then even consider that there is actually no code at all (in the sense of a specific arrangement always associating the same response with a given stimulus), and that "the code is the overall structure itself". More precisely, perception might be related to behaviors through the paths found by evolution via iterative Hebbian learning.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>&lt;tree&gt; ::= [] || &lt;sequence&gt; || [&lt;alternative&gt;] &lt;sequence&gt; ::= [&lt;instruction&gt;|&lt;tree&gt;] &lt;alternative&gt; ::= &lt;branch&gt; || (&lt;branch&gt;;&lt;alternative&gt;) &lt;branch&gt; ::= (&lt;guard&gt;|&lt;tree&gt;)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>using a transition function output: react(Model) for each Stream(Thread),T:Instruction, such that ist(Model(Stream)(Thread),(clock(T), T:Instruction)) do Model  output(Model(Stream)(Thread), T:Instruction)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 3 ,</head><label>3</label><figDesc>both circuits build an associative long term memory of perceived facts. More precisely, facts p(x1) and p(x2), as perceived by the threads sense(A(X1(I))) and sense(B(X2(I))) are first learned through a simple case of operant conditioning (see section 2.1.2) and accepted by the threads see(A(X1(I))) and see(B(X2(I))), which are then memorized via a long term storage (lts) process into a memory trace {see(A(X1(I)))} and {see(</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>-</head><label></label><figDesc>-*-&gt;=&gt;-learn(see(A(X1(J))))|excite(see(A(J)))----------------------------------------------sense(A(X1(J)))-+--*-&gt;=&gt;-recall(A(X1(I)))-+--*-&gt;=&gt;-infer(A(X1(J)))-------X2(_)))-+--*-&gt;=&gt;-recall(B(X2(I)))-*--&gt;=&gt;-match(A(X1(I),B(X2(I)))-*-&gt;=&gt;-infer(B(X2(J)))-</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 4 . 3 . 1 . 2</head><label>4312</label><figDesc>Fig. 4. Virtual circuit for implementing a simple case of analogical inference</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head></head><label></label><figDesc>,z1) sense(C(X1(I),Z1(J)))-+--*-&gt;=&gt;-recall(C(A(…)))---+--*-&gt;=&gt;-recall(C(B(…)))-*-&gt;=&gt;-infer(C(X1(I),Z1(J)))-X1(I),Y1(J)))}--*--{see(B(Y1(I),Z1(J)))}-*--| | | {see(A(X2(K),Y2(L)))}--*--{see(B(Y2(K),Z2(L)))}------------*--| {q(x2,y2)} | | {q(y2,z2)} X2(_),Z2(_)))-+--*-&gt;=&gt;-recall(D(A(X2(K),Y2(L)))))---+--*-&gt;=&gt;-recall(D(B(Y2(K),Z2(L)))))-------*-&gt;=&gt;-infer(D(_(K),Z2(L)))-</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. Circuit implementing a memory test</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head></head><label></label><figDesc>[A,B]) and noise(B|[A,B]) open a new pathway between the memory neurons recall(A|[A,B]) and recall(B|[A,B]), on one hand, and two new transmitter neurons filter(A|[A,B]) and filter(B|[A,B]), on the other. Furthermore, whereas the internal stimuli fetch(A) and fetch(B) attached to learn are purely random, the stimuli catch(A) and catch(B) attached to filter reflect a noisy transmission: they integrate multiple physiological sources and, as argued above, follow from weighted random choices.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. Circuit implementing an unreliable memory without a decline option</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Fig. 8 .</head><label>8</label><figDesc>Fig. 8. Circuit implementing an unreliable memory with a decline option</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The cognitive revolution: a historical perspective</title>
		<author>
			<persName><forename type="first">G</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Trends in Cognitive Sciences</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Science and Human Behavior</title>
		<author>
			<persName><forename type="first">B</forename><surname>Skinner</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1953">1953</date>
			<publisher>MacMillan</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Mind: Introduction to Cognitive Science</title>
		<author>
			<persName><forename type="first">P</forename><surname>Thagard</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An integrated theory of the mind</title>
		<author>
			<persName><forename type="first">J</forename><surname>Anderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">111</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1036" to="1060" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A logical calculus of the ideas immanent in nervous activity</title>
		<author>
			<persName><forename type="first">W</forename><surname>McCulloch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Pitts</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of Mathematical Biophysics</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="115" to="133" />
			<date type="published" when="1943">1943</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A quantitative description of membrane current &amp; its application to conduction &amp; excitation in nerve</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Hodgkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Huxley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Physiology</title>
		<imprint>
			<biblScope unit="volume">117</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="500" to="544" />
			<date type="published" when="1952">1952</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony</title>
		<author>
			<persName><forename type="first">L</forename><surname>Shastri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ajjanagadde</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="417" to="494" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Symbolic-Connectionist theory of Relational Inference and Generalization</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hummel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Holyoak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="220" to="264" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">SAL: An explicitly pluralistic cognitive architecture</title>
		<author>
			<persName><forename type="first">D</forename><surname>Jilk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lebiere</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>O'Reilly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Anderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Experimental and Theoretical Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="197" to="218" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Artificial cognitive systems: a primer</title>
		<author>
			<persName><forename type="first">D</forename><surname>Vernon</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>The MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Intelligence without representation</title>
		<author>
			<persName><forename type="first">R</forename><surname>Brooks</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="139" to="159" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The necessity of connection structures in neural models of variable binding</title>
		<author>
			<persName><forename type="first">F</forename><surname>Van Der Velde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>de Kamps</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognitive Neurodynamics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="359" to="396" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Vision: A Computational Investigation into the Human Representation and Processing of Visual Information</title>
		<author>
			<persName><forename type="first">D</forename><surname>Marr</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1982">1982</date>
			<publisher>Freeman</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Cortical Correlates of Low-Level Perception: From Neural Circuits to Percepts</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Frégnac</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bathellier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neuron</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Towards neuro-inspired symbolic models of cognition: linking neural dynamics to behaviors through asynchronous communications</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bonzon</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11571-017-9435-3</idno>
	</analytic>
	<monogr>
		<title level="j">Cognitive Neurodynamics</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Formal Aspects of Context</title>
	</analytic>
	<monogr>
		<title level="j">Applied Logic Series</title>
		<editor>Bonzon, P, Cavalcanti, M, Nossum, R.</editor>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<date type="published" when="2000">2000</date>
			<publisher>Kluwer Academic Publishers</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Neural Assemblies. An Alternative Approach to Artificial Intelligence</title>
		<author>
			<persName><forename type="first">G</forename><surname>Palm</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1982">1982</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Heterosynaptic facilitation in neurones of the abdominal ganglion of Aplysia depilans</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kandel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Tauc</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Physiology</title>
		<imprint>
			<biblScope unit="volume">181</biblScope>
			<date type="published" when="1965">1965</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">The Organization of Behavior: A Neuropsychological Theory</title>
		<author>
			<persName><forename type="first">D</forename><surname>Hebb</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1949">1949</date>
			<publisher>J. Wiley</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Disinhibition, a Circuit Mechanism for Associative Learning &amp; Memory</title>
		<author>
			<persName><forename type="first">J</forename><surname>Letzkus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wolff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lüthi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neuron</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="264" to="276" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A massively asynchronous, parallel brain</title>
		<author>
			<persName><forename type="first">S</forename><surname>Zeki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Phil. Trans. R. Soc. B</title>
		<imprint>
			<biblScope unit="volume">370</biblScope>
			<biblScope unit="page">20140174</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Possible Levels of Animal Consciousness with Reference to Grey Parrots (Psittacus erithacus)</title>
		<author>
			<persName><forename type="first">I</forename><surname>Pepperberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lynn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American Zoologist</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="893" to="901" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Darwin&apos;s mistake: Explaining the discontinuity between human and nonhuman minds</title>
		<author>
			<persName><forename type="first">D</forename><surname>Penn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Holyoak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Povinelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="109" to="178" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Metacognition: computation, biology and function</title>
		<author>
			<persName><forename type="first">S</forename><surname>Fleming</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dolan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Frith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society B: Biological Sciences</title>
		<imprint>
			<biblScope unit="volume">367</biblScope>
			<biblScope unit="page" from="1280" to="1286" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">The highs and lows of theoretical interpretation in animal-metacognition research</title>
		<author>
			<persName><forename type="first">J</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Couchman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society B: Biological Sciences</title>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">367</biblScope>
			<biblScope unit="page" from="1297" to="1309" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Rhesus monkeys (Macaca mulatta) show robust evidence for memory awareness across multiple generalization tests</title>
		<author>
			<persName><forename type="first">V</forename><surname>Templer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hampton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Animal Cognition</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="409" to="419" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Associative learning and animal cognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dickinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philos Trans R Soc Lond B Biol Sci</title>
		<imprint>
			<biblScope unit="volume">367</biblScope>
			<biblScope unit="page" from="2733" to="2742" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Event representations in Pavlovian conditioning: image and action</title>
		<author>
			<persName><forename type="first">P</forename><surname>Holland</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognition</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="105" to="131" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Within-compound associations mediate retrospective revaluation of causality judgements</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dickinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Burke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Quarterly Journal of Experimental Psychology</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="60" to="80" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Neurology, Understanding the consequences</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">J</forename><surname>Carew</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">407</biblScope>
			<biblScope unit="page" from="803" to="806" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
