<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Compact Crossbars of Multi-Purpose Binders for Neuro-Symbolic Computation: Work in Progress Report</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gadi</forename><surname>Pinkas</surname></persName>
							<email>gadip@mla.ac.il</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Academic Studies</orgName>
								<address>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Priscila</forename><forename type="middle">M V</forename><surname>Lima</surname></persName>
							<email>priscilamvl@ufrrj.br</email>
							<affiliation key="aff1">
								<orgName type="institution">Universidade Federal Rural do Rio de Janeiro Seropédica</orgName>
								<address>
									<country key="BR">Brazil</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Shimon</forename><surname>Cohen</surname></persName>
							<email>shamon51@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="department">Center for Academic Studies</orgName>
								<address>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Compact Crossbars of Multi-Purpose Binders for Neuro-Symbolic Computation: Work in Progress Report</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DA3820BB08723075CF7BCD60A97741D7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T22:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We present a compact, yet expressive, multipurpose, distributed binding mechanism, which is useful for encoding complex symbolic knowledge and computation using Artificial Neural Networks (ANNs) or Satisfiability (SAT) solvers. The technique is demonstrated by encoding unrestricted First Order Logic (FOL) unification problems as Weighted Max SAT problems and then translating the latter into ANNs (or learning them). It is capable of capturing the full expressive power of FOL, and of economically encoding a large Knowledge Base (KB) either as long-term synapses or as clamped units in Working Memory. Given a goal, the mechanism retrieves from the synaptic knowledge just what is needed, while creating novel, compound structures in the Working Memory. Two levels of size reduction are shown. First, we build a Working Memory using a pool of multipurpose binders, based on the assumption that the number of bindings actually needed is far smaller than the number of all theoretically possible bindings. The second level of compactness is due to the fact that, in many symbolic representations, when two objects are bound, there is a many-to-one relationship between them: frequently, either only one value is pointed to by a variable, or only one variable points to a value. A crossbar binding network of n × k units with such a restriction can be transformed into an equivalent neural structure of size O(n log(k)). We show that, for performing unrestricted FOL unifications, the Working Memory created is only logarithmically dependent on the KB size; i.e., O(n log(k)). The variable binding technique described is inherently fault tolerant: there are no fatal failures when some random neurons become faulty, and the ability to cope with complex structures decays gracefully. Processing is distributed, and there is no need for central control, even to allocate binders. The mechanism is general, and can further be used for other applications, such as language processing, FOL inference and planning.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction 1.1 The Binding Problem</head><p>Human cognition is capable of producing combinatorial structures. The general binding problem concerns how items that are encoded in distinct circuits of a massively parallel computing device (such as the brain or an ANN) can be combined in complex ways for perception, reasoning or action <ref type="bibr" target="#b5">[Feldman 2010]</ref>. Consider, for example, a planning problem, where the task is to pick up an object and move it from its current position to another place. In order to meet a goal, a "brain"-like device must be able to represent the object, its properties, its position and the ways to manipulate it, in such a way that the goal is achieved. The object and its properties must be bound together, and this rather complex structure should also be used in conjunction with other entities and rules, such as the action's consequences (e.g., moving X from Y to Z clears position Y while occupying position Z). As another example, consider the sentence "Sally ate". In language processing, the verb "EAT" is a predicate with at least two roles: EAT("Sally",X). The noun "Sally" should be bound to the first role, while an existentially quantified variable (representing "something") should be bound to the second role. Once we learn that "Sally ate salad", and knowing the rule EAT(Y,X) ⇒ DIGESTED(X), we should infer that "the salad is digested". To do so, we must bind the variable X to the noun "salad", while X must be bound to both EAT(Y,X) and DIGESTED(X).</p></div>
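<div xmlns="http://www.tei-c.org/ns/1.0"><p>The role binding in the "Sally ate salad" example can be sketched in a few lines of code. This is an illustrative reconstruction, not the paper's neural encoding; the predicate names follow the text, and the helper function is ours:

```python
# Hypothetical mini-example: a binding is a mapping from rule variables to
# constants, and applying EAT(Y, X) => DIGESTED(X) transfers X's binding.

def apply_rule(fact, rule_premise, rule_conclusion):
    """Match a ground fact against a rule premise; return the bound conclusion."""
    pred, args = fact
    p_pred, p_args = rule_premise
    if pred != p_pred or len(args) != len(p_args):
        return None
    binding = {}
    for var, val in zip(p_args, args):
        if var in binding and binding[var] != val:
            return None  # conflicting binding for the same variable
        binding[var] = val
    c_pred, c_args = rule_conclusion
    # substitute bound variables into the conclusion
    return (c_pred, [binding.get(a, a) for a in c_args])

fact = ("EAT", ["Sally", "salad"])
premise = ("EAT", ["Y", "X"])
conclusion = ("DIGESTED", ["X"])
print(apply_rule(fact, premise, conclusion))  # -> ('DIGESTED', ['salad'])
```

The point of the paper is that exactly this kind of variable-to-value binding must be realized by neural units rather than by a symbolic interpreter.</p></div>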
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Connectionism and Variable Binding</head><p>Over the years, connectionist systems have been criticized for "Propositional Fixation" <ref type="bibr" target="#b11">[McCarthy 1988]</ref>. In <ref type="bibr" target="#b6">[Fodor, Phylyshyn 1988]</ref>, connectionism was criticized for lacking the ability to construct combinatorial representations and to perform processes that are sensitive to complex structure. Exactly how compositionality can occur is a fundamental question in cognitive science, and its binding aspect has been identified as a key to any neural theory of language [Jackendoff 2002]. Several attempts have been made to approach the variable binding problem in a connectionist framework <ref type="bibr" target="#b12">[Shastri, Ajjanagadde 1993]</ref>, <ref type="bibr" target="#b1">[Browne, Sun 2000]</ref>, <ref type="bibr" target="#b16">[Zimmer et al. 2006]</ref>, [Van der Velde, de Kamps 2006], <ref type="bibr">[Barrett et al. 2008]</ref>, <ref type="bibr" target="#b14">[Velik 2010]</ref>; yet virtually all of these suggestions have limitations, related to limited expressiveness, size and memory requirements, central control demands, lossy information, etc.</p><p>For example, compositionality can be provided using Holographic Reduced Representations [Plate 1995]; however, the convolution operation used is lossy, and errors are introduced as structures become more complex or as more operations are performed. The BlackBoard Architecture [Van der Velde, de Kamps 2006] can form complex structures but does not manipulate those structures to perform cognition. Shastri's temporal binding has only limited FOL expressiveness and no mechanism for allocating temporal binders. Finally, all the above systems need numbers of neurons that are at best linear in the KB size, while some use many more than that.<ref type="foot" target="#foot_0">1</ref> For FOL compositionality in ANNs, see <ref type="bibr" target="#b0">[Ballard 1986]</ref>, [Pinkas 1992], <ref type="bibr">[Shastri 1999]</ref>, <ref type="bibr" target="#b10">[Lima 2000]</ref>, <ref type="bibr" target="#b7">[Garcez, Lamb 2006]</ref>. For partial-FOL encodings in Satisfiability, see <ref type="bibr">[Domingos 2008]</ref>, <ref type="bibr" target="#b1">[Clark et al. 2001]</ref>.</p><p>The ability to represent combinatorial structures and to reason with them still presents challenges to theories of neurocognition [Marcus 2001], while the variable binding problem is fundamental to such ability <ref type="bibr" target="#b5">[Feldman 2010]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Unification</head><p>In conventional computing, unification is a key operation for realizing inference, reasoning, planning and language processing. It is the main vehicle by which conventional symbolic systems match rules with facts, or rules with other rules. In unification, two or more distinct hierarchical entities (terms) are merged to produce a single, unified, tree-like structure that adheres to the constraints of both original entities. Formally, unification is an operation which produces, from two or more logic terms, a set of substitutions that either identifies the terms or makes them equal modulo some equational theory. For connectionist approaches to unification, see <ref type="bibr" target="#b7">[Hölldobler 1990]</ref>, <ref type="bibr" target="#b15">[Weber 1992]</ref>, <ref type="bibr" target="#b9">[Komendantskaya 2010]</ref>.</p><p>For ease of reading, we have chosen to demonstrate our compact variable binding mechanism on the more fundamental unification function, rather than on full FOL inference.</p></div>
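<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a point of reference for the symbolic operation being encoded, a minimal Robinson-style unification with occurs check can be sketched as follows. This is an assumed, conventional formulation, not the paper's neural encoding; the term representation (strings starting with an uppercase letter are variables, compound terms are (functor, argument-list) pairs) and all names are ours:

```python
# Minimal first-order unification sketch with occurs check.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # follow variable bindings to their current value
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1])
    return False

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a[1]) == len(b[1]):
        for x, y in zip(a[1], b[1]):
            subst = unify(x, y, subst)
            if subst is None:
                return None  # subterms clash
        return subst
    return None  # functor or arity mismatch

# unify f(X, g(a)) with f(b, g(Y))
print(unify(("f", ["X", ("g", ["a"])]), ("f", ["b", ("g", ["Y"])])))
# -> {'X': 'b', 'Y': 'a'}
```

The paper's contribution is to realize this substitution-finding process as constraint satisfaction over neural crossbars, rather than as a recursive procedure.</p></div>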
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.4">Artificial Neural Networks and SAT</head><p>ANNs may be seen as constraint satisfaction networks, where neuron-units stand for Boolean variables and synapse weights represent constraints imposed on those variables. Any ANN may be seen as such a constraint network; moreover, for ANNs with symmetric weights (e.g., Hopfield networks, Boltzmann Machines, MFT), a simple conversion has been shown for translating any Weighted MAX SAT problem into a symmetric ANN and vice versa <ref type="bibr">[Pinkas 1991]</ref>. Any such SAT problem can be compiled into an ANN which performs stochastic gradient descent on an energy function that essentially counts the number of unsatisfied logical constraints. The size of the generated network is linear in the size of the original formula, though additional hidden units may be required. In addition to compilation, the logical constraints of a network can be PAC-learned using a Hebbian-like rule [Pinkas 1995]; thus, for small-size constraints, a network can efficiently learn its weights and structure from a training set composed of the satisfying models. The performance efficiency of this neural mechanism can be attributed to the similarity of symmetric ANNs to stochastic local search algorithms, such as WALKSAT <ref type="bibr" target="#b7">[Kautz et al. 2004]</ref>. Due to this tight relationship between ANNs and Weighted MAX SAT, our methodology is to specify an ANN designed for a certain symbolic computation (e.g., unification) using a set of Boolean variables (the visible units) and a set of constraints, i.e., Boolean formulae designed to restrict the values of the visible units. The specified constraints force the visible units to converge to a valid solution that satisfies as many (weighted) formulae as possible. We have written a compiler that translates such specifications into either weighted CNF (for Weighted MAX SAT solvers) or into an ANN with symmetric weights.</p><p>We believe that our fault tolerant mechanism and methods for dynamically forming recursive structures will scale and be useful both for the engineering of massively parallel devices and for modeling high-level cognitive processes.</p></div>
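<div xmlns="http://www.tei-c.org/ns/1.0"><p>The energy view described above can be sketched as follows. The tiny weighted CNF and the clause representation (literals as (variable-index, polarity) pairs) are invented for illustration; only the idea of summing the weights of unsatisfied clauses reflects the text:

```python
# Energy of an assignment: each weighted clause contributes its weight
# when unsatisfied, so low energy corresponds to a high Weighted MAX SAT score.

def energy(assignment, weighted_clauses):
    e = 0.0
    for weight, clause in weighted_clauses:
        satisfied = any(assignment[i] == pol for i, pol in clause)
        if not satisfied:
            e += weight
    return e

# (x0 OR NOT x1) with weight 2, (x1) with weight 1
cnf = [(2.0, [(0, True), (1, False)]), (1.0, [(1, True)])]
print(energy([True, True], cnf))    # 0.0 : both clauses satisfied
print(energy([False, False], cnf))  # 1.0 : only the unit clause (x1) is violated
```

A symmetric network performing stochastic descent on such an energy function settles into states that violate as little total weight as possible, which is exactly the Weighted MAX SAT objective.</p></div>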
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Improving CrossBar Binding</head><p>The simplest, most naïve binding technique is crossbar binding. The term was mentioned in <ref type="bibr" target="#b0">[Barrett et al. 2008]</ref>, yet it was intuitively used by many connectionist systems in the past <ref type="bibr" target="#b0">[Ballard 1986]</ref>, <ref type="bibr">[Anandan 1989]</ref> and in many SAT reductions; e.g., <ref type="bibr" target="#b8">[Kautz et al. 2006]</ref>. Formally, we define crossbar binding as a Boolean matrix representation of a relation between two sets of items, using the characteristic matrix of the relation; i.e., if A contains m objects and B contains n objects, then the characteristic matrix R has m rows and n columns, containing m × n Boolean variables (neurons). We say that item i is bound to item j iff R(i,j)=1. In this naïve binding mechanism, a neuron must be allocated for each possible binding, and all theoretical combinations of two items must be pre-enumerated as rows and columns of the matrix. A crossbar matrix that needs to represent a complex tree or a graph must bind together not just simple constituents, but all the compound entities representing partial trees (or sub-graphs). It is possible to represent a FOL KB this way, at the cost of using an enormous number of neurons and with an extremely localist approach. Even more frustrating is the fact that this technique is not suitable for dynamically creating novel, nested structures upon demand: the number of theoretical bindings, for all possible tree structures, grows exponentially with the number of constituent items and must be computed in advance. We improve this simplistic binding mechanism in several steps:</p></div>
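<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal sketch of the naïve crossbar just defined, with one Boolean matrix cell standing in for each possible-binding neuron (the class and method names are ours, and the sizes are illustrative):

```python
# Naive crossbar binding as a characteristic Boolean matrix:
# R[i][j] == 1 iff item i of set A is bound to item j of set B.

class Crossbar:
    def __init__(self, m, n):
        self.R = [[0] * n for _ in range(m)]

    def bind(self, i, j):
        self.R[i][j] = 1

    def bound(self, i, j):
        return self.R[i][j] == 1

    def units(self):
        # one neuron per theoretically possible binding: m * n
        return sum(len(row) for row in self.R)

cb = Crossbar(4, 5)
cb.bind(2, 3)
print(cb.bound(2, 3), cb.units())  # True 20
```

The cost the section criticizes is visible in `units()`: the neuron count is the product of the two item counts, regardless of how few bindings are ever used.</p></div>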
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Using Binders as "pointers" to form Graphs</head><p>First, we introduce<ref type="foot" target="#foot_2">2</ref> a special kind of entity called General Purpose Binders (GPBs). GPBs are similar to pointers, except that a single GPB can point to several objects, as the crossbar paradigm permits the implementation of arbitrary relations between binders and objects. In the special case where binders point to binders, arbitrary directed graphs can be built. In this scenario, we can interpret each GPB as a node in the graph and the crossbar as specifying the arcs of the graph (an adjacency matrix). In this graph interpretation, each node may be labeled using a labeling crossbar that ties binders to symbols such as predicates, functions or constants in FOL. Arcs can also be labeled, as the binder-to-binder crossbar may have a third dimension which relates one or more labels to each arc. This enables the formation of arbitrarily complex graph structures, which can be used to represent language constituents and, in particular, FOL terms, predicates, literals and clauses. Unlike in the naïve crossbar approach, unrestricted graphs can be built directly out of simple constituents, with GPBs as the mechanism for gluing them together.</p><p>Because the binders are general-purpose entities, we can construct a working memory out of a pool of such binders. As long as GPBs remain unallocated, they can be used for the dynamic creation of novel, goal-oriented structures. To do so, the "right" constraints should be embedded in the synapses, forcing binders first to be allocated and then to assume a desired structure for solving the goal. These constraints, stored in the synaptic weights, are the driving force that causes the visible units to converge to the needed graph-like structures.</p><p>Using this technique, we show that an arbitrary KB of size k can be encoded in a Working Memory (WM) with O(k) binders and a total size of O(k 2 ). Unfortunately, as the KB grows, the WM and the set of constraints may become too large for the mechanism to be used in real applications.<ref type="foot" target="#foot_3">3</ref></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Using a pool of binders "As Needed"</head><p>Fortunately, we can reduce this size requirement drastically, as we can assume that, at any given time, only a few binders are actually needed for processing a given goal. This is supported by cognitive studies <ref type="bibr">[Cowan 2001]</ref> and constitutes a common assumption of several connectionist systems <ref type="bibr" target="#b12">[Shastri, Ajjanagadde 1993]</ref>, <ref type="bibr" target="#b0">[Barrett et al. 2008]</ref>. We can therefore design a Working Memory of neural units which uses only a pool of General Purpose Binders, labeled and nested within each other; i.e., a small set of binders representing only those graphs that are actually needed for computing the goal. This approach turns out to be consistent with cognitive theories in which a large KB is stored in synapses (long-term memory), while a smaller working memory retrieves only a few KB items at a time. Only those items that are necessary to the process<ref type="foot" target="#foot_4">4</ref> are retrieved from the synaptic KB. For example, if our purpose is to find a plan for a goal expressed in FOL, we need to design the WM with enough binders to represent a valid plan; we then retrieve the facts and rules of the world from the KB only if they are required by the plan we wish to construct.</p><p>To implement a pool of binders for FOL unification, the WM should contain three crossbar matrices: one for labeling nodes with symbols (predicates, functions, constants); a second for nesting the nodes in graphs and labeling the arcs according to the slots of the predicates and functions; and a third for retrieving items from the long-term memory where the KB is stored (e.g., terms, literals or clauses). This third matrix ties a binder to a KB item and triggers the activation of that item's constraints, so that the binder node is forced to assume the structure of the retrieved KB item. The mechanism starts working as goal-activated constraints cause some binders to be tied to KB items, activating their KB constraints. Those constraints, in turn, activate other constraints, until the WM converges to a valid solution. When we implement unification problems, the size of the WM is O(n × k), where n is the maximal number of nodes in a solution, k is the size of the KB, and n&lt;&lt;k. This constitutes a drastic improvement, as the WM size is linear in the size of the KB instead of quadratic.<ref type="foot" target="#foot_5">5</ref> Actually, we can do even better:</p></div>
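<div xmlns="http://www.tei-c.org/ns/1.0"><p>A back-of-the-envelope sketch of the three WM crossbars just described, under assumed parameters (n binders, s symbols, k KB items; all names and numbers are ours), shows that the n × k retrieval crossbar dominates for large k:

```python
# Unit counts for the three Working Memory crossbars described in the text,
# ignoring constant factors and the arc-label dimension.

def wm_units(n, s, k):
    labeling = n * s    # binder -> symbol (predicates, functions, constants)
    nesting = n * n     # binder -> binder (graph arcs)
    retrieval = n * k   # binder -> KB item (long-term memory retrieval)
    return labeling + nesting + retrieval

# e.g. 10 binders, 50 symbols, a KB of 10000 items
print(wm_units(10, 50, 10000))  # 100600 : dominated by the 10*10000 retrieval part
```

Since n is assumed small and fixed by the goal while k grows with the KB, the total is O(n × k), which motivates the logarithmic reduction of the next section.</p></div>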
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Crossbars with n*log(k) size complexity</head><p>In the next size improvement, we further reduce the size of many crossbar matrices from O(n*k) to O(n*log(k)). Thus, in our unification example, a WM of size O(n*log(k)) is created, where n is the maximal size of a unification tree and k is the size of the KB. This means that the WM size is only logarithmically dependent<ref type="foot" target="#foot_6">6</ref> on the KB size, rather than linearly as in the previous section.</p><p>The key to this log-reduction is the fact that binding relationships frequently have many-to-one or one-to-many restrictions. For example, the crossbar matrix for node labeling allows a binder to point to only a single symbol (whereas many binders may point to the same symbol). This many-to-one relationship causes the rows of the labeling crossbar matrix to be Winner-Takes-All (WTA) arrays, where at most one neural unit may fire. Normally, we need mutual exclusion constraints to force each row of the matrix to be either all zeros or to have a single variable set to one. In such a scenario, however, we can replace each WTA line (with k variables) with a much smaller line of only O(log(k)) variables. Each such line of log(k) variables (neurons) represents an index (or a signature) of the target label. Therefore, if a binder may point to just a single object (out of k possible objects), we may use only log(k)-bit signatures. Fig 1 illustrates how one binder with a WTA line pointing to object 6 (out of 15 objects) is reduced to a 4-bit LOG WTA array representing the signature of that item. This signature, once it emerges in a binder's row, activates a set of constraints associated with the bound object. These constraints force the binder to assume the retrieved item's structure and may cause a chain reaction of further constraints, retrieving more KB items, and so forth. It should be noted that, once a LOG WTA encoding is used instead of the standard WTA, the constraints imposed on the WM might need to be adjusted.<ref type="foot" target="#foot_7">7</ref></p></div>
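<div xmlns="http://www.tei-c.org/ns/1.0"><p>The WTA-to-LOG-WTA reduction of Figure 1 can be sketched numerically. The helper names are ours, and we assume signature 0 is reserved for "unbound" (hence log2(k+1) bits), which matches the 15-objects, 4-bit example of the figure:

```python
import math

# A standard WTA row: k units, at most one firing (one-hot over objects 1..k).
def wta_row(k, target):
    return [1 if j == target else 0 for j in range(1, k + 1)]

# A LOG WTA row: ceil(log2(k+1)) units holding the target's binary signature.
def log_wta_signature(k, target):
    bits = max(1, math.ceil(math.log2(k + 1)))
    # most-significant bit first, computed without bitwise operators
    return [(target // 2 ** b) % 2 for b in reversed(range(bits))]

print(wta_row(15, 6))            # 15 units, only the 6th fires
print(log_wta_signature(15, 6))  # [0, 1, 1, 0] : 4 units encode the same binding
```

The signature [0, 1, 1, 0] is exactly the '0110' of Figure 1: the same binding that needed 15 mutually exclusive units is carried by 4 unconstrained bits.</p></div>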
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Fault Tolerance</head><p>The suggested variable binding mechanism, and its application to unification, are inherently fault tolerant if each variable is allocated a processing unit (a neuron). Small random damage to the neurons does not radically affect the unification process (if at all). For example, if a single neuron related to a binder in one of the crossbar matrices becomes faulty and stops firing, then that binder cannot point to a certain symbol; however, other binders from the pool can be used to point to that symbol if needed. In the meantime, this "faulty" binder may still be used, as it can be allocated to point to other symbols. Even if the faulty neuron starts firing constantly, it may still participate in the process if the symbol pointed to by the "faulty" binder happens to be needed; the binder will simply not be used if that symbol is irrelevant to the goal. If the damage to the WM neurons is more widespread, so that a binder cannot take part in the process at all, then this binder will not be allocated and therefore will not be used in the graph construction. This may reduce the number of available GPB nodes in the largest graph, but it will not destroy the ability of the WM to unify less complex terms (shallower trees).<ref type="foot" target="#foot_8">8</ref></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions</head><p>We have shown a general-purpose binding mechanism that uses a pool of general purpose binders and allocates them to KB items only when they are necessary for achieving the goal. A large KB may be stored in long-term connections rather than in the Working Memory; KB constraints are activated only upon need, and only if they support achieving the goal. We then showed that a further log reduction is possible when the binding represents a many-to-one relationship: the size of a crossbar matrix is then reduced from O(n*k) to O(n*log(k)), and the number of constraints is also reduced. We demonstrated the use of the suggested binding technique in an ANN that performs FOL unification with size <ref type="bibr">O(n×log(k))</ref>. The mechanism is distributed, since there is no central control and even binder allocation is done in a totally distributed way. It is also inherently robust, as no fatal failures occur when neurons "die". We have performed initial experiments with the GPB pool mechanism (without the LOG WTA reduction); these experiments indicate the feasibility of the approach on rather complex unification tasks, including multi-instance parallel unification and recursive occurs checking. LOG WTA and fault tolerance experiments are the subject of ongoing work. The mechanism described is general and can further be used for other applications, such as language processing, FOL inference and planning. We are working on extending the techniques to full FOL inference, and we conjecture that these techniques will also improve other SAT encodings that use crossbar-like bindings, e.g., <ref type="bibr">[Kautz et al. 2006]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. On top: a standard WTA pointing to the 6th object; below: a LOG WTA array with the binary value of 6, representing the 6th object's signature '0110'.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The BlackBoard architecture uses billions of neurons to represent thousands of atomic concepts; HRR production systems [Stewart, Eliasmith 2008] need about one million neurons.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">The method was suggested in [Pinkas 1992] and used in [Lima 2000],<ref type="bibr" target="#b11">[Lima 2007</ref>] for clamping a KB in Working Memory.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">O(k 3 ) constraints are needed for unification in this paradigm.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_4">When an item is already in WM, no retrieving is needed.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_5">The number of constraints needed for unification is O(n 2 k); linear in the KB size, when n&lt;&lt;k.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_6">Even if occurs check is used, the WM size is still linear in the KB size when n&lt;&lt;k.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_7">E.g., mutual exclusion constraints for enforcing WTA are eliminated, and long OR constraints of O(k) size become only of log(k) length.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_8">This property may help in supporting neuro-linguistic theories that relate certain symptoms of aphasia with losing abilities</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Parallel logical inference and energy minimization</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Ballard</surname></persName>
		</author>
		<author>
			<persName><surname>Ballard</surname></persName>
		</author>
		<author>
			<persName><surname>Barrett</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI National Conference on Artificial Intelligence</title>
				<meeting>the AAAI National Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="1986">1986. 1986. 2008</date>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="2361" to="2237" />
		</imprint>
	</monogr>
	<note>A (somewhat) new solution to the binding problem</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Bounded Model Checking Using Satisfiability Solving</title>
		<author>
			<persName><forename type="first">Sun</forename><forename type="middle">;</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Browne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Clarke</forename><forename type="middle">Et</forename></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Raimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">tree pruning in Agrammatism and Broca&apos;s Aphasia</title>
				<meeting><address><addrLine>Heidelberg; Friedman</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2000">2000. 2000. 2001. July 2001. 2002</date>
			<biblScope unit="volume">19</biblScope>
		</imprint>
	</monogr>
	<note>Formal Methods in System Design archive</note>
</biblStruct>



<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The magical number of 4 in short term memory: A reconsideration of mental storage capacity</title>
		<author>
			<persName><forename type="first">; N</forename><surname>Cowan</surname></persName>
		</author>
		<author>
			<persName><surname>Cowan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">24</biblScope>
			<biblScope unit="page">519</biblScope>
			<date type="published" when="2001">2001. 2001. 2008. 2008</date>
		</imprint>
	</monogr>
	<note>CIKM</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">;</forename><forename type="middle">J</forename><surname>Feldman</surname></persName>
		</author>
		<author>
			<persName><surname>Feldman</surname></persName>
		</author>
		<ptr target="http://www.computational-logic.org/content/events/iccl-ss-2010/slides/feldman/papers/Binding8.pdf" />
		<title level="m">The Binding Problem(s</title>
				<imprint>
			<date type="published" when="2010-05">2010. May 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">.Syntactic tree pruning and question production in agrammatism</title>
		<author>
			<persName><forename type="first">Phylyshyn</forename><surname>Fodor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Fodor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">W</forename><surname>Phylyshyn ; Friedman ; Friedmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Connectionism and Symbols</title>
				<editor>
			<persName><forename type="first">Mehler</forename><surname>Pinker</surname></persName>
		</editor>
		<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="1988">1988. 1988</date>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="page" from="117" to="120" />
		</imprint>
	</monogr>
	<note>Connectionism and cognitive architecture: A critical analysis</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Connectionist Computational Model for Epistemic and Temporal Reasoning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Avila Garcez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Lamb</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<date type="published" when="2006">July 2006</date>
			<biblScope unit="volume">18</biblScope>
		</imprint>
	</monogr>
	<note>Hölldobler, S. Technical Report TR-90-012, International Computer Science Institute, Berkeley, 1990</note>
	<note>Kautz. Walksat in the 2004 SAT Competition, International Conference on Theory and Applications of Satisfiability Testing, Vancouver, 2004</note>
	<note>Jackendoff, R. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press, 2002</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">SatPlan: Planning as Satisfiability</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Kautz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bart</forename><surname>Selman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joerg</forename><surname>Hoffmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Abstracts of the 5th International Planning Competition</title>
				<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Unification neural networks: unification by error-correction learning</title>
		<author>
			<persName><forename type="first">E</forename><surname>Komendantskaya</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Logic Journal of the IGPL</title>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Resolution-Based Inference on Artificial Neural Networks</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M V</forename><surname>Lima</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<pubPlace>London, UK</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Department of Computing. Imperial College London</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. Thesis</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Reasoning, non-monotonicity and learning in connectionist networks that capture propositional knowledge</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pinkas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
	<note>Lima, P.M.V. Advances in Brain, Vision, and Artificial Intelligence. Lecture Notes in Computer Science, vol. 11, pp. 623-641, 2007</note>
	<note>Marcus, G.F., 2001</note>
	<note>IEEE Trans. On Neural Networks</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">From associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony</title>
		<author>
			<persName><forename type="first">L</forename><surname>Shastri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ajjanagadde</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="417" to="494" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Building Production Systems with Realistic Spiking Neurons</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Stewart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Eliasmith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Cognitive Science Conference</title>
				<meeting><address><addrLine>Washington, DC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008-08">August 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">From single neuron-firing to consciousness - towards the true solution of the binding problem</title>
		<author>
			<persName><forename type="first">R</forename><surname>Velik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neuroscience and Biobehavioral Reviews</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="993" to="1001" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Connectionist unification with a distributed representation</title>
		<author>
			<persName><forename type="first">V</forename><surname>Weber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCNN-92</title>
				<imprint>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m">Handbook of Binding and Memory: Perspectives from Cognitive Neuroscience</title>
		<editor>
			<persName><forename type="first">H</forename><forename type="middle">D</forename><surname>Zimmer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Mecklinger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><surname>Lindenberger</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
