<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Work in Progress Report</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gadi Pinkas</string-name>
          <email>gadip@mla.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Priscila M. V. Lima</string-name>
          <email>priscilamvl@ufrrj.br</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Academic Studies</institution>
          ,
          <country country="IL">Israel</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universidade Federal Rural do Rio de Janeiro Seropédica</institution>
          ,
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <fpage>14</fpage>
      <lpage>18</lpage>
      <abstract>
        <p />
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Abstract</title>
<p>We present a compact yet expressive multipurpose, distributed binding mechanism, which is
useful for encoding complex symbolic knowledge
and computation using Artificial Neural Networks
(ANNs) or Satisfiability (SAT) solvers. The
technique is demonstrated by encoding unrestricted
First Order Logic (FOL) unification problems as
Weighted Max SAT problems and then translating
the latter into ANNs (or learning them). It is
capable of capturing the full expressive power of FOL,
and of economically encoding a large Knowledge
Base (KB) either as long-term synapses or as clamped
units in Working Memory. Given a goal, the
mechanism retrieves from the synaptic
knowledge just what is needed, while creating
novel, compound structures in the Working Memory.
Two levels of size reduction are shown. First, we
build a Working Memory using a pool of
multipurpose binders, based on the assumption that the
number of bindings actually needed is far
smaller than the number of all theoretically possible
bindings. The second level of compactness is due
to the fact that, in many symbolic representations,
when two objects are bound, there is a many-to-one
relationship between them: frequently, either a
variable points to only one value, or only one variable
points to a given value. A
crossbar binding network of n × k units with such a
restriction can be transformed into an equivalent
neural structure of size O(n log(k)). We show that,
for performing unrestricted FOL unifications, the
Working Memory created is only logarithmically dependent on
the KB size; i.e., O(n log(k)). The variable binding
technique described is inherently fault tolerant:
there are no fatal failures when random
neurons become faulty, and the ability to cope with
complex structures decays gracefully. Processing
is distributed, and there is no need for central
control even to allocate binders. The mechanism is
general and can further be used for other
applications, such as language processing, FOL inference
and planning.</p>
    </sec>
    <sec id="sec-2">
      <title>1.1 The Binding Problem</title>
      <p>Human cognition is capable of producing combinatorial
structures. The general binding problem concerns how items
that are encoded in distinct circuits of a massively parallel
computing device (such as the brain or ANN) can be
combined in complex ways for perception, reasoning or for
action [Feldman 2010]. Consider, for example, a planning
problem where the task is to pick up an object and move it
from its current position to another place. In order to meet a
goal, a “brain”-like device must be able to represent the
object, its properties, its position and the ways to manipulate
it, in such a way that the goal is achieved. The object and its
properties must be bound together, and this rather complex
structure must also be usable in conjunction with other
entities and rules, such as action consequences (e.g., moving
X from Y to Z clears position Y while occupying position
Z). As another example, consider the sentence “Sally ate”.
In language processing, the verb “EAT” is a predicate with
at least two roles: EAT(“Sally”,X). The noun “Sally”
should be bound to the first role, while an existentially
quantified variable (representing “something”) should be
bound to the second role. Once we learn that
“Sally ate salad”, and knowing the rule EAT(Y,X)⇒
DIGESTED(X), we should infer that “the salad is digested”.
To do that, we must bind the variable X to the noun
“salad”, and the same X must be bound in both EAT(,X) and
DIGESTED(X).</p>
    </sec>
    <sec id="sec-3">
<title>1.2 Connectionism and Variable Binding</title>
<p>Over the years, connectionist systems have been criticized
for “Propositional Fixation” [McCarthy 1988]. In [Fodor,
Phylyshyn 1988], connectionism was criticized for lacking
the ability to construct combinatorial representations and to
perform processes that are sensitive to complex structure.
Exactly how compositionality can occur is a fundamental
question in cognitive science, and the binding aspect of it has
been identified as a key to any neural theory of language
[Jackendoff 2002]. Several attempts have been made to
approach the variable binding problem in a connectionist
framework [Shastri, Ajjanagadde 1993], [Browne, Sun
2000], [Zimmer et al. 2006], [Van der Velde, Kamps,
Kamps 2006], [Barrett et al. 2008], [Velik 2010]; yet
virtually all of these suggestions have limitations related to
limited expressiveness, size and memory requirements,
central control demands, or lossy information.</p>
<p>For example, compositionality can be provided using
Holographic Reduced Representations [Plate 1995]; however,
the convolution operation used is lossy, and errors are
introduced as structures become more complex or as more
operations are performed. The BlackBoard Architecture [Van der
Velde, Kamps, Kamps 2006] can form complex structures
but does not manipulate those structures to perform
cognition. Shastri’s temporal binding has only limited FOL
expressiveness and no mechanism for allocating temporal
binders. Finally, all the above systems need numbers of
neurons that are at best linear in the KB size, while some use
many more neurons than that.1 For FOL compositionality in
ANNs see [Ballard 1986], [Pinkas 1992], [Shastri 1999],
[Lima 2000], [Garcez, Lamb 2006]. For partial-FOL
encodings in Satisfiability, see [Domingos 2008], [Clarke et al.
2001].</p>
<p>The ability to represent combinatorial structures and to
reason with them still presents challenges to theories of
neurocognition [Marcus 2001], while the variable binding
problem is fundamental to that ability [Feldman 2010].</p>
    </sec>
    <sec id="sec-4">
<title>1.3 Unification</title>
      <p>In conventional computing, unification is a key operation
for realizing inference, reasoning, planning and language
processing. It is the main vehicle for conventional symbolic
systems to match rules with facts, or rules with other rules.
In unification, two or more distinct hierarchical entities
(terms) are merged to produce a single, unified, tree-like
structure. This unified structure adheres to the constraints of
both original entities. Formally, unification is an
operation which produces, from two or more logic terms, a set of
substitutions that either identifies the terms or makes the
terms equal modulo some equational theory. For
connectionist approaches to unification see [Hölldobler 1990],
[Weber 1992], [Komendantskaya 2010].</p>
<p>For ease of reading, we have chosen to demonstrate
our compact variable binding mechanism on the more
fundamental unification operation, rather than on full FOL
inference.</p>
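For readers less familiar with the operation, the following is a minimal symbolic unification sketch in Python (with occurs check). It illustrates only the conventional algorithm that our network encodes, not the neural encoding itself; the term representation, '?'-prefixed strings for variables and tuples for compound terms, is our own illustrative choice.

```python
# A minimal symbolic unification sketch (illustrative representation):
# variables are strings starting with '?', compound terms are tuples
# (functor, arg1, ..., argN), constants are plain strings.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    # Returns a substitution dict, or None if the terms do not unify.
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# EAT(Y, X) against EAT(Sally, salad): binds Y to Sally and X to salad.
print(unify(('EAT', '?Y', '?X'), ('EAT', 'Sally', 'salad')))
```

Unifying EAT(Y,X) with EAT(Sally,salad) yields the substitution binding Y to Sally and X to salad, the binding discussed in Section 1.1.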
    </sec>
    <sec id="sec-5">
<title>1.4 Artificial Neural Networks and SAT</title>
<p>ANNs may be seen as constraint satisfaction networks,
where neuron-units stand for Boolean variables and
the synapse weights represent constraints imposed on those
variables. Any ANN may be seen as such a constraint
network; yet, for ANNs with symmetric weights (e.g., Hopfield networks,
Boltzmann Machines, MFT), a simple conversion has been
shown for translating any Weighted Max SAT problem
into a symmetric ANN and vice versa [Pinkas, 1991]. Any
such SAT problem can be compiled into an ANN which
performs stochastic gradient descent on an energy function
that essentially counts the number of unsatisfied logical
constraints. The size of the generated network is linear in the
size of the original formula, though additional hidden units
may be required. In addition to compilation, the logical
constraints of a network can be PAC-learnt using a
Hebbian-like rule [Pinkas 1995]; thus, for small-size constraints, a
network can efficiently learn its weights and structure from
a training set composed of the satisfying models. The
performance efficiency of this neural mechanism can be
attributed to the similarities of symmetric ANNs to
stochastic local search algorithms such as WALKSAT [Kautz et al
2004]. Due to the tight relationship between ANNs and
Weighted Max SAT, our methodology is to specify an ANN
designed for a certain symbolic computation (e.g., unification)
using a set of Boolean variables (the visible units) and a set
of constraints, i.e., Boolean formulae designed to
restrict the values of the visible units. The constraints specified
force the visible units to converge to a valid
solution that satisfies as many (weighted) formulae as
possible. We have written a compiler that translates such
specifications into either weighted CNF (for Weighted Max SAT
solvers) or an ANN with symmetric weights.
1 The BlackBoard architecture uses billions of neurons to
represent thousands of atomic concepts; HRR Production systems
[Stewart, Elliasmith 2008] need about one million neurons.</p>
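The Weighted Max SAT view above can be sketched in a few lines; the clause format and the flip rule below are illustrative assumptions rather than our compiler's actual output, but they show how an energy function counting the weight of unsatisfied constraints can drive a stochastic local search, in the spirit of WALKSAT and symmetric ANNs.

```python
import random

# Sketch: weighted clauses as constraints, energy = weight of unsatisfied
# clauses, and a stochastic flip rule performing local search on it.
# The clause encoding (signed integers over unit indices) is illustrative.

def unsatisfied_weight(clauses, state):
    # state maps unit index to 0/1; a clause is (weight, [signed literals]).
    total = 0.0
    for weight, lits in clauses:
        satisfied = any(state[abs(l)] == (1 if l > 0 else 0) for l in lits)
        if not satisfied:
            total += weight
    return total

def local_search(clauses, n_units, steps=1000, temperature=0.5, seed=0):
    rng = random.Random(seed)
    state = {i: rng.randint(0, 1) for i in range(1, n_units + 1)}
    energy = unsatisfied_weight(clauses, state)
    best_state, best_energy = dict(state), energy
    for _ in range(steps):
        i = rng.randint(1, n_units)   # pick a random unit
        state[i] ^= 1                 # tentatively flip it
        new_energy = unsatisfied_weight(clauses, state)
        # Accept downhill or equal moves; accept uphill moves only with
        # some probability, as a stochastic (Boltzmann-like) unit would.
        if new_energy > energy and rng.random() > temperature:
            state[i] ^= 1             # reject: flip back
        else:
            energy = new_energy
            if best_energy > energy:
                best_state, best_energy = dict(state), energy
    return best_state, best_energy

# (x1 OR x2) AND (NOT x1 OR x2): any state with x2 = 1 has zero energy.
clauses = [(1.0, [1, 2]), (1.0, [-1, 2])]
state, energy = local_search(clauses, n_units=2)
print(state, energy)
```

Minimizing this energy is equivalent to Weighted Max SAT; in the symmetric-ANN reading, each flip is one stochastic unit update.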
<p>We believe that our fault tolerant mechanism and
methods for dynamically forming recursive structures will scale,
and will be useful both for engineering massively parallel
devices and for modeling high-level cognitive processes.</p>
    </sec>
    <sec id="sec-6">
<title>2 Improving CrossBar Binding</title>
<p>The simplest, most naïve binding technique is crossbar
binding. The term was mentioned in [Barrett et al. 2008],
yet it was intuitively used by many earlier connectionist systems
[Ballard 1986], [Anandan 1989] and in many SAT
reductions, e.g., [Kautz et al 2006]. Formally, we define
crossbar binding as a Boolean matrix representation of a
relation between two sets of items, using the characteristic matrix
of the relation; i.e., if A contains m objects and B contains n
objects, then the characteristic matrix R has m rows and n
columns, containing m × n Boolean variables (neurons). We
say that item i is bound to item j iff R(i,j)=1. In this naïve
binding mechanism, a neuron must be allocated for each
possible binding, and all theoretically possible combinations of two
items must be pre-enumerated as rows and columns of the
matrix. A crossbar matrix that needs to represent a complex
tree or graph must bind together not just simple
constituents, but all the compound entities representing partial
trees (or sub-graphs). It is possible to represent a FOL KB
this way, at the cost of using an enormous number of
neurons and an extremely localist approach. Even more
frustrating is the fact that this technique is not suitable
for dynamically creating novel, nested structures upon
demand: the number of theoretically possible bindings, over all possible tree
structures, grows exponentially with the number of
constituent items and must be computed in advance.</p>
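As a toy illustration of the definition above, a crossbar allocates one Boolean variable (neuron) per possible binding; the item sets below are hypothetical examples, not from any particular KB.

```python
# A toy crossbar: R is the m-by-n characteristic matrix of a relation
# between item sets A and B, one Boolean "neuron" per possible binding.
# The item sets are hypothetical examples.

A = ['EAT', 'DIGESTED']        # e.g., predicate symbols
B = ['Sally', 'salad', 'X']    # e.g., role fillers

m, n = len(A), len(B)
R = [[0] * n for _ in range(m)]  # m x n Boolean variables

def bind(i, j):
    R[i][j] = 1

def is_bound(i, j):
    return R[i][j] == 1

bind(0, 0)   # bind EAT's row to Sally
bind(0, 1)   # bind EAT's row to salad

# m*n units are allocated even though only 2 bindings are in use --
# exactly the waste the reductions below remove.
print(sum(sum(row) for row in R), 'of', m * n, 'units in use')
```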
<p>We improve this simplistic binding mechanism in several
steps:</p>
    </sec>
    <sec id="sec-7">
<title>2.1 Using Binders as “pointers” to form Graphs</title>
<p>First, we introduce2 a special kind of entity called
General Purpose Binders (GPBs). GPBs are similar to pointers,
except that a single GPB can point to several
objects, as the crossbar paradigm permits the implementation of
arbitrary relations between binders and objects. In the
special case where binders point to binders, arbitrary directed
graphs can be built. In this scenario, we can interpret each
GPB as a node in the graph, and the crossbar as specifying
the arcs of the graph (an adjacency matrix). In this graph
interpretation, each node may be labeled using a labeling
crossbar that ties binders to symbols such as
predicates, functions or constants in FOL. Arcs can also be
labeled, as the binder-to-binder crossbar may have a third
dimension which relates one or more labels to each arc. This
enables the formation of arbitrarily complex graph structures
that can be used to represent language constituents and, in
particular, FOL terms, predicates, literals and clauses.
Unlike in the naïve crossbar approach, unrestricted graphs can
be built directly out of simple constituents, with GPBs as the
mechanism for gluing them together.</p>
<p>Because the binders are general-purpose entities, we can
construct a Working Memory out of a pool of such binders.
As long as GPBs remain unallocated, they can be used for
the dynamic creation of novel, goal-oriented structures. To do
so, the “right” constraints should be embedded in the
synapses, forcing binders first to be allocated and then to assume
a desired structure for solving the goal. These constraints,
stored in the synaptic weights, are the driving force that
causes the visible units to converge to the needed graph-like
structures.</p>
<p>Using this technique, we show that an arbitrary KB of size k
can be encoded in a Working Memory (WM) with O(k)
binders and a total size of O(k²). Unfortunately, as
the KB grows, the WM and the set of constraints
may become too large for the mechanism to be used in real
applications.3</p>
    </sec>
    <sec id="sec-8">
<title>2.2 Using a pool of binders “As Needed”</title>
<p>Luckily, we can reduce that size requirement drastically, as
we can assume that, at any given time, only a few binders are
actually needed for processing a given goal. This is
supported by cognitive studies [Cowan 2001] and
constitutes a common assumption of several connectionist
systems [Shastri, Ajjanagadde 1993], [Barrett et al 2008]. We
can therefore design a Working Memory of neural units
which uses only a pool of General Purpose Binders, labeled
and nested within each other; i.e., a small set of binders for
representing only those graphs that are actually needed for
computing the goal. It turns out that this approach is
consistent with cognitive theories in which a large KB is stored in
synapses (long-term memory) and a smaller working
memory is used for retrieving only a few KB items at a time.
Only those items that are necessary to the process4 get
retrieved from the synaptic KB. For example, if our purpose
is to find a plan for a goal expressed in FOL, we need to
design the WM with enough binders to represent a valid
plan; we then retrieve the facts and rules of the world from the
KB only if they are required by the plan we desire to make.
2 The method was suggested in [Pinkas 1992] and used in
[Lima 2000], [Lima 2007] for clamping a KB in Working Memory.
3 O(k³) constraints are needed for unification in this paradigm.</p>
<p>To implement a pool of binders for FOL unification, the
WM should contain three crossbar matrices: one for
labeling nodes by symbols (predicates, functions, constants); a
second for nesting the nodes in graphs and labeling the
arcs according to the slots of the predicates and functions; and a
third for retrieving items from the long-term
memory where the KB is stored (e.g., terms, literals or
clauses). This third matrix ties a binder to a KB item and
triggers the constraints of that item to be activated, so that
the binder node is forced to assume the structure of the
retrieved KB item. The mechanism starts working as goal-activated
constraints cause some binders to be tied to KB items
and activate some KB constraints. Those constraints, in
turn, activate other constraints, until the WM converges to a
valid solution. When we implement unification problems,
the size of the WM is O(n × k), where n is the maximal
number of nodes in a solution, k is the size of the KB, and n ≪ k.
This constitutes a drastic improvement, as the WM size is
linear in the size of the KB, instead of being quadratic.5
Actually, we can do even better:</p>
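The size accounting for the three WM crossbars can be made concrete with a small calculation; the numbers below are illustrative, not taken from our experiments.

```python
# Unit counts for the three WM crossbars described above, under the
# assumption that n is much smaller than k (numbers are illustrative).

def wm_size(n, k):
    labeling = n * k     # binder-to-symbol crossbar
    nesting = n * n      # binder-to-binder crossbar (graph arcs)
    retrieval = n * k    # binder-to-KB-item crossbar
    return labeling + nesting + retrieval

k = 10_000   # KB size
n = 50       # maximal number of nodes needed for a goal
print(wm_size(n, k))   # about 1e6 units: O(n*k), versus O(k^2) = 1e8
```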
    </sec>
    <sec id="sec-9">
      <title>2.3 Crossbars with n*log(k) size complexity</title>
<p>In the next size improvement, we further reduce the size of
many crossbar matrices from O(n*k) to O(n*log(k)). Thus,
in our unification example, a WM of size O(n*log(k)) is
created, where n is the maximal size of a unification tree and k is
the size of the KB. This means that the WM size is only
logarithmically dependent6 on the KB size, rather than linearly
dependent as in the previous section.</p>
<p>The key to this log-reduction is the fact that, frequently,
binding relationships have many-to-one or one-to-many
restrictions. For example, the crossbar matrix for node
labeling allows a binder to point to only a single symbol
(whereas many binders may point to the same symbol).
This many-to-one relationship causes the rows of the
labeling crossbar matrix to be Winner-Takes-All (WTA) arrays,
in which only one neural unit (if any) may fire. Normally, we
need mutual exclusion constraints to force the rows of the
matrix to be either all zeros or to have a single variable set to
one. In such a scenario, however, we can replace each WTA
line (of k variables) with a much smaller line of only
O(log(k)) variables. Each such line of log(k) variables
(neurons) represents an index (or a signature) of the target label.
Therefore, if a binder may point to just a single object (out
of k possible objects), we may use only log(k)-bit
signatures. Fig. 1 illustrates how one binder with a WTA line that
points to object 6 (out of 15 objects) is reduced to a 4-bit
LOG WTA array representing the signature of that
item. This signature, once it emerges in a binder’s row,
activates a set of constraints associated with the bound object.
These constraints force the binder to assume the retrieved item’s
structure, and may cause a chain reaction of more
constraints, retrieving more KB items, and so forth.
4 When an item is already in WM, no retrieval is needed.
5 The number of constraints needed for unification is O(n²k);
linear in the KB size when n ≪ k.
6 Even if occurs check is used, the WM size is still linear in
the KB size when n ≪ k.</p>
<p>It should be noted that, once a LOG WTA encoding is used
instead of the standard WTA, the constraints imposed on the
WM might need to be adjusted.7
7 E.g., mutual exclusion constraints for enforcing WTA are of
log(k) length.</p>
    </sec>
    <sec id="sec-10">
<title>3 Fault Tolerance</title>
<p>The variable binding mechanism suggested, and its
application to unification, are inherently fault tolerant if each
variable is allocated a processing unit (a neuron). Small random
damage to the neurons does not radically affect the
unification process (if at all). For example, if a single neuron
related to a binder in one of the crossbar matrices becomes
faulty and stops firing, then the binder cannot point to a
certain symbol; however, other binders from the pool can be
used to point to that symbol if needed. In the
meantime, this “faulty” binder may still be used, as it can be
allocated to point to other symbols. Even if the faulty
neuron starts firing constantly, it may still participate in the
process if the symbol pointed to by that “faulty” binder
happens to be needed. The binder will simply not be used if
that symbol is irrelevant to the goal. If the damage to the
WM neurons is more widespread, so that a binder cannot
take part in the process, then this binder will not be
allocated, and therefore will not be used in the graph construction.
This may reduce the number of GPB nodes available for the
largest graph, but will not destroy the ability of the WM to
unify less complex terms (shallower trees).8
8 This property may help in supporting neuro-linguistic
theories such as Agrammatism and Broca's Aphasia [Friedman
2002].</p>
    </sec>
    <sec id="sec-11">
<title>4 Conclusions</title>
<p>We have shown a general-purpose binding mechanism that
uses a pool of general purpose binders and allocates them to
KB items only when they are necessary for achieving the
goal. A large KB may be stored in long-term connections
rather than in the Working Memory. KB constraints are
activated only upon need, and only if they are supportive of
achieving the goal. We then showed that a further log
reduction is possible when the binding represents a many-to-one
relationship: the size of a crossbar matrix is then reduced from
O(n*k) to O(n*log(k)), and the number of constraints is also
reduced.9 We demonstrated the use of the suggested binding
technique in an ANN that performs FOL unification with size10
that is O(n×log(k)). The mechanism is distributed, since
there is no central control and even binder allocation is done
in a totally distributed way. It is also inherently robust, as no
fatal failures occur when neurons “die”. We have performed
initial experiments with the GPB pool mechanism (without
the LOG WTA reduction); these experiments indicate the
feasibility of the approach on rather complex unification
tasks, including multi-instance parallel unification and
recursive occurs checking. LOG WTA and fault tolerance
experiments are the subject of ongoing work. The mechanism
described is general and can further be used for other
applications, such as language processing, FOL inference and
planning. We are working on extending the techniques to
full FOL inference, and conjecture that these techniques will
also improve other SAT encodings that use crossbar-like
bindings, e.g., as in [Kautz et al 2006].
9 Note that some constraints may grow in size complexity,
while others may shrink.
10 O(n² + log(k)) with occurs check.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Ballard 1986]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Ballard</surname>
          </string-name>
          .
          <article-title>Parallel logical inference and energy minimization</article-title>
          .
          <source>In Proceedings of the AAAI National Conference on Artificial Intelligence</source>
          , pages
          <fpage>203</fpage>
          -
          <lpage>208</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Barrett et al 2008]
          <string-name>
            <surname>Barrett</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feldman</surname>
            <given-names>JA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mac Dermed L.</surname>
          </string-name>
          <article-title>A (somewhat) new solution to the binding problem</article-title>
          .
          <source>Neural Computation</source>
          ,
          <volume>20</volume>
          :
          <fpage>2361</fpage>
          -
          <lpage>2378</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Browne, Sun 2000]
          <string-name>
            <given-names>A.</given-names>
            <surname>Browne</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Sun</surname>
          </string-name>
          .
          <article-title>Connectionist variable binding</article-title>
          . Springer, Heidelberg,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Clarke et al 2001]
          <string-name>
            <given-names>E.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Raimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Bounded Model Checking Using Satisfiability Solving</article-title>
          ,
          <source>Formal Methods in System Design</source>
          , Volume
          <volume>19</volume>
          , Issue 1,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Cowan</source>
          <year>2001</year>
          ]
          <string-name>
            <given-names>N</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <article-title>The magical number of 4 in short term memory: A reconsideration of mental storage capacity</article-title>
          ,
          <source>Behavioral and Brain Sciences</source>
          ,
          <volume>1</volume>
          (
          <issue>24</issue>
          ),
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>[Domingos</source>
          <year>2008</year>
          ]
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          ,
          <article-title>Markov logic: a unifying language for knowledge and information management</article-title>
          .
          <source>CIKM</source>
          <year>2008</year>
          :
          <volume>519</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[Feldman</source>
          <year>2010</year>
          ]
          <string-name>
            <given-names>J.</given-names>
            <surname>Feldman</surname>
          </string-name>
          ,
          <source>The Binding Problem(s)</source>
          , http://www.computational-logic.org/content/events/icclss-2010/slides/feldman/papers/Binding8.pdf, accessed on May
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Fodor, Phylyshyn 1988]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Fodor</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z. W.</given-names>
            <surname>Phylyshyn</surname>
          </string-name>
          ,
          <article-title>Connectionism and cognitive architecture: A critical analysis</article-title>
          .
          <source>In Pinker and Mehler (eds): Connectionism and Symbols</source>
          ,
          <fpage>3</fpage>
          -
          <lpage>71</lpage>
          , MIT Press,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Friedman 2002] Friedmann,
          <string-name>
            <surname>N.</surname>
          </string-name>
          <article-title>Syntactic tree pruning and question production in agrammatism</article-title>
          .
          <source>Brain and Language</source>
          ,
          <volume>83</volume>
          ,
          <fpage>117</fpage>
          -
          <lpage>120</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Garcez, Lamb,
          <year>2006</year>
          ]
          <string-name>
            <given-names>A. S. d'Avila</given-names>
            <surname>Garcez</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Lamb</surname>
          </string-name>
          .
          <article-title>A Connectionist Computational Model for Epistemic and Temporal Reasoning</article-title>
          .
          <source>Neural Computation</source>
          <volume>18</volume>
          (
          <issue>7</issue>
          ):
          <fpage>1711</fpage>
          -
          <lpage>1738</lpage>
          , MIT Press,
          <year>July 2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Hölldobler 1990]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          ,
          <article-title>A Connectionist Unification Algorithm</article-title>
          .
          <source>International Computer Science Institute, Berkeley</source>
          , TR-
          <volume>90</volume>
          -012,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Jackendoff 2002] Jackendoff,
          <string-name>
            <surname>Ray</surname>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Foundations of Language: Brain, Meaning</article-title>
          , Grammar, Evolution. Oxford: Oxford University Press. pp.
          <fpage>477</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Kautz et al 2004]
          <string-name>
            <given-names>Henry</given-names>
            <surname>Kautz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bart</given-names>
            <surname>Selman</surname>
          </string-name>
          and
          <string-name>
            <given-names>David</given-names>
            <surname>McAllester</surname>
          </string-name>
          ,
          <article-title>Walksat in the 2004 SAT Competition</article-title>
          ,
          <source>International Conference on Theory and Applications of Satisfiability Testing</source>
          , Vancouver,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Kautz et al 2006]
          <string-name>
            <given-names>Henry</given-names>
            <surname>Kautz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bart</given-names>
            <surname>Selman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Joerg</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          .
          <article-title>SatPlan: Planning as Satisfiability</article-title>
          ,
          <source>Abstracts of the 5th International Planning Competition</source>
          ,
          <year>2006</year>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Komendantskaya 2010]
          <string-name>
            <given-names>E.</given-names>
            <surname>Komendantskaya</surname>
          </string-name>
          ,
          <article-title>Unification neural networks: unification by error-correction learning</article-title>
          ,
          <source>Logic Jnl IGPL</source>
          ,
          <year>2010</year>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Lima, 2000]
          <string-name>
            <given-names>P. M. V.</given-names>
            <surname>Lima</surname>
          </string-name>
          :
          <source>Resolution-Based Inference on Artificial Neural Networks. Ph.D. Thesis</source>
          , Department of Computing. Imperial College London, UK (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [Lima et al 2007]
          <string-name>
            <given-names>P. M. V.</given-names>
            <surname>Lima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Morveli-Espinoza</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. M. G.</given-names>
            <surname>França</surname>
          </string-name>
          ,
          <article-title>Logic as Energy: A SAT-Based Approach</article-title>
          ,
          <source>Advances in Brain, Vision, and Artificial Intelligence, Lecture Notes in Computer Science</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Marcus 2001]
          <string-name>
            <given-names>G. F.</given-names>
            <surname>Marcus</surname>
          </string-name>
          ,
          <article-title>The Algebraic Mind</article-title>
          . Cambridge, MA: MIT Press,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [McCarthy 1988]
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <article-title>Epistemological challenges for connectionism</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          ,
          <volume>11</volume>
          :
          <fpage>44</fpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [Pinkas 1991] G. Pinkas, “
          <article-title>Symmetric neural networks and logic satisfiability</article-title>
          ,”
          <source>Neural Computation</source>
          <volume>3</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>282</fpage>
          -
          <lpage>291</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [Pinkas 1992] G. Pinkas, “
          <article-title>Constructing syntactic proofs in symmetric networks</article-title>
          ,” in
          <source>Advances in Neural Information Processing Systems 4 (NIPS-91)</source>
          , pp.
          <fpage>217</fpage>
          -
          <lpage>224</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [Pinkas 1995] G. Pinkas, “
          <article-title>Reasoning, non-monotonicity and learning in connectionist networks that capture propositional knowledge</article-title>
          ,”
          <source>Artificial Intelligence Journal (AIJ)</source>
          <volume>77</volume>
          , pp.
          <fpage>203</fpage>
          -
          <lpage>247</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [Plate 1995]
          <string-name>
            <given-names>T.</given-names>
            <surname>Plate</surname>
          </string-name>
          ,
          <article-title>Holographic Reduced Representations</article-title>
          ,
          <source>IEEE Trans. On Neural Networks</source>
          <volume>6</volume>
          (
          <issue>3</issue>
          ),
          <fpage>623</fpage>
          -
          <lpage>641</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [Shastri, Ajjanagadde 1993]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shastri</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Ajjanagadde</surname>
          </string-name>
          ,
          <article-title>From associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          <year>1993</year>
          ;
          <volume>16</volume>
          (
          <issue>3</issue>
          ):
          <fpage>417</fpage>
          -
          <lpage>494</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [Stewart, Eliasmith 2008]
          <string-name>
            <given-names>T.C.</given-names>
            <surname>Stewart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Eliasmith</surname>
          </string-name>
          ,
          <article-title>Building Production Systems with Realistic Spiking Neurons</article-title>
          ,
          <source>Cognitive Science Conference</source>
          , Washington, DC,
          <year>August 2008</year>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [Velik 2010]
          <string-name>
            <given-names>R.</given-names>
            <surname>Velik</surname>
          </string-name>
          ,
          <article-title>From single neuron-firing to consciousness - towards the true solution of the binding problem</article-title>
          ,
          <source>Neuroscience &amp; Biobehavioral Reviews</source>
          ,
          <year>2010</year>
          ,
          <volume>34</volume>
          (
          <issue>7</issue>
          ): pp.
          <fpage>993</fpage>
          -
          <lpage>1001</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [Weber
          <year>1992</year>
          ]
          <string-name>
            <given-names>V.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <article-title>Connectionist Unification with a distributed Representation</article-title>
          , in
          <source>IJCNN-92</source>
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [Zimmer et al. 2006]
          <string-name>
            <given-names>HD</given-names>
            <surname>Zimmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mecklinger</surname>
          </string-name>
          , U. Lindenberger (eds),
          <article-title>Handbook of binding and memory - Perspectives from cognitive neuroscience</article-title>
          . Oxford University Press,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>