<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reasoning by Analogy Using Past Experiences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>F. Leuzzi</string-name>
          <email>fabio.leuzzi@uniba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>S. Ferilli</string-name>
          <email>stefano.ferilli@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Centro Interdipartimentale per la Logica e sue Applicazioni - Università di Bari</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Dipartimento di Informatica - Università di Bari</institution>
        </aff>
      </contrib-group>
      <fpage>115</fpage>
      <lpage>129</lpage>
      <abstract>
        <p>Reasoning by analogy is essential to provide new conclusions helpful to solve a problem. Here we present the definition of a new operator aimed at reasoning by analogy. The proposed reasoner relies on the Roles Mapping Engine. It finds analogous roles in descriptions encoded with domain-specific terminology, overcoming the syntactic constraint that requires relations to have the same name. We also employ a structural similarity function to handle cases affected by ambiguity. We evaluate our approach on examples proposed in other works, producing comparable results.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        As pointed out by [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], reasoning by analogy is essential in order to produce new
conclusions helpful to solve a problem. In particular, some perspectives make this
type of reasoning a primary issue. First, in the study of learning, analogies are
important in the transfer of knowledge and inferences across different concepts,
situations, or domains. Second, analogies are often used in problem solving and
reasoning. Third, analogies can serve as mental models to understand new
domains. Fourth, analogy is important in creativity. Studies in the history of
science show that analogy was a frequent mode of thought for such great scientists
as Faraday, Maxwell, and Kepler. Fifth, analogy is used in communication and
persuasion. Sixth, analogy and its cousin, similarity, underlie many other cognitive processes.
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] defines analogies as partial similarities between different situations that
support further inferences. Specifically, analogy is defined as a kind of similarity
in which the same system of relations holds across different objects. Analogies
thus capture parallels across different situations.
      </p>
      <p>This proposal consists of the definition of a new operator aimed at reasoning
by analogy. The reasoner relies on the Roles Mapping Engine (RME), which finds
common roles across descriptions. The long-term objective is to embed in such a
definition a strategy for cooperation with other reasoning operators.</p>
      <p>The remainder of this work is organized as follows: Section 2 presents related
works together with our criticisms; Section 3 reports some considerations about
the analogy process and then presents the proposed mapping procedure in detail;
the subsequent section presents an evaluation; finally, we conclude with some
considerations and future works.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Many works have studied operators aimed at reasoning by analogy.
The most popular line of thought searches for identities between predicates
across domains, using propositional or first-order logic as the representation
formalism. In particular, in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] the authors aim to compose goals and sub-goals
using analogous experiences. They claim that retrieving one (or many)
analogies consists in finding similar past cases, imposing that the same
predicate be found in both experiences. This proposal contrasts with the canonical
definition of analogy, in which only a correlation between roles is expected.
      </p>
      <p>
        A similar assumption can be found in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], in which the author proposes a
strategy for knowledge projection between domains. This procedure is based on
a definition of analogy presented in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which relies on the assumption that some
given terms are analogous across domains if they are tied by the same predicate
(as also stated in [
        <xref ref-type="bibr" rid="ref11 ref8">8, 11</xref>
        ]). In [
        <xref ref-type="bibr" rid="ref3 ref5 ref7">5, 3, 7</xref>
        ], the central statement is not so different:
the authors claim that analogy is characterized by the mapping of relations
(having the same predicate name) between objects, rather than of object attributes,
from base to target. The contribution that sets this work slightly apart from
the others is that it isolates four primary classes of matching:
literal similarity (in which both predicates and attributes are mapped), analogy
(in which mainly predicates are mapped), abstraction (behaving like analogy,
but using the mapping for different goals) and anomaly (a failed attempt at
analogical mapping). Which class applies depends on the domain in which an
analogy is sought.
      </p>
      <p>
        The same assumption is taken into account in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], in which the author
trains a classifier on a set of word pairs labelled with the name of their
relationship. The output of the algorithm is a classification, where the learned
class describes a given relationship. All the pairs belonging to this class are
claimed to be analogues. We point out some limitations. First, evaluating a
single relationship at a time amounts to considering each relationship in a
complex context as independent from the others. Second, this work proposes
a supervised approach that requires concept descriptions with a fixed list of
features, a requirement that is not always met in real cases.
      </p>
      <p>
        A different approach has been proposed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], in which analogies are carried
out using Evolutionary Computation. The authors represent knowledge as
semantic graphs and generate the dataset using common-sense knowledge bases.
Potential analogies are then generated through evolution (i.e. using pairs
of descriptions as parents, probabilistically selected from the population with
reselection allowed) and evaluated through the Structure Mapping Engine (SME, [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ])
that provides a fitness score. Despite the novelty of this approach, the
methodology relies on the mapping of equal relations only.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the authors use the SME [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] in order to recognize similar sketches.
In particular, they propose a software system in which an expert can draw a
sketch that is stored as ground truth. A student can then draw a sketch in
turn, which is compared to the stored one with the aim of catching analogical
aspects and checking its correctness. Unfortunately, this is a typical pattern
recognition task, which fits with similarity evaluations. In fact, this work
proposes an algorithm that exploits a numerical technique to revise the SME’s
mistakes, instead of considering that analogy involves semantic aspects of
reasoning that are far from pattern recognition in sketches.
      </p>
      <p>In a general view, these attempts produce analogies by searching for identities
between predicates across domains. We must underline that, despite the
reliability of these works (shown by their experimental results), the search for
identical predicates alone may not be enough to generate useful analogies.</p>
    </sec>
    <sec id="sec-3">
      <title>The analogical reasoner</title>
      <p>In this Section we present our approach. For the sake of clarity, the
description used as prior knowledge is referred to as the base domain, whereas the
description of the current problem is referred to as the target domain.</p>
      <p>We introduce some assumptions that limit our scope: (1) we assume that all
the descriptions are encoded at the same abstraction level; (2) this work does
not take into account the formalization of the goal, and therefore suggests all
plausible analogies.</p>
      <p>Retrieving oldest useful knowledge We want to face the retrieval of
potentially analogous experiences. A good starting point is an evaluation of
common knowledge, which provides hints about potential points of contact between
descriptions. Furthermore, evaluating the subnet of common-knowledge relations
makes it possible to include the direct dependences between common statements.
Adding this relational dimension allows a soundness check of the results.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the authors propose to face this step using a structural similarity
evaluation, hoping that this choice increases the chance of retrieving
surprising and creative source domains. They describe the experiences in an
n-dimensional structure space, where each dimension represents a particular
topological quality of the domain. In this way they can project the experiences
into a vector space. Unfortunately, with such a representation formalism some
information is lost (e.g. connections among concepts).
      </p>
      <p>
        For this reason, we decided to evaluate similarity using the structural
measure proposed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], denoted fs, because it performs a multi-level evaluation combining
similarities at the term, literal and clause level. We apply the fs measure
between a given description and each other description in the background
knowledge. Unfortunately, this measure has a drawback: given two short, similar
descriptions, the fs score will indicate similarity, and yet the knowledge that
can effectively be exploited for inference could be poor. For this reason, the
score is smoothed by a multiplicative factor denoted mf. Given a set of clauses
S and two clauses C0 ∈ S and C00 ∈ S, mf is computed with respect to
L = arg maxC∈S |C|, the longest clause in S. Such a trick avoids obvious or
useless analogies between descriptions that are too short or too unbalanced.
      </p>
      <p>Algorithm 1: Roles Mapping Engine.</p>
      <p>Input: a pair of Horn clauses &lt; C0, C00 &gt;.</p>
      <p>Output: two sets: term mappings θt and predicate mappings θp.</p>
      <p>θt = ∅, θp = ∅
H = &lt; h0, h00 &gt; | h0 is the head of C0, h00 is the head of C00
θt ← θt ∪ {term mappings in H}
repeat
  RA = literals having the same predicate in C0 and C00
  θt ← θt ∪ {term mappings in RA}
  θp ← θp ∪ {predicate mappings in RA}</p>
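      <p>To make the recovered fragment of Algorithm 1 concrete, the following sketch mirrors its behaviour under illustrative assumptions of ours: clauses are (head, body) pairs, literals are (predicate, args) tuples, and all function names are hypothetical rather than part of the authors' implementation. Only identical predicates are mapped in this phase, as in the fragment above.</p>

```python
def consistent(theta, pair):
    """A term association is consistent iff it remains a bijection."""
    merged = dict(theta)
    for key, value in pair.items():
        if merged.get(key, value) != value:   # key already mapped elsewhere
            return False
        merged[key] = value
    return len(set(merged.values())) == len(merged)

def rme_fragment(clause1, clause2):
    """Recovered fragment of Algorithm 1: map the heads as starting point,
    then repeatedly align literals sharing the same predicate (the RA set)."""
    (head1, body1), (head2, body2) = clause1, clause2
    theta_t, theta_p = {}, {}
    if len(head1[1]) == len(head2[1]):        # heads with equal arity
        theta_t.update(zip(head1[1], head2[1]))
    changed = True
    while changed:                            # the "repeat" loop
        changed = False
        for pred1, args1 in body1:
            for pred2, args2 in body2:
                if pred1 != pred2 or len(args1) != len(args2):
                    continue
                pair = dict(zip(args1, args2))
                if not consistent(theta_t, pair):
                    continue
                novel = bool(set(pair.items()) - set(theta_t.items())) \
                    or pred1 not in theta_p
                if novel:
                    theta_t.update(pair)
                    theta_p[pred1] = pred2
                    changed = True
    return theta_t, theta_p
```

      <p>On the running example, the heads yield the term mappings fox/john and grape/carla, and the shared predicates says/2 and is/3 then extend them deterministically.</p>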
      <p>Roles Mapping Engine Reasoning by analogy cannot be reduced to
looking for equal predicates across domains, because the derived inferences could
be useless or trivial. In this section our mapping strategy is presented through
the RME.</p>
      <p>Each concept is represented in Horn clause logic. In a nutshell, our strategy
first seeks starting points, then expands the mapping in breadth. Both steps are
executed while enforcing consistency requirements.</p>
      <p>We will discuss the RME approach using Algorithm 1. In particular, the
algorithm iterates on three levels: the external level encloses the whole life
cycle of the analogical mapping; the middle level checks whether either of the
inner searches for mappings of identical and non-identical predicates produces
novel mappings; the internal level contains one iteration that searches for
mappings of identical predicates and another that searches for mappings of
non-identical predicates.</p>
      <p>These phases will be described in formal notation and equipped with a
running example. The example refers to the analogy between a proverb
stating that “when the fox cannot reach the grapes, he says they are not ripe”
and a life context in which “when a man cannot have a girl, he spreads a bad
opinion about her”. We propose this example in order to clarify each step of our
proposal.</p>
      <p>Example 1 (Proverb and life context).</p>
      <p>proverb(fox, grape) :- wants(fox, grape), cannot take(fox, grape, fox does not reach
grape), is(fox, crafty), cause(fox does not reach grape, bad opinion), says(fox, grape is
not ripe), is(grape, not ripe, grape is not ripe), have(john, bad opinion).</p>
      <p>situation(john, carla) :- loves(john, carla), cannot have(john, carla, john cannot
have carla), says(john, carla is bad), is(carla, bad, carla is bad), uses(jealous,
craftiness).</p>
      <p>In order to understand the mapping procedure, we need to define what
starting points are and how terms and predicates are mapped.</p>
      <p>Definition 1 (Starting point). Given two clauses C0 and C00, a starting point
S is a binding S = [e0/e00] where (e0 ∈ predicates(C0) ∧ e00 ∈ predicates(C00)) ∨
(e0 ∈ terms(C0) ∧ e00 ∈ terms(C00)).</p>
      <p>Definition 2 (Term mapping). Given two clauses C0 and C00, two sets of
literals L0 ⊆ C0 and L00 ⊆ C00, a pair of terms t0 and t00, a consistent term
association θ ⊆ terms(C0) × terms(C00); then t0/t00 is a term mapping if
– {t0/t00} ∪ θ is a consistent term association;
– either ∀l0 ∈ L0 s.t. t0 is a term of l0 and ∀l00 ∈ L00 s.t. t00 is a term of l00, both
t0 and t00 have position p; or ∃(l0, l00) ∈ L0 × L00 s.t. t0 is a term of l0, t00 is a
term of l00, and both t0 and t00 have name n and position p.</p>
      <p>A term association θ is said to be consistent if it is a bijection, inconsistent
otherwise.</p>
      <p>Definition 3 (Predicate mapping). Given two clauses C0 and C00, two sets
of literals L0 ⊆ C0 and L00 ⊆ C00 where ∀l0 ∈ L0, l0 = p0(t01, ..., t0a) and ∀l00 ∈
L00, l00 = p00(t010, ..., t0a0), a consistent predicate association θp ⊆ predicates(C0) ×
predicates(C00), a consistent term association θt ⊆ terms(C0)×terms(C00); then
p0/p00 is a predicate mapping if
– {p0/p00} ∪ θp is a consistent predicate association;
– either p0 = p00; or a matching association defined as θl0/l00 = {t01/t010, ..., t0a/t0a0}
s.t. ∀i = 1, ..., a, t0i/t0i0 is a term mapping, holds (i.e. θl0/l00 ⊆ θt).
A predicate association θp is said to be consistent if it is a bijection, inconsistent
otherwise.
Definition 4 (Compatibility under term and predicate mapping). Two
term mappings θ0 and θ00 are t-compatible iff θ0 ∪ θ00 is consistent. Two literals,
or two sequences of literals, are p-compatible iff all their predicate mappings are
defined and consistent.</p>
      <p>Suppose given two clauses C0 = p0(t01, ..., t0n) :- l10, ..., lk0 and C00 = p00(t010, ..., t0m0)
:- l100, ..., lh00 that Algorithm 1 takes as input. Initially, we have an empty global
term mapping θt and an empty global predicate mapping θp.</p>
      <p>The mapping of the heads can be a good starting point. Since the analogy is
sought between the descriptions of the concepts represented by the heads, this
choice reflects the intrinsic strength of the heads’ relationship. More formally,
if n = m, then θt0 = {t01/t010, ..., t0n/t0n0} is a term mapping. Since θt is empty,
θt and θt0 are t-compatible, hence θt ← θt ∪ θt0.</p>
      <p>Although the entities in the descriptions play specific roles (likely expressed
with domain-specific terminology), it is plausible that part of the explanations
is encoded using common sense, providing potential points of contact. A search
for equal predicates across descriptions therefore makes sense. The underlying
idea is that two domains in the same knowledge base share the representation
formalism (with common sense as the intersection between any pair of descriptions).</p>
      <p>The first iteration of the internal level in Algorithm 1 can be formalized
as follows. Using Definitions 2, 3 and 4, we carry on our mappings by searching
for shared knowledge. Given two subsets L0 ⊆ C0 and L00 ⊆ C00 s.t. ∀l0 ∈ L0,
l0 = p(t01, ..., t0a) and ∀l00 ∈ L00, l00 = p(t010, ..., t0a0), then θp0 = {p/p} is a predicate
mapping. If θp and θp0 are p-compatible, then θp ← θp ∪ θp0.</p>
      <p>Example 2 (Proverb and life context). Firstly, the heads are mapped, since they
have the same arity, obtaining &lt;grape, carla&gt;, &lt;fox, john&gt;. At this point the
literals having a predicate used by both clauses are isolated, in order to search
for deterministic alignment. Then we have {says(fox, grape is not ripe)} and
{says(john, carla is bad)}, from which &lt;says/2, says/2&gt; becomes a mapped
predicate and &lt;grape is not ripe, carla is bad&gt; becomes a mapped term. Going
on, the sets {is(grape, not ripe, grape is not ripe)} and {is(carla, bad, carla is
bad)} are isolated, in which some terms are mapped, and from which we can add
to the mapped predicates &lt;is/3, is/3&gt; and to the mapped terms &lt;not ripe,
bad&gt;.</p>
      <p>In Algorithm 1, the second iteration of the internal level tries to map
non-identical predicates, expanding in breadth the previous mappings to those
concepts that are syntactically different but play an analogous role. The search
for predicate and term associations goes on as long as new mappings are made.
More formally, suppose given two subsets of literals L0 ⊆ C0 and L00 ⊆ C00 for
which ∃t0/t00 ∈ θt s.t. ∀l0 ∈ L0, t0 is a term of l0 and ∀l00 ∈ L00, t00 is a term of
l00. If there exists a term mapping θt0 s.t. ∀t0/t00 ∈ θt0, t0 is a term of l0 ∈ L0 in
position w, t00 is a term of l00 ∈ L00 in position w, and θt and θt0 are t-compatible,
then θt ← θt ∪ θt0. Moreover, if there exists a predicate mapping θp0 s.t. ∀p0/p00 ∈ θp0,
∀l0 ∈ L0, l0 = p0(t01, ..., t0a), ∀l00 ∈ L00, l00 = p00(t010, ..., t0a0), ∀i = 1, ..., a
∃t0i/t0i0 ∈ θt, and θp and θp0 are p-compatible, then θp ← θp ∪ θp0.
Example 3 (Proverb and life context). In the Example 2 a set of mapped
predicates and a set of mapped terms have been obtained. Since {&lt;grape, carla&gt;,
&lt;fox, john&gt;} ⊂ θt (see Algorithm 1), the expansion in breadth produces the
pair of sets {wants(fox, grape)} and {loves(john, carla)}, from which &lt;wants/2,
loves/2&gt; is added to θp. Consequently, {cannot take(fox, grape, fox does not
reach grape)} and {cannot have(john, carla, john cannot have carla)} are found,
allowing to map &lt;cannot take/3, cannot have/3&gt; that can be added to θp and
&lt;fox does not reach grape, john cannot have carla&gt; that can be added to θt.</p>
      <p>The middle level iteration in Algorithm 1 ensures that the internal iterations
are repeated as long as at least one of them extends the mappings. The middle
level thus performs all deterministic mappings based on previous ones.</p>
      <p>Unfortunately, it may happen that common knowledge does not exist, or
cannot be used as a starting point, or that no deterministic mappings are
available. A strategy relying on structural analysis then becomes a primary issue,
in order to suggest starting points that do not share the representation. A
structural similarity is therefore exploited to obtain a reliability score between
literals (denoted rs). In this way we designate a pair of literals as a new starting
point if it is the most similar one, it is not already mapped, and the two literals
have the same arity. This is the reason why the external level exists: it attempts
to restart the mapping expansion, using this evaluation to overcome the absence
of deterministic mappings. This attempt can be seen as the last opportunity to
make novel deterministic mappings once the inner iterations have ended.</p>
      <p>More formally, we obtain a pair l0/l00 ∈ C0 × C00 s.t. l0 = p0(t01, ..., t0a), l00 =
p00(t010, ..., t0a0), rs(l0, l00) is the maximum score, and l0/l00 has not been fully
mapped through term and predicate mappings. Then θt0 = {t01/t010, ..., t0a/t0a0} and
θp0 = {p0/p00}. If θt and θt0 are t-compatible, then θt ← θt ∪ θt0. If θp and θp0 are
p-compatible, then θp ← θp ∪ θp0.</p>
      <p>Example 4 (Proverb and life context). At this point, the mapping expansion
in breadth pursued in Example 3 (see Table 1) cannot be carried on, because
there are no novel deterministic bindings. The similarity function suggests
{is(fox, crafty)} and {uses(jealous, craftiness)}. Here, only &lt;crafty, craftiness&gt;
is a deterministic mapping. Since fox is already in θt as &lt;fox, john&gt;, jealous
cannot be mapped, which prevents the association of the respective predicates.
Ranking of hypotheses A known problem is the ranking of the hypotheses to
learn. This problem affects reasoning by analogy, since each new mapping relies
on previous ones, and so on. In this proposal, the hypothesis ranking problem
has been faced using relative frequencies.</p>
      <p>Given a clause C and a literal l ∈ C of arity a, the score of l is obtained by
averaging the relative frequencies of the terms of l:</p>
      <p>rfμ(l) = (Σi=1..a oi) / (n ∗ a)</p>
      <p>where oi is the number of literals in C in which the i-th term of l appears and
n is the total number of literals in C. The idea behind the rfμ(l) score is to
represent the centrality of l in its clause. Then, given two literals l0 and l00,
the reliability score rs(l0, l00) for their mapping hypothesis is computed as:</p>
      <p>rs(l0, l00) = rfμ(l0) ∗ rfμ(l00)</p>
      <p>Mapping hypotheses are ranked by descending rs(l0, l00) score. A ranking so
defined means that each new mapping maximizes the coverage of literals in
both clauses.</p>
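      <p>The two scores translate directly into code; the list-of-(predicate, args) clause representation is an illustrative assumption of ours.</p>

```python
def rf_mu(clause, literal):
    """Average relative frequency of the terms of `literal` in `clause`:
    rf_mu(l) = (sum over the a terms of their occurrence counts) / (n * a)."""
    n = len(clause)                 # total number of literals in C
    _, args = literal
    a = len(args)                   # arity of l
    total = sum(sum(1 for _, other_args in clause if term in other_args)
                for term in args)
    return total / (n * a)

def rs(clause1, lit1, clause2, lit2):
    """Reliability score for the mapping hypothesis lit1/lit2."""
    return rf_mu(clause1, lit1) * rf_mu(clause2, lit2)
```

      <p>Candidate pairs are then sorted by descending rs, so that the most central literal pairs are tried first.</p>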
      <p>
        Making inference During the mapping phase, attention has been focused
on the recognition of analogous roles across domains (both for objects and
relations). As highlighted in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], one-to-one alignment has a primary importance,
since part of the structural consistency is verified if the bijection holds. For this
reason, the inference is carried out starting from a one-to-one alignment of the
mapped literals (i.e. both for the predicate and its arguments), and ending with
the projection of all the residual knowledge.
      </p>
      <p>To give a more practical view of the procedure: after filtering out the
one-to-one correspondences, the procedure seeks base knowledge that could be
novel in the target domain. Consistently with the assumption that common
knowledge is fundamental for analogical inference, it is possible that the missing
mappings are part of common knowledge; this knowledge is then projected using
Skolem functions. An inference containing Skolem functions is a hypothesis whose
degree of unreliability is directly proportional to the number of Skolem functions.
Example 5 (Proverb and life context). The expected explanation of the
phenomenon sounds like: “John has a bad opinion about Carla because he cannot
have Carla’s love, so he uses his craftiness in order not to appear rejected by
the girl”. Let us examine the inference hypotheses from the proverb to the
life context:
1. skolem is(john, craftiness)
2. skolem cause(john cannot have carla, skolem bad opinion)
3. skolem has(john, skolem bad opinion)</p>
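      <p>A minimal sketch of this projection step, under the same illustrative clause representation: base literals not fully covered by the mappings are translated into the target domain, and every unmapped predicate or term is replaced by a skolem_-prefixed placeholder.</p>

```python
def project_residual(base_body, theta_t, theta_p):
    """Project residual base knowledge into the target domain, introducing
    Skolem placeholders for unmapped predicates and terms."""
    hypotheses = []
    for pred, args in base_body:
        fully_mapped = pred in theta_p and all(a in theta_t for a in args)
        if fully_mapped:
            continue        # part of the alignment, not a novel inference
        new_pred = theta_p.get(pred, 'skolem_' + pred)
        new_args = tuple(theta_t.get(a, 'skolem_' + a) for a in args)
        hypotheses.append((new_pred, new_args))
    return hypotheses
```

      <p>On the running example this yields, among others, skolem_cause(john_cannot_have_carla, skolem_bad_opinion), mirroring hypothesis 2 of Example 5; counting the skolem_ placeholders in a hypothesis gives its unreliability degree.</p>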
      <p>These hypotheses fully satisfy the expected interpretation.
Meta-Pattern formalization Using the RME we can carry out an
analogical mapping between each pair of clauses representing experiences, contexts or
concepts. From such a mapping we can outline a pattern.</p>
      <p>Let us give a formal view of a pattern. Given two clauses C0 = h0 :- l10, ..., ln0
and C00 = h00 :- l100, ..., lm00, a set θt containing the mapped terms and a
set θp containing the mapped predicates, we can outline a generalized pattern
C = h :- l1, ..., lk with k ≤ min(n, m), s.t. ∀l ∈ C, ∃l0/l00 s.t. l0 ∈ C0, l00 ∈ C00,
l0 = p0(t01, ..., t0a), l00 = p00(t010, ..., t0a0), p0/p00 ∈ θp and t0i/t0i0 ∈ θt (∀i = 1, ..., a).
Moreover, l = p(t1, ..., ta) where: if p0 = p00, then p = p0 = p00, otherwise p = p∗,
where p∗ is a new predicate; if t0i = t0i0, then ti = t0i = t0i0, otherwise ti = t∗i,
where t∗i is a new term.</p>
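      <p>The construction of a generalized literal can be sketched as follows; the "x OR y" naming is a readable stand-in for the fresh symbols p∗ and t∗i of the definition (it is also how the pattern of Example 6 is written), and the function name is hypothetical.</p>

```python
def generalize(lit1, lit2, theta_p, theta_t):
    """Build the generalized literal for a mapped pair (lit1 from C0, lit2
    from C00): equal names are kept, different names become fresh symbols."""
    pred1, args1 = lit1
    pred2, args2 = lit2
    # the pair must be covered by the predicate and term mappings
    assert theta_p.get(pred1) == pred2
    assert all(theta_t.get(a) == b for a, b in zip(args1, args2))
    pred = pred1 if pred1 == pred2 else f"{pred1} OR {pred2}"
    args = tuple(a if a == b else f"{a} OR {b}"
                 for a, b in zip(args1, args2))
    return (pred, args)
```

      <p>For the pair wants(fox, grape) / loves(john, carla) this produces the generalized literal "wants OR loves(fox OR john, grape OR carla)".</p>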
      <p>Since each analogy has a reason to exist with respect to a specific perspective,
we cannot expect every pattern to become general. Conversely, analogies very
often remain specific. Each refinement of a pattern therefore needs to be
considered a new pattern, because we have no reason to consider the older pattern
too specific or useless. In any case, we can say that, when trying an analogical
mapping between a pattern and a third description (or another pattern), if the
pattern(s) is not fully mapped, a novel and more general meta-pattern arises.</p>
      <p>The reason behind such a choice is that, if the pattern is used to make a novel
analogy, many domains can support a mapping or suggest the argument of the
relative Skolem function. Note that for each use or refinement of a pattern,
the novel domain contributes to support each surviving mapping in the pattern,
giving further confirmation that the mappings in which the name of the original
predicate or term has been preserved are effectively common-sense expressions.</p>
      <p>The history of each predicate/term in the pattern can be recognized, since the
origin of each predicate/term in the pattern is stored using the formalism:
db(head, type, pattern name, original name)
where ‘head’ stands for the head of the original clause, ‘type’ indicates whether
we are storing a predicate or a term, ‘pattern name’ is the name reported in
the pattern and ‘original name’ is the name in the original clause.
Example 6 (Proverb and life context). The mappings presented in Table 1 lead
to the literal alignment in Table 2, from which we obtain the following pattern.</p>
      <p>pattern(proverb(fox, grape), situation(john, carla)) :-
wants OR loves(fox OR john, grape OR carla),
cannot take OR cannot have(fox OR john, grape OR carla,
fox does not reach grape OR john cannot have carla),
says(fox OR john, grape is not ripe OR carla is bad),
is(grape OR carla, not ripe OR bad, grape is not ripe OR carla is bad).</p>
      <p>
What is an analogy? In order to provide a formal definition of analogy, we
need to formalize what is the role that an object plays in a description. Keeping
in mind that an object is an abstract entity that can be materialized differently
in each description, let us define a role.</p>
      <p>Definition 5 (Role). Given a clause C, and a literal l = p(t1, ..., ta) s.t. l ∈ C,
then the role of a term ti(1 ≤ i ≤ a) is a set Rti = {l1, ..., lk} s.t. ∀l0 ∈ Rti ,
l0 ∈ C and ti appears in l0. Such a set refers to the role that the term ti plays
with respect to the other terms in the relations in which it participates.</p>
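      <p>Definition 5 translates almost verbatim into code, again under the illustrative list-of-(predicate, args) clause representation.</p>

```python
def role(clause, term):
    """Role of a term: the set of literals of the clause in which it appears."""
    return {literal for literal in clause if term in literal[1]}
```

      <p>For instance, the role of fox in the proverb clause collects every literal mentioning fox.</p>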
      <p>In any situation a task can be recognized. Any task requires a focus, i.e. the
set of objects and relations necessary and sufficient to carry out the current task.
Sometimes objects or relations are missing; an analogy can then be a way to
hypothesize them from previous experiences. We need to retrieve the experience
having an alignable focus, in order to derive hypotheses. In the light of such
premises, we define the analogical perspective.</p>
      <p>Definition 6 (Analogical perspective). Given two descriptions &lt; D0, D00 &gt;
representing respectively the base and the target domain, let K0 and K00 denote
the aligned knowledge between D0 and D00, and let T0 and T00 denote the aligned
knowledge between D0 and D00 that solves the current task; then an analogical
perspective holds if either</p>
      <p>T0 ⊆ K0 ⊆ D0 ∧ T00 ⊆ K00 ⊆ D00</p>
      <p>or</p>
      <p>T0 ⊆ K0 ⊆ D0 ∧ T00 ⊆ (K00 ∪ I) ⊆ D00</p>
      <p>where I is the inference obtained from the base domain.</p>
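      <p>Treating each body of knowledge as a set of literals, Definition 6 reduces to a chain of subset checks; the function below is a direct, illustrative transcription.</p>

```python
def analogical_perspective(t1, k1, d1, t2, k2, d2, inference=frozenset()):
    """Definition 6: the task-relevant knowledge must sit inside the aligned
    knowledge, possibly extended (on the target side) by the inference I."""
    base_ok = t1.issubset(k1) and k1.issubset(d1)
    target_ok = t2.issubset(k2) and k2.issubset(d2)
    extended_ok = (t2.issubset(k2 | inference)
                   and (k2 | inference).issubset(d2))
    return base_ok and (target_ok or extended_ok)
```

      <p>When the target task knowledge is reachable only through the projected inference I, the second disjunct holds, which is exactly the case of the Skolem hypotheses of Example 5.</p>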
      <p>Definition 7 (Analogy). Given two relational descriptions &lt; D0, D00 &gt;
representable as sets of roles &lt; RD0 , RD00 &gt; and a perspective P , we say that D0 and
D00 are analogous if there exist two subsets SD0 ⊆ RD0 and SD00 ⊆ RD00 s.t. a
bijective function f : SD0 → SD00 holds, and f satisfies P . f satisfies P if for
each role r ∈ RD0 ∪ RD00 necessary to explain P , f holds (i.e., r ∈ SD0 ∪ SD00 ).
Remark 1 (Analogy). Given a description, each object plays a specific role with
respect to the others. Common roles across descriptions are essential to the
analogy; relation analysis is fundamental for role identification, and common
objects can be a clue for relation analysis.</p>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>
        We present a qualitative evaluation, which also serves to further clarify our
proposal. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] analogical reasoning is explored from the cognitive psychology
perspective. The authors study the analogical reasoning process in humans.
They gave a group of subjects two stories: the former about a general who
wants to capture a fortress, the latter about a doctor who wants to defeat
a tumor. These stories are respectively the base and the target domain. The
human subjects completed the latter story in the light of the former (i.e. trying
to recognize the knowledge that solves the problem in the target story).
Without any suggestion, only 57% of the subjects provided a complete solution
to the analogy, whereas our software system implementing the RME directly
provides the correct analogical solution.
      </p>
      <p>RME assessment Let us give details about the stories and the corresponding
expected solution. The base story follows.</p>
      <p>A fortress was located in the center of the country. Many roads radiated out
from the fortress. A general wanted to capture the fortress with his army. The
general wanted to prevent mines on the roads from destroying his army and
neighbouring villages. As a result the entire army could not attack the fortress
along one road. However, the entire army was needed to capture the fortress.
So an attack by one small group would not succeed. The general therefore
divided his army into several small groups. He positioned the small groups at
the heads of different roads. The small groups simultaneously converged on
the fortress. In this way the army captured the fortress.</p>
      <p>The base story is translated into a Horn clause having as head conquer(fortress).
Each item in the following list encodes a sentence in the story.
1. located(fortress,center), partof(center,country),
2. radiated(oneroad,fortress), radiated(roads,fortress), partof(oneroad,roads),
3. capture(general,fortress), use(general,army),
4. prevent(general,mines), located(mines,oneroad), located(mines,roads),
destroy(mines,army), destroy(mines,villages),
5. couldnotuse(army,oneroad),
6. capture(army,fortress),
7. couldnotuse(subgroup,oneroad),
8. splittable(army,subgroups), partof(subgroup,subgroups), partof(subgroups,army),
destroy(mines,subgroup), notenough(subgroup),
9. distribute(subgroups,roads),
10. converge(subgroups,fortress),
11. capture(subgroups,fortress).</p>
      <p>The target story follows.</p>
      <p>A tumor was located in the interior of a patient’s body. A doctor wanted to
destroy the tumor with rays. The doctor wanted to prevent the rays from
destroying healthy tissue. As a result the high-intensity rays could not be
applied to the tumor along one path. However, high-intensity rays were needed
to destroy the tumor. So applying one low-intensity ray would not succeed.</p>
      <p>The target story becomes a Horn clause having the head heal(tumor). The last
item encodes implicit knowledge. In particular, it says that the cancer can be
reached from one or many directions. It also encodes that a slit is one of many
slits, that the rays can be split, and that healthy tissue can be damaged and/or
destroyed by rays or sub-rays.</p>
      <p>The expected result must contain the missing knowledge of the target story,
that is: “The doctor therefore divided the rays into several low-intensity rays.
He positioned the low-intensity rays at multiple locations around the patient’s
body. The low intensity rays simultaneously converged on the tumor. In this way
the rays destroyed the tumor.”</p>
      <p>Given the mapping in Table 3, the inference hypotheses from base to target
domain are:
1. defeat(subrays,tumor)
2. splittable(rays,subrays)
3. skolem distribute(subrays,slits)
4. skolem converge(subrays,tumor)
5. aredestroyed(healthytissue,skolem villages)
Hypothesis 1 can be identified as the goal of the problem; it is made of
fully mapped components, which makes it a conclusive inference. Hypotheses 2,
3 and 4 represent the procedure needed to reach the goal; their predicates are
not mapped, so the skolem prefix (i.e. a Skolem function) has been added in order to
emphasize that the relation could be common-sense knowledge, making it
projectable without further elaboration. In any case, this type of inference must
be considered contingent rather than conclusive. Hypothesis 5 does not make
sense, so it is a case of fake inference. It is worth highlighting that
all these statements were absent from the target domain and have been obtained
entirely from the base domain. Finally, it is easy to note the consistency with
the expected analogical reasoning.</p>
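<p>The projection step above can be sketched as follows (our own simplification, with illustrative mapping tables rather than the actual content of Table 3): base-domain literals are rewritten through the term mapping, and a predicate with no counterpart in the target keeps a skolem prefix marking the inference as contingent.</p>
```python
# Hypothetical fragments of the term and predicate mappings (illustrative).
term_map = {"army": "rays", "subgroups": "subrays",
            "fortress": "tumor", "roads": "slits"}
pred_map = {"capture": "defeat", "splittable": "splittable"}

def project(pred, args):
    """Project a base-domain literal into the target domain."""
    mapped_pred = pred_map.get(pred)
    if mapped_pred is None:
        # Unmapped predicate: mark as contingent via a skolem prefix.
        mapped_pred = "skolem_" + pred
    mapped_args = tuple(term_map.get(a, a) for a in args)
    return (mapped_pred, mapped_args)

print(project("capture", ("subgroups", "fortress")))   # conclusive inference
print(project("converge", ("subgroups", "fortress")))  # contingent (skolem)
```
<p>The first call yields the conclusive hypothesis defeat(subrays,tumor); the second keeps converge under a skolem prefix, mirroring hypotheses 3 and 4.</p>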
      <p>
        In order to evaluate the similarity of the relational structure, the similarity
between the clauses has been evaluated using the fs measure [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] before and after
the use of the RME. The fs score ranges in ]0,1[. The original clauses score
0.65, whereas after the alignments the score becomes 0.85. The alignment thus allowed
the recognition of 20% of the structure, a portion of the clauses that appeared
unrelated before the RME.
      </p>
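<p>The actual fs measure is defined in [4]; purely as a rough stand-in, the idea of scoring structural similarity before and after alignment can be conveyed with a Jaccard overlap over (predicate, arity) signatures (the clause fragments below are illustrative, not the full clauses):</p>
```python
# Simplified stand-in for the fs measure of [4]: Jaccard overlap of
# (predicate, arity) signatures between two clause bodies.
def signatures(literals):
    return {(pred, len(args)) for pred, args in literals}

def structural_overlap(clause_a, clause_b):
    sa, sb = signatures(clause_a), signatures(clause_b)
    return len(sa & sb) / len(sa | sb)

base    = [("capture", ("army", "fortress")), ("use", ("general", "army"))]
target  = [("defeat", ("rays", "tumor")),     ("use", ("doctor", "rays"))]
# After the RME renames defeat to its analogue capture, the overlap grows.
aligned = [("capture", ("rays", "tumor")),    ("use", ("doctor", "rays"))]

print(structural_overlap(base, target))   # before alignment: 1/3
print(structural_overlap(base, aligned))  # after alignment: 1.0
```
<p>The gain between the two scores plays the same role as the 0.65-to-0.85 improvement reported above: it quantifies how much structure the alignment exposed.</p>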
      <p>Patterns assessment The proposed approach has been evaluated in a
qualitative experiment that builds on the running example of analogical reasoning
between the military and medical domains, which produced a pattern as described in
Section 3. We chose a third domain, about the purchase of a medicine
for which an offertory is needed. We refer to this story as Pharmaceutical.</p>
      <p>A medicine is located in the warehouse of a pharmacy. A patient needs to
purchase the medicine. The patient must face the problem that his money is
not enough. As a result the patient cannot purchase the medicine paying the
total price. However, the total amount is needed to purchase the medicine. So
paying a smaller amount would not succeed.</p>
      <p>The clause encoding such concepts has the head get(medicine).</p>
      <p>In the light of the mappings in Table 4, the RME suggests analogous domains
using the stored original mappings, reported in Table 5. For the predicates use/2,
located/2, couldnotuse/2 and partof/2 there is no novelty. Instead, the predicates
mustface/2 and purchase/2 are more interesting, because the suggestion indicates
that mustface/2 is the “difficulty” the protagonist must solve, whereas
purchase/2 stands for the main action on which the Pharmaceutical story is built.</p>
      <p>The term mapping suggestions share no knowledge, so each term
is traced to different terms in the base domains. For instance, patient is traced to
the main actors in the other domains (general and doc), just as medicine, the
target object of the story, is traced to fortress and tumor, and so on.</p>
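<p>This lookup over stored mappings can be sketched as follows (hypothetical data mirroring the idea of Table 5, not its actual content): a new term is traced to its analogue in every base domain where the same role was played.</p>
```python
# Illustrative store of term mappings from past analogies.
stored_mappings = {
    "Military": {"patient": "general", "medicine": "fortress"},
    "Medical":  {"patient": "doctor",  "medicine": "tumor"},
}

def trace(term):
    """Return the analogue of `term` in each base domain that maps it."""
    return {domain: m[term] for domain, m in stored_mappings.items()
            if term in m}

print(trace("patient"))   # main actors in the base domains
print(trace("medicine"))  # target objects in the base domains
```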
      <p>As for the analogy between the Military and Medical domains, here too the fs
measure has been exploited to evaluate the gain in the alignment of the portion
of the structure for which the analogy holds. The original clauses score 0.4,
whereas the score after the RME is 0.74. Thus 34% of the structure that
appeared unrelated in the original clauses has been aligned, bringing
out the analogy.</p>
      <p>The evaluation of the inference step is important as well. Since the projection
of knowledge is not empty, we can conclude that the analogy with the
Pharmaceutical domain is well represented by another pattern; consequently, a
novel pattern has been produced.</p>
      <p>In this work we proposed a strategy aimed at recognizing analogies and at building
meta-patterns for further reasoning by generalizing the analogical schemas. Our
proposal differs from the existing literature in that it allows learning patterns
representing both the intuition of a potential solution to the problem and
a computational device that allows reusing analogies computed in the past.
Moreover, the RME captures non-syntactic alignments without meta-descriptions.</p>
      <p>Future improvements will concern relations with opposite meaning (perhaps
exploiting common-sense knowledge). Another interesting direction could be the use of
a probabilistic approach to the mapping of non-identical predicates. We also plan
to address factual validity using an abductive procedure, in order to check whether
the inferred knowledge (mapped or projected) is consistent with the constraints
of the world. Last but not least, we will equip the solution with an abstraction
operator, in order to shift the representation when needed.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Atılım Güneş</given-names>
            <surname>Baydin</surname>
          </string-name>
          , Ramón López de Mántaras, and Santiago Ontañón.
          <article-title>Automated generation of cross-domain analogies via evolutionary computation</article-title>
          .
          <source>CoRR, abs/1204.2335</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Maria de los Angeles</given-names>
            <surname>Chang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Kenneth D.</given-names>
            <surname>Forbus</surname>
          </string-name>
          .
          <article-title>Using quantitative information to improve analogical matching between sketches</article-title>
          .
          <source>In IAAI</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Brian</given-names>
            <surname>Falkenhainer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenneth D.</given-names>
            <surname>Forbus</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Dedre</given-names>
            <surname>Gentner</surname>
          </string-name>
          .
          <article-title>The structure-mapping engine: Algorithm and examples</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>41</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>63</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ferilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Biba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. Di</given-names>
            <surname>Mauro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.M.</given-names>
            <surname>Basile</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Esposito</surname>
          </string-name>
          .
          <article-title>Plugging taxonomic similarity in first-order logic horn clauses comparison</article-title>
          .
          <source>In Emergent Perspectives in Artificial Intelligence, Lecture Notes in Artificial Intelligence</source>
          , pages
          <fpage>131</fpage>
          -
          <lpage>140</lpage>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Dedre</given-names>
            <surname>Gentner</surname>
          </string-name>
          .
          <article-title>Structure-mapping: A theoretical framework for analogy</article-title>
          .
          <source>Cognitive Science</source>
          ,
          <volume>7</volume>
          (
          <issue>2</issue>
          ):
          <fpage>155</fpage>
          -
          <lpage>170</lpage>
          ,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Dedre</given-names>
            <surname>Gentner</surname>
          </string-name>
          .
          <article-title>Analogy</article-title>
          .
          <source>A companion to cognitive science</source>
          , pages
          <fpage>107</fpage>
          -
          <lpage>113</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Dedre</given-names>
            <surname>Gentner</surname>
          </string-name>
          and
          <string-name>
            <given-names>Arthur B.</given-names>
            <surname>Markman</surname>
          </string-name>
          .
          <article-title>Structure mapping in analogy and similarity</article-title>
          .
          <source>American Psychologist</source>
          ,
          <volume>52</volume>
          :
          <fpage>45</fpage>
          -
          <lpage>56</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Mary L.</given-names>
            <surname>Gick</surname>
          </string-name>
          and
          <string-name>
            <given-names>Keith J.</given-names>
            <surname>Holyoak</surname>
          </string-name>
          .
          <article-title>Analogical problem solving</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>12</volume>
          (
          <issue>3</issue>
          ):
          <fpage>306</fpage>
          -
          <lpage>355</lpage>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Makoto</given-names>
            <surname>Haraguchi</surname>
          </string-name>
          .
          <article-title>Towards a mathematical theory of analogy</article-title>
          .
          <source>Bulletin of informatics and cybernetics</source>
          ,
          <volume>21</volume>
          (
          <issue>3</issue>
          ):
          <fpage>29</fpage>
          -
          <lpage>56</lpage>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Makoto</given-names>
            <surname>Haraguchi</surname>
          </string-name>
          .
          <article-title>Analogical reasoning using transformations of rules</article-title>
          .
          <source>In Proceedings of the 4th conference on Logic programming '85</source>
          , pages
          <fpage>56</fpage>
          -
          <lpage>65</lpage>
          , New York, NY, USA,
          <year>1986</year>
          . Springer-Verlag New York, Inc.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Diarmuid P.</given-names>
            <surname>O'Donoghue</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mark T.</given-names>
            <surname>Keane</surname>
          </string-name>
          .
          <article-title>A creative analogy machine: Results and challenges</article-title>
          .
          <source>In Proceedings of the International Conference on Computational Creativity</source>
          <year>2012</year>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Peter D.</given-names>
            <surname>Turney</surname>
          </string-name>
          .
          <article-title>A uniform approach to analogies, synonyms, antonyms, and associations</article-title>
          .
          <source>CoRR, abs/0809.0124</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Manuela M.</given-names>
            <surname>Veloso</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jaime G.</given-names>
            <surname>Carbonell</surname>
          </string-name>
          .
          <article-title>Derivational analogy in prodigy: Automating case acquisition, storage, and utilization</article-title>
          .
          <source>In Machine Learning</source>
          , pages
          <fpage>249</fpage>
          -
          <lpage>278</lpage>
          . Kluwer Academic Publishers, Boston,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>