<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Abduction for (non-omniscient) agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sherlock Holmes The Red-Headed League</string-name>
        </contrib>
      </contrib-group>
      <fpage>51</fpage>
      <lpage>64</lpage>
      <abstract>
        <p>Among the non-monotonic reasoning processes, abduction is one of the most important. Usually described as the process of looking for explanations, it has been recognized as one of the most commonly used in our daily activities. Still, the traditional definitions of an abductive problem and an abductive solution mention only theories and formulas, leaving agency out of the picture. Our work proposes a study of abductive reasoning from an epistemic and dynamic perspective, with special emphasis on non-ideal agents. We begin by exploring what an abductive problem is in terms of an agent's information, and what an abductive solution is in terms of the actions that modify it. Then we explore the different kinds of abductive problems and abductive solutions that arise when we consider agents whose information is not closed under logical consequence, and agents whose reasoning abilities are not complete.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>But though traditional examples of abductive reasoning are given in
terms of an agent’s information and its changes, classical definitions of an
abductive problem and its solutions are given in terms of theories and
formulas, without mentioning the agent’s information and how it is modified.</p>
      <p>The present work proposes a study of abductive reasoning from an
epistemic and dynamic perspective. After recalling the classical definitions
of an abductive problem and an abductive solution (the rest of the current
section), we explore what an abductive problem is in terms of the agent’s
information, and what an abductive solution is in terms of the actions that
modify it (Section 2). Then we focus on non-ideal agents, analyzing not
only the cases that arise when the agent’s information is not closed under
logical consequence (Section 3) but also those that arise when the agent’s
reasoning abilities are not complete (Section 4). We finish with a summary,
proposing lines for further work (Section 5).</p>
      <p>
        In this paper we will use the term information in the most general sense,
with the notions of knowledge or belief being particular instances that impose
further restrictions, like truth or consistency. Moreover, though we will use
formulas in Epistemic Logic (EL; [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) and Dynamic Epistemic Logic (DEL;
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]), we will not commit ourselves to any particular semantic model. The
main goal of this work is to explore the possibilities and concepts that
emerge from a dynamic epistemic analysis of abductive reasoning.
      </p>
      <p>1.1 The classical approach to abduction</p>
      <p>
Traditionally, it is said that there is an abductive problem when there is a
formula χ that is not predicted by the current theory Φ. Recently, it has
been observed that, even if the theory does not entail χ, it might entail its
negation. Following [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we can identify two basic abductive problems.
Definition 1.1 (Abductive problem). Let Φ and χ be a theory and a formula,
respectively, in some language L, and let ⊨ be a consequence relation on L.
      </p>
      <p>The pair (Φ, χ) is a novel abductive problem when neither χ nor ¬χ is a
consequence of Φ, i.e., when Φ ⊭ χ and Φ ⊭ ¬χ.</p>
      <p>The pair (Φ, χ) is an anomalous abductive problem when, though χ is not a
consequence of Φ, ¬χ is, i.e., when Φ ⊭ χ and Φ ⊨ ¬χ.</p>
      <p>Traditionally, a solution for an abductive problem (Φ, χ) is a formula ψ
that, together with Φ, entails χ. This solves the problem because the theory
then becomes strong enough to explain χ. The anomalous case requires an extra
initial step, since directly adding such a ψ would make the theory entail both
χ and ¬χ. The agent should first perform a theory revision that stops ¬χ from
being a consequence of Φ. Here are the formal definitions.</p>
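      <p>For a small propositional language, the two cases of Definition 1.1 can be checked mechanically. The following sketch is ours, not part of the original text: it represents formulas as Python boolean expressions over atoms such as p and q, and decides Φ ⊨ χ by brute-force truth tables.</p>

```python
from itertools import product

NON_ATOMS = {"and", "or", "not", "True", "False"}

def atoms(formulas):
    """Atomic proposition names occurring in the given formula strings."""
    toks = set()
    for f in formulas:
        toks |= {t for t in f.replace("(", " ").replace(")", " ").split()
                 if t.isidentifier() and t not in NON_ATOMS}
    return sorted(toks)

def entails(theory, goal):
    """Phi |= goal: goal holds in every valuation satisfying all of Phi."""
    names = atoms(list(theory) + [goal])
    for values in product([False, True], repeat=len(names)):
        v = dict(zip(names, values))
        if all(eval(f, {}, v) for f in theory) and not eval(goal, {}, v):
            return False
    return True

def classify(theory, chi):
    """Classify the pair (Phi, chi) following Definition 1.1."""
    if not entails(theory, chi):
        if not entails(theory, "not (" + chi + ")"):
            return "novel abductive problem"
        return "anomalous abductive problem"
    return "no abductive problem"  # chi is already a consequence of Phi
```

      <p>With Φ = {p}, the observation q is a novelty; with Φ = {¬q}, it is an anomaly, matching the two cases of the definition.</p>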
      <p>Definition 1.2 (Abductive solution).
• Given a novel abductive problem (Φ, χ), the formula ψ is an abductive
solution if Φ, ψ ⊨ χ.
• Given an anomalous abductive problem (Φ, χ), the formula ψ is an
abductive solution if it is possible to perform a theory revision that yields a
novel problem (Φ′, χ) for which ψ is a solution.</p>
      <p>
        In some cases, Definition 1.2 is too weak since it allows trivial
solutions, like χ itself. Again, following [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], it is possible to make a further
classification.
      </p>
      <p>Definition 1.3 (Classification of abductive solutions). Let (Φ, χ) be an
abductive problem. An abductive solution ψ is
• consistent if Φ, ψ ⊭ ⊥;
• explanatory if ψ ⊭ χ;
• minimal if, for every other abductive solution ϕ, ψ ⊨ ϕ implies ϕ ⊨ ψ.</p>
      <p>The consistency requirement discards those ψ inconsistent with Φ, and
the explanatory requirement discards χ itself. Minimality works as
Occam’s razor, asking for the solution ψ to be logically equivalent to any
other solution it implies.</p>
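      <p>The first two requirements of Definition 1.3 can likewise be tested for small propositional cases. This sketch is again ours; formulas are Python boolean expressions, and the entailment test is brute force.</p>

```python
from itertools import product

NON_ATOMS = {"and", "or", "not", "True", "False"}

def entails(theory, goal):
    """Phi |= goal by brute-force truth tables over the atoms of the formulas."""
    names = sorted({t for f in list(theory) + [goal]
                    for t in f.replace("(", " ").replace(")", " ").split()
                    if t.isidentifier() and t not in NON_ATOMS})
    return all(eval(goal, {}, dict(zip(names, vs)))
               for vs in product([False, True], repeat=len(names))
               if all(eval(f, {}, dict(zip(names, vs))) for f in theory))

def is_solution(theory, chi, psi):
    """Phi, psi |= chi."""
    return entails(list(theory) + [psi], chi)

def is_consistent(theory, psi):
    """Phi together with psi does not entail bottom."""
    return not entails(list(theory) + [psi], "False")

def is_explanatory(chi, psi):
    """psi alone does not entail chi, discarding trivial solutions such as chi itself."""
    return not entails([psi], chi)
```

      <p>For Φ = {p} and χ = q, the solution ψ = ¬p ∨ q (that is, p → q) is consistent and explanatory, while ψ = q itself is a solution but not an explanatory one.</p>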
    </sec>
    <sec id="sec-2">
      <title>From an agent’s perspective</title>
      <p>Most of the examples of abductive reasoning involve an agent and its
information. It is Holmes who observes that Mr. Wilson’s right cuff is very
shiny; it is a doctor who observes the symptoms A and B; it is Karen who
observes that the grass is wet. So when does an agent have an abductive
problem (Φ, χ)? By interpreting Φ as the agent’s information, we get the
following definitions. We use formulas in EL style, where Inf ϕ is read as
“ϕ is part of the agent’s information”.</p>
      <p>Definition 2.1 (Subjective abductive problem). Let χ be a formula.</p>
      <p>We say that an agent has a novel χ-abductive problem when neither χ nor
¬χ is part of her information, i.e., when the following formula holds:
¬Inf χ ∧ ¬Inf ¬χ.</p>
      <p>We say that an agent has an anomalous χ-abductive problem when χ is not
part of her information but ¬χ is, i.e., when the following formula holds:
¬Inf χ ∧ Inf ¬χ.</p>
      <p>So an agent has a χ-abductive problem when χ is not part of her
information. What about an abductive solution? Definition 1.2 states that ψ is a
solution to a novel problem if, when added to the theory Φ, we get a theory
that entails χ. But a theory is actually closed under logical consequence, so
ψ is a solution if, when added to the theory, it makes χ part of the theory too.
The anomalous case needs another step, since a revision is required first.</p>
      <p>We have identified Φ with the agent’s information. Then, a solution
for the subjective novel case is a formula ψ that, when added to the agent’s
information, makes the agent informed about χ. This highlights the fact that the
requirements of a solution involve an action: an action that changes the agent’s
information by adding ψ to it. In the subjective anomalous case, the action
was already clear, since the theory should be modified. But now we can
see that the requirements for this case involve two actions: removing a piece
of information and then incorporating a new one.</p>
      <p>We will express changes in the agent’s information by using formulas
in DEL style. In particular, formulas of the form Addφ ϕ will be read as
“φ can be added to the agent’s information and, after that, ϕ is the case”, and
formulas of the form Remφ ϕ will be read as “φ can be removed from the
agent’s information and, after that, ϕ is the case”.</p>
      <p>Definition 2.2 (Subjective abductive solution). Suppose an agent has a novel
χ-abductive problem, that is, ¬Inf χ ∧ ¬Inf ¬χ holds. A formula ψ is an
abductive solution to this problem if, when added to the agent’s information,
the agent becomes informed about χ. In a formula,</p>
      <p>Addψ Inf χ</p>
      <p>Now suppose the agent has an anomalous χ-abductive problem, that is,
¬Inf χ ∧ Inf ¬χ holds. A formula ψ is an abductive solution to this problem if
the agent can revise her information to remove ¬χ from it and, after that, the
incorporation of ψ makes χ part of her information. In a formula,</p>
      <p>Rem¬χ (¬Inf ¬χ ∧ Addψ Inf χ)</p>
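      <p>One way to make Definitions 2.1 and 2.2 concrete is a toy possible-worlds model, in which the agent’s information is a set of valuations, Inf φ holds when φ is true in all of them, Add eliminates worlds, and Rem readmits them. This is only an illustrative semantics of our own choosing; the paper deliberately does not commit to one.</p>

```python
from itertools import product

ATOMS = ("p", "q")  # a toy two-atom language; the choice is ours

def worlds():
    """All valuations over ATOMS."""
    return [dict(zip(ATOMS, vs)) for vs in product([False, True], repeat=len(ATOMS))]

def inf(info, phi):
    """Inf phi: phi holds in every world the agent considers possible."""
    return all(eval(phi, {}, w) for w in info)

def add(info, psi):
    """Add psi: discard the worlds where psi fails (an announcement-style update)."""
    return [w for w in info if eval(psi, {}, w)]

def rem(info, phi):
    """Rem phi: a crude revision that readmits worlds refuting phi."""
    return info + [w for w in worlds() if not eval(phi, {}, w)]

# A novel q-abductive problem: the agent only knows p -> q.
info = add(worlds(), "not p or q")
assert not inf(info, "q") and not inf(info, "not q")
# psi = p is an abductive solution: after Add psi, Inf q holds.
assert inf(add(info, "p"), "q")
```

      <p>The anomalous case works the same way: if every world the agent considers possible satisfies ¬q, then Rem ¬q readmits q-worlds, after which an addition can make q part of her information.</p>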
      <p>What about the further classification for abductive solutions? We can
also provide formulas that characterize them.</p>
      <p>Definition 2.3 (Classification of subjective abductive solutions). Suppose
an agent has a χ-abductive problem. A formula ψ is a(n)
• consistent abductive solution if it is a solution and can be added to the
agent’s information without making the latter inconsistent:</p>
      <p>Addψ (Inf χ ∧ ¬Inf ⊥)
• explanatory abductive solution if it is a solution and it does not imply
χ, that is, it only complements the agent’s information to produce χ:
¬(ψ → χ) ∧ Addψ Inf χ
• minimal abductive solution if it is a solution and, for any ϕ, if ϕ is a
solution that becomes part of the agent’s information after ψ is added,
then ψ also becomes part of the agent’s information after ϕ is added.</p>
      <p>Addψ Inf χ ∧ (( Addϕ Inf χ ∧ Addψ Inf ϕ) → Addϕ Inf ψ)</p>
    </sec>
    <sec id="sec-3">
      <title>A non-omniscient agent</title>
      <p>In the classical definition of an abductive problem, the set of formulas
Φ is understood as a theory, usually assumed to be closed under logical
consequence, as we mentioned before. If this is the case, then we have
actually been studying an omniscient case.</p>
      <p>But our agent does not need to be ideal. And if the agent’s information
is not closed under logical consequence, then we should make a difference
between the information she actually has, her explicit information, and what
follows logically from it, her implicit information [12; 11; 15].</p>
      <p>3.1 Abductive problems</p>
      <p>A non-omniscient agent has an abductive problem whenever χ is not part of
her explicit information. As a consequence, the modality Inf in Definition 2.1
becomes InfEx . But now each of our two abductive problems splits into four,
according to the agent’s implicit information (InfIm ) about χ and ¬χ. These
eight cases include inconsistent situations in which the agent is implicitly
informed about both χ and ¬χ. They can be discarded under particular
interpretations of the agent’s information, like knowledge or consistent beliefs,
but we have chosen to keep them here for the sake of generality.</p>
      <p>Still, not all these cases are possible. We have said that implicit
information is what follows logically from the explicit one, so explicit information
itself should be implicit information, that is,</p>
    </sec>
    <sec id="sec-4">
      <title>InfEx ϕ → InfIm ϕ</title>
      <p>By assuming this formula, we can drop the cases in which some formula is
in the agent’s explicit information but not in her implicit one.</p>
      <p>Definition 3.1 (Non-omniscient abductive problems). A non-omniscient
agent can face six different abductive problems, each one of them
characterized by a formula in Table 1.</p>
      <p>Let us review each one of the novel cases. In case (1.1), the truly novel one,
the agent lacks explicit and implicit information about both χ and ¬χ; the
formula χ is a real novelty for her. But in case (1.2), the not implicit novelty
one, though the agent has explicit information about neither χ nor
¬χ, she has implicit information about χ. In other words, χ is a novelty for
the agent’s explicit information, but not for her implicit one, since χ follows
logically from what she explicitly has. In case (1.3), the implicit anomaly
one, the agent lacks explicit information about both χ and ¬χ, but implicitly
she is informed about ¬χ. Finally, we have the implicitly inconsistent case,
(1.4), in which the agent lacks explicit information about both χ and ¬χ, but
has an implicit inconsistency.</p>
      <p>Now for the anomalous cases. Case (2.3) is the truly anomalous one: the
agent has both explicit and implicit information about ¬χ, and lacks both
explicit and implicit information about χ. In the remaining one, (2.4), called
anomaly with implicit inconsistency, the agent has explicit information
about ¬χ but not about χ, yet the latter follows from her explicit information.</p>
      <p>Omniscience as a particular case. An agent is omniscient when she has
explicitly all her implicit information. With this extra requirement, expressed
by the formula InfIm ϕ → InfEx ϕ, cases (1.2), (1.3), (1.4) and (2.4) can be
discarded, since in them explicit and implicit information do not coincide. This leaves
us only with cases (1.1) and (2.3): exactly the two cases of Definition 2.1.</p>
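      <p>That Definition 3.1 yields exactly six cases, namely the states respecting InfEx ϕ → InfIm ϕ in which χ is not explicit, can be checked by enumeration. The encoding below is ours.</p>

```python
from itertools import product

def label(ex_chi, ex_neg, im_chi, im_neg):
    """Label a state per Table 1; None if it violates InfEx -> InfIm or is no problem."""
    if (ex_chi and not im_chi) or (ex_neg and not im_neg):
        return None  # explicit information must also be implicit
    if ex_chi:
        return None  # chi is explicit: no abductive problem
    if not ex_neg:
        return {(False, False): "(1.1) truly novel",
                (True, False): "(1.2) not implicit novelty",
                (False, True): "(1.3) implicit anomaly",
                (True, True): "(1.4) implicitly inconsistent"}[(im_chi, im_neg)]
    return "(2.3) truly anomalous" if not im_chi else "(2.4) anomaly with implicit inconsistency"

problems = [lab for s in product([False, True], repeat=4)
            if (lab := label(*s)) is not None]
assert len(problems) == 6  # exactly the six cases of Definition 3.1
```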
      <p>3.2 Abductive solutions</p>
      <p>We have defined non-omniscient χ-abductive problems as situations in which
the agent is not explicitly informed about χ. Accordingly, for defining a
solution, we will look for an action (or a sequence of them) that makes
the agent explicitly informed about χ, while leaving her with neither implicit nor
explicit information about ¬χ. We will focus on cases (1.1), (1.2), (1.3) and
(2.3), leaving the inconsistent ones, (1.4) and (2.4), for future work.</p>
      <p>Consider the truly novel case (1.1): the agent lacks explicit and implicit
information about both χ and ¬χ. Then, just like in the omniscient case, a
solution is a formula ψ that when added to the agent’s explicit information
makes the agent explicitly informed about χ.</p>
      <p>Now consider the not implicit novelty case (1.2): though the agent does
not have χ explicitly, she has it implicitly. A solution for case (1.1), adding
some ψ, would also work here, but the agent does not really need this
external interaction, since a non-omniscient agent has another possibility:
she can make the implicit χ explicit by performing the adequate reasoning
steps. And this gives us new possibilities beyond this case. For example,
in (1.1), the agent does not need a ψ that makes χ explicit after being added: a ψ
that makes χ implicit is also a solution, since she can then make χ explicit
by reasoning alone. In fact, there are several strategies for solving each one
of the abductive problems, but for simplicity we will focus on the most
representative one for each of them.</p>
      <p>In case (1.3), reasoning will only make the anomaly explicit. But then the
agent will be in the truly anomalous case (2.3), which can be solved by revising
the agent’s information to remove ¬χ from the explicit and implicit part,
and then adding a ψ that makes χ part of her explicit information.</p>
      <p>In the following definition, formulas of the form α ϕ indicate that the
agent can perform some reasoning step α after which ϕ is the case.
Definition 3.2 (Non-omniscient abductive solutions). Solutions for
consistent non-omniscient abductive problems are provided in Table 2.</p>
      <p>Case (1.1): a formula ψ such that Addψ InfEx χ.</p>
      <p>Case (1.2): a reasoning α such that α InfEx χ.</p>
      <p>Case (1.3): a reasoning α and a formula ψ such that α InfEx ¬χ ∧ Rem¬χ (¬InfIm ¬χ ∧ Addψ InfEx χ).</p>
      <p>Case (2.3): a formula ψ such that Rem¬χ (¬InfIm ¬χ ∧ Addψ InfEx χ).</p>
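      <p>The representative strategies of Table 2 can be read as a small transition system: each problem is mapped to an action and a successor case. The action names below are ours, paraphrasing the table.</p>

```python
# Representative strategy per case, paraphrasing Table 2 (action names are ours).
STEPS = {
    "(1.2)": ("reason: make the implicit chi explicit", "solved"),
    "(1.3)": ("reason: make the implicit anomaly not-chi explicit", "(2.3)"),
    "(2.3)": ("revise: remove not-chi from explicit and implicit information", "(1.1)"),
    "(1.1)": ("add: a formula psi making chi explicit", "solved"),
}

def solve(case):
    """Follow the representative strategy until the problem is solved."""
    trace = [case]
    while case != "solved":
        _action, case = STEPS[case]
        trace.append(case)
    return trace
```

      <p>Following the strategies from case (1.3) passes through (2.3) and (1.1) before reaching a solution, which is exactly the path described below for Figure 1.</p>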
      <p>Note how actions take us from one abductive problem to another. In
case (1.3), the proper reasoning will take the agent to case (2.3) from which,
by applying the proper revision, the agent will reach case (1.1), where a
new piece of information is needed. The flowchart of Figure 1 shows this.</p>
      <p>[Figure 1: flowchart for solving a χ-abductive problem, branching on whether
InfEx ¬χ holds and on whether Rem¬χ achieves ¬InfIm ¬χ.]</p>
      <p>Classification of abductive solutions. The extra requisites of Definition 2.3
can be adapted to this non-omniscient case. For the consistency and the
explanatory requirements there are no important changes: we just require
the agent’s implicit information (and therefore her explicit information too) to be
consistent at the end of the sequence of actions (¬InfIm ⊥), and the
formula ψ not to imply χ (¬(ψ → χ)) in the cases in which it is needed
((1.1), (1.3) and (2.3)). The minimality requirement now gives us more
options. We can define it over the action Addψ, looking for the weakest
formula ψ, but it can also be defined over the action Rem¬χ, looking for
the revision that removes the smallest amount of information. It can even
be defined over the action α, looking for the shortest reasoning chain.</p>
    </sec>
    <sec id="sec-5">
      <title>A non-dynamically-omniscient agent</title>
      <p>Even though the agents of the previous section are non-omniscient, there
is still an idealization about them. We have defined the agent’s implicit
information as what follows logically from her explicit information, but a
more ‘real’ agent does not need to be dynamically omniscient in the sense that
she does not need to have complete reasoning abilities. In other words, she
may not be able to derive all logical consequences of her explicit
information. This difference is important, because then a solution for a χ-abductive
problem does not need to be as strong as a formula that, when added, also
informs the agent explicitly about χ; it can also be some formula that, when
added, allows the agent to derive χ.</p>
      <p>4.1 Abductive problems</p>
      <p>Now we can make a further refinement. We can distinguish between what
follows logically from the agent’s explicit information, the objective implicit
information InfIm , and what the agent can actually derive, the subjective
implicit information InfDer . In other words, InfDer ϕ holds when the agent
can perform a sequence of reasoning steps that make ϕ explicit information.
In particular, an empty sequence of reasoning steps makes explicit the
information that is already explicit, so we assume</p>
    </sec>
    <sec id="sec-6">
      <title>InfEx ϕ → InfDer ϕ</title>
      <p>Though not complete, the agent’s reasoning abilities can be assumed to be
sound. This makes subjective implicit information part of the objective
implicit one, giving us</p>
    </sec>
    <sec id="sec-7">
      <title>InfDer ϕ → InfIm ϕ</title>
      <p>Each one of the six abductive problems of Table 1 turns into four cases,
according to whether the agent can derive or not what follows logically
from her explicit information, that is, according to whether InfDer χ and
InfDer ¬χ hold or not. Our two assumptions allow us to discard some of
the cases, leaving us with the following.</p>
      <p>Definition 4.1 (Extended abductive problems). A non-omniscient agent
without complete reasoning abilities can face eleven different abductive
problems, each one of them characterized by a formula in Table 3.</p>
      <p>(1.1.a): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ ¬InfDer ¬χ ∧ ¬InfIm χ ∧ ¬InfIm ¬χ</p>
      <p>(1.2.a): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ ¬InfDer ¬χ ∧ InfIm χ ∧ ¬InfIm ¬χ</p>
      <p>(1.2.b): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ InfDer χ ∧ ¬InfDer ¬χ ∧ InfIm χ ∧ ¬InfIm ¬χ</p>
      <p>(1.3.a): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ ¬InfDer ¬χ ∧ ¬InfIm χ ∧ InfIm ¬χ</p>
      <p>(1.3.c): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ InfDer ¬χ ∧ ¬InfIm χ ∧ InfIm ¬χ</p>
      <p>(1.4.a): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ ¬InfDer ¬χ ∧ InfIm χ ∧ InfIm ¬χ</p>
      <p>(1.4.b): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ InfDer χ ∧ ¬InfDer ¬χ ∧ InfIm χ ∧ InfIm ¬χ</p>
      <p>(1.4.c): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ InfDer ¬χ ∧ InfIm χ ∧ InfIm ¬χ</p>
      <p>(1.4.d): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ InfDer χ ∧ InfDer ¬χ ∧ InfIm χ ∧ InfIm ¬χ</p>
      <p>(2.3.c): ¬InfEx χ ∧ InfEx ¬χ ∧ ¬InfDer χ ∧ InfDer ¬χ ∧ ¬InfIm χ ∧ InfIm ¬χ</p>
      <p>(2.4.d): ¬InfEx χ ∧ InfEx ¬χ ∧ InfDer χ ∧ InfDer ¬χ ∧ InfIm χ ∧ InfIm ¬χ</p>
      <p>
Recall that different abductive problems in Table 1 can have the same
solution. For example, though abductive problem (1.2), the not implicit novelty
case, can be solved by means of reasoning steps (see Table 2), we mentioned
that it can also be solved like case (1.1). But the further refinement that we
have just made really makes them different. In case (1.2.b), χ is subjective
implicit information, so the agent can derive it and solve the problem by
reasoning alone. Nevertheless, this is not possible in (1.2.a), since χ is
objective but not subjective implicit information. The agent cannot derive χ;
she needs a formula that, when added to her explicit information, makes χ
explicit (as in the truly novel case); or, more interestingly, she can extend her
reasoning abilities with a formula/rule that allows her to derive χ.</p>
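      <p>A quick sanity check, under our reading of Table 3: encoding each of the eleven cases as six bits and verifying that all of them respect the two assumptions InfEx ϕ → InfDer ϕ and InfDer ϕ → InfIm ϕ.</p>

```python
# (ExChi, ExNegChi, DerChi, DerNegChi, ImChi, ImNegChi), per our reading of Table 3.
CASES = {
    "1.1.a": (0, 0, 0, 0, 0, 0),
    "1.2.a": (0, 0, 0, 0, 1, 0),
    "1.2.b": (0, 0, 1, 0, 1, 0),
    "1.3.a": (0, 0, 0, 0, 0, 1),
    "1.3.c": (0, 0, 0, 1, 0, 1),
    "1.4.a": (0, 0, 0, 0, 1, 1),
    "1.4.b": (0, 0, 1, 0, 1, 1),
    "1.4.c": (0, 0, 0, 1, 1, 1),
    "1.4.d": (0, 0, 1, 1, 1, 1),
    "2.3.c": (0, 1, 0, 1, 0, 1),
    "2.4.d": (0, 1, 1, 1, 1, 1),
}

def implies(a, b):
    return (not a) or b

def respects_assumptions(state):
    """InfEx -> InfDer (empty reasoning sequence) and InfDer -> InfIm (soundness)."""
    ex_c, ex_n, der_c, der_n, im_c, im_n = state
    return (implies(ex_c, der_c) and implies(der_c, im_c)
            and implies(ex_n, der_n) and implies(der_n, im_n))

assert all(respects_assumptions(s) for s in CASES.values())
```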
      <p>The same happens with other cases; consider those derived from (1.3).
In (1.3.c) the anomaly will be detected so the agent can start with a revision
of her information. But in (1.3.a) the anomaly cannot be derived, so we
have an objective but not subjective anomaly. The agent can neither detect
nor derive the anomaly, so a better approach for a solution is
to solve (1.3.a) as a novel abductive problem. In fact, we can say that (1.3.a)
is an objective anomaly but a subjective novelty.</p>
      <p>Just as actions of reasoning, revision and addition can take us from
one abductive problem of Table 1 to another, they also allow us to move
between the abductive problems of Table 3. Again, we will focus on the
consistent cases, discarding (1.4.∗) and (2.4.d).</p>
      <p>Definition 4.2 (Extended abductive solutions). Solutions for consistent
extended abductive problems are provided in Table 4. It should be read as
a transition table that provides actions and conditions that should hold
in order to move from one abductive problem to another. There are six
operations/conditions which are described below, from left to right.
• Action Addψ consists in adding ψ to the agent’s explicit information.</p>
      <p>The aim is to make the agent explicitly informed about χ.
• Action Addψ/α extends the agent’s explicit information by adding
a formula ψ or some inference resource α (e.g., a rule), with the aim of
providing the agent with enough information so she can derive χ.
• Action α consists in the application of reasoning steps. The goal
here is to make χ explicit.
• Action Addψ/α is just as before. The goal is that, after the action, the
agent should be able to derive ¬χ.
• Action α is just as before, this time with the aim of making ¬χ explicit.
• Action Rem¬χ removes ¬χ from the agent’s explicit information,
but the goal here is to remove it from her implicit information as well.</p>
      <p>[Table 4: transition table whose columns are the actions Addψ InfEx χ,
Addψ/α InfDer χ, α InfEx χ, Addψ/α InfDer ¬χ, α InfEx ¬χ and Rem¬χ ¬InfIm ¬χ,
and whose rows give, for each case, the action to apply and the case it leads to;
e.g., (1.3.a) leads to (1.3.c), then (2.3.c), then (1.1.a).]</p>
      <p>Table 4 establishes a natural path for solving a consistent extended
abductive problem. The longest path corresponds to case (1.3.a), in which
the agent has explicit information about neither χ nor ¬χ and,
though ¬χ follows logically from her explicit information, she cannot derive
it. A sequence of actions to solve this problem is, first, to provide the agent
with enough information so she can derive ¬χ, turning this case into (1.3.c).
Then, after reasoning to derive ¬χ, she will have an explicit anomaly, case
(2.3.c). From here she needs to revise her information to remove ¬χ from it
and, once she has done this and reached case (1.1.a), she needs to extend her
information with some ψ that will make her explicitly informed about χ.</p>
      <p>4.3 Collapsing the cases</p>
      <p>We have taken the perspective of an outsider. From a subjective point
of view, the agent does not need to solve an anomaly that she cannot
detect. What guides the process of solving an abductive problem is explicit
information and what she can derive from it, that is, the subjective implicit
one. In other words, inaccessible inconsistencies should not matter!</p>
      <p>We can simplify the solution of abductive problems by observing that
some problems in Table 3 are in fact indistinguishable for the agent. Without
further external interaction, she can only access her explicit information and
eventually what she can derive from it; the rest, the implicit information
that is not derivable, is not relevant. For example, abductive problems
(1.{1,2,3,4}.a) are in fact the same from the agent’s point of view, and she
will try to solve them in the same way. By reducing Table 3 according to
the agent’s subjective information we get:</p>
      <p>(1.{1,2,3,4}.a): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ ¬InfDer ¬χ</p>
      <p>(1.{2,4}.b): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ InfDer χ ∧ ¬InfDer ¬χ</p>
      <p>(1.{3,4}.c): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ ¬InfDer χ ∧ InfDer ¬χ</p>
      <p>(1.4.d): ¬InfEx χ ∧ ¬InfEx ¬χ ∧ InfDer χ ∧ InfDer ¬χ</p>
      <p>(2.3.c): ¬InfEx χ ∧ InfEx ¬χ ∧ ¬InfDer χ ∧ InfDer ¬χ</p>
      <p>(2.4.d): ¬InfEx χ ∧ InfEx ¬χ ∧ InfDer χ ∧ InfDer ¬χ</p>
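      <p>The collapse just described can be computed: grouping the eleven cases of Table 3 (under our six-bit encoding) by the information accessible to the agent, her explicit information and what she can derive, yields exactly the six classes above.</p>

```python
from collections import defaultdict

# Six-bit encoding of Table 3, per our reading:
# (ExChi, ExNegChi, DerChi, DerNegChi, ImChi, ImNegChi).
CASES = {
    "1.1.a": (0, 0, 0, 0, 0, 0), "1.2.a": (0, 0, 0, 0, 1, 0),
    "1.2.b": (0, 0, 1, 0, 1, 0), "1.3.a": (0, 0, 0, 0, 0, 1),
    "1.3.c": (0, 0, 0, 1, 0, 1), "1.4.a": (0, 0, 0, 0, 1, 1),
    "1.4.b": (0, 0, 1, 0, 1, 1), "1.4.c": (0, 0, 0, 1, 1, 1),
    "1.4.d": (0, 0, 1, 1, 1, 1), "2.3.c": (0, 1, 0, 1, 0, 1),
    "2.4.d": (0, 1, 1, 1, 1, 1),
}

def subjective_signature(state):
    """What the agent can access: her explicit information and what she can derive."""
    ex_c, ex_n, der_c, der_n, _im_c, _im_n = state
    return (ex_c, ex_n, der_c, der_n)

classes = defaultdict(list)
for name, state in sorted(CASES.items()):
    classes[subjective_signature(state)].append(name)

assert len(classes) == 6  # the six collapsed classes
```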
      <p>Note how these classes correspond to abductive problems in Table 1 in
which InfDer appears in place of InfIm. Then the abductive solutions in
Table 4 can be considered the objective solutions for the eleven objectively
different abductive problems. But if we consider only the information
accessible to the agent, then the subjective paths she follows to solve abductive
problems are similar to those of Figure 1.</p>
    </sec>
    <sec id="sec-8">
      <title>Summary and future work</title>
      <p>We have presented definitions of novel and anomalous abductive problems
from a subjective perspective. We have focused not only on omniscient
agents, but also on those whose information is not closed under logical
consequence and those whose reasoning abilities are not complete. Moreover,
we have also provided definitions of what an abductive solution is for each
one of these problems, identifying actions that allow the agent to move from
one problem to another, and the conditions that such actions should satisfy
in order to provide an abductive solution.</p>
      <p>Our work is just an initial exploration of abductive reasoning for
(non-omniscient) agents, and there are many aspects yet to be studied. To begin
with, we can still make a further refinement in our notions of information,
this time relative to the explicit one. Among our explicit information, there
are things that are supported by the rest of our information. Consider, for
example, the quadratic formula for solving quadratic equations: even if we
do not know or forget the formula, we can derive it if we know the process
of completing squares. But we also have information that is not supported
by the rest; things that, if not observed, we would not have. Consider our
initial example of Mr. Wilson having a very shiny right cuff: Holmes did not
know him before, so there is no way he could have derived this information
about his cuff without observing him. And in fact, the intuitive idea of
abductive reasoning is closer to situations of this kind in which we try to
find support (justification) for facts that, being observed, have become part
of our explicit information.</p>
      <p>
        But we can also look for a more concrete form of what we already have.
First, we have talked about information, but we can study more specific
notions, like knowledge and beliefs, by asking for more specific requirements,
like truth or consistency. Moreover, the real definitive step will be given
by providing a semantic model in which we can represent not only the
introduced notions of information (explicit, subjective implicit, objective
implicit) in their knowledge and belief versions, but also provide a concrete
definition for the discussed actions that modify them: adding external
information, reasoning in order to extend the explicit one or removing
part of it. Our current efforts are oriented to dynamic epistemic approaches,
following not only ideas like public announcements [14; 6] and revision
[3; 2], but also the finer grained notions of dropping [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and inference [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>And finally, we can look at possibly the most important question in
abductive reasoning. We now know what an abductive solution is, but how
can we find one? In other words, given a particular semantic model
representing an agent’s information, can we provide a procedure that returns
‘the set of abductive solutions’? And though an abductive solution can be a
formula that, when assumed, provides the agent with explicit information
about the observed χ, the most interesting solutions are those that allow
the agent to derive χ, linking tightly the search for solutions with the agent’s
reasoning tools. This emphasizes the need for a semantic model that allows
us to represent an agent’s inference dynamics.</p>
      <p>
        Even more: an important topic in abductive reasoning is the selection
of the best explanation (e.g., [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]). Beyond logical requisites to
avoid triviality, the definition of suitable criteria to model the agent’s
preference for solutions is still an open problem. By using Kripke frames, we
can provide some criteria based on a plausibility measure among accessible
worlds [3; 2]. And last (but not least), once the agent has selected her
‘best’ explanation, what shall she do with it?
Acknowledgements We thank Johan van Benthem and Ángel
Nepomuceno-Fernández for their valuable comments and suggestions.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aliseda</surname>
          </string-name>
          .
          <source>Abductive Reasoning. Logical Investigations into Discovery and Explanation</source>
          , volume
          <volume>330</volume>
          of Synthese Library Series. Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          .
          <article-title>A qualitative theory of dynamic interactive belief revision</article-title>
          . In G. Bonanno, W. van der Hoek, and M. Wooldridge, editors,
          <source>Logic and the Foundations of Game and Decision Theory (LOFT7)</source>
          , volume
          <volume>3</volume>
          of Texts in Logic and Games, pages
          <fpage>13</fpage>
          -
          <lpage>60</lpage>
          . AUP,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>van Benthem</surname>
          </string-name>
          .
          <article-title>Dynamic logic for belief revision</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>129</fpage>
          -
          <lpage>155</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>van Benthem</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Velázquez-Quesada</surname>
          </string-name>
          .
          <article-title>Inference, promotion, and the dynamics of awareness</article-title>
          .
          <source>Technical Report PP-2009-43</source>
          , ILLC, Universiteit van Amsterdam,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>van Ditmarsch</surname>
          </string-name>
          , W. van der Hoek, and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kooi</surname>
          </string-name>
          .
          <source>Dynamic Epistemic Logic</source>
          , volume
          <volume>337</volume>
          of Synthese Library Series. Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gerbrandy</surname>
          </string-name>
          .
          <source>Bisimulations on Planet Kripke</source>
          . PhD thesis, ILLC, Universiteit van Amsterdam,
          <year>1999</year>
          . ILLC Dissertation Series DS-1999-01.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Grossi</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Velázquez-Quesada</surname>
          </string-name>
          .
          <article-title>Twelve Angry Men: A study on the fine-grain of announcements</article-title>
          . In X. He,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Horty</surname>
          </string-name>
          , and E. Pacuit, editors,
          <source>LORI</source>
          , volume
          <volume>5834</volume>
          of LNCS, pages
          <fpage>147</fpage>
          -
          <lpage>160</lpage>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          , editor.
          <source>Proceedings of the 1st Conference on Theoretical Aspects of Reasoning about Knowledge</source>
          , Monterey, CA, March 1986, San Francisco, CA, USA,
          <year>1986</year>
          . Morgan Kaufmann Publishers Inc.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hintikka</surname>
          </string-name>
          .
          <source>Knowledge and Belief: An Introduction to the Logic of the Two Notions</source>
          . Cornell University Press, Ithaca, N.Y.,
          <year>1962</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hintikka</surname>
          </string-name>
          .
          <article-title>What is abduction? The fundamental problem of contemporary epistemology</article-title>
          .
          <source>Transactions of the Charles S. Peirce Society</source>
          ,
          <volume>34</volume>
          (
          <issue>3</issue>
          ):
          <fpage>503</fpage>
          -
          <lpage>533</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakemeyer</surname>
          </string-name>
          .
          <article-title>Steps towards a first-order logic of explicit and implicit belief</article-title>
          .
          <source>In Halpern [8]</source>
          , pages
          <fpage>325</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Levesque</surname>
          </string-name>
          .
          <article-title>A logic of implicit and explicit belief</article-title>
          .
          <source>In Proc. of AAAI84</source>
          , pages
          <fpage>198</fpage>
          -
          <lpage>202</lpage>
          , Austin, TX,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lipton</surname>
          </string-name>
          .
          <source>Inference to the Best Explanation</source>
          . Routledge, New York,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Plaza</surname>
          </string-name>
          .
          <article-title>Logics of public communications</article-title>
          . In
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Emrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Pfeifer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hadzikadic</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z. W.</given-names>
            <surname>Ras</surname>
          </string-name>
          , editors,
          <source>Proceedings of the 4th International Symposium on Methodologies for Intelligent Systems</source>
          , pages
          <fpage>201</fpage>
          -
          <lpage>216</lpage>
          , Tennessee, USA,
          <year>1989</year>
          . Oak Ridge National Laboratory, ORNL/DSRD-24.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>On epistemic logic and logical omniscience</article-title>
          .
          <source>In Halpern [8]</source>
          , pages
          <fpage>293</fpage>
          -
          <lpage>305</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>