<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Satisfiability for First-order Logic as a Non-Modal Deontic Logic</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <aff>Imperial College London, United Kingdom</aff>
          <email>rak@doc.ic.ac.uk</email>
        </contrib>
      </contrib-group>
      <abstract>
        <p>In modal deontic logics, the focus is on inferring logical consequences, for example inferring whether or not an obligation O mail, to mail a letter, logically implies O [mail ∨ burn], an obligation to mail or burn the letter. Here I present an alternative approach in which obligations are sentences (such as mail) in first-order logic (FOL), and the focus is on satisfying those sentences by making them true in some best model of the world. To facilitate this task and to make it manageable in this alternative approach, models are defined by a logic program (LP) extended by means of action assumptions (A). The resulting combination of FOL, LP and A is a variant of abductive logic programming (ALP).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>We will see that the models M0 and W can be defined constructively, by using a logic
program P to define M0, and by using a set of candidate actions A to specify
alternative ways of extending M0. We will also see that it is possible to satisfy goals
without generating complete models, by using backward reasoning.</p>
      <p>There can be many models that satisfy the same goal satisfaction problem, and
some models may be better than others. Usual formulations of ALP ignore this fact,
and are neutral with respect to the choice between different models satisfying the
same goals. In contrast, in deontic logic, an agent is obliged to satisfy its goals in the
best way possible.</p>
      <p>To capture this normative character of deontic logic in FOL/ALP, we introduce
a partial ordering between the models in W.</p>
      <p>A normative goal satisfaction problem is a tuple ⟨G, M0, W, &lt;⟩ where:
&lt; is a strict partial ordering over W,
where M &lt; M’ represents M’ being better than M.</p>
      <p>M ∈ W satisfies a normative goal satisfaction problem ⟨G, M0, W, &lt;⟩ if and only if</p>
    </sec>
    <sec id="sec-2">
      <title>M satisfies the goal satisfaction problem ⟨G, M0, W⟩ and there does not exist M’ ∈ W such that M’ satisfies ⟨G, M0, W⟩ and M &lt; M’.</title>
      <p>
        There is an obvious relationship with the possible world semantics of modal logics:
W is like a frame of possible worlds. The extension of M0 to M is like the
accessibility relation between possible worlds. The partial ordering &lt; is like the
preference relation in preference-based deontic logics, such as those of [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and [14].
      </p>
      <p>However, whereas preference-based deontic logics build the preference relation
into the semantics, complicating the logic, here the partial ordering is external to the
logic, which is simply FOL. Moreover, while in modal logics, actions and events are
normally represented by labels on the accessibility relation, here they are “reified”,
as part of the record of a partial history of the world.</p>
      <p>Deontic logic, the logic of obligation, contrasts with alethic logic, the logic of
necessity. In alethic logic, a necessary truth cannot be falsified. But in deontic logic,
an obligation is a normative ideal. If an obligation is violated, the world does not
end, but the resulting world is less than ideal.</p>
      <p>The focus on deriving logical consequences in modal logics makes it difficult to
deal with violations, conflicting norms, and contrary-to-duty obligations. In contrast,
the focus on goal satisfaction in FOL/ALP turns these problems into pragmatic
choices between alternative models. Moreover, it makes it possible for an agent,
aspiring towards the normative ideal, to fall short, but nonetheless succeed in
generating a best model possible with the limited resources available.</p>
      <sec id="sec-2-1">
        <title>2 Logic programs as constructive definitions of models.</title>
        <p>These definitions of goal satisfaction imply that, for an agent to satisfy its goals G,
whether in some best way or not, the agent needs to search the space of alternative
world histories W, to find some model M ∈ W of G. For this to be feasible in practice,
the models M ∈ W must be constructible, and the space W itself must be searchable.
This is where ALP comes in. Logic programs in ALP provide a constructive
representation of models, and an efficient way to guarantee truth in a model without
necessarily generating the model in its totality.</p>
        <p>In our variant of ALP, the given model M0 is defined by a logic program P, and
the space of candidate models M ∈ W is defined by logic programs P ∪ Δ, where Δ ⊆ A
and A is a set of ground atomic sentences representing candidate assumptions.</p>
        <p>In ordinary abduction, the goals G represent observations, and A represents
candidate, hypothetical external events that can be used to explain G. In default
reasoning, A represents assumptions that conditions are normal, and G represents
constraints that ensure conditions are not assumed to be normal if they are
exceptional. In deontic applications, G represents obligations and prohibitions, and
A represents candidate actions that can be used to satisfy G.</p>
        <p>
          In the simplest case, a logic program is a set of definite clauses of the form
conclusion ← condition1 ∧ … ∧ conditionn, where conclusion and each conditioni
is an atomic formula. All variables are universally quantified with scope the entire
clause. Every such set of definite clauses P has a unique minimal model M [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The
minimal model M can be viewed as the intended model of P, and P can be regarded
as a constructive definition of M.
        </p>
        <p>In LP, it is convenient to represent models M as Herbrand interpretations, which
are sets of atomic sentences representing all the atomic sentences that are true in M.
A Herbrand interpretation M of a logic program P is then a minimal model of P if
and only if M ⊆ M’ for any other Herbrand model M’ of P. Minimal models can be
constructed by forward reasoning: using modus ponens to exhaustively derive
atomic sentences that are instances of the conclusions of clauses from atomic
sentences that are instances of the conditions of clauses.</p>
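        <p>The fixed-point construction of a minimal model by forward reasoning can be sketched as follows, for ground definite clauses only; the general clause X = X and variable instantiation are omitted, and the can_paint rule is an illustrative addition, not part of the paper's example:</p>
        <preformat>
```python
def minimal_model(program):
    """Forward reasoning to a fixed point: repeatedly apply modus ponens,
    adding a clause's conclusion once all its conditions are in the model."""
    model = set()
    changed = True
    while changed:
        changed = False
        for conclusion, conditions in program:
            if conclusion not in model and all(c in model for c in conditions):
                model.add(conclusion)
                changed = True
    return model

# Ground clauses encoded as (conclusion, [conditions]); facts have no conditions.
P = [
    ("country(iz)", []), ("country(oz)", []), ("colour(red)", []),
    # hypothetical rule, not from the paper:
    ("can_paint(iz, red)", ["country(iz)", "colour(red)"]),
]
# minimal_model(P) contains the three facts plus can_paint(iz, red)
```
        </preformat>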
        <p>Forward reasoning can also be used to satisfy a goal: guessing a set of candidate
assumptions Δ ⊆ A, adding them to P, generating the minimal model M of P ∪ Δ,
and then checking whether M is also a model of G. However, this kind of reasoning
is not computationally feasible. It is not feasible to generate candidate Δ blindly and
independently of G; nor is it feasible to generate models in their totality.</p>
        <p>
          In contrast, backward reasoning using SLD resolution [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is computationally
feasible. It reasons backwards from G, reducing goals that match the conclusion of
a clause in P to subgoals that are the instantiated conditions of the clause. It continues
this process of goal-reduction, until all subgoals are solved directly either by atomic
sentences in P or by assumptions in A. In this way, backward reasoning generates
only assumptions Δ that are relevant to satisfying G. Moreover, backward reasoning
does not generate a complete model, but generates only the assumptions Δ, which
together with P determine the minimal model of P ∪ Δ.
        </p>
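        <p>For the ground case, this goal-reduction procedure can be sketched as follows; the program P and the atom reply are hypothetical illustrations, not examples from the paper:</p>
        <preformat>
```python
def solve(goals, program, abducibles, delta=frozenset()):
    """Backward reasoning (SLD-style, ground case): reduce goals using
    program clauses; goals in 'abducibles' may instead be assumed.
    Yields only the assumption sets (the Δ) needed to solve the goals."""
    if not goals:
        yield delta
        return
    goal, rest = goals[0], goals[1:]
    for conclusion, conditions in program:
        if conclusion == goal:              # goal-reduction step
            yield from solve(list(conditions) + rest, program, abducibles, delta)
    if goal in abducibles:                  # solve directly by assumption
        yield from solve(rest, program, abducibles, delta | {goal})

P = [("reply", ["mail"])]                   # hypothetical example program
A = {"mail", "burn"}
# next(solve(["reply"], P, A)) is frozenset({"mail"}): only the relevant
# assumption is generated, never "burn".
```
        </preformat>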
        <p>In the following three simple examples, the logic program P is either an empty
set of clauses {} or a singleton set containing the clause X = X. Backward reasoning
only identifies those assumptions/actions in A or those instances of X = X that are
needed to satisfy the goal. Moreover, in the first two examples, the preference
relation &lt; = {} is empty, so all models are equally good.</p>
      </sec>
      <sec id="sec-2-2">
        <title>3 Map colouring as goal satisfaction</title>
        <p>The classic map colouring problem illustrates the use of FOL/ALP for goal
satisfaction. However, it can also be formulated in deontic terms:</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>It is obligatory that every country is painted with a colour. It is forbidden to paint the same country two different colours. It is forbidden to paint two adjacent countries the same colour.</title>
    </sec>
    <sec id="sec-4">
      <title>In FOL, the problem has the form ⟨G, M0, W, &lt;⟩ where:</title>
      <p>G = { X [country(X)   C [colour(C)  paint(X, C)]],
X C1 C2 [paint(X, C1)  paint(X, C2)  C1 = C2],
X Y C ¬ [adjacent(X, Y)  paint(X, C)  paint(Y, C)] }
W = {M0   |   A} where A = {paint(X, C) | country(X), colour(C)}
&lt; = {}. i.e. all models are equally good.</p>
      <p>The initial model M0 is given by a logic program that specifies the countries, the
colours, the adjacency relation, and the identity relation. For a simple problem with
two adjacent countries and two colours, this is given by the program P and its
minimal model M0:</p>
      <p>P = {country(iz), country(oz), adjacent(iz, oz), colour(red), colour(blue), X = X}
M0 = {country(iz), country(oz), adjacent(iz, oz), colour(red), colour(blue),
iz = iz, oz = oz, red = red, blue = blue}
Because the program P is so simple, the only difference between P and M0 is that P
contains a general clause X = X, whereas M0 contains all variable-free instances of
the clause.</p>
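      <p>To make the space W concrete, the two-country problem can be solved by brute-force enumeration of all Δ ⊆ A, checking the three goals directly. This blind generation is exactly what backward reasoning is meant to avoid; the sketch only illustrates the model space:</p>
      <preformat>
```python
from itertools import combinations

countries = ["iz", "oz"]
colours = ["red", "blue"]
adjacent = [("iz", "oz")]
A = [(x, c) for x in countries for c in colours]   # candidate paint actions

def satisfies(paints):
    # every country is painted with some colour
    every_painted = all(any((x, c) in paints for c in colours) for x in countries)
    # no country is painted two different colours
    one_colour = all(not ((x, c1) in paints and (x, c2) in paints)
                     for x in countries for c1 in colours for c2 in colours if c1 != c2)
    # no two adjacent countries share a colour
    no_clash = all(not ((x, c) in paints and (y, c) in paints)
                   for x, y in adjacent for c in colours)
    return every_painted and one_colour and no_clash

solutions = [set(d) for n in range(len(A) + 1)
             for d in combinations(A, n) if satisfies(set(d))]
# exactly two solutions: {(iz, red), (oz, blue)} and {(iz, blue), (oz, red)}
```
      </preformat>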
      <p>There are exactly two models that satisfy the goals G, and both are equally good:
M1 = M0 ∪ {paint(iz, red), paint(oz, blue)} and</p>
      <p>M2 = M0 ∪ {paint(iz, blue), paint(oz, red)}
The map colouring problem can also be expressed in modal deontic logic,
formalising the English statement of the problem. It would then be possible to infer
the following logical consequence:</p>
      <p>O [paint(iz, red) ∧ paint(oz, blue)] ∨ O [paint(iz, blue) ∧ paint(oz, red)]
which describes all solutions of the problem. It is not obvious how to infer a single
solution, which would be more useful in practice.</p>
      <sec id="sec-4-1">
        <title>4 Ross’s Paradox</title>
        <p>SDL (Standard Deontic Logic) is commonly used as a basis for comparison between
different deontic logics. It is a propositional logic with a modal operator O
representing obligation. Among the many problems of SDL, one that also affects many
other deontic logics is Ross’s Paradox [12], i.e.:</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>It is obligatory that the letter is mailed. If the letter is mailed, then the letter is mailed or the letter is burned.</title>
      <p>O mail, mail → mail ∨ burn.</p>
    </sec>
    <sec id="sec-6">
      <title>In SDL it is a logical consequence of O mail that O [mail ∨ burn].</title>
      <p>As McNamara [11] puts it, “it seems rather odd to say that an obligation to mail
the letter entails an obligation that can be fulfilled by burning the letter (something
presumably forbidden), and one that would appear to be violated by not burning it if
I don't mail the letter”.</p>
      <p>The “Paradox” can be understood as another example of the inadequacy of logical
consequence for dealing with problems of satisfying obligations. Here is a
formulation of the paradox as a goal satisfaction problem ⟨G, M0, W, &lt;⟩:
G = {mail, ¬burn}
M0 = {}
W = {M0 ∪ Δ | Δ ⊆ A} where A = {mail, burn}</p>
      <p>= {{}, {mail}, {burn}, {mail, burn}}
&lt; = {}.</p>
      <p>P = {} and M0 is the minimal model of P.</p>
      <p>M = {mail} is the only minimal model that satisfies G. But mail ∨ burn is true in
M. So satisfying G entails satisfying mail ∨ burn. But, contrary to suggestions that
may be associated with the fact that O [mail ∨ burn] is a logical consequence of
O mail, satisfying the goal mail ∨ burn does not satisfy the goal mail.</p>
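      <p>The model space of the paradox is small enough to enumerate directly; a minimal sketch:</p>
      <preformat>
```python
from itertools import combinations

A = ["mail", "burn"]
# W = {M0 ∪ Δ | Δ ⊆ A} with M0 = {}:
W = [set(d) for n in range(len(A) + 1) for d in combinations(A, n)]

def satisfies_G(M):
    """G = {mail, ¬burn}."""
    return "mail" in M and "burn" not in M

models = [M for M in W if satisfies_G(M)]
# models == [{"mail"}]; in that model mail ∨ burn is also true, but a
# model of mail ∨ burn alone, such as {"burn"}, does not satisfy G.
```
      </preformat>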
      <p>
        Viewed in this way, Ross’s Paradox is not a paradox at all, but rather, as Fox [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
also argues, a confusion between satisfying an obligation and implying that one
obligation is a logical consequence of another.
      </p>
      <sec id="sec-6-1">
        <title>5 Chisholm’s Paradox</title>
        <p>
          Ross’s Paradox and the map colouring problem do not show the need for preferences
between alternative models. However, the need for preferences arises with
Chisholm’s Paradox [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]:
        </p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>It ought to be that Jones goes to assist his neighbours. It ought to be that, if Jones goes, then he tells them he is coming. If Jones doesn't go, then he ought not tell them he is coming. Jones doesn't go.</title>
      <p>
        Much of the discussion [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] in the deontic logic literature concerns the problems that
arise with alternative representations of the conditional obligations in the second and
third sentences. I will not repeat this discussion here, but will present the example as
a normative goal satisfaction problem ⟨G, M0, W, &lt;⟩:
      </p>
      <p>G = {go → tell, ¬go → ¬tell}
M0 = {}
W = {M0 ∪ Δ | Δ ⊆ A} where A = {go, tell}</p>
      <p>= {{}, {go}, {tell}, {go, tell}}</p>
      <p>M &lt; M’ if go ∉ M and go ∈ M’.</p>
      <p>Here the “obligation” for Jones to go is represented by the preference for models in
which go is true over models in which go is false. There are two models M1 = {}
and M2 = {go, tell} that make G true. But M2 is better than M1. So only M2 satisfies
G, M0, W, &lt;. This means that Jones must go and tell.</p>
      <p>Now suppose that Jones doesn’t go. We can represent this simply by removing
go from the candidate actions A, updating the problem to ⟨G, M0, W’, &lt;⟩ where W’
= {{}, {tell}}. The only (and best) candidate model that now satisfies the updated
problem is the less than ideal model M1 = {}, which means that Jones must not tell,
as is intuitively correct.</p>
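      <p>The preference between models, and the effect of removing go from A, can be sketched by enumerating W and filtering out dominated models:</p>
      <preformat>
```python
from itertools import combinations

def models_of(A):
    """W = {M0 ∪ Δ | Δ ⊆ A}, with M0 = {} as in the example."""
    return [set(d) for n in range(len(A) + 1) for d in combinations(A, n)]

def satisfies_G(M):
    """G = {go → tell, ¬go → ¬tell}, read as material implications."""
    return ("go" not in M or "tell" in M) and ("go" in M or "tell" not in M)

def better(M1, M2):
    """M2 is better than M1 exactly when go holds in M2 but not in M1."""
    return "go" not in M1 and "go" in M2

def best_models(A):
    sat = [M for M in models_of(A) if satisfies_G(M)]
    return [M for M in sat if not any(better(M, M2) for M2 in sat)]

# With A = ["go", "tell"]: only {go, tell} is best (Jones must go and tell).
# After removing go from A: only {} remains (Jones must not tell).
```
      </preformat>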
      <p>In a more realistic representation with explicit time, when we discover that Jones
doesn’t go, we would update M0 to a new world history in which going is no longer
an option. Notice that, in any case, whether we update M0, W or both, the solution
of the updated problem changes, even though the goals remain the same. This gives
the satisfiability problem for FOL a non-monotonic character, even though logical
consequence itself is monotonic.</p>
      <sec id="sec-7-1">
        <title>6 LPS (Logical Production System)</title>
        <p>
          The logic and computer language LPS [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ] is a scaled-down implementation of
ALP, intended for practical applications of goal satisfaction. Goals in LPS have the
simplified logical form of rules antecedent → consequent, which are similar to
condition-action rules in production systems [13]. All variables in the antecedent of
a rule are universally quantified with scope the entire rule, and all variables in the
consequent but not in the antecedent are existentially quantified with scope the
consequent of the rule. Variables include time variables, and all times in the
consequent are later than or equal to all times in the antecedent.
        </p>
        <p>Computation in LPS generates a sequence M0 ⊆ … ⊆ Mi-1 ⊆ Mi ⊆ … of partial
histories, which in the limit determines a model M = M0 ∪ … ∪ Mi-1 ∪ Mi ∪ … which
makes all the goals true. Each history Mi is obtained from the previous history Mi-1
by updating Mi-1 with the set of all actions and external events that take place between
Mi-1 and Mi. These updates are performed destructively, like change of state in the
real world, and like destructive updates in imperative computer languages.</p>
        <p>Computation in LPS gives rules both a logical and imperative interpretation.
Whenever any instance of an antecedent of a rule becomes true in Mi, forward
reasoning treats the corresponding instance of the consequent of the rule as a
command to perform actions, to make the instance of the consequent true in some
future Mj, j ≥ i+1.</p>
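        <p>Under the simplifying assumption that antecedents and states are sets of ground atoms, this reactive cycle can be sketched as follows; the in_room/light_on atoms are illustrative encodings, not LPS syntax:</p>
        <preformat>
```python
def lps_cycle(state, rules, steps):
    """One simplified LPS loop: whenever a rule's antecedent holds in the
    current history, its consequent is executed as a command, updating the
    state destructively (a crude stand-in for making it true in a future Mj)."""
    for _ in range(steps):
        commands = {consequent for antecedent, consequent in rules
                    if antecedent.issubset(state)}
        state |= commands          # destructive update of the world state
    return state

rules = [({"in_room(bob)"}, "light_on")]   # hypothetical reactive rule
# lps_cycle({"in_room(bob)"}, rules, 3) adds "light_on" to the state
```
      </preformat>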
        <p>Clauses conclusion ← conditions in LP also have both a logical and imperative
interpretation. In addition to their purely logical interpretation, they have an
imperative interpretation as procedures, which use backward reasoning to
decompose a goal of determining whether a conclusion is true or of making a
conclusion true to the subgoal of determining or making the conditions true.
Conditions representing actions are made true by adding them to future histories Mj.</p>
        <p>Preference between models in the current implementation can be indicated only
by the order in which clauses are written. This gives preference to models generated
by clauses written earlier over models generated by clauses written later. There is
also an in-built preference for models that satisfy goals as soon as possible.</p>
        <p>An online implementation of LPS is accessible from http://lps.doc.ic.ac.uk/. The
LPS examples notebook in the examples menu contains executable links to the map
colouring problem, and to several other examples having a deontic goal satisfaction
interpretation. The first steps notebook goes through an example in which two agents
have conflicting goals. Agent bob wants the light on whenever he is in a room, and
agent dad wants the light off in any room where the light is on. It may be natural to
think of bob’s goal as a personal goal, and dad’s goal as an obligation. But LPS
makes no distinction between the two kinds of goals.</p>
      </sec>
      <sec id="sec-7-2">
        <title>7 Conclusions</title>
        <p>In this short paper, I have formulated deontic reasoning in ALP as goal satisfaction
in FOL, with the A and LP components of ALP used to define the space of candidate
models. I have argued that backward reasoning with LP overcomes the need to
generate complete models, and makes it possible to avoid generating candidate
actions blindly without relevance to the goals that need to be satisfied. I have also
argued that, for philosophical applications, the focus on goal satisfaction in ALP is
more useful than the focus on logical consequence in modal deontic logic.</p>
        <p>11. McNamara, P. (2006) Deontic logic. Handbook of the History of Logic, 7,
197–289.
12. Ross, A. (1941) Imperatives and Logic. Theoria, 7, 53–71.
13. Simon, H. (2001) Production systems. In The MIT Encyclopedia of the Cognitive
Sciences. MIT Press.
14. van Benthem, J., Grossi, D. and Liu, F. (2014) Priority structures in deontic logic.
Theoria.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Carmo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          (
          <year>2002</year>
          )
          <article-title>Deontic logic and contrary-to-duties. Handbook of philosophical logic</article-title>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Chisholm</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          (
          <year>1963</year>
          )
          <article-title>Contrary-to-Duty Imperatives and Deontic Logic</article-title>
          .
          <source>Analysis.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>van Emden</surname>
            ,
            <given-names>M. H.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R. A.</given-names>
          </string-name>
          (
          <year>1976</year>
          )
          <article-title>The semantics of predicate logic as a programming language</article-title>
          .
          <source>JACM.</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Fox</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2015</year>
          )
          <article-title>The Semantics of Imperatives</article-title>
          .
          <source>The Handbook of Contemporary Semantic Theory</source>
          ,
          <volume>3</volume>
          ,
          <fpage>433</fpage>
          -
          <lpage>469</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Hansson</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>1969</year>
          )
          <article-title>An analysis of some deontic logics</article-title>
          .
          <source>Nous.</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sadri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2014</year>
          )
          <article-title>A logical characterization of a reactive system language</article-title>
          .
          <source>Proceedings of RuleML 2014</source>
          , Springer Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sadri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>Programming in logic without logic programming</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          ,
          <volume>16</volume>
          (
          <issue>3</issue>
          ),
          <fpage>269</fpage>
          -
          <lpage>295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Satoh</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>Obligation as optimal goal satisfaction</article-title>
          .
          <source>Journal of Philosophical Logic</source>
          , Springer. https://link.springer.com/article/10.1007/s10992-017-9440-3
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1974</year>
          )
          <article-title>Predicate logic as programming language</article-title>
          .
          <source>In IFIP congress</source>
          Vol.
          <volume>74</volume>
          ,
          <fpage>569</fpage>
          -
          <lpage>574</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2011</year>
          )
          <article-title>Computational logic and human thinking: how to be artificially intelligent</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>