<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Handling Preferences in LPMLN: A Preliminary Report</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bin Wang</string-name>
          <email>kse.wang@seu.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shutao Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hongxiang Xu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhizheng Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wei Wu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaodong Li</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science and Engineering Southeast University</institution>
          ,
          <addr-line>Nanjing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Science and Technology on Information System Engineering Lab The 28</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we present a new knowledge representation and reasoning tool to handle uncertainty, inconsistencies, and preferences by combining the ideas of LPMLN and logic programming with ordered disjunction (LPOD). The former is an extension of Answer Set Programming (ASP) that is designed for handling uncertainty and inconsistencies in knowledge representation by incorporating the methods of Markov Logic Networks (MLN). The latter is a logic formalism for representing preferences in ASP, which is a simple yet effective way to handle preferences. Firstly, we present a new knowledge representation language o-LPMLN that extends LPMLN by introducing ordered disjunctions. Then, we present an example to show the use of the language. Finally, we present an approach to computing o-LPMLN programs by translating them into regular LPMLN programs. The results obtained in this paper provide an alternative tool to handle inconsistencies, uncertainty, and preferences in a unified framework. Moreover, as a by-product, we present an approach to implementing LPOD using LPMLN solvers.</p>
      </abstract>
      <kwd-group>
        <kwd>LPMLN</kwd>
        <kwd>LPOD</kwd>
        <kwd>Preferences</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        LPMLN [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is an extension of Answer Set Programming (ASP) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which is designed to handle uncertainty and inconsistencies in knowledge representation by incorporating the methods of Markov Logic Networks [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Intuitively, an LPMLN program extends ASP by assigning each rule a weight degree, which means an ASP rule can be violated at a loss of its weight. Therefore, a stable model of an LPMLN program need only satisfy a subprogram, and a stable model is a better solution than another if the sum of the weight degrees of the rules it satisfies is greater. In this way, uncertain and inconsistent knowledge can be encoded and computed in the framework of LPMLN. So far, there have been few efforts to study the applications of LPMLN; most researchers focus on the implementations and theoretical properties of LPMLN [
        <xref ref-type="bibr" rid="ref1 ref11 ref12 ref13 ref17 ref18">12, 1, 13, 11, 17, 18</xref>
        ].
      </p>
      <p>
        In many real-world applications, preferences are inevitable. For example, it has been shown that many problems can be formulated as qualitative optimization problems [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in ASP, by which the solutions of the problems are associated with the optimal or suboptimal stable models of corresponding ASP programs. To handle preferences, a large body of extensions of ASP has been presented; even LPMLN can be viewed as such an extension. For example, “a is preferred to b” can be trivially encoded by the following LPMLN program M
2 : a.   (1)
1 : b.   (2)
Among these extensions, logic programs with ordered disjunction (LPOD) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is a simple yet effective one, obtained by introducing ordered disjunction into the head of a regular ASP rule (called an LPOD rule). An LPOD rule “a × b ← body” means that if X and Y are stable models satisfying “body” such that a ∈ X, a ∉ Y and b ∈ Y, then X is preferred to Y w.r.t. the rule. The other meaning behind the LPOD rule is that if we believe “body” and “a” are true, then we are allowed but not required to believe “b” is true in the same stable model, which makes LPOD different from LPMLN. For example, the same knowledge “a is preferred to b” can be encoded by the following LPOD program P
a × b.   (3)
It is easy to check that X = {a} and Y = {b} are two (candidate) stable models of P, but Z = {a, b} is not a stable model of P, and X is preferable to Y w.r.t. P. By contrast, we can observe that all of X, Y and Z are stable models of the above LPMLN program M, and the weight degree of Z w.r.t. M is higher than that of the others. Obviously, M cannot capture the intuition of P, which motivates us to present a new language o-LPMLN by combining LPMLN and LPOD.
      </p>
      <p>
        In this paper, we present a new language o-LPMLN, which can be regarded as an extension of both LPMLN and LPOD. Since there are several usable LPMLN solvers, the extension is expected to be a powerful knowledge representation and reasoning tool for handling uncertainty, inconsistencies, and preferences. Firstly, we define the syntax and semantics of the language o-LPMLN. Secondly, we present an example to show the use of the language. Thirdly, we present a modular and linear-time constructible translation from o-LPMLN programs to regular LPMLN programs, which provides an alternative approach to implementing o-LPMLN by using existing LPMLN solvers [
        <xref ref-type="bibr" rid="ref11 ref17">11, 17</xref>
        ]. As a by-product, the translation can also be used to implement LPOD. Finally, we discuss related work and future work.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
      <p>In this section, we review the languages LPMLN and LPOD. Note that throughout this paper, we consider only finite ground logic programs, that is, logic programs containing no variables, and we assume the reader is familiar with the notions of ASP.</p>
      <p>LPMLN
An LPMLN program is a finite set of weighted rules w : r, where w is either a symbol α denoting the “infinite weight” or a real number, and r is an ASP rule of the form
h1 ∨ … ∨ hk ← b1, …, bm, not c1, …, not cn.   (4)
where the hi, bi and ci are literals, ∨ is epistemic disjunction, and not is default negation. For an ASP rule r of the form (4), the head of r is head(r) = {hi | 1 ≤ i ≤ k}, the positive body of r is body+(r) = {bi | 1 ≤ i ≤ m}, the negative body of r is body−(r) = {ci | 1 ≤ i ≤ n}, and the set of literals occurring in r is lit(r) = head(r) ∪ body+(r) ∪ body−(r). Therefore, an ASP rule r of the form (4) can also be abbreviated as “head(r) ← body+(r), not body−(r)” or “head(r) ← body(r)”. An LPMLN rule w : r is called soft if w is a real number, and hard if w is α. By M^s and M^h we denote the sets of all soft rules and hard rules in an LPMLN program M, respectively. By M we denote the unweighted ASP counterpart of an LPMLN program M, i.e. M = {r | w : r ∈ M}. An LPMLN program is called ground if its rules contain no variables. For a ground LPMLN program M, we use W(M) to denote the weight degree of M, i.e. W(M) = exp(∑_{w : r ∈ M} w).</p>
      <p>
        A ground LPMLN rule w : r is satisfied by a consistent set X of ground literals, denoted by X ⊨ w : r, if X ⊨ r by the notion of satisfiability in ASP. An LPMLN program M is satisfied by X, denoted by X ⊨ M, if X satisfies all rules in M. By M_X we denote the LPMLN reduct of an LPMLN program M w.r.t. X, i.e. M_X = {w : r ∈ M | X ⊨ w : r}. X is a stable model of the program M if X is a stable model of M_X and M^h ⊆ M_X. By SM(M) we denote the set of all stable models of an LPMLN program M. For an interpretation I, the weight degree W(M, I) of I w.r.t. M is defined as
W(M, I) = W((M_I)^s)   (5)
and if I ∈ SM(M), the probability degree P(M, I) of I w.r.t. M is defined as
P(M, I) = W(M, I) / ∑_{X′ ∈ SM(M)} W(M, X′)   (6)
For a literal l, the probability degree P(M, l) of l w.r.t. M is defined as
P(M, l) = ∑_{l ∈ X, X ∈ SM(M)} P(M, X)   (7)
Note that in its original definition [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], a stable model is allowed to violate some hard rules, which is slightly different from our definitions here but does not affect the generality of the results obtained in this paper.
      </p>
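      <p>To make these definitions concrete, the following Python sketch (our own illustration, not part of the formalism) enumerates the stable models and probability degrees of the two-fact LPMLN program M = {2 : a., 1 : b.} from the introduction. For a program of weighted facts, the reduct M_X contains exactly the facts satisfied by X, and X is a stable model iff it equals the set of heads of its reduct.</p>
      <preformat>
```python
import itertools
import math

# Toy LPMLN program of weighted facts: {2 : a.   1 : b.}
program = [(2.0, "a"), (1.0, "b")]
atoms = sorted({head for _, head in program})

def stable_models(prog):
    """Enumerate stable models: for weighted facts, the reduct M_X keeps the
    facts satisfied by X, and X is stable iff X equals the reduct's heads."""
    models = []
    for n in range(len(atoms) + 1):
        for xs in itertools.combinations(atoms, n):
            x = set(xs)
            if x == {h for _, h in prog if h in x}:
                models.append(x)
    return models

def weight(prog, x):
    # W(M, X) = exp(sum of the weights of the rules satisfied by X), eq. (5)
    return math.exp(sum(w for w, h in prog if h in x))

sms = stable_models(program)
z = sum(weight(program, x) for x in sms)        # normalization constant, eq. (6)
probs = {frozenset(x): weight(program, x) / z for x in sms}
```
      </preformat>
      <p>Running the sketch yields four stable models (including the empty one), with Z = {a, b} carrying the largest weight e^3, in line with the discussion in the introduction.</p>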
      <p>
        LPOD
Following Lee and Yang [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], we distinguish regular rules and LPOD rules in a logic program with ordered disjunction (LPOD). An LPOD P consists of two parts: the regular part P^r and the ordered disjunction part P^o, where P^r is an ASP program consisting of rules of the form (4), and P^o is a finite set of LPOD rules r of the form (8),
h1 × … × hn ← body(r).   (8)
where the hi (n &gt; 1) are literals satisfying hi ≠ hj if i ≠ j. By o(r) we denote the number of literals occurring in the head of an LPOD rule r, i.e. o(r) = |head(r)|. An LPOD rule r of the form (8) means that if body(r) is true, then for any positive integers i &lt; j, we prefer to believe hi rather than hj, and if we believe hi, we are not required to believe hj.
      </p>
      <p>Definition 1. For an LPOD rule r, its i-th option (1 ≤ i ≤ o(r)), denoted by r_i, is defined as
hi ← body(r), not h1, …, not hi−1.   (9)</p>
      <p>Definition 2. A split program of an LPOD P is obtained from P by replacing each rule in P^o by one of its options.</p>
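      <p>Definitions 1 and 2 can be sketched directly in Python; the rule representation below (a pair of an ordered head list and a positive body list) is our own assumption for illustration.</p>
      <preformat>
```python
from itertools import product

# An LPOD rule "h1 × ... × hn ← body" as (ordered_heads, body),
# e.g. "a × b ← c." is (["a", "b"], ["c"]).

def options(rule):
    """All options of an LPOD rule (Definition 1): the i-th option keeps only
    the head h_i and adds "not h_1, ..., not h_{i-1}" to the body."""
    heads, body = rule
    return [([heads[i]], body + ["not " + h for h in heads[:i]])
            for i in range(len(heads))]

def split_programs(regular, ordered):
    """All split programs (Definition 2): replace each LPOD rule by one option."""
    return [regular + list(pick)
            for pick in product(*(options(r) for r in ordered))]

# Example: the LPOD consisting of the single rule "a × b."
splits = split_programs([], [(["a", "b"], [])])
```
      </preformat>
      <p>The example produces the two split programs {a.} and {b ← not a.}, whose stable models {a} and {b} are exactly the candidate stable models discussed in the introduction.</p>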
      <p>
        Based on the above definitions, the semantics of an LPOD P is defined as follows. A consistent set S of literals is a candidate stable model of P if it is a stable model of a split program of P. By CSM(P) we denote the set of all candidate stable models of P. The satisfaction degree deg(I, r) of an LPOD rule r w.r.t. an interpretation I is defined as
deg(I, r) = 1, if I ⊭ body(r); min{k | hk ∈ head(r) ∩ I}, otherwise.   (10)
And the satisfaction degree deg(I, P) of an LPOD P w.r.t. an interpretation I is defined as the sum of the satisfaction degrees of the LPOD rules in P w.r.t. I, i.e. deg(I, P) = ∑_{r ∈ P^o} deg(I, r). For a candidate stable model S of P, by S^i(P) we denote the set of LPOD rules in P^o that are satisfied by S at degree i. Based on the notion of satisfaction degree, for two candidate stable models X and Y of P, Brewka [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] introduces four preference criteria:
1. Cardinality-Preferred: X is cardinality-preferred to Y, denoted by X &gt;c Y, if there is a positive integer i such that |X^i(P)| &gt; |Y^i(P)|, and |X^j(P)| = |Y^j(P)| for all j &lt; i;
2. Inclusion-Preferred: X is inclusion-preferred to Y, denoted by X &gt;i Y, if there is a positive integer i such that Y^i(P) ⊂ X^i(P), and X^j(P) = Y^j(P) for all j &lt; i;
3. Pareto-Preferred: X is pareto-preferred to Y, denoted by X &gt;p Y, if there is a rule r ∈ P^o such that deg(X, r) &lt; deg(Y, r), and there does not exist a rule r ∈ P^o such that deg(Y, r) &lt; deg(X, r);
4. Penalty-Sum-Preferred: X is penalty-sum-preferred to Y, denoted by X &gt;ps Y, if deg(X, P) &lt; deg(Y, P).
      </p>
      <p>For each pr ∈ {c, i, p, ps}, a candidate stable model X of P is a pr-preferred stable model if there is no candidate stable model Y of P such that Y &gt;pr X.</p>
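      <p>As a small illustration of the satisfaction degree (10) and the penalty-sum criterion, the following Python sketch (our own, assuming rules with positive bodies only) selects the ps-preferred candidates among given candidate stable models.</p>
      <preformat>
```python
def deg(interp, rule):
    """Satisfaction degree, eq. (10): 1 if the body is unsatisfied by interp,
    otherwise the least k such that h_k is in interp."""
    heads, body = rule             # positive body literals only (assumption)
    if not all(b in interp for b in body):
        return 1
    return min(k + 1 for k, h in enumerate(heads) if h in interp)

def ps_preferred(candidates, lpod_rules):
    """Penalty-sum-preferred candidates: minimal total satisfaction degree."""
    total = lambda x: sum(deg(x, r) for r in lpod_rules)
    best = min(total(x) for x in candidates)
    return [x for x in candidates if total(x) == best]

# The rule "a × b." with candidate stable models X = {a} and Y = {b}:
# deg(X) = 1 and deg(Y) = 2, so X is penalty-sum-preferred.
preferred = ps_preferred([{"a"}, {"b"}], [(["a", "b"], [])])
```
      </preformat>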
      <p>o-LPMLN
To handle preferences in LPMLN naturally, we extend LPMLN by introducing ordered disjunctions in this section; the resulting language is called o-LPMLN.</p>
      <p>Syntactically, an o-LPMLN program M consists of two parts: the regular part M^r and the ordered disjunction part M^o, where M^r is a regular LPMLN program, and M^o consists of rules of the form (8) preceded by “α :”, called weighted ordered disjunctive rules (wo-rules for simplicity). A wo-rule is a hard rule, because we treat preferences as conditions that must be satisfied in solving problems. As a consequence, the preference criteria are applied prior to certainty degrees in computing the inference results of an o-LPMLN program, which will be shown later.</p>
      <p>Similar to LPOD, we have the following definitions.</p>
      <p>Definition 3. For a wo-rule α : r, its i-th option (1 ≤ i ≤ o(r)) is defined as α : r_i, where r_i is the i-th option of the rule r.</p>
      <p>Definition 4. For an o-LPMLN program M, a split program of M is obtained by replacing each rule in M^o by one of its options.</p>
      <p>Semantically, a consistent set X of literals is a candidate stable model of M if X is a stable model of a split program of M. By CSM(M) we denote the set of all candidate stable models of M. The satisfaction degree deg(I, α : r) of a wo-rule α : r w.r.t. an interpretation I and the satisfaction degree deg(I, M) of an o-LPMLN program M w.r.t. I are defined the same as in LPOD. Therefore, the preferred stable models of an o-LPMLN program M are defined the same as in LPOD, that is, for each pr ∈ {c, i, p, ps}, a candidate stable model X of M is a pr-preferred stable model if there is no candidate stable model Y of M such that Y &gt;pr X. By pr-SM(M) we denote the set of all pr-preferred stable models of M under a preference criterion pr ∈ {c, i, p, ps}.</p>
      <p>In addition, the weight degree W(M, X) of a candidate stable model X w.r.t. an o-LPMLN program M is defined the same as in LPMLN, i.e. W(M, X) = W((M_X)^s). And the probability degree Ppr(M, X) of a pr-preferred stable model X w.r.t. an o-LPMLN program M is defined as follows
Ppr(M, X) = W(M, X) / ∑_{Y ∈ pr-SM(M)} W(M, Y)   (11)
where pr ∈ {c, i, p, ps}. Here, we do not define a probability degree for every candidate stable model, for two main reasons. Firstly, preferred stable models are used to model the intended solutions of a problem, while non-preferred stable models are usually discarded; hence, considering the probability of all candidate stable models is of little practical use. Secondly, many candidate stable models may satisfy the same subprogram, which means a probability defined for these stable models cannot measure the “importance” of a stable model appropriately. For the same reason, we define the probability degree Ppr(M, l) of a literal l w.r.t. an o-LPMLN program M as follows
Ppr(M, l) = ∑_{l ∈ X, X ∈ pr-SM(M)} Ppr(M, X)   (12)</p>
      <p>Following the above definitions, the steps of computing the inference results of an o-LPMLN program M are as follows:
1. compute all candidate stable models of M;
2. find all preferred stable models of M;
3. compute the weight and probability degrees of all preferred stable models;
4. compute the probability degrees of literals w.r.t. M.</p>
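      <p>Steps 3 and 4 above amount to normalizing the weights of the preferred stable models, as in equations (11) and (12). A minimal Python sketch (our own illustration; the preferred models and their weights are assumed to be given):</p>
      <preformat>
```python
import math

def pr_probabilities(preferred, weights):
    """Probability degrees over pr-preferred stable models.

    preferred: list of preferred stable models (sets of literals);
    weights:   parallel list of weight degrees W(M, X)."""
    z = sum(weights)                              # normalization constant
    p_model = [w / z for w in weights]            # eq. (11)
    literals = set().union(*preferred)
    # eq. (12): total probability of the preferred models containing l
    p_literal = {l: sum(p for x, p in zip(preferred, p_model) if l in x)
                 for l in literals}
    return p_model, p_literal

# Two hypothetical preferred stable models with weights e^2 and e^1:
p_model, p_lit = pr_probabilities([{"a", "c"}, {"b", "c"}],
                                  [math.e ** 2, math.e ** 1])
```
      </preformat>
      <p>Here c holds in every preferred model, so its probability degree is 1, while a and b split the remaining mass in proportion e^2 : e.</p>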
      <p>Now we use an example to show the use of o-LPMLN in handling uncertain
and inconsistent knowledge with preferences.</p>
      <p>
        Example 1. This example is adapted from Example 1 in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], in which we use weights to measure the possibility of knowledge. A personal assistant agent has the following knowledge (we assume the weight degree of each piece of soft knowledge ranges from 1 to 5):
1. If I go to the cinema and it rains, then I will not be wet;
2. If I go to the beach and it rains, then I will be wet;
3. If I go to the cinema and there is sultriness, then I will be fresh;
4. If I go to the beach and there is sultriness, then I will not be fresh;
5. It is almost-certain that if it is humid, there will be sultriness (weight degree: 4);
6. It is quasi-certain that if it is cloudy, then it will rain (weight degree: 2);
7. The weather forecast said that today is a humid day, cloudy with weight degree 4, not cloudy with weight degree 1;
8. My preferences today are that going to the beach is more important than being fresh, and being fresh is more important than not being wet.
      </p>
      <p>The question is: where should I go today? The above eight sentences can be encoded in the following o-LPMLN program M, in which w stands for wet, r for rain, h for humid, c for cloudy, go_c for going to the cinema, go_b for going to the beach, s for sultriness, and f for being fresh.</p>
      <p>α : ¬w ← go_c, r.   (r1)
α : w ← go_b, r.   (r2)
α : f ← go_c, s.   (r3)
α : ¬f ← go_b, s.   (r4)
4 : s ← h.   (r5)
2 : r ← c.   (r6)
α : h.   (r7)
4 : c.   (r8)
1 : ¬c.   (r9)
α : go_b × f × ¬w.   (r10)
α : go_b ∨ go_c.   (r11)
α : ← go_b, go_c.   (r12)</p>
      <p>
Each of the rules (r1) - (r10) in M expresses some knowledge in the above sentences, where (r5) - (r9) handle uncertain and inconsistent knowledge in a regular LPMLN style, and (r10) handles preferences in an LPOD style. Rules (r11) and (r12) mean “I have to go to the cinema or the beach”. All candidate stable models of M are shown in Table 1. From the table, it is easy to check that the candidate stable models containing go_b (denoted by SMb) are preferred to those containing f and ¬w (denoted by SMf and SMnw, respectively), i.e. for each pr ∈ {c, i, p, ps}, SMb &gt;pr SMf and SMf &gt;pr SMnw. Although both of the candidate stable models X = {r, s, c, f, h, ¬w, go_c} and Y = {r, s, c, w, h, ¬f, go_b} are the most probable stable models, we choose to go to the beach, which shows the effect of preferences in decision making. If we arrange our trip based on the preferred stable models, the probability degree that we are wet is greater than 0.83. Note that the exact probability degree is unknown, because we are not sure whether it rains in some cases, which shows the ability of LPMLN to reason over uncertain knowledge.</p>
      <p>Computing o-LPMLN Programs
In this section, we present a translation from o-LPMLN programs to LPMLN
programs, which provides an alternative approach to implementing o-LPMLN.</p>
      <p>In the introduction, we rewrote the LPOD rule “a × b.” in LPMLN as follows
2 : a.
1 : b.
and we showed that this LPMLN program fails to capture the meaning of the original LPOD rule. The gap between the above LPOD and LPMLN rules is caused by the following property of LPOD rules, called the satisfaction chain property.
Proposition 1. Let r be an LPOD rule in an LPOD P. If X is a candidate stable model of P, then X ⊨ r_k for any deg(X, r) ≤ k ≤ o(r), where r_k is the k-th option of r.</p>
      <p>Obviously, our LPMLN encoding of an LPOD rule cannot express the satisfaction chain property. Therefore, to compute LPOD rules in LPMLN, we need to encode not only the preferences but also the satisfaction chain property expressed by the rule, which is the intuition behind the translation method presented in this section.
Definition 5. Given an o-LPMLN program M, its LPMLN counterpart τ(M) consists of three parts, i.e. τ(M) = M^r ∪ τ1(M^o) ∪ τ2(M^o), where M^r is the regular part of M, and the other two parts of τ(M) are defined as follows:
τ1(M^o) = {α : sh(r) | α : r ∈ M^o}, where sh(r) is the complete shift of r defined as follows
← body(r), not h1, …, not ho(r).   (15)
τ2(M^o) = {−1 : r_k | α : r ∈ M^o, and 1 ≤ k ≤ o(r)}.</p>
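      <p>Representing a wo-rule as a pair of an ordered head list and a positive body list (our own illustrative encoding, not the paper's), the translation of Definition 5 can be sketched as follows; the string "alpha" stands for the hard weight α, and each option receives weight −1.</p>
      <preformat>
```python
ALPHA = "alpha"          # stands for the hard weight α

def translate(regular, ordered):
    """LPMLN counterpart of an o-LPMLN program (Definition 5):
    tau(M) = M^r ∪ tau1(M^o) ∪ tau2(M^o)."""
    out = list(regular)                      # M^r is kept unchanged
    for heads, body in ordered:
        # tau1: the complete shift  ← body(r), not h1, ..., not h_o(r).
        out.append((ALPHA, ([], body + ["not " + h for h in heads])))
        # tau2: every k-th option r_k, each weighted -1
        for k in range(len(heads)):
            out.append((-1, ([heads[k]],
                             body + ["not " + h for h in heads[:k]])))
    return out

tau = translate([], [(["a", "b"], [])])      # translate "α : a × b."
```
      </preformat>
      <p>For the single wo-rule α : a × b., the counterpart contains the constraint ← not a, not b (weight α) and the two options a. and b ← not a., each with weight −1, so every candidate stable model satisfies at least one option.</p>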
      <p>Definition 5 presents a linear and modular translation from o-LPMLN to LPMLN, which uses the idea of split programs to express the satisfaction chain property of o-LPMLN rules. Lemma 1 shows that an o-LPMLN program and its LPMLN counterpart coincide on their (candidate) stable models.</p>
      <p>Lemma 1. Given an o-LPMLN program M and its LPMLN counterpart τ(M) defined in Definition 5, a consistent set X of literals is a candidate stable model of M if and only if it is a stable model of τ(M).</p>
      <p>Proof. The proof is divided into two parts: in the first part, we show that if X ∈ CSM(M) then X ∈ SM(τ(M)); in the second part, we show that if X ∈ SM(τ(M)) then X ∈ CSM(M). For an o-LPMLN program M, let M′ be a split program of M. For each wo-rule α : r ∈ M^o, let α : r′ be the corresponding option in M′, and α : sh(r) be the corresponding rule in τ1(M^o).</p>
      <p>Part 1: For the split program M′, without loss of generality, suppose X is a stable model of M′. By the definition, for each wo-rule α : r ∈ M^o, we have X ⊨ head(r′) or X ⊭ body(r′), which means X ⊭ body(sh(r)), i.e. X ⊨ α : sh(r). Therefore, we can infer that X ⊨ τ1(M^o), which means τ1(M^o) ⊆ τ(M)_X. By the definition of split programs, we have (M^r)^h ⊆ (M′)_X ⊆ τ(M)_X. Assume there is a proper subset X′ of X such that X′ ⊨ τ(M)_X; by the above results, we have X′ ⊨ (M′)_X, which contradicts the premise that X is a stable model of M′. Hence, X is a stable model of τ(M).</p>
      <p>Part 2: Suppose X is a stable model of τ(M). By the definition, it is obvious that τ(M)_X = (M^r)_X ∪ τ1(M^o) ∪ {−1 : r_k ∈ τ2(M^o) | deg(X, α : r) ≤ k ≤ o(r)}. Let M′(X) be a split program of M w.r.t. X, constructed as follows:
– add all rules in M^r to M′(X);
– for each wo-rule α : r ∈ M^o, add α : r_k to M′(X), where k = deg(X, α : r).
It is clear that M′(X) is a split program of M and X ⊨ M′(X). Assume there is a proper subset X′ of X such that X′ ⊨ M′(X). From the proof in Part 1, we have X′ ⊨ (M^r)_X ∪ τ1(M^o). By Proposition 1, it is clear that X′ ⊨ τ(M)_X, which contradicts X ∈ SM(τ(M)). Hence, we can infer that X ∈ CSM(M).</p>
      <p>Combining the above results, we have shown that CSM(M) = SM(τ(M)), and Lemma 1 is proven.</p>
      <p>Now we investigate the preference criteria in the context of LPMLN. Firstly, we introduce some notation. For an o-LPMLN program M and its LPMLN counterpart τ(M), the weight degree W(τ(M), X, α : r) of X w.r.t. a wo-rule α : r ∈ M^o is defined as
W(τ(M), X, α : r) = W(τ2({α : r})_X)   (16)
According to the satisfaction chain property of LPOD rules, we can obtain the relationship between the satisfaction degree and the weight degree defined in o-LPMLN, which is shown in Proposition 2.</p>
      <p>Proposition 2. Given an o-LPMLN program M and its LPMLN counterpart τ(M), for a candidate stable model X ∈ CSM(M) and a wo-rule α : r ∈ M^o, the satisfaction degree deg(X, α : r) of α : r w.r.t. X can be derived from the weight degree W(τ(M), X, α : r) of X w.r.t. α : r, i.e.
deg(X, α : r) = o(r) + 1 + ln(W(τ(M), X, α : r))   (17)</p>
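      <p>Equation (17) can be checked numerically: by the satisfaction chain property, a candidate stable model X satisfies exactly the options r_k with deg(X, α : r) ≤ k ≤ o(r), i.e. o(r) + 1 − deg(X, α : r) options, each contributing weight −1. A small Python sanity check (our own illustration):</p>
      <preformat>
```python
import math

def deg_from_weight(o_r, n_satisfied_options):
    """Recover deg(X, α : r) from the tau2 weight as in eq. (17).

    Each satisfied option carries weight -1, so
    W(tau(M), X, α : r) = exp(-n_satisfied_options)."""
    w = math.exp(-n_satisfied_options)
    return o_r + 1 + math.log(w)

# With o(r) = 3, a model at satisfaction degree 2 satisfies options 2 and 3,
# i.e. 2 options, and eq. (17) recovers the degree:
recovered = deg_from_weight(3, 2)
```
      </preformat>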
      <p>Based on the results in Proposition 2, it is clear that the preferred stable models of an o-LPMLN program M can be computed from its LPMLN counterpart, which is shown in Theorem 1.</p>
      <p>Theorem 1. Given an o-LPMLN program M and its LPMLN counterpart τ(M), all computing results of M can be derived from the computing results of τ(M), that is,
– CSM(M) = SM(τ(M));
– for each candidate stable model X, W(M, X) = W(M^r, X);
– for each pr ∈ {c, i, p, ps}, the pr-preferred stable models of M can be derived from the computing results of τ(M).</p>
      <p>
        Theorem 1 directly implies an approach to computing o-LPMLN programs by translating them into LPMLN programs and using existing LPMLN solvers, such as LPMLN2ASP [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and LPMLN-Models [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Among the different preference criteria, the penalty-sum criterion relates especially closely to the weight of a stable model defined in LPMLN. Therefore, we have a direct corollary of Theorem 1, which is shown in Corollary 1.
      </p>
      <p>Corollary 1. Given an o-LPMLN program M and its LPMLN counterpart τ(M), a consistent set X of literals is a ps-preferred stable model of M if X is a stable model of τ(M) and there does not exist a stable model Y ∈ SM(τ(M)) such that W(τ2(M^o), Y) &lt; W(τ2(M^o), X).</p>
      <p>So far, we have shown an approach to computing o-LPMLN programs by using existing LPMLN solvers. Actually, it is easy to observe that the approach presented here can also be used to compute LPODs, since an LPOD can be viewed as an o-LPMLN program without soft rules. It is worth noting that although LPOD rules can be encoded in LPMLN, our extension of LPMLN is not merely syntactic sugar. On the one hand, ordered disjunction provides a way to express preferences in LPMLN, which is essentially different from uncertain and inconsistent knowledge. On the other hand, o-LPMLN introduces new criteria for evaluating the stable models of an LPMLN program, which enriches the expressivity of LPMLN.</p>
      <p>In addition, the LPMLN counterpart of an o-LPMLN program is linear-time constructible, which means the computational complexity of o-LPMLN is the same as that of LPMLN; that is, our extension of LPMLN does not increase the computational complexity.</p>
    </sec>
    <sec id="sec-3">
      <title>Related Work</title>
      <p>
        Concerning related work, since LPOD is an early work on representing preferences in ASP, several formalisms have been presented after it, such as Answer Set Optimization [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and the meta-programming technique [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. All of these formalisms can serve as a foundation for handling preferences in LPMLN.
      </p>
      <p>
        LPPOD [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] is another extension of ASP that handles uncertainty and preferences in a unified framework by combining LPOD and possibilistic ASP (PASP) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. But PASP only handles qualitative uncertainty, while LPMLN can handle both qualitative and quantitative uncertainty. Therefore, o-LPMLN can handle qualitative as well as quantitative uncertainty.
      </p>
      </p>
      <p>
        In addition, Lee and Yang [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] present an “almost” modular translation from LPOD to ASP by introducing some auxiliary atoms, while our translation from LPOD to LPMLN does not introduce any new atoms and is completely modular and linear-time constructible. All other implementations of LPOD make iterative calls to ASP solvers to find preferred stable models [
        <xref ref-type="bibr" rid="ref2 ref6">2, 6</xref>
        ], while our translation only needs to call an LPMLN solver once.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we present an alternative knowledge representation and reasoning tool for handling inconsistencies, uncertainty, and preferences in a unified framework: an extension of LPMLN obtained by introducing ordered disjunctions, called o-LPMLN. Our contributions are as follows. Firstly, we present the syntax and semantics of the language o-LPMLN. Secondly, we present a translation from o-LPMLN to regular LPMLN programs. Using existing LPMLN solvers, the translation provides a method to implement o-LPMLN. As a by-product, the translation also provides a one-shot approach to implementing LPOD.</p>
      <p>In the future, we plan to further study the combination of LPMLN and LPOD. For example, in this paper we treat LPOD rules as hard rules in LPMLN, while this restriction should be relaxed in some cases, i.e. preferences should be allowed to be violated, which will be discussed in future work. Moreover, we plan to develop an efficient o-LPMLN solver and to use o-LPMLN in more practical scenarios.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>We are grateful to the anonymous referees for their useful comments. This work
is supported by the National Key Research and Development Plan of China
(Grant No.2017YFB1002801).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Balai</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>On the Relationship between P-log and LPMLN</article-title>
          . In: Kambhampati,
          <string-name>
            <surname>S</surname>
          </string-name>
          . (ed.)
          <source>Proceedings of the 25th International Joint Conference on Artificial Intelligence</source>
          . pp.
          <fpage>915</fpage>
          -
          <lpage>921</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niemelä</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Syrjänen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Implementing ordered disjunction using answer set solvers for normal programs</article-title>
          .
          <source>In: Proceedings of the 8th European Conference On Logics In Artificial Intelligence</source>
          . pp.
          <fpage>444</fpage>
          -
          <lpage>456</lpage>
          . Cosenza, Italy
          (
          <year>2002</year>
          ). https://doi.org/10.1007/3-540-45757-7-37
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Brewka</surname>
          </string-name>
          , G.:
          <article-title>Logic Programming with Ordered Disjunction</article-title>
          .
          <source>In: Proceedings of the 9th International Workshop on Non-Monotonic Reasoning</source>
          . pp.
          <fpage>100</fpage>
          -
          <lpage>105</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Brewka</surname>
          </string-name>
          , G.:
          <article-title>Answer Sets: From Constraint Programming Towards Qualitative Optimization</article-title>
          .
          <source>In: Proceedings of the 7th Logic Programming and Nonmonotonic Reasoning</source>
          . vol.
          <volume>2923</volume>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>46</lpage>
          . Fort Lauderdale, USA (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Preferences in Answer Set Programming</article-title>
          .
          <source>In: Proceedings of the 11th Conference of the Spanish Association for Artificial Intelligence on Current Topics in Artificial Intelligence</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Delgrande</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romero</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>asprin: Customizing Answer Set Preferences without a Headache</article-title>
          .
          <source>In: Proceedings of the 29th AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>1467</fpage>
          -
          <lpage>1474</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Confalonieri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nieves</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Osorio</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vázquez-Salceda</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Dealing with explicit preferences and uncertainty in answer set programming</article-title>
          .
          <source>Annals of Mathematics and Artificial Intelligence</source>
          <volume>65</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>159</fpage>
          -
          <lpage>198</lpage>
          (
          <year>2012</year>
          ). https://doi.org/10.1007/s10472-012-9311-0
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Confalonieri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prade</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Using possibilistic logic for modeling qualitative decision: Answer Set Programming algorithms</article-title>
          .
          <source>International Journal of Approximate Reasoning</source>
          <volume>55</volume>
          (
          <issue>2</issue>
          ),
          <fpage>711</fpage>
          -
          <lpage>738</lpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1016/j.ijar.2013.11.002
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gebser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaminski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Complex optimization in answer set programming</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>11</volume>
          (
          <issue>4-5</issue>
          ),
          <fpage>821</fpage>
          -
          <lpage>839</lpage>
          (
          <year>2011</year>
          ). https://doi.org/10.1017/S1471068411000329
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>The Stable Model Semantics for Logic Programming</article-title>
          . In:
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowen</surname>
            ,
            <given-names>K.A.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the Fifth International Conference and Symposium on Logic Programming</source>
          . pp.
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          . MIT Press (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talsania</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Computing LPMLN using ASP and MLN solvers</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>17</volume>
          (
          <issue>5-6</issue>
          ),
          <fpage>942</fpage>
          -
          <lpage>960</lpage>
          (
          <year>2017</year>
          ). https://doi.org/10.1017/S1471068417000400
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Weighted Rules under the Stable Model Semantics</article-title>
          . In:
          <string-name>
            <surname>Baral</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Delgrande</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning</source>
          . pp.
          <fpage>145</fpage>
          -
          <lpage>154</lpage>
          . AAAI Press (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>LPMLN, Weak Constraints, and P-log</article-title>
          . In:
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markovitch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>1170</fpage>
          -
          <lpage>1177</lpage>
          . AAAI Press (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Translating LPOD and CR-Prolog2 into Standard Answer Set Programs</article-title>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Nicolas</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stéphan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lefèvre</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Possibilistic uncertainty handling for answer set programming</article-title>
          .
          <source>Annals of Mathematics and Artificial Intelligence</source>
          <volume>47</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>139</fpage>
          -
          <lpage>181</lpage>
          (
          <year>2006</year>
          ). https://doi.org/10.1007/s10472-006-9029-y
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Richardson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          :
          <article-title>Markov Logic Networks</article-title>
          .
          <source>Machine Learning</source>
          <volume>62</volume>
          (
          <issue>1-2</issue>
          )
          ,
          <fpage>107</fpage>
          -
          <lpage>136</lpage>
          (
          <year>2006</year>
          ). https://doi.org/10.1007/s10994-006-5833-1
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>A Parallel LPMLN Solver: Primary Report</article-title>
          . In:
          <string-name>
            <surname>Bogaerts</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harrison</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the 10th Workshop on Answer Set Programming and Other Computing Paradigms</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . CEUR-WS, Espoo, Finland (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Splitting an LPMLN Program</article-title>
          .
          <source>In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>1997</fpage>
          -
          <lpage>2004</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>