 Handling Preferences in LPMLN : A Preliminary
                    Report

Bin Wang1 , Shutao Zhang1 , Hongxiang Xu1 , Zhizheng Zhang1 , Wei Wu2 , and
                               Xiaodong Li2
                   1
                     School of Computer Science and Engineering
                       Southeast University, Nanjing, China
           {kse.wang, shutao_zhang, xhx1693, seu_zzz}@seu.edu.cn
        2
           Science and Technology on Information System Engineering Lab
  The 28th Research Institute of China Electronics Technology Group Corporation
                                  Nanjing, China
                                xmuww2004@126.com



       Abstract. In this paper, we present a new knowledge representation
       and reasoning tool to handle uncertainty, inconsistencies, and prefer-
       ences by combining the ideas of LPMLN and logic programming with
       ordered disjunction (LPOD). The former is an extension of Answer Set
       Programming (ASP) that is designed for handling uncertainty and in-
       consistencies in knowledge representation by incorporating the methods
       of Markov Logic Networks (MLN). The latter is a logic formalism to
       represent preferences in ASP, which is a simple yet effective way to
       handle preferences. Firstly, we present a new knowledge representation
       language o-LPMLN that extends LPMLN by introducing ordered disjunc-
       tions. Then, we present an example to show the use of the language.
       Finally, we present an approach to computing o-LPMLN programs by
       translating them into regular LPMLN programs. Results obtained in this
       paper provide an alternative tool to handle inconsistencies, uncertainty,
       and preferences in a unified framework. Moreover, a by-product of the
       paper is that we present an approach to implementing LPOD by using
       LPMLN solvers.

       Keywords: LPMLN · LPOD · Preferences


1 Introduction

LPMLN [12] is an extension of Answer Set Programming (ASP) [10], which is
designed to handle uncertainty and inconsistencies in knowledge representation
by incorporating the methods of Markov Logic Networks [16]. Intuitively, an
LPMLN program extends ASP by assigning each rule a weight degree, which
means an ASP rule can be violated at the cost of its weight. Therefore, a stable
model of an LPMLN program only needs to satisfy a subprogram, and a stable model
is a better solution if the sum of the weight degrees of the rules it satisfies
is greater than that of another stable model. In this way, uncertain

and inconsistent knowledge can be encoded and computed in the framework of
LPMLN . So far, there have been few efforts studying the applications of LPMLN ; most
researchers focus on the implementations and theoretical properties of
LPMLN [12, 1, 13, 11, 17, 18].
    In many real-world applications, preferences are inevitable. For example, it
has been shown that many problems can be formulated as a qualitative optimiza-
tion problem [4] in ASP, by which the solutions of the problems are associated
with the optimal or suboptimal stable models of corresponding ASP programs.
To handle preferences, large body of extensions of ASP has been presented, even
LPMLN can be viewed as such an extension. For example, “a is preferred to b”
can be trivially encoded by following LPMLN program M

                                      2 : a.                                  (1)
                                      1 : b.                                  (2)

Among these extensions, logic programs with ordered disjunction (LPOD) [3] is
a simple yet effective one by introducing ordered disjunction in the head of a
regular ASP rule (called an LPOD rule). An LPOD rule “a × b ← body” means that if
X and Y are stable models satisfying “body” such that a ∈ X, a ̸∈ Y and
b ∈ Y , then X is preferred to Y w.r.t. the rule. The other meaning behind the
LPOD rule is that if we believe “body” and “a” are true, then we are allowed, but
not required, to believe “b” is true in the same stable model, which makes
LPOD different from LPMLN . For example, the same knowledge “a is preferred
to b” can be encoded by the following LPOD program P

                                      a × b.                                  (3)

It is easy to check that X = {a} and Y = {b} are two (candidate) stable models
of P , but Z = {a, b} is not a stable model of P , and X is preferable to Y w.r.t.
P . By contrast, we can observe that all of X, Y and Z are stable models of the
above LPMLN program M , and the weight degree of Z w.r.t. M is higher than those of the
others. Obviously, M cannot capture the intuition of P , which motivates us to
present a new language o-LPMLN by combining LPMLN and LPOD.
    In this paper, we present a new language o-LPMLN , which can be regarded as
an extension of LPMLN and LPOD. Since there are several usable LPMLN solvers,
the extension is expected to be a powerful knowledge representation and reason-
ing tool for handling uncertainty, inconsistencies, and preferences. Firstly, we
define the syntax and semantics of the language o-LPMLN . Secondly, we present
an example to show the use of the language o-LPMLN . Thirdly, we present a mod-
ular and linear-time constructible translation from o-LPMLN programs to regu-
lar LPMLN programs, which provides an alternative approach to implementing
o-LPMLN by using existing LPMLN solvers [11, 17]. As a by-product, the trans-
lation can also be used to implement LPOD. Finally, we discuss related and
future work.

2 Preliminaries
In this section, we review the languages of LPMLN and LPOD. Note that through-
out this paper, we consider only finite ground logic programs, that is, logic pro-
grams containing no variables. We also assume the reader is familiar with the
basic notions of ASP.

2.1 LPMLN
An LPMLN program is a finite set of weighted rules w : r, where w is either a
symbol α denoting the “infinite weight” or a real number, and r is an ASP rule
of the form
                h1 ∨ ... ∨ hk ← b1 , ..., bm , not c1 , ..., not cn .            (4)
where hs, bs and cs are literals, ∨ is epistemic disjunction, and not is default
negation. For an ASP rule r of the form (4), the head of r is head(r) = {hi | 1 ≤
i ≤ k}, the positive body of r is body + (r) = {bi | 1 ≤ i ≤ m}, the negative body
of r is body − (r) = {ci | 1 ≤ i ≤ n}, and the set of literals occurring in r is lit(r) =
head(r) ∪ body + (r) ∪ body − (r). Therefore, an ASP rule r of the form (4) can also
be abbreviated as “head(r) ← body + (r), not body − (r)” or “head(r) ← body(r)”.
An LPMLN rule w : r is called soft if w is a real number, and hard if w is α.
By M s and M h we denote the sets of all soft rules and hard rules in an LPMLN
program M respectively. For an LPMLN program M , its unweighted ASP counterpart
is the set {r | w : r ∈ M } of its rules with the weights dropped. An LPMLN program is
called ground if its rules contain no variables. For a ground LPMLN program M ,
we use W (M ) to denote the weight degree of M , i.e. W (M ) = exp(Σ_{w:r∈M} w).
    A ground LPMLN rule w : r is satisfied by a consistent set X of ground
literals, denoted by X |= w : r, if X |= r by the notion of satisfiability in ASP.
An LPMLN program M is satisfied by X, denoted by X |= M , if X satisfies all
rules in M . By MX we denote the LPMLN reduct of an LPMLN program M w.r.t.
X, i.e. MX = {w : r ∈ M | X |= w : r}. X is a stable model of the program M
if X is a stable model of the unweighted ASP counterpart of MX and M h ⊆ MX . By SM (M ) we denote the
set of all stable models of an LPMLN program M . For an interpretation I, the
weight degree W (M, I) of I w.r.t. M is defined as
                                W (M, I) = W ((M s )I )                              (5)
and if I ∈ SM (M ), the probability degree P (M, I) of I w.r.t. M is defined as
                         P (M, I) = W (M, I) / Σ_{X′∈SM (M )} W (M, X′)              (6)
For a literal l, the probability degree P (M, l) of l w.r.t. M is defined as
                          P (M, l) = Σ_{l∈X, X∈SM (M )} P (M, X)                     (7)

Note that in its original definitions [12], a stable model is allowed to violate some
hard rules, which is slightly different from our definitions here but does not affect
the generality of the results obtained in the paper.
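
To make these definitions concrete, the following Python sketch (ours, not part of [12] or of any LPMLN solver) computes the weight degrees (5), the probability degrees (6), and the literal probabilities (7), under the assumption that the stable models of a program and the weights of the soft rules each of them satisfies have already been computed; the data layout (frozensets of literal names, float weights) is an illustrative choice.

    import math

    def weight_degree(satisfied_soft_weights):
        # W(M, I) = exp of the summed weights of the soft rules satisfied by I, cf. (5)
        return math.exp(sum(satisfied_soft_weights))

    def probability_degrees(stable_models):
        # stable_models maps each stable model (a frozenset of literals) to the
        # list of weights of the soft rules it satisfies; normalization follows (6)
        weights = {m: weight_degree(ws) for m, ws in stable_models.items()}
        z = sum(weights.values())
        return {m: w / z for m, w in weights.items()}

    def literal_probability(probabilities, literal):
        # P(M, l) sums the probabilities of the stable models containing l, cf. (7)
        return sum(p for m, p in probabilities.items() if literal in m)

    # The introduction's program M = {2 : a., 1 : b.}; for brevity we normalize only
    # over the three stable models X = {a}, Y = {b}, Z = {a, b} discussed there.
    models = {frozenset({"a"}): [2.0], frozenset({"b"}): [1.0],
              frozenset({"a", "b"}): [2.0, 1.0]}
    probs = probability_degrees(models)
    print(probs)                                # Z = {a, b} gets the largest share
    print(literal_probability(probs, "a"))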

2.2 LPOD
Following Lee and Yang [14], we distinguish regular rules and LPOD
rules in a logic program with ordered disjunction (LPOD). An LPOD P consists
of two parts: the regular part P r and the ordered disjunction part P o , where P r
is an ASP program consisting of rules of the form (4), and P o is a finite set of
LPOD rules r of the form (8),

                                h1 × ... × hn ← body(r).                                (8)

where the hi (n > 1) are pairwise distinct literals. By o(r) we denote the
number of literals occurring in the head of an LPOD rule r, i.e. o(r) = |head(r)|.
An LPOD rule r of the form (8) means that if body(r) is true, then for any positive integers
i < j, we prefer to believe hi rather than hj , and if we believe hi , we are not
required to believe hj .

Definition 1. For an LPOD rule r, its i-th option (1 ≤ i ≤ o(r)), denoted by
ri , is defined as
                    hi ← body(r), not h1 , ..., not hi−1 .               (9)

Definition 2. A split program of an LPOD P is obtained from P by replacing
each rule in P o by one of its options.

    Based on the above definitions, the semantics of an LPOD P is defined as
follows. A consistent set S of literals is a candidate stable model of P if it is
a stable model of a split program of P . By CSM (P ) we denote the set of all
candidate stable models of P . The satisfaction degree deg(I, r) of an LPOD rule
r w.r.t. an interpretation I is defined as
            deg(I, r) = 1 if I ̸|= body(r), and deg(I, r) = min{k | hk ∈ head(r) ∩ I} otherwise.   (10)

And the satisfaction degree deg(I, P ) of an LPOD P w.r.t. an interpretation I
is defined as the sum of satisfaction degrees of LPOD rules in P w.r.t. I, i.e.
deg(I, P ) = Σ_{r∈P o} deg(I, r). For a candidate stable model S of P , by S i (P ) we
denote the set of LPOD rules in P o that are satisfied by S at degree i. Based on
the notion of satisfaction degree, for two candidate stable models X and Y of P ,
Brewka [5] introduces four preference criteria:
 1. Cardinality-Preferred: X is cardinality-preferred to Y , denoted by X >c
    Y , if there is a positive integer i such that |X i (P )| > |Y i (P )|, and |X j (P )| =
    |Y j (P )| for all j < i;
 2. Inclusion-Preferred: X is inclusion-preferred to Y , denoted by X >i Y ,
    if there is a positive integer i such that Y i (P ) ⊂ X i (P ), and X j (P ) = Y j (P )
    for all j < i;
 3. Pareto-Preferred: X is pareto-preferred to Y , denoted by X >p Y , if
    there is a rule r ∈ P o such that deg(X, r) < deg(Y, r), and there does not
    exist a rule r ∈ P o such that deg(Y, r) < deg(X, r);
 4. Penalty-Sum-Preferred: X is penalty-sum-preferred to Y , denoted by
    X >ps Y , if deg(X, P ) < deg(Y, P ).
For each pr ∈ {c, i, p, ps}, a candidate stable model X of P is a pr-preferred
stable model if there is no candidate stable model Y of P such that Y >pr X.
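
To illustrate how the satisfaction degree (10) and the comparison criteria can be operationalized, here is a small Python sketch; it is our own reading of the definitions, the rule representation is a hypothetical one, and the candidate stable models are assumed to be given (e.g., obtained by enumerating split programs).

    class OrderedRule:
        # a ground LPOD rule h1 × ... × hn ← body, with the body split into
        # positive literals and default-negated literals
        def __init__(self, head, pos=(), neg=()):
            self.head, self.pos, self.neg = list(head), set(pos), set(neg)

        def body_holds(self, interp):
            return self.pos <= interp and not (self.neg & interp)

    def deg(interp, rule):
        # satisfaction degree of one LPOD rule, cf. (10); for a candidate stable
        # model at least one head literal is true whenever the body holds
        if not rule.body_holds(interp):
            return 1
        return min(k for k, h in enumerate(rule.head, start=1) if h in interp)

    def penalty_sum_preferred(x, y, rules):
        # X >ps Y: strictly smaller sum of satisfaction degrees
        return sum(deg(x, r) for r in rules) < sum(deg(y, r) for r in rules)

    def pareto_preferred(x, y, rules):
        # X >p Y: some rule is satisfied strictly better by X, and none by Y
        better = any(deg(x, r) < deg(y, r) for r in rules)
        worse = any(deg(y, r) < deg(x, r) for r in rules)
        return better and not worse

    # For the rule a × b and the candidate stable models {a} and {b}:
    rules = [OrderedRule(["a", "b"])]
    X, Y = frozenset({"a"}), frozenset({"b"})
    print(penalty_sum_preferred(X, Y, rules), pareto_preferred(X, Y, rules))  # True True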


3   o-LPMLN
To handle preferences in LPMLN naturally, we extend LPMLN by introducing
ordered disjunctions in this section; the resulting language is called o-LPMLN .
     Syntactically, an o-LPMLN program M consists of two parts: the regular part
M r and the ordered disjunction part M o , where M r is a regular LPMLN program,
and M o consists of rules of the form (8) preceded by “α”, called weighted ordered
disjunctive rules (wo-rules for simplicity). A wo-rule is a hard rule, because
we treat the preferences as some conditions that must be satisfied in solving
problems. As a consequence, the preference criteria are used prior to certainty
degrees in computing inference results of an o-LPMLN program, which will be
shown later.
     Similar to LPOD, we have the following definitions.
Definition 3. For a wo-rule α : r, its i-th option (1 ≤ i ≤ o(r)) is defined as
α : ri , where ri is the i-th option of rule r.

Definition 4. For an o-LPMLN program M , a split program of M is obtained
by replacing each rule in M o by one of its options.

    Semantically, a consistent set X of literals is a candidate stable model of M
if X is a stable model of a split program of M . By CSM (M ) we denote the set
of all candidate stable models of M . The satisfaction degree deg(I, α : r) of a
wo-rule α : r w.r.t. an interpretation I and the satisfaction degree deg(I, M ) of
an o-LPMLN program M w.r.t. I are defined the same as in LPOD. Therefore,
the preferred stable models of an o-LPMLN program M are defined the same
as in LPOD, that is, for each pr ∈ {c, i, p, ps}, a candidate stable model X of
M is a pr-preferred stable model if there is no candidate stable model Y of M
such that Y >pr X. By pr-SM (M ) we denote the set of all pr-preferred stable
models of M under a preference criterion pr ∈ {c, i, p, ps}.
    In addition, the weight degree W (M, X) of a candidate stable model X w.r.t.
an o-LPMLN program M is defined the same as in LPMLN , i.e. W (M, X) =
W ((M s )X ). The probability degree Ppr (M, X) of a pr-preferred stable model
X w.r.t. an o-LPMLN program M is defined as follows
                      Ppr (M, X) = W (M, X) / Σ_{Y ∈pr-SM (M )} W (M, Y )       (11)

where pr ∈ {c, i, p, ps}. Here, we do not define the probability degree for arbitrary
candidate stable models for two main reasons. Firstly, preferred stable models
are used to model intended solutions of a problem, while non-preferred stable
6      B. Wang et al.

models are usually discarded. Hence, considering the probability of all candidate
stable models is not useful practically. Secondly, many candidate stable models
may satisfy the same subprogram, which means the probability defined for these
stable models cannot measure the “importance” of a stable model appropriately.
For the same reasons, we define the probability degree Ppr (M, l) of a literal l
w.r.t. an o-LPMLN program M as follows
                     Ppr (M, l) = Σ_{l∈X, X∈pr-SM (M )} Ppr (M, X)              (12)

    Following the above definitions, the steps of computing the inference results
of an o-LPMLN program M are as follows
1. compute all candidate stable models of M ;
2. find all preferred stable models of M ;
3. compute weight and probability degrees of all preferred stable models;
4. compute the probability degrees of literals w.r.t. M .
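The last three steps can be sketched generically in Python (our own sketch, not part of the paper's tooling); the comparison function strictly_better stands for one of the four criteria, e.g. the penalty-sum comparison from the sketch in Section 2.2, and weight degrees are assumed to be computed as in LPMLN.

    def preferred_stable_models(candidates, strictly_better):
        # step 2: keep a candidate X if no candidate Y is strictly preferred to X
        return [x for x in candidates
                if not any(strictly_better(y, x) for y in candidates if y is not x)]

    def probability_over_preferred(preferred, weight_degree):
        # steps 3-4: normalize weight degrees over the preferred stable models
        # only, cf. (11); literal probabilities then follow (12)
        z = sum(weight_degree(x) for x in preferred)
        return {x: weight_degree(x) / z for x in preferred}

    def literal_probability(prob, literal):
        # Ppr(M, l), cf. (12)
        return sum(p for x, p in prob.items() if literal in x)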
   Now we use an example to show the use of o-LPMLN in handling uncertain
and inconsistent knowledge with preferences.
Example 1. This example is adapted from Example 1 in [8], in which we use weights
to measure the possibility of knowledge. A personal assistant agent has the following
knowledge (we assume the weight degree of each piece of soft knowledge ranges from 1 to 5).
1. If I go to the cinema and it rains, then I will be not wet;
2. If I go to the beach and it rains, then I will be wet;
3. If I go to the cinema and there is sultriness, then I will be fresh;
4. If I go to the beach and there is sultriness, then I will be not fresh;
5. It is almost-certain that if it is humid, there will be sultriness (weight degree:
   4);
6. It is quasi-certain that if it is cloudy, then it will rain (weight degree: 2);
7. The weather forecast says that today is a humid day, cloudy with weight
   degree 4, not cloudy with weight degree 1;
8. My preferences today are that going to the beach is more important than being
   fresh, and being fresh is more important than being not wet.
The question is: where should I go today? The above eight sentences can be
encoded in the following o-LPMLN program M , in which w stands for wet, r for
rain, h for humid, c for cloudy, go_c for going to the cinema, go_b for going to the
beach, s for sultriness, and f for being fresh.
                                α : ¬w ← go_c, r.                               (r1)
                                α : w ← go_b, r.                                (r2)
                                α : f ← go_c, s.                                (r3)
                                α : ¬f ← go_b, s.                               (r4)
                                4 : s ← h.                                      (r5)
                                2 : r ← c.                                      (r6)
                                α : h.                                          (r7)

                                4 : c.                                         (r8)
                                1 : ¬c.                                        (r9)
                                α : go_b × f × ¬w.                           (r10)
                                α : go_b ∨ go_c.                             (r11)
                                α :← go_b, go_c.                             (r12)

Each of rules (r1) - (r10) in M expresses some knowledge in the above sentences,
where (r5) - (r9) handle uncertain and inconsistent knowledge in a regular
LPMLN style, and (r10) handles preferences in an LPOD style. Rules (r11) and
(r12) mean “I have to go to the cinema or the beach”. All candidate stable models of M
are shown in Table 1. From the table, it is easy to check that the candidate stable
models containing go_b (denoted by SMb ) are preferred to those containing f
and ¬w (denoted by SMf and SMnw respectively), i.e. for each pr ∈ {c, i, p, ps},
SMb >pr SMf , SMb >pr SMnw , and SMf >pr SMnw . Although both of candi-
date stable models X = {r, s, c, f, h, ¬w, go_c} and Y = {r, s, c, w, h, ¬f, go_b}
are the most probable stable models, we choose to go to the beach, which shows
the effect of preferences in decision making. If we arrange our trip based on the
preferred stable models, the probability degree that we get wet is greater than 0.83.
Note that the exact probability degree is unknown, because we are not sure whether it
rains in some cases, which shows the ability of LPMLN in reasoning over uncertain
knowledge.


              Table 1. Candidate Stable Models of M in Example 1

                          CSM            W (M, ∗) deg(∗, r10) preferred under
                {r, c, h, ¬w, go_c}        e6        3
                  {c, h, ¬w, go_c}         e4        3
                {¬c, h, ¬w, go_c}          e3        3
                   {h, ¬w, go_c}           e2        3
              {r, s, c, f, h, ¬w, go_c}   e10        2
                  {s, c, f, h, go_c}       e8        2
                {s, f, ¬c, h, go_c}        e7        2
                   {s, f, h, go_c}         e6        2
               {r, c, f, h, ¬w, go_c}      e6        2
                   {c, f, h, go_c}         e4        2
                  {f, ¬c, h, go_c}         e3        2
                     {f, h, go_c}          e2        2
              {r, s, c, w, h, ¬f, go_b}   e10        1         c, i, p, ps
                {s, c, h, ¬f, go_b}        e8        1         c, i, p, ps
               {s, ¬c, h, ¬f, go_b}        e7        1         c, i, p, ps
                  {s, h, ¬f, go_b}         e6        1         c, i, p, ps
                 {r, c, w, h, go_b}        e6        1         c, i, p, ps
                     {c, h, go_b}          e4        1         c, i, p, ps
                    {¬c, h, go_b}          e3        1         c, i, p, ps
                      {h, go_b}            e2        1         c, i, p, ps
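
To make the figure of 0.83 concrete, the following short Python computation (ours) recomputes the probability of w from the weight degrees of the eight preferred candidate stable models in Table 1, two of which contain w.

    import math

    # (exponent of the weight degree, does the model contain w?) for the
    # preferred candidate stable models of Table 1 (those containing go_b)
    preferred = [(10, True), (8, False), (7, False), (6, False),
                 (6, True), (4, False), (3, False), (2, False)]
    z = sum(math.exp(e) for e, _ in preferred)
    p_wet = sum(math.exp(e) for e, wet in preferred if wet) / z
    print(round(p_wet, 3))   # 0.831, i.e. greater than 0.83 as stated above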

4 Computing o-LPMLN Programs
In this section, we present a translation from o-LPMLN programs to LPMLN
programs, which provides an alternative approach to implementing o-LPMLN .
   In the previous section, we rewrote the LPOD rule “a × b.” in LPMLN as follows

                                         2 : a.                                (13)
                                         1 : b.                                (14)

We showed that the above LPMLN program fails to capture the meaning of the
original LPOD rule. The gap between the above LPOD and LPMLN rules is caused
by the following property of LPOD rules, called the satisfaction chain property.
Proposition 1. Let r be an LPOD rule in an LPOD P . If X is a candidate
stable model of P , then X |= rk for any deg(X, r) ≤ k ≤ o(r), where rk is the
k-th option of r.
Obviously, our LPMLN encoding of an LPOD rule cannot express the satisfaction
chain property. Therefore, to compute LPOD rules in LPMLN , we need to encode
not only the preferences but also the satisfaction chain property expressed by the
rule, which is the intuition behind the translation method presented in this section.

Definition 5. Given an o-LPMLN program M , its LPMLN counterpart τ (M )
consists of three parts, i.e. τ (M ) = M r ∪ τ1 (M o ) ∪ τ2 (M o ), where M r is the
regular part of M , and the other two parts of τ (M ) are defined as follows
    – τ1 (M o ) = {α : sh(r) | α : r ∈ M o }, where sh(r) is a complete shift of r
      defined as follows

                            ← body(r), not h1 , ..., not ho(r) .               (15)

    – τ2 (M o ) = {−1 : rk | α : r ∈ M o , and 1 ≤ k ≤ o(r)}.
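
The following Python sketch implements Definition 5 for a single wo-rule (together with Definition 1). It is only illustrative: the ASP-like textual syntax (with “:-” and strong negation written as a leading “-”) is our own choice and not the input format of any particular LPMLN solver.

    def option(head, body, k):
        # the k-th option of an LPOD rule (Definition 1): h_k <- body, not h_1, ..., not h_{k-1}
        extra = ", ".join("not " + h for h in head[:k - 1])
        full_body = ", ".join(part for part in (body, extra) if part)
        return head[k - 1] + (" :- " + full_body if full_body else "") + "."

    def translate_wo_rule(head, body=""):
        # tau_1: the complete shift  alpha : <- body, not h_1, ..., not h_{o(r)}
        nots = ", ".join("not " + h for h in head)
        constraint_body = ", ".join(part for part in (body, nots) if part)
        tau1 = ("alpha", ":- " + constraint_body + ".")
        # tau_2: every option r_k, each with weight -1
        tau2 = [(-1.0, option(head, body, k)) for k in range(1, len(head) + 1)]
        return tau1, tau2

    # Example: the wo-rule alpha : go_b × f × ¬w.  (rule (r10) of Example 1)
    tau1, tau2 = translate_wo_rule(["go_b", "f", "-w"])
    print(tau1)
    for w, r in tau2:
        print(w, r)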

   Definition 5 presents a linear and modular translation from o-LPMLN to
LPMLN , which uses the idea of split programs to express the satisfaction chain
property of o-LPMLN rules. Lemma 1 shows that an o-LPMLN program and its
LPMLN counterpart coincide on their (candidate) stable models.

Lemma 1. Given an o-LPMLN program M and its LPMLN counterpart τ (M )
defined in Definition 5, a consistent set X of literals is a candidate stable model
of M iff it is a stable model of τ (M ).

Proof. The proof is divided into two parts: in the first part, we show that if
X ∈ CSM (M ) then X ∈ SM (τ (M )); in the second part, we show that if
X ∈ SM (τ (M )) then X ∈ CSM (M ). For an o-LPMLN program M , let M ∗ be a
split program of M . For each wo-rule α : r ∈ M o , let α : r∗ be the corresponding
LPMLN rule in M ∗ , and α : sh(r) be the corresponding rule in τ1 (M o ).
    Part 1: For the split program M ∗ , without loss of generality, suppose X is a
stable model of M ∗ . By the definition, for each wo-rule α : r ∈ M o , we have X |=
head(r∗ ) or X ̸|= body(r∗ ), which means X ̸|= body(sh(r)), i.e. X |= α : sh(r).
Therefore, we can infer that X |= τ1 (M o ), which means τ1 (M o ) ⊆ τ (M )X . By
the definition of split program, we have (M r )h ⊆ (M ∗ )X ⊆ τ (M )X . Assume there
is a proper subset X ′ of X such that X ′ |= τ (M )X ; by the above results, we have
X ′ |= (M ∗ )X , which contradicts the premise that X is a stable model of M ∗ .
Hence, X is a stable model of τ (M ).
    Part 2: Suppose X is a stable model of τ (M ). By the definition, it is obvious
that τ (M )X = (M r )X ∪ τ1 (M o ) ∪ {−1 : rk ∈ τ2 (M o ) | deg(X, α : r) ≤ k ≤ o(r)}.
Let M (X) be a split program of M w.r.t. X, constructed as follows:

 – add all rules in M r to M (X);
 – for each o-LPMLN rule α : r ∈ M o , add α : rk to M (X), where k = deg(X, α :
   r).

It is clear that M (X) is a split program of M and X |= M (X). Assume there is
a proper subset X ′ of X such that X ′ |= M (X). From the proof in Part 1, we
have X ′ |= (M r )X ∪ τ1 (M o ). By Proposition 1, it is clear that X ′ |= τ (M )X , which
contradicts X ∈ SM (τ (M )). Hence, we can infer that X ∈ CSM (M ).
    Combining the above results, we have shown that CSM (M ) = SM (τ (M )),
which proves Lemma 1.

    Now we investigate the preference criteria in the context of LPMLN . Firstly,
we introduce some notation. For an o-LPMLN program M and its LPMLN coun-
terpart τ (M ), the weight degree W (τ (M ), X, α : r) of X w.r.t. a wo-rule α : r ∈ M o
is defined as
                      W (τ (M ), X, α : r) = W (τ2 ({α : r})X )                 (16)
According to the satisfaction chain property for LPOD rules, we can obtain the
relationship between the satisfaction degree and the weight degree defined in
o-LPMLN , which is shown in Proposition 2.

Proposition 2. Given an o-LPMLN program M and its LPMLN counterpart
τ (M ), for a candidate stable model X ∈ CSM (M ) and a wo-rule α : r ∈ M o ,
the satisfaction degree deg(X, α : r) of α : r w.r.t. X can be derived from the
weight degree W (τ (M ), X, α : r) of X w.r.t. α : r, i.e.

                deg(X, α : r) = o(r) + 1 + ln(W (τ (M ), X, α : r))               (17)
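
The relationship in (17) can be checked numerically. The small Python sketch below (ours) follows the underlying argument: by Proposition 1, exactly the options rk with deg(X, α : r) ≤ k ≤ o(r) are satisfied, each contributing weight −1, so W (τ (M ), X, α : r) = exp(deg(X, α : r) − o(r) − 1).

    import math

    def weight_from_degree(o_r, degree):
        # the options degree..o(r) are satisfied, each with weight -1 (Proposition 1)
        return math.exp(-(o_r - degree + 1))

    def degree_from_weight(o_r, w):
        # equation (17): deg(X, alpha : r) = o(r) + 1 + ln W(tau(M), X, alpha : r)
        return round(o_r + 1 + math.log(w))

    # sanity check for a rule with three head literals, such as (r10) in Example 1
    for d in (1, 2, 3):
        assert degree_from_weight(3, weight_from_degree(3, d)) == d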

   Based on the results in Proposition 2, it is clear that the preferred stable
models of an o-LPMLN program M can be computed from its LPMLN counterpart,
which is shown in Theorem 1.

Theorem 1. Given an o-LPMLN program M and its LPMLN counterpart τ (M ),
all computing results of M can be derived from the computing results of τ (M ),
that is,

 – CSM (M ) = SM (τ (M ));
 – for each candidate stable model X, W (M, X) = W (M r , X);

 – for each pr ∈ {c, i, p, ps}, pr-preferred stable models of M can be derived
   from the computing results of τ (M ).
    Theorem 1 directly implies an approach to computing o-LPMLN programs by translat-
ing them into LPMLN programs and using existing LPMLN solvers, such as LPMLN2ASP
[11], LPMLN-Models [17], etc. Among the different preference criteria, the penalty-
sum criterion is especially closely related to the weight of a stable model defined in LPMLN .
Therefore, we have a direct corollary of Theorem 1, which is shown in Corollary
1.
Corollary 1. Given an o-LPMLN program M and its LPMLN counterpart τ (M ),
a consistent set X of literals is a ps-preferred stable model of M iff X is a stable
model of τ (M ), and there does not exist a stable model Y ∈ SM (τ (M )) such
that W (τ2 (M o ), Y ) < W (τ2 (M o ), X).
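In code, Corollary 1 amounts to a one-pass selection over the stable models of τ (M ); the sketch below is ours and assumes that W (τ2 (M o ), ·) has already been computed for each stable model.

    def ps_preferred(stable_models, tau2_weight):
        # keep exactly the stable models of tau(M) whose weight w.r.t. tau_2(M^o)
        # is minimal (Corollary 1)
        best = min(tau2_weight(x) for x in stable_models)
        return [x for x in stable_models if tau2_weight(x) == best]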
   So far, we have shown an approach to computing o-LPMLN programs by
using existing LPMLN solvers. It is easy to observe that the approach
presented here can also be used to compute LPODs, since an LPOD can be
viewed as an o-LPMLN program without soft rules. It is worth noting that al-
though LPOD rules can be encoded in LPMLN , our extension of LPMLN cannot
be seen merely as syntactic sugar. On the one hand, ordered disjunction pro-
vides a way to express preferences in LPMLN , which is essentially different from uncertain
and inconsistent knowledge. On the other hand, o-LPMLN introduces
new criteria to evaluate stable models of an LPMLN program, which enriches the
expressivity of LPMLN .
   In addition, the LPMLN counterpart of an o-LPMLN program is linear-time
constructible, which means the computational complexity of o-LPMLN is the
same as that of LPMLN ; that is, our extension of LPMLN does not increase the
computational complexity.

5 Related Work
Concerning related work, LPOD is an early approach to representing pref-
erences in ASP, and several formalisms have been presented after it, such as
Answer Set Optimization [4], the meta-programming technique [9], etc. All these
formalisms could serve as a foundation for handling preferences in LPMLN .
    LPPOD [7, 8] is another extension of ASP that handles uncertainty and
preferences in a unified framework by combining LPOD and possibilistic ASP
(PASP) [15]. But PASP only handles qualitative uncertainty, while LPMLN can
handle both qualitative and quantitative uncertainty. Therefore, o-LPMLN can
handle qualitative as well as quantitative uncertainty.
    In addition, Lee and Yang [14] present an “almost” modular translation from
LPOD to ASP by introducing some auxiliary atoms, while our translation from
LPOD to LPMLN does not introduce any new atoms and is completely mod-
ular and linear-time constructible. All other implementations of LPOD make
iterative calls to ASP solvers to find preferred stable models [2, 6], while our
translation only needs to call an LPMLN solver once.

6 Conclusion and Future Work
In this paper, we present an alternative knowledge representation and reason-
ing tool for handling inconsistencies, uncertainty, and preferences in a unified
framework, which is an extension of LPMLN by introducing ordered disjunc-
tions, called o-LPMLN . Our contributions are as follows. Firstly, we present the
syntax and semantics of the language o-LPMLN . Secondly, we present a translation
from o-LPMLN to regular LPMLN programs. Using existing LPMLN solvers, the
translation provides a method to implement o-LPMLN . As a by-product, the
translation also provides a one-shot approach to implementing LPOD.
    For the future, we plan to further study the combination of LPMLN and
LPOD. For example, in this paper, we treat LPOD rules as hard rules in LPMLN ,
but this restriction should be relaxed in some cases, i.e. the preferences should
be allowed to be violated, which will be discussed in future work. More-
over, we plan to develop an efficient o-LPMLN solver, and use o-LPMLN in more
practical scenarios.


7 Acknowledgments
We are grateful to the anonymous referees for their useful comments. This work
is supported by the National Key Research and Development Plan of China
(Grant No.2017YFB1002801).


References
 1. Balai, E., Gelfond, M.: On the Relationship between P-log and LPMLN . In: Kamb-
    hampati, S. (ed.) Proceedings of the 25th International Joint Conference on Arti-
    ficial Intelligence. pp. 915–921 (2016)
 2. Brewka, G., Niemelä, I., Syrjänen, T.: Implementing ordered disjunction using
    answer set solvers for normal programs. In: Proceedings of the 8th European Con-
    ference On Logics In Artificial Intelligence. pp. 444–456. Cosenza, Italy (2002).
    https://doi.org/10.1007/3-540-45757-7-37
 3. Brewka, G.: Logic Programming with Ordered Disjunction. In: Proceedings of the
    9th International Workshop on Non-Monotonic Reasoning. pp. 100–105 (2002)
 4. Brewka, G.: Answer Sets: From Constraint Programming Towards Qualitative Op-
    timization. In: Proceedings of the 7th Logic Programming and Nonmonotonic Rea-
    soning. vol. 2923, pp. 34–46. Fort Lauderdale, USA (2004)
 5. Brewka, G.: Preferences in Answer Set Programming. In: Proceedings of the 11th
    Conference of the Spanish Association for Artificial Intelligence on Current Topics
    in Artificial Intelligence. pp. 1–10 (2005)
 6. Brewka, G., Delgrande, J.P., Romero, J.: asprin: Customizing Answer Set Pref-
    erences without a Headache. In: Proceedings of the 29th AAAI Conference on
    Artificial Intelligence. pp. 1467–1474 (2015)
 7. Confalonieri, R., Nieves, J.C., Osorio, M., Vázquez-Salceda, J.: Dealing
    with explicit preferences and uncertainty in answer set programming. An-
    nals of Mathematics and Artificial Intelligence 65(2-3), 159–198 (2012).
    https://doi.org/10.1007/s10472-012-9311-0

 8. Confalonieri, R., Prade, H.: Using possibilistic logic for modeling qualitative deci-
    sion: Answer Set Programming algorithms. International Journal of Approximate
    Reasoning 55(2), 711–738 (2014). https://doi.org/10.1016/j.ijar.2013.11.002
 9. Gebser, M., Kaminski, R., Schaub, T.: Complex optimization in answer set pro-
    gramming. Theory and Practice of Logic Programming 11(4-5), 821–839 (2011).
    https://doi.org/10.1017/S1471068411000329
10. Gelfond, M., Lifschitz, V.: The Stable Model Semantics for Logic Programming.
    In: Kowalski, R.A., Bowen, K.A. (eds.) Proceedings of the Fifth International Con-
    ference and Symposium on Logic Programming. pp. 1070–1080. MIT Press (1988)
11. Lee, J., Talsania, S., Wang, Y.: Computing LPMLN using ASP and MLN
    solvers. Theory and Practice of Logic Programming 17(5-6), 942–960 (sep 2017).
    https://doi.org/10.1017/S1471068417000400
12. Lee, J., Wang, Y.: Weighted Rules under the Stable Model Semantics. In: Baral,
    C., Delgrande, J.P., Wolter, F. (eds.) Proceedings of the Fifteenth International
    Conference on Principles of Knowledge Representation and Reasoning. pp. 145–
    154. AAAI Press (2016)
13. Lee, J., Yang, Z.: LPMLN , Weak Constraints, and P-log. In: Singh, S.P., Markovitch,
    S. (eds.) Proceedings of the Thirty-First AAAI Conference on Artificial Intelli-
    gence. pp. 1170–1177. AAAI Press (2017)
14. Lee, J., Yang, Z.: Translating LPOD and CR-Prolog2 into Standard Answer Set
    Programs (2018)
15. Nicolas, P., Garcia, L., Stéphan, I., Lefèvre, C.: Possibilistic uncertainty handling
    for answer set programming. Annals of Mathematics and Artificial Intelligence
    47(1-2), 139–181 (2006). https://doi.org/10.1007/s10472-006-9029-y
16. Richardson, M., Domingos, P.M.: Markov Logic Networks. Machine learning 62(1-
    2), 107–136 (2006). https://doi.org/10.1007/s10994-006-5833-1
17. Wang, B., Zhang, Z.: A Parallel LPMLN Solver: Primary Report. In: Bogaerts, B.,
    Harrison, A. (eds.) Proceedings of the 10th Workshop on Answer Set Programming
    and Other Computing Paradigms. pp. 1–14. CEUR-WS, Espoo, Finland (2017)
18. Wang, B., Zhang, Z., Xu, H., Shen, J.: Splitting an LPMLN Program. In: Proceed-
    ings of the Thirty-Second AAAI Conference on Artificial Intelligence. pp. 1997–
    2004 (2018)