<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Aleatoric Description Logic for Probabilistic Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Tim French</string-name>
          <email>tim.french@uwa.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Thomas Smoker</string-name>
          <email>thomas.smoker@research.uwa.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The University of Western Australia</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Description logics are a powerful tool for describing ontological knowledge bases. That is, they give a factual account of the world in terms of individuals, concepts and relations. In the presence of uncertainty, such factual accounts are not feasible, and a subjective or epistemic approach is required. Aleatoric description logic models uncertainty in the world as aleatoric events, by the roll of the dice, where an agent has subjective beliefs about the bias of these dice. This provides a subjective Bayesian description logic, where propositions and relations are assigned probabilities according to what a rational agent would bet, given a configuration of possible individuals and dice. Aleatoric description logic is shown to generalise the description logic ALC, and can be seen to describe a probability space of interpretations of a restriction of ALC where all roles are functions. Several computational problems are considered and aleatoric description logic is shown to be able to model learning, via Bayesian conditioning.</p>
      </abstract>
      <kwd-group>
        <kwd>Probabilistic Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <sec id="sec-1-1">
        <title>Belief Representation</title>
      </sec>
      <sec id="sec-1-2">
        <title>Learning Agents</title>
        <p>
          bet that the concept holds true, along the lines of the Dutch book argument
of Ramsey [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] and de Finetti [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The fundamental assumption of this work is
that an agent models the world aleatorically, where events correspond to the roll
of dice, and the bias of the dice is treated epistemically. That is, the agent has
prior assumptions about the bias of the dice, and may refine these assumptions
through observing the world.
        </p>
        <p>
          Aleatoric description logic aims to model reasoning in uncertain and
subjective knowledge settings [
          <xref ref-type="bibr" rid="ref10 ref12">10, 12</xref>
          ]. However, aleatoric description logic takes an
approach where the probabilistic and logical aspects of the knowledge base are
completely unified, in contrast to several other approaches where these are
independent facets of the knowledge base [
          <xref ref-type="bibr" rid="ref17 ref23 ref4">4, 17, 23</xref>
          ]. Therefore all concepts and roles
are represented by “dice rolls” corresponding to an agent’s beliefs on the likely
configuration of the world.
        </p>
        <p>
          An advantage of this “probability first” approach is that aleatoric
description logic is naturally able to model learning via Bayesian conditioning over
complex observations (i.e. logical formulas). Aleatoric modal logic is introduced
in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], where the semantics are presented along with a proof theoretic calculus.
This paper extends that syntax and semantics to an aleatoric description logic,
following the correspondence between description logics and modal logics [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Aleatoric Description Logic</title>
      <p>
        This section presents the core syntax and semantics for Aleatoric Description
Logic (ADL). This is a generalisation of standard description logics, such as
ALC, in the same sense that complex arithmetic is a generalisation of
real-valued arithmetic: the true-false/0-1 values of description logics are extended to
the closed interval [0, 1].
      </p>
      <p>The syntax of ADL varies from that of ALC in a number of ways: there is
a ternary operator, if-then-else, in place of the typical Boolean operators, and a
marginalisation operator in place of the normal role quantifiers. These operators
add expressivity, but also better capture the aleatoric intuitions of the logic.</p>
      <p>The syntax for ADL is specified with respect to a set of atomic concepts X,
and a set of roles R:</p>
      <p>α ::= ⊤ | ⊥ | A | (α?α:α) | [ρ] (α | α)
where A ∈ X is an atomic concept, and ρ ∈ R is a role. Let the set of ADL
formulas generated by this syntax be LADL. This syntax uses non-standard
operators and the following terminology is used: ⊤ is always; ⊥ is never; A is some
named concept that may hold for an individual; (α?β:γ) is if α then β, else γ;
and [ρ] (α | β) is ρ is α given β.</p>
      <p>We also identify a special role id ∈ R referred to as identity, which essentially
refers to different possibilities for the one individual, and write (α | β) in place
of [id] (α | β).</p>
      <p>In these semantics everything is interpreted as a probability dependent only
on the individual: ⊤ always has probability 1.0 and ⊥ always has probability
0.0; an atomic concept A has some probability that is dependent only on the
current individual; (α?β:γ) has the probability of β given α or γ given not α;
and [ρ] (α | β) is the probability of α given β over the set of individuals in the
probability distribution corresponding to ρ.
</p>
      <sec id="sec-2-1">
        <title>Probabilistic Semantics</title>
        <p>
          A formula of aleatoric description logic is interpreted with respect an aleatoric
belief model, that is based on the probability model of [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and the probability
structures defined in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Given a countable set S, we use the notation PD(S)
to denote the set of probability distributions over S, where µ ∈ PD(S) implies:
µ : S −→ [0, 1] and Σs∈S µ(s) = 1.
        </p>
        <p>
          Definition 1. Given a set of atomic concepts X, and a set of roles R, an
aleatoric belief model is specified by the tuple B = (I, r, ℓ), where:
– I is a set of possible individuals.
– r : R × I −→ PD(I) assigns for each role ρ ∈ R and each individual i ∈ I, a
probability distribution r(ρ, i) over I. We will typically write ρ(i, j) in place
of r(ρ, i)(j).
– For the role id, we include the additional constraint: for all i, j, k ∈ I,
id(i, j) &gt; 0 implies id(j, k) = id(i, k).
– ℓ : I × X −→ [0, 1] gives the likelihood ℓ(i, C) of an individual i satisfying
an atomic concept C. We will write C(i) in place of ℓ(i, C).
        </p>
        <p>Given some i ∈ I, we refer to Bi as a pointed aleatoric belief model.</p>
        <p>Definition 2. Given an aleatoric belief model B = (I, r, ℓ), some i ∈ I, and
some α ∈ LADL, we specify the probability B assigns to i satisfying α, Bi(α),
recursively. We use the abbreviation Eiρ α = Σj∈I ρ(i, j)Bj(α), where ρ ∈ R. Then:
Bi(⊤) = 1
Bi(⊥) = 0
Bi(C) = C(i)
Bi((α?β:γ)) = Bi(α) · Bi(β) + (1 − Bi(α)) · Bi(γ)
Bi([ρ] (α | β)) = (Σj∈I ρ(i, j)Bj(α)Bj(β)) / Eiρ β, if Eiρ β &gt; 0
Bi([ρ] (α | β)) = 1, if Eiρ β = 0</p>
        <p>In these semantics each proposition can be seen as an event (or a series of
conditional events), and the interpretation describes the probability of that event
being observed.</p>
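        <p>A minimal sketch of Definition 2 as a recursive evaluator, in Python. The tuple encoding of formulas and all names (evaluate, roles, likelihood) are illustrative, not from the paper; the model at the end is a hypothetical two-individual example.</p>

```python
# Formulas as nested tuples (an illustrative encoding, not the paper's):
#   ("top",) / ("bot",)      -- always / never
#   ("atom", "A")            -- atomic concept A
#   ("ite", a, b, c)         -- (a ? b : c)
#   ("given", "rho", a, b)   -- [rho](a | b)

def evaluate(model, i, alpha):
    """Return B_i(alpha) for a finite pointed aleatoric belief model."""
    individuals, roles, likelihood = model
    kind = alpha[0]
    if kind == "top":
        return 1.0
    if kind == "bot":
        return 0.0
    if kind == "atom":
        return likelihood[i][alpha[1]]
    if kind == "ite":  # B_i(a)B_i(b) + (1 - B_i(a))B_i(c)
        _, a, b, c = alpha
        pa = evaluate(model, i, a)
        return pa * evaluate(model, i, b) + (1 - pa) * evaluate(model, i, c)
    if kind == "given":
        _, rho, a, b = alpha
        dist = roles[rho][i]                 # rho(i, .) as a dict j -> weight
        e_b = sum(p * evaluate(model, j, b) for j, p in dist.items())
        if e_b == 0:
            return 1.0                       # vacuous case of Definition 2
        num = sum(p * evaluate(model, j, a) * evaluate(model, j, b)
                  for j, p in dist.items())
        return num / e_b
    raise ValueError(f"unknown formula: {alpha!r}")

# Hypothetical model: two possible individuals, one satisfying A for sure.
model = (
    ["i", "j"],
    {"r": {"i": {"i": 0.5, "j": 0.5}, "j": {"j": 1.0}}},
    {"i": {"A": 1.0}, "j": {"A": 0.0}},
)
p = evaluate(model, "i", ("given", "r", ("atom", "A"), ("top",)))  # 0.5
```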
        <p>For example, the following proposition describes the concept of someone
having been exposed to a virus, given they were in contact with someone who had
a fever
exp = (virus?⊤:[contact] (infectious | fever))  (1)
“The person was either already (asymptomatically) infected, or some person
selected from the population of contacts who have a fever, was infectious”.</p>
        <p>To evaluate this, the concept virus is sampled to see if the person already has
the virus (there may be a 1% chance). In the cases where they did not already
have the virus, random individuals are sampled from the population of contacts,
and the concept fever is sampled from these individuals until a febrile contact is
selected. The probability associated with the proposition exp is the probability of
this process selecting an individual for whom infectious is sampled.</p>
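        <p>The sampling process just described can be simulated directly. A Monte Carlo sketch, in which the two-person contact population and all probabilities other than the 1% virus figure are made up for illustration:</p>

```python
import random

# exp = (virus ? T : [contact](infectious | fever)), read as a sampling process.
def sample_exp(rng, p_virus, contacts):
    """contacts: list of (weight, p_fever, p_infectious) triples."""
    if rng.random() < p_virus:            # already (asymptomatically) infected
        return True
    weights = [w for w, _, _ in contacts]
    while True:                           # resample until a febrile contact
        _, p_fever, p_inf = rng.choices(contacts, weights=weights)[0]
        if rng.random() < p_fever:        # this contact has a fever
            return rng.random() < p_inf   # was that contact infectious?

rng = random.Random(0)
contacts = [(0.5, 0.9, 0.8), (0.5, 0.1, 0.1)]   # hypothetical population
est = sum(sample_exp(rng, 0.01, contacts) for _ in range(20000)) / 20000
# For these numbers the exact value is 0.01 + 0.99 * (0.365 / 0.5) = 0.7327.
```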
        <p>An important property of these semantics is the weak independence
assumption: All formulas of ADL are contingent only on the individual at which they
are evaluated. This means that two formulas evaluated at the same individual
may be viewed as independent probabilistic events.</p>
        <p>That is, the probability of a coin landing heads twice in a row is independent
of the probability of the same coin landing heads once. Both events are contingent
on the bias of the coin, so in universes where the coin is more likely to land heads,
both events are more likely, but the events are conditionally independent given
the universe (or the possible individual). This simplifies the representation of
complex dependencies, as the joint probability of all events can be constrained
to be a probability distribution over individuals, where all events are conditionally
independent given an individual.</p>
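        <p>The coin example can be put in numbers. With an illustrative bias distribution over two equally likely possible individuals (universes), one head has probability 0.5, but two heads has probability 0.41 rather than 0.25, because the two samples are independent only conditional on the universe:</p>

```python
# (weight, P(heads) in that universe); the biases 0.9 and 0.1 are illustrative.
universes = [(0.5, 0.9), (0.5, 0.1)]

p_one_head = sum(w * p for w, p in universes)        # 0.5
p_two_heads = sum(w * p * p for w, p in universes)   # 0.41, not 0.5 ** 2
```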
      </sec>
      <sec id="sec-2-2">
        <title>Aleatoric Knowledge Bases</title>
        <p>An aleatoric knowledge base is defined over the same signature of atomic
concepts X and roles R, including id. Additionally there is a set of named
individuals, N, which may be thought of as special concepts for grounding assertions
and framing queries. In line with the epistemic nature of these knowledge bases
each named individual can be any one of a number of possible individuals, and
the distribution of these possible individuals is represented by the role id.</p>
        <p>As with ALC we have terminological axioms and assertional axioms. The
aleatoric terminological axioms or T-Books describe rules that are universally
true for all individuals, and thus provide a non-probabilistic intensional
definition of the concepts and roles in the logic. Aleatoric assertional axioms or
A-Books describe subjective extensional information by listing the probabilities
with which individuals satisfy given concepts and roles. It is not the case that
T-Books describe concept inclusion nor subsumption as TBoxes do in ALC, as
these concepts do not have a strong intuitive foundation in an aleatoric setting.
Instead T-Books provide a means to constrain strength of belief.
Definition 3. The aleatoric terminological axioms have the form:
α ⪯ β (α is no more likely than β), and
α ≈ β (α is exactly as likely as β).</p>
        <p>A T-Book is a set of aleatoric terminological axioms.</p>
        <p>
          These axioms place universal constraints on the likelihoods of aleatoric
formulas being true. For example we might include an axiom first ⪯ place, meaning
coming first in a race is no more likely than placing (coming first, second or
third). Alternatively, we could define placing precisely as coming first, second
or third, via the axiom place ≈ first ⊔ second ⊔ third, and then first ⪯ place is
implicitly true.
Definition 4. The aleatoric assertional axioms (or simply assertions) have the
form:
– a :p α, where a ∈ N, p ∈ [0, 1] and α ∈ LADL asserts a belief that individual
a satisfies concept α with probability p.
– (a, b) :p ρ, where a, b ∈ N, p ∈ [0, 1] and ρ ∈ R asserts that individual b
satisfies the role ρ for a with probability p.
        </p>
        <p>An A-Book A is a set of aleatoric assertional axioms, and A is a well-formed
A-Book if for every a ∈ N and every ρ ∈ R, Σb∈N {p | (a, b) :p ρ} ≤ 1.
While T-Books give intensional definitions of concepts, A-Books give an
extensional definition by describing the concepts and roles in terms of the individuals.
As such an A-Book captures an agent’s current state of belief, just as a
bookmaker’s book describes the current odds for a race. An A-Book is well-formed if
the roles described can be extended to a probability distribution (i.e. the
probabilities sum to at most 1).</p>
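        <p>The well-formedness condition on A-Books is easy to check mechanically. A sketch, using a (a, b, p, rho) tuple encoding of role assertions that is ours rather than the paper's:</p>

```python
from collections import defaultdict

def well_formed(role_assertions):
    """Check that for each named individual a and role rho, the asserted
    probabilities (a, b) :p rho sum to at most 1."""
    totals = defaultdict(float)
    for a, b, p, rho in role_assertions:
        totals[(a, rho)] += p
    return all(total <= 1.0 for total in totals.values())
```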
        <p>While an A-Book is existentially quantified, T-Books are universally
quantified, and are consequently a very powerful formalism.</p>
        <p>Definition 5. An aleatoric knowledge base K = (A, T ) is a pair consisting of
a set of assertional axioms A and a set of terminological axioms T .</p>
        <p>An aleatoric knowledge base describes a belief, or subjective position of an
agent, that can correspond to a number of different interpretations. The
semantics for these interpretations are given below.</p>
        <p>An interpretation satisfies the aleatoric knowledge base K = (A, T ) iff it
satisfies all the axioms in A and T .</p>
        <p>Definition 6. Given an aleatoric knowledge base K = (A, T ) over the signature
(X, R, N), an aleatoric belief model B = (I, r, ℓ) over the signature (X ∪ N, R)
satisfies K iff:
– For every a ∈ N, for every i ∈ I, a(i) ∈ {0, 1}, and for all i, j ∈ I, a(i) = 1
and id(i, j) &gt; 0 implies a(j) = 1. That is, the names are absolute concepts,
and two possibilities for a single individual will share a name.
– For axioms α ⪯ β ∈ T , for all i ∈ I, Bi(α) ≤ Bi(β).
– For axioms α ≈ β ∈ T , for all i ∈ I, Bi(α) = Bi(β).
– For axioms a :p α ∈ A, for all i ∈ I with a(i) = 1, Bi(Eα) = p.
– For axioms (a, b) :p ρ ∈ A, for all i ∈ I with a(i) = 1, Σj∈I ρ(i, j) · b(j) = p.
We say that a knowledge base K is consistent if it is satisfied by at least one
aleatoric belief model.
(A derived operator αⁿₘ can be defined for α coming up ⊤ exactly n times out
of m.) Note that this does not describe
a probability or frequency, but an event. So α⁴₅ does not mean α is sampled at
least 80% of the time. Instead it describes the event of α being sampled 4 times
out of 5, which would be quite likely (0.88) if α had probability 0.8, and unlikely
(0.19) if α had probability 0.5. This formalism can encode degrees of belief in
an elegant way. If an agent were to perform an action only if they believed α
very strongly, one might set α⁹₁₀ as a precondition for the action, and if an agent
were informed of a proposition β by another agent who is considered unreliable,
they may update their belief base with the proposition β²₃.</p>
        <p>
          These operators may not appear logical: ⊓ is not idempotent, and appears
similar to the product t-norm of fuzzy logic [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]. However, Section 3 shows that
these new operators are inherently probabilistic and represent the process of
reasoning over a probability space of description logic models. Furthermore,
restricting the concept probabilities to be 0 or 1, it can be seen that the semantic
interpretation of ⊓, ¬ and ∃ρ agrees with the standard description logic
semantics, so classical description logic can be seen as a special case of aleatoric
description logic.</p>
        <p>For example, suppose we have three agents: Hector, Igor and Julia. They each
may have a virus (V), or not, and they also may have a fever (F), whether
they have the virus or not. For each possible individual, there is a probability of
them having a fever, which is naturally higher for possible individuals with the
virus. Each agent will occasionally come into contact with another agent, and
the identity of this agent is described by the probability distribution contact.
Finally, for each possible individual there is the probability of them being the
actual agent (id).
        </p>
        <p>An aleatoric knowledge base could model that Hector is very likely not to
have the virus; Julia is likely to have the virus, Julia is very likely to have a
fever and it is likely that Hector came into contact with Julia. Furthermore, a
terminological axiom can specify the belief that a new exposure to the virus
(exp) occurs if an agent did not already have the virus, but came into contact
with some febrile person who did have the virus. We can calculate the probability
of an agent being newly exposed to the virus:</p>
        <p>E(¬V ⊓ [c] (V | F )) ⪯ exp
Thus the aleatoric knowledge base K = (A, T ) is:</p>
        <p>A = {Hector :0.1 V, Julia :0.7 V, Julia :0.69 F, (Hector, Julia) :0.3 c}
T = {E(¬V ⊓ [c] (V | F )) ⪯ exp}
To determine if the knowledge base necessitates that there is a greater than 25%
chance of Hector being newly exposed to the virus, the axiom Hector :0.25 exp
can be inserted into the knowledge base, and consistency checking can be applied.
This process is described in the following section.</p>
        <p>An interpretation that satisfies the knowledge base is presented below. For
each agent, we suppose that there are two possible individuals (PI), one with
the virus (e.g. Hector1) and one without (e.g. Hector0). Note that the weighted
probabilities of these agents satisfy the constraints of the A-Book, A.</p>
        <p>The probabilities for this scenario are given in Table 2, and a graphical
representation is given in Figure 1.</p>
        <p>Interpreting this for Hector, we see the probability that Hector was newly
exposed to the virus is approximately 0.7. The working for this is shown in Table 3.</p>
        <p>This section will consider computational properties of aleatoric description logic.
The particular questions considered are:
Model Checking: Given an aleatoric belief model Bi and some formula α, what
is the value of Bi(α)?
Satisfiability: Given an aleatoric knowledge base K = (A, T ), is it consistent?</p>
        <p>The main question of interest is whether there is any interpretation that
could possibly correspond to a given aleatoric knowledge base. However, by
assigning flat priors to all unknown (or ambivalent concepts) one can define an
interpretation and get a partial answer via model-checking.</p>
        <p>Theorem 1. Given a pointed belief model Bi consisting of n possible individuals,
and a formula α consisting of m symbols, the value Bi(α) can be computed in
time O(n²m).</p>
        <sec id="sec-2-2-1">
          <p>See [8] (Lemma 4.7) for proof. To be able to perform inference based on an aleatoric knowledge base, we must first determine if it is consistent (i.e. agrees with at least one aleatoric belief model). A partial solution is given here for acyclic aleatoric knowledge bases.</p>
          <p>Definition 7. A concept C is an atom if C ∈ X ∪ {⊤, ⊥} (i.e. C is an atomic
concept, always, or never). A terminological axiom is simple if it has one of
the forms:
– C ≈ (D?E:F ) where C, D, E and F are all atoms.</p>
          <p>– C ≈ [ρ] (D | E) where C, D and E are all atoms.</p>
          <p>A simple T-Book, T , is a T-Book consisting only of simple terminological
axioms. A simple T-Book, T , is acyclic if there is no sequence of concepts C0, . . . , Cn
where:
– for all i = 1, . . . , n, either:
• there is some C ≈ (D?E:F ) ∈ T , where Ci, Ci−1 ∈ {C, D, E, F } ∩ X;
• there is some C ≈ [ρ] (D | E) ∈ T , where Ci, Ci−1 ∈ {C, D, E} ∩ X;
– there is some i where C ≈ [ρ] (D | E) ∈ T and Ci, Ci−1 ∈ {C, D, E} ∩ X;
– C0 = Cn.</p>
          <p>An A-Book, A, is simple if for all aleatoric assertional axioms σ ∈ A of the form
a :p α, it is the case that α is an atomic concept. If A is a simple A-Book and
T is a simple T-Book, then K is a simple aleatoric knowledge base, and if T is
also acyclic, K is an acyclic simple knowledge base.</p>
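          <p>Acyclicity of a simple T-Book can be checked with a depth-first search over a dependency graph. The sketch below adopts one natural reading of Definition 7, in which the concept on the left of a simple axiom depends on the atomic concepts on its right-hand side; the (head, body) encoding of axioms is illustrative.</p>

```python
def is_acyclic(axioms):
    """axioms: list of (head, [atomic concepts on the right-hand side])."""
    deps = {}
    for head, body in axioms:
        deps.setdefault(head, set()).update(body)

    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current path / done
    colour = {}

    def dfs(c):
        colour[c] = GREY
        for d in deps.get(c, ()):
            state = colour.get(d, WHITE)
            if state == GREY or (state == WHITE and not dfs(d)):
                return False       # dependency cycle found
        colour[c] = BLACK
        return True

    return all(dfs(c) for c in deps if colour.get(c, WHITE) == WHITE)
```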
        </sec>
        <sec id="sec-2-2-2">
          <p>The following lemma is a useful simplification.</p>
          <p>Lemma 1. Every aleatoric knowledge base K = (A, T ) is equivalent to a simple
aleatoric knowledge base, K′.</p>
        </sec>
        <sec id="sec-2-2-3">
          <p>See [8] (Lemma ???) for proof.</p>
          <p>
            Theorem 2. Given a simple aleatoric knowledge base K = (A, T ) where T is
acyclic, it is possible to determine whether K is consistent in PSPACE.
The process for the satisfiability theorem is to build a system of polynomial
equalities and inequalities corresponding to the axioms in K. This system of
constraints is satisfiable if and only if K is satisfiable. The number of variables,
inequalities and equalities in the system is polynomial in the size of K, so
determining if the system is satisfiable reduces to ∃R (the satisfiability of existentially
quantified polynomial equations), which is in PSPACE [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]. See [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] (Theorem 4.8)
for a full proof. The case for non-acyclic T-Books is left to future work.
          </p>
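          <p>To give a feel for the reduction (this is a toy stand-in, not the ∃R decision procedure), the sketch below encodes two constraints over per-concept probabilities, an equality arising from a simple axiom C ≈ (D?E:F) and an inequality, and searches a coarse grid of valuations in [0, 1] for a solution:</p>

```python
import itertools

def consistent_on_grid(concepts, constraints, steps=10):
    """Brute-force search of a (steps + 1)^n grid of valuations in [0, 1]^n."""
    grid = [k / steps for k in range(steps + 1)]
    for values in itertools.product(grid, repeat=len(concepts)):
        v = dict(zip(concepts, values))
        if all(check(v) for check in constraints):
            return True
    return False

# C ~ (D?E:F) becomes the polynomial equality c = d*e + (1 - d)*f,
# and an axiom "C no more likely than E" becomes c <= e.
constraints = [
    lambda v: abs(v["C"] - (v["D"] * v["E"] + (1 - v["D"]) * v["F"])) < 1e-9,
    lambda v: v["C"] <= v["E"],
]
found = consistent_on_grid(["C", "D", "E", "F"], constraints)  # True
```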
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Expressivity</title>
      <p>From Table 1 it can be seen that aleatoric description logic generalises ALC, in
the sense that ALC can be mapped to the 0–1 fragment of ADL. However,
this mapping overlooks the probabilistic aspect of ADL and how this relates to
uncertainty in description logics.</p>
      <p>The semantic interpretation of aleatoric description logic can be seen as
interpreting ALC over a probability space of interpretations where all roles are
functions. The aleatoric belief models of ADL describe a probability space of
these simple descriptions, and the semantics of ADL recursively define the
likelihood of a formula holding in models sampled from this probability space.</p>
      <p>
        A probability space [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] is a tuple (Ω, F , P ), where Ω is a set, F is a σ-algebra
over Ω (the events), and P is a probability measure on F that is countably
additive.
      </p>
      <p>Theorem 3. Given a formula α of ALC and an aleatoric belief model Bi, there
exists: a formula α∗ that is logically equivalent to α in ALC; and a probability
space (ΩBi , F , P Bi) where: ΩBi is a set of functional ALC models; F is an
algebra over ΩBi consisting of an element β̂ for every ALC formula β; and P Bi
is a probability measure on F derived from Bi. This probability space is such that
P Bi(α̂) = Bi(α∗).</p>
      <p>
        The proof and necessary constructions can be found in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] Section 5. This
result gives a foundation for the semantics of ADL, establishing a correspondence
between the probabilistic operations of ADL, and the deterministic operations
of ALC applied over a probability space of interpretations. This way, ADL
represents reasoning where an agent has in mind a probability space of possible
interpretations for the state of the world. By observing the world, the agent is
able to refine this probability space, and learn a better representation for their
beliefs.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Learning</title>
      <p>
        An aleatoric belief model describes an agent’s beliefs and prior assumptions and
the agent may update these beliefs based on observations, via Bayesian
conditioning. This section will introduce two learning mechanisms, role learning and
concept learning whereby an agent may update the distribution of individuals
fulfilling a role, and also update the aleatoric probabilities associated with an
atomic concept at an individual. These mechanisms are unique to ADL and
provide a compelling advantage over alternative probabilistic description logics [
          <xref ref-type="bibr" rid="ref17 ref20 ref23 ref4 ref9">4, 9,
17, 20, 23</xref>
          ].
      </p>
      <sec id="sec-4-1">
        <title>Role learning</title>
        <p>Role learning refines the probability distribution associated with a role ρ. For a
pointed aleatoric belief model Bi = (I, r, ℓ, i), for every j ∈ I, ρ(i, j) is the prior
probability that j fulfils the role of ρ for i. Given an observation which is an
ADL formula of the form [ρ] (α | ⊤), Bj(α) is the probability of this observation
holding, given j fulfils the role of ρ for i. Via Bayes’ rule, it follows that the
probability of j fulfilling the role of ρ for i, given the observation, is:
ρ′(i, j) = ρ(i, j) · Bj(α) / Bi([ρ] (α | ⊤))
(the prior probability of j is multiplied by the probability of α given j, divided
by the probability of α).</p>
        <p>Definition 8. Let Bi = (I, r, ℓ, i) be a pointed aleatoric belief model, and φ = [ρ] (α | β)
an observation made at i. The φ-update of Bi is the aleatoric belief model Biφ =
(I, ri,φ, ℓ, i), where ri,φ(ρ′, j) = r(ρ′, j) for all ρ′ ≠ ρ and all j ≠ i, and for all j ∈ I
ri,φ(ρ, i)(j) = ρ(i, j) · Bj(α) / Bi([ρ] (α | β)).</p>
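        <p>For an observation of the form [ρ] (α | ⊤) the update is a direct Bayesian reweighting. A minimal sketch, assuming the values Bj(α) have already been computed for each possible individual j:</p>

```python
def role_update(prior, b_alpha):
    """prior: dict j -> rho(i, j); b_alpha: dict j -> B_j(alpha).
    Returns the posterior distribution rho'(i, .)."""
    # The evidence term is B_i([rho](alpha | T)) = sum_j rho(i, j) B_j(alpha).
    evidence = sum(prior[j] * b_alpha[j] for j in prior)
    if evidence == 0:
        return dict(prior)   # nothing to condition on
    return {j: prior[j] * b_alpha[j] / evidence for j in prior}

posterior = role_update({"a": 0.5, "b": 0.5}, {"a": 0.9, "b": 0.1})
```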
        <p>Thus an agent with an aleatoric model of the world may update their
epistemic uncertainty of the distribution of roles, via Bayesian conditioning. The
φ-update of Bi is the agent’s posterior model of the world.</p>
        <p>Given the example in Subsection 2.3, suppose that Hector’s belief model is
B = (I, r, ℓ, i), and Hector is informed that the contact has tested positive for the
virus. Hector is also informed that the test used has a 10% false positive rate, so
Hector’s belief model now includes an atomic concept FP that is 0.1 everywhere.
Let φ = [c] ((F P ?⊤:V ) | ⊤); the φ-update of BH0 is then computed by:
rH0,φ(c, H0)(j) = c(H0, j) · (0.1 + 0.9 · Bj(V )) / BH0 ([c] ((F P ?⊤:V ) | ⊤)).</p>
        <p>Substituting in the values from Table 2, Hector is able to discount the possible
individuals without a virus and condition the distribution for contact accordingly.
The φ-update of BH0 is represented in Figure 2.</p>
        <p>Role learning is a natural application of Bayes’ law since the learning is applied
to a probability distribution of possible individuals. However, the probabilities
of atomic concepts are modelled as dice, and hence independent of all other
variables beyond the possible individual. This means we gain no additional
information from applying Bayes’ law. If it was possible to observe atomic concepts
directly (and often) it would be simple to refine a statistical model of the
probabilities. Observations in ADL are complex formulas, so it is preferable to find
a more general solution.</p>
        <p>
          Concept learning addresses these issues by introducing new possible
individuals in such a way that they do not affect any expected values for named
individuals but with variations in the aleatoric probability of concepts, which
may then be learnt via role learning, given arbitrary observations. The details
of such a construction are given in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], Section 6.
        </p>
        <p>In the example of Subsection 2.3, suppose that the likelihood of Hector having
a fever is to be reassessed, based on the observation (or possibly erroneous
belief) that Hector would have a fever if and only if Hector’s contact had a fever.
The possible individual H0 is replaced by H01 and H02, where
H01(F ) = 2H0(F ) − H0(F )² = 0.84, and H02(F ) = H0(F )² = 0.36.
The probabilities are then updated via role learning over id, given the
observation φ = Eid(F ?EcF :Ec¬F ), where the relevant fragment of the aleatoric belief
model is shown in Figure 3. Note, the model has been revised to make the
example clearer. Aggregating H01 and H02 back into a single node by taking weighted
sums of the likelihoods gives the updated probability of F at H0 to be 0.56.</p>
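        <p>A quick arithmetic check of this split, with p = 0.6 (the value that matches the figures 0.84 and 0.36 above) and under the assumption, ours for illustration, that the two copies receive equal id-weight, which is what leaves the expected value of F unchanged:</p>

```python
p = 0.6                              # H0's prior likelihood of a fever
high = 2 * p - p ** 2                # H01(F) = 0.84
low = p ** 2                         # H02(F) = 0.36
expected = 0.5 * high + 0.5 * low    # back to 0.6: named expectations preserved
```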
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Related work</title>
      <p>
        There is a substantial amount of work on logics for reasoning about uncertainty
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], including [
        <xref ref-type="bibr" rid="ref14 ref15 ref24">15, 14, 24</xref>
        ], and going back to the works of Ramsey [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], Carnap
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and de Finetti [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Markov Logic Networks [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] (generalising Bayesian networks and Markov
networks) address a similar problem of providing a logical interface to machine
learning methods. These approaches attach a probabilistic interpretation to
formulas in a fragment of first order logic, rather than providing a probabilistic
variation of first order logic operators.
      </p>
      <p>
        There is some commonality in purpose with probabilistic logic programming
[
        <xref ref-type="bibr" rid="ref16 ref6">16, 6</xref>
        ], although the concepts are constrained to be Horn clauses, where atomic
formula are mutually independent.
      </p>
      <p>
        There is a growing body of work addressing the need for probabilistic
reasoning in knowledge bases. In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], an inductive reasoning approach is applied
to include probabilities with rules; in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], a subjective Bayesian approach is
proposed to describe the probabilities associated with a concept or role holding; and
Lukasiewicz and Straccia [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] have proposed a method to include vagueness (or
fuzzy concepts [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]) in description logics. Probabilistic extensions of description
logics have also been proposed by Rigguzzi et al [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and Pozzato [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. These
approaches extend knowledge bases to include probabilistic assertions and
axioms, and provide an extended syntax for querying probability thresholds. Some
work on learning parameters and structure of knowledge bases via probabilistic
description logics has been done, including Ceylan and Penaloza [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], who have
proposed a Bayesian Description Logic that combines a basic description logic
framework with Bayesian networks [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] for representing uncertainty about facts,
and Ochoa Luna et al [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] who applied statistical methods to estimate the most
likely configuration of a knowledge base.
      </p>
      <p>These approaches are very different to the work presented here, as
probabilities are not propagated through the roles, and they do not permit learning
based on the observation of complex propositions.
</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>This paper has introduced a novel approach for representing uncertain knowledge
and beliefs. Generalising the description logic ALC, the aleatoric description logic
is able to represent complex concepts as independent aleatoric events. The events
are contingent on possible individuals so they give a subjective Bayesian
interpretation of knowledge bases. This paper has also given computational reasoning
methods for aleatoric knowledge bases, and shown how aleatoric description logic
corresponds to a probability space of functional ALC models. The aleatoric
concepts and roles enable a simple learning framework where agents are able to
update their beliefs based on the observations of complex propositions.</p>
      <p>Future work will examine the complexity of the satisfiability problem for
non-acyclic T-Books, and investigate implementing a reasoning system for aleatoric
description logics.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <source>The Description Logic Handbook: Theory, Implementation and Applications</source>
          . Cambridge University Press (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Canny</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>Some algebraic and geometric computations in PSPACE</article-title>
          . In: STOC '88
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Carnap</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>On Inductive Logic</article-title>
          .
          <source>Philosophy of Science</source>
          <volume>12</volume>
          (
          <issue>2</issue>
          )
          ,
          <fpage>72</fpage>
          -
          <lpage>97</lpage>
          (
          <year>1945</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ceylan</surname>
            ,
            <given-names>I.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Penaloza</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>The Bayesian Description Logic BEL</article-title>
          . In: IJCAR. pp.
          <fpage>480</fpage>
          -
          <lpage>494</lpage>
          . Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>De Finetti</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Theory of Probability: A critical introductory treatment</article-title>
          . John Wiley &amp; Sons (
          <year>1970</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>De Raedt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kimmig</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toivonen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>ProbLog: A probabilistic Prolog and its application in link discovery</article-title>
          .
          <source>In: IJCAI</source>
          . vol.
          <volume>7</volume>
          , pp.
          <fpage>2462</fpage>
          -
          <lpage>2467</lpage>
          . Hyderabad
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>French</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gozzard</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reynolds</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A Modal Aleatoric Calculus for Probabilistic Reasoning</article-title>
          . In: ICLA. pp.
          <fpage>52</fpage>
          -
          <lpage>63</lpage>
          . Springer (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>French</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smoker</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>An aleatoric description logic for probabilistic reasoning (long version)</article-title>
          (
          <year>2021</year>
          ), http://arxiv.org/abs/2108.13036
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gutiérrez-Basulto</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lutz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schröder</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Probabilistic Description Logics for Subjective Uncertainty</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          <volume>58</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>66</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Halpern</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Reasoning about Uncertainty</article-title>
          . MIT Press, Cambridge MA (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Heinsohn</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Probabilistic Description Logics</article-title>
          .
          <source>In: Uncertainty Proceedings</source>
          <year>1994</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>318</lpage>
          . Elsevier (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Subjective logic</article-title>
          . Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Kolmogorov</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bharucha-Reid</surname>
            ,
            <given-names>A.T.</given-names>
          </string-name>
          :
          <article-title>Foundations of the Theory of Probability: Second English Edition</article-title>
          . Courier Dover Publications (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kooi</surname>
            ,
            <given-names>B.P.</given-names>
          </string-name>
          :
          <article-title>Probabilistic Dynamic Epistemic Logic</article-title>
          .
          <source>Journal of Logic, Language and Information</source>
          <volume>12</volume>
          (
          <issue>4</issue>
          ),
          <fpage>381</fpage>
          -
          <lpage>408</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kozen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>A Probabilistic PDL</article-title>
          .
          <source>Journal of Computer and System Sciences</source>
          <volume>30</volume>
          (
          <issue>2</issue>
          ),
          <fpage>162</fpage>
          -
          <lpage>178</lpage>
          (
          <year>1985</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Lukasiewicz</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Probabilistic logic programming</article-title>
          .
          <source>In: ECAI</source>
          . pp.
          <fpage>388</fpage>
          -
          <lpage>392</lpage>
          (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Lukasiewicz</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Straccia</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Managing Uncertainty and Vagueness in Description Logics for the Semantic Web</article-title>
          .
          <source>Web Semantics: Science, Services and Agents on the World Wide Web</source>
          <volume>6</volume>
          (
          <issue>4</issue>
          ),
          <fpage>291</fpage>
          -
          <lpage>308</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Ochoa-Luna</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Revoredo</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cozman</surname>
            ,
            <given-names>F.G.</given-names>
          </string-name>
          :
          <article-title>Learning probabilistic description logics: A framework and algorithms</article-title>
          .
          <source>In: Mexican International Conference on Artificial Intelligence</source>
          . pp.
          <fpage>28</fpage>
          -
          <lpage>39</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Causality: Models, Reasoning, and Inference</source>
          .
          <source>Econometric Theory</source>
          <volume>19</volume>
          ,
          <fpage>675</fpage>
          -
          <lpage>685</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Pozzato</surname>
            ,
            <given-names>G.L.</given-names>
          </string-name>
          :
          <article-title>Typicalities and probabilities of exceptions in nonmonotonic description logics</article-title>
          .
          <source>International Journal of Approximate Reasoning</source>
          <volume>107</volume>
          ,
          <fpage>81</fpage>
          -
          <lpage>100</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Ramsey</surname>
            ,
            <given-names>F.P.</given-names>
          </string-name>
          :
          <article-title>Truth and probability</article-title>
          (
          <year>1926</year>
          ).
          <source>The Foundations of Mathematics and other Logical Essays</source>
          pp.
          <fpage>156</fpage>
          -
          <lpage>198</lpage>
          (
          <year>1931</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Richardson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Markov logic networks</article-title>
          .
          <source>Machine Learning</source>
          <volume>62</volume>
          (
          <issue>1-2</issue>
          )
          ,
          <fpage>107</fpage>
          -
          <lpage>136</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Probabilistic description logics under the distribution semantics</article-title>
          .
          <source>Semantic Web</source>
          <volume>6</volume>
          (
          <issue>5</issue>
          ),
          <fpage>477</fpage>
          -
          <lpage>501</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Van Benthem</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerbrandy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kooi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Dynamic Update with Probabilities</article-title>
          .
          <source>Studia Logica</source>
          <volume>93</volume>
          (
          <issue>1</issue>
          ),
          <fpage>67</fpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Zadeh</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          :
          <article-title>Fuzzy Sets</article-title>
          . In:
          <source>Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A Zadeh</source>
          , pp.
          <fpage>394</fpage>
          -
          <lpage>432</lpage>
          . World Scientific (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>