<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modeling Intentional States with Subsystems of ALC</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <aff>University of Amsterdam, Amsterdam, The Netherlands</aff>
        </contrib>
        <contrib contrib-type="author">
          <aff>Arché Research Centre, University of St. Andrews, St. Andrews, Scotland</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>Although standard description logics are accurate tools to capture inferences from extensional knowledge bases, logics like ALC are unreasonably strong in intentional contexts. The successful modeling of cases in which, e.g., a knowledge base is interpreted as describing a conversational agent's beliefs or goals thus requires a more constrained notion of closure. In this paper, we identify several inferences in ALC whose validity is questionable for such applications and describe two subsystems of ALC that better fit the use-case.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>The phrase “predicate calculus” denotes the “classical” first-order logic of Frege
and Russell. Although various description logics correspond to distinct fragments
of classical first-order logic, the semantic foundation is almost universally a
classical one. As a tool to “capture facts” about the world, this Boolean framework
is well-suited to many use-cases. For example, a knowledge base representing
metadata about an organization’s assets will likely be interpreted extensionally,
that is, the truth or falsity of an assertion makes up the primary logical
consideration. Thus, the classical framework is generally sound with respect to extensional
contexts.</p>
      <p>However, not all contexts about which one wishes to reason are extensional;
many applications exist in which veridical considerations do not take center stage.</p>
    </sec>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>
        Description logics like ALC and SROIQ are tools for knowledge representation,
i.e., the modeling of and reasoning about information. In their introduction to
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], Nardi and Brachman make a remark that reflects an important feature of
standard description logics by counting them among a tradition of “logic-based
formalisms, which evolved out of the intuition that predicate calculus could be
used unambiguously to capture facts about the world” [14, p. 2].
      </p>
      <p>
        If, e.g., a conversational agent maintains a knowledge base representation
of the beliefs of an interlocutor, it is implausible to count arbitrary tautologies
as beliefs. As the expressivity increases to include, e.g., number restrictions and
role axioms, a logic like SROIQ will grossly overgenerate, closing knowledge
bases under claims looking suspiciously like number theory and algebra as well.
In short, reasoning about extensional facts about the world (a task to which
standard description logics are well-suited) is a distinct exercise from reasoning
about intentional states of cognitive agents.
      </p>
      <p>In this section, we review a number of ways in which ALC (and thus all stronger
description logics) is deductively promiscuous in intentional contexts. We briefly
provide a précis of the description logic ALC. Its language is built from pairwise
disjoint classes of primitive individual names, concept names, and role names,
natural interpretations of which are individuals (i.e., constants in first-order logic
terms), classes (i.e., unary predicates), and relations (i.e., binary predicates).
Closing these terms under syncategorematic vocabulary yields complex expressions that
allow us to represent many details of a domain.</p>
      <p>The language includes primitive top and bottom concepts ⊤ and ⊥,
interpreted as a universal class and an empty class, respectively. The class of concepts
is constructed by closing the primitive concepts under the following, where r is
a role name:</p>
      <p>C ::= ⊤ | ⊥ | ¬C | C ⊓ C | C ⊔ C | ∃r.C | ∀r.C</p>
      <p>¬C is the complement of C, and C ⊓ D, C ⊔ D are the intersection and union
of C, D, respectively. We read ∃r.C as the class of individuals a that bear the
relation r to some member of C and ∀r.C as the class of individuals a for which a bears r
only to members of C. Then, where a, b are individual names, r is a role name,
and C, D are concept names, sentences are formed as follows:</p>
      <p>ϕ ::= a : C | ⟨a, b⟩ : r | C ⊑ D | C ≡ D</p>
      <p>In the sequel, our choice of terms will be self-documenting, where the intended
interpretations should be clear.</p>
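      <p>To fix ideas, the two grammars can be read as defining a small abstract syntax. The Python rendering below is our own illustrative encoding (the class names are not the paper’s):</p>
```python
from dataclasses import dataclass

# Illustrative abstract syntax for ALC concepts, mirroring the BNF for C above.
class Concept:
    pass

@dataclass(frozen=True)
class Prim(Concept):      # a concept name, e.g. Symphony
    name: str

@dataclass(frozen=True)
class Not(Concept):       # ¬C
    arg: Concept

@dataclass(frozen=True)
class And(Concept):       # C ⊓ D
    left: Concept
    right: Concept

@dataclass(frozen=True)
class Or(Concept):        # C ⊔ D
    left: Concept
    right: Concept

@dataclass(frozen=True)
class Exists(Concept):    # ∃r.C
    role: str
    arg: Concept

@dataclass(frozen=True)
class Forall(Concept):    # ∀r.C
    role: str
    arg: Concept

# The concept ∀composerOf.Symphony used in Example 1 below:
example = Forall("composerOf", Prim("Symphony"))
```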
      <sec id="sec-2-1">
        <title>Examples</title>
        <p>
          We identify cases in which ALC appears to overgenerate, licensing consequences
too permissively for particular use-cases. That ALC is promiscuous should not be
controversial; intentional interpretations of knowledge bases are hyperintensional
in the sense of [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and not closed under substitution of logical equivalents.
        </p>
        <p>These concepts see implementation in OWL 2 [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] as the classes owl:Thing and owl:Nothing, respectively.
        </p>
        <p>Case I: Assessing Mastery of a Domain. Consider a conversational agent
that is a component of a recommendation system, e.g., an artificial assistant
for an online storefront. The types of resources to which the agent directs
users should be influenced by knowledge about those users. One particularly
information-rich metric is the degree to which an interlocutor enjoys mastery
over a particular domain. Popular, accessible examples of a particular genre of
media, e.g., are natural candidate recommendations for a user with no
knowledge of that genre; those same examples would prove far less useful (quite likely,
redundant) for a user with detailed command of the genre’s esoterica.
        </p>
        <p>Thus, the utility of a recommendation system’s suggestions with respect to a
domain will be enhanced by understanding the interlocutor’s command of that
domain. A clear indicator would be the breadth and degree of correctness of
the interlocutor’s beliefs about that domain, which could be measured from the
number and accuracy of the interlocutor’s assertions stored in an intentional
knowledge base.</p>
        <p>Example 1 A conversational agent assists the interlocutor as she navigates an
online storefront devoted to classical music. After the initial interactions, the
agent has recorded the following ABox A0 as reflecting the interlocutor’s explicit beliefs
about classical music:
– Mahler : ∀composerOf.Symphony
– ⟨Mahler, Kindertotenlieder⟩ : composerOf
– Kindertotenlieder : ¬Symphony
– MahlerSymphony5 : Symphony ⊓ Modernist
Suppose that the agent takes the percentage of correct classifications of musical
pieces within the concept Symphony as a coarse metric for the interlocutor’s command
of classical music. Given the intended interpretation of the terms, it appears that
the interlocutor has correctly classified each of the two compositions mentioned
in A0; intuitively, she has evidenced some expertise with respect to the domain.</p>
        <p>Clearly, this task requires some deductive machinery; to infer the correct
classification of MahlerSymphony5 requires an inference from MahlerSymphony5 :
Symphony ⊓ Modernist to MahlerSymphony5 : Symphony. But the deductive
machinery of ALC leads to a conflict with this intuition. The interlocutor has
evidently made a mistake over the course of the interaction by asserting that Mahler
only composed symphonies. Thus, A0 is inconsistent, whence, e.g., A0 ⊨ALC
MahlerSymphony5 : ¬Symphony, despite the interlocutor’s correct classification
of MahlerSymphony5. Any simple mistake that triggers a similar inconsistency
in an intentional knowledge base will, therefore, entirely warp the picture of the
interlocutor’s domain knowledge.</p>
        <p>Case II: Assessing an Interlocutor’s Goals. Consider a further case in
which an artificial conversational agent is tasked with directing interlocutors
to resources. The appropriateness of resources will undoubtedly be measured in
terms of how well those resources fit the interlocutor’s goals. Such cases might
see the initial interactions leveraged to populate an intentional knowledge base
representing the interlocutor’s goals or requirements.
Example 2 A conversational agent assists an interlocutor Self in an online
technology storefront. The interlocutor’s goals are represented by the following
ABox:</p>
        <p>A1 = {Self : ∃purchases.(GamingComputer ⊓ DesktopComputer)}
As in Example 1, there is a need for some deductions under which A1 is closed.
Suppose that the artificial agent has a directive such that whenever an
interlocutor counts Self : ∃purchases.(DesktopComputer ⊓ GamingComputer) as a goal, the
interlocutor should be directed to a particular resource (e.g., a webpage).
However, this trigger condition is not explicitly included in A1; it must be inferred.</p>
        <p>However, given A1, ALC delivers additional inferences under which a
collection of goals clearly should not be closed. For example, we should hesitate to
license an inference to the further goal of
Self : ∃purchases.((TernaryComputer ⊔ (GamingComputer ⊓ DesktopComputer)) ⊓
(GamingComputer ⊓ DesktopComputer))
Despite its equivalence with the interlocutor’s explicit goal from A1,
to count the above as a goal has several shortcomings. For one, the sentence is
a bit of a red herring; one can imagine that, if read lazily, resources may be
wasted on trying to connect the user with an instance of TernaryComputer.</p>
        <p>
          This candidate becomes more problematic when one considers the matter
of awareness. The notion, identified in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] as a key cognitive feature that can
constrain agents’ capacity for reasoning, is defined so that “an agent is aware of
a formula if he can compute whether or not it is true in a given situation within
a certain time or space bound.”[9, p. 41] Simply put:
        </p>
        <p>How can someone say that he knows or doesn’t know about p if p is a
concept he is completely unaware of? [9, p. 40]
This notion of awareness, in the present context, can be recast as a matter of
conceptual proficiency. If a knowledge base ⟨T, A⟩ is, e.g., a representation of an
agent’s goals, it is implausible to say that a : C is among those goals in cases in
which the agent fails to grasp the use of C, i.e., is unaware of how to distinguish
cases in which a falls under C from cases in which a does not fall under C.</p>
        <p>Given the intended interpretation, the concept TernaryComputer is a highly
esoteric one. As the typical consumer of computer products—e.g. the
interlocutor of Example 2—is unlikely to count the above as a goal, to assume its
equivalence would be counterproductive to the aims of the conversational agent. Thus,
pragmatic considerations demonstrate the problematic overgeneration following
from the application of ALC to this domain.</p>
        <p>Note that the suggestion that ALC is too strong is not to say that ALC fails as
a characterization of the normative dimensions of logic. Indeed, it remains plausible
that, e.g., α ought to accept ϕ given α’s other beliefs. What is implausible, however,
is that ALC (and a fortiori more expressive description logics like SROIQ)
captures the descriptive dimension, that is, a characterization of the closure of
the occurrent beliefs of an agent α.</p>
        <p>
          We now introduce two subsystems of ALC (in the same language) that are
arguably more appropriate to the closure of intentional knowledge bases than
stock ALC. The propositional deductive system that serves as the basis for both
proposals is the logic of analytic containment AC introduced by Richard Angell
in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Angell’s case that AC captures a semantic notion of synonymy has been
reinforced in a number of works, e.g., [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          The interpretation of AC offered in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] is based on Kleene’s three truth values
V3. In [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], Kleene introduces three-valued matrices for connectives to account
for cases in which a recursive procedure calculating truth values fails to converge:
In this section, we shall introduce new senses of the propositional
connectives, in which, e.g., Q(x) ∨ R(x) will be defined in some cases when
Q(x) or R(x) is undefined. It will be convenient to use truth tables, with
three “truth values” t (‘true’), f (‘false’) and e (‘undefined’), in describing
the senses which the connectives shall now have.[13, p. 332]
Kleene considers that for a predicate there is a “range of definition” over which
it is defined. A fruitful and relevant interpretation of these ranges is that ϕ(c)
evaluates to e when an agent lacks proficiency with the concept ϕ(x) and is
unable to determine a truth value. If an agent is not familiar with the use of
a predicate—or does not have a clear grasp of how a predicate may apply to
certain objects—an atomic formula may be viewed as not truth-evaluable.
        </p>
        <p>
          [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] assigns statements ϕ pairs of truth values, i.e., values from the set V3²,
where the first coordinate indicates the degree of support for ϕ and the second
coordinate represents the degree of refutation or evidence against ϕ. This yields
nine available values for AC, whose coordinates may be glossed as follows:
– ⟨t, −⟩: α has evidence supporting ϕ
– ⟨e, −⟩: α is unaware of ϕ’s truth conditions
– ⟨f, −⟩: α lacks evidence supporting ϕ
– ⟨−, t⟩: α has evidence refuting ϕ
– ⟨−, e⟩: α is unaware of ϕ’s falsity conditions
– ⟨−, f⟩: α lacks evidence refuting ϕ
E.g., that Brouwer : Topologist is assigned the value ⟨t, f⟩ might correspond to a
case in which an agent, who is aware of how to classify objects as elements of
either Topologist or ¬Topologist, has unequivocal support for Brouwer’s falling
under that concept. That Caesar : PrimeNumber is assigned ⟨e, e⟩ may be understood
as indicating that an agent is unaware of how Caesar’s being a PrimeNumber could
either be demonstrated or refuted.
        </p>
        <p>
          The present interpretation of the truth values of [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] is relatively conservative;
the account essentially enriches Belnap’s concern for support and refutation of
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] with a device representing awareness in the sense of [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The interpretation
diverges from the standard as concerns the availability of a truth value like ⟨f, e⟩,
as it countenances cases in which an agent can be aware of a statement’s truth
conditions while lacking awareness of its falsity conditions. If one supposes that
awareness of a concept must be in toto, then eliminating the incompatible values
(i.e., ⟨t, e⟩, ⟨f, e⟩, ⟨e, t⟩, ⟨e, f⟩) yields a set of values V3²⋆ inducing the five-valued logic
Sf⋆de, offered by Daniels in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] as the logic of the intentional contexts of fictions.
        </p>
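        <p>The count here is easy to make concrete: starting from the nine pairs in V3², removing the four incompatible values leaves exactly the five values of Sf⋆de. A quick Python check (the encoding of truth values as strings is our own):</p>
```python
from itertools import product

V3 = ("t", "e", "f")

# The nine AC values: pairs ⟨support, refutation⟩ over V3.
V3_squared = set(product(V3, repeat=2))
assert len(V3_squared) == 9

# If awareness must be in toto, the mixed values are incompatible:
incompatible = {("t", "e"), ("f", "e"), ("e", "t"), ("e", "f")}

# Removing them leaves the five values of Sf⋆de.
V3_squared_star = V3_squared - incompatible
assert len(V3_squared_star) == 5
```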
      </sec>
      <sec id="sec-2-2">
        <title>Semantics</title>
        <p>
          As the truth values of AC and Sf⋆de are based on weak Kleene logic wK, the model
theory for the description logics ALCAC and ALCS⋆ will reference wK as well.
The weak truth functions ∼˙, ∧˙, and ∨˙ are defined as follows:
Definition 1 The weak Kleene truth functions are given by:
– ∼˙: t ↦ f, e ↦ e, f ↦ t
– ∧˙: t ∧˙ t = t; t ∧˙ f = f ∧˙ t = f ∧˙ f = f; u ∧˙ v = e whenever u = e or v = e
– ∨˙: t ∨˙ t = t ∨˙ f = f ∨˙ t = t; f ∨˙ f = f; u ∨˙ v = e whenever u = e or v = e
As observed in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], the available accounts of quantification for Kleene logics
are not sufficient to support the class theory necessary for a description logic.
Instead, the following quantifiers on V3² are offered:
Definition 2 The restricted Kleene quantifiers are functions ∃˙ and ∀˙ mapping
a nonempty set X ⊆ V3² to a truth value from V3 as follows:
– ∃˙(X) = t if ⟨t, t⟩ ∈ X; e if for all ⟨u, v⟩ ∈ X, either u = e or v = e; f if ⟨t, t⟩ ∉ X and for some ⟨u, v⟩ ∈ X, u ≠ e and v ≠ e
– ∀˙(X) = t if ⟨t, f⟩, ⟨t, e⟩ ∉ X and for some ⟨u, v⟩ ∈ X, u ≠ e and v ≠ e; e if for all ⟨u, v⟩ ∈ X, either u = e or v = e; f if {⟨t, f⟩, ⟨t, e⟩} ∩ X ≠ ∅ and for some ⟨u, v⟩ ∈ X, u ≠ e and v ≠ e
The guiding principle here is that a statement with restricted quantification like
[∀x ϕ(x)]ψ(x) (read: “all ϕ’s are ψ’s”) is truth-evaluable only in case the two
domains can be meaningfully compared. Evaluating such a sentence presupposes
a domain of comparison; if a user is unaware of a range over which ϕ(x) and
ψ(x) could be compared, it is impossible to demonstrate or refute the statement.
        </p>
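        <p>As an executable gloss, the weak Kleene truth functions of Definition 1 and the restricted quantifiers of Definition 2 can be transcribed directly. The following Python sketch is our own, with truth values encoded as the strings "t", "e", and "f":</p>
```python
T, E, F = "t", "e", "f"

def neg_w(u):
    # ∼˙ : toggles t and f; e is fixed
    return {T: F, E: E, F: T}[u]

def and_w(u, v):
    # ∧˙ : e is infectious; otherwise classical conjunction
    if E in (u, v):
        return E
    return T if (u, v) == (T, T) else F

def or_w(u, v):
    # ∨˙ : e is infectious; otherwise classical disjunction
    if E in (u, v):
        return E
    return T if T in (u, v) else F

def exists_dot(X):
    # ∃˙ over a nonempty set X of pairs from V3 × V3 (Definition 2)
    if all(u == E or v == E for u, v in X):
        return E
    return T if (T, T) in X else F

def forall_dot(X):
    # ∀˙ over a nonempty set X of pairs from V3 × V3 (Definition 2)
    if all(u == E or v == E for u, v in X):
        return E
    return F if (T, F) in X or (T, E) in X else T
```
        <p>For instance, exists_dot applied to {⟨t, t⟩, ⟨e, e⟩} yields t, while a set all of whose pairs contain e yields e, matching the requirement that the compared domains be meaningfully related.</p>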
        <p>
          We’ve noted that the presentation of AC is bilateral: one coordinate tracking
truth, the other falsity. The truth function for negation ∼¨, then, merely toggles
truth and falsity, i.e., exchanges the coordinates of a truth value. De Morgan’s
laws suggest that extending this bilateralism to the binary truth functions requires that
one weak Kleene truth function operates on the arguments’ first coordinates
while its dual truth function operates on their second coordinates.
Definition 3 The AC truth functions ∼¨, ∧¨, and ∨¨ are defined so that:
– ∼¨(⟨u, v⟩) = ⟨v, u⟩
– ⟨u0, v0⟩ ∧¨ ⟨u1, v1⟩ = ⟨u0 ∧˙ u1, v0 ∨˙ v1⟩
– ⟨u0, v0⟩ ∨¨ ⟨u1, v1⟩ = ⟨u0 ∨˙ u1, v0 ∧˙ v1⟩
Quantifiers will be defined over sets X of pairs of values from V3², i.e., pairs of
pairs of Kleene truth values. For convenience, let πi,j map a value ⟨⟨u0, v0⟩, ⟨u1, v1⟩⟩
to the pair of the ith coordinate of ⟨u0, v0⟩ and the jth coordinate of ⟨u1, v1⟩, and
let πi,j[X] be the image of X under πi,j. Then:
Definition 4 The AC restricted quantifiers ∃¨ and ∀¨ are defined so that:
– ∃¨(X) = ⟨∃˙(π0,0[X]), ∀˙(π0,1[X])⟩
– ∀¨(X) = ⟨∀˙(π0,0[X]), ∃˙(π0,1[X])⟩
Most of the interpretative work is being done by the weak Kleene interpretation,
with the AC functions merely imposing a bilateral reading. The presentation of
the proper AC model theory is necessarily brief; for detailed discussion of the
connectives or quantifiers, see [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] or [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], respectively.
        </p>
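        <p>The bilateral functions of Definition 3 and the quantifiers of Definition 4 then amount to running a weak Kleene function on the support coordinates and its dual on the refutation coordinates. A self-contained Python sketch (the encoding is ours, not the paper’s):</p>
```python
T, E, F = "t", "e", "f"

def and_w(u, v):
    if E in (u, v):
        return E
    return T if (u, v) == (T, T) else F

def or_w(u, v):
    if E in (u, v):
        return E
    return T if T in (u, v) else F

def exists_dot(X):
    if all(u == E or v == E for u, v in X):
        return E
    return T if (T, T) in X else F

def forall_dot(X):
    if all(u == E or v == E for u, v in X):
        return E
    return F if (T, F) in X or (T, E) in X else T

# Definition 3: values are pairs ⟨support, refutation⟩
def neg_ac(p):
    return (p[1], p[0])                      # ∼¨ swaps the coordinates

def and_ac(p, q):
    return (and_w(p[0], q[0]), or_w(p[1], q[1]))

def or_ac(p, q):
    return (or_w(p[0], q[0]), and_w(p[1], q[1]))

# Definition 4: quantifiers over sets of pairs of V3² values,
# using the projections π0,0 and π0,1
def exists_ac(X):
    return (exists_dot({(p[0], q[0]) for p, q in X}),
            forall_dot({(p[0], q[1]) for p, q in X}))

def forall_ac(X):
    return (forall_dot({(p[0], q[0]) for p, q in X}),
            exists_dot({(p[0], q[1]) for p, q in X}))
```
        <p>The De Morgan duality motivating Definition 3 can be checked exhaustively here: ∼¨(p ∧¨ q) coincides with ∼¨p ∨¨ ∼¨q for all eighty-one pairs of values.</p>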
        <p>We can now define the model theory for ALCAC and ALCS⋆, starting with
an interpretation:
Definition 5 An interpretation over V3² is a function I paired with a domain Δ,
described by the following clauses:
– I(a : ⊤) = ⟨e, v⟩ and I(a : ⊥) = ⟨v, e⟩ for some v ∈ V3
– I(a : C) ∈ V3² and I(⟨a, b⟩ : r) ∈ V3²
– I(a : ¬C) = ∼¨ I(a : C)
– I(a : C ⊓ D) = I(a : C) ∧¨ I(a : D)
– I(a : C ⊔ D) = I(a : C) ∨¨ I(a : D)
– I(a : ∃r.C) = ∃¨({⟨I(⟨a, b⟩ : r), I(b : C)⟩ | b ∈ Δ})
– I(a : ∀r.C) = ∀¨({⟨I(⟨a, b⟩ : r), I(b : C)⟩ | b ∈ Δ})
– I(C ⊑ D) = ∀¨({⟨I(a : C), I(a : D)⟩ | a ∈ Δ})
– I(C ≡ D) = I(C ⊑ D) ∧¨ I(D ⊑ C)
We call an interpretation an AC interpretation if it ranges over V3² and an Sf⋆de
interpretation if it ranges over V3²⋆. In either case, we say that a sentence ϕ is
true in I if the first coordinate of I(ϕ) is t; a set of sentences is true if each of its
sentences is true.</p>
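        <p>On the Boolean clauses of Definition 5, the value of a variable-free assertion can be computed mechanically. The sketch below uses our own tuple encoding of assertions (not the paper’s notation) and anticipates the computation behind the counterexample of Example 5 below, for an agent who supports a : C but is unaware of D:</p>
```python
T, E, F = "t", "e", "f"

def and_w(u, v):
    if E in (u, v):
        return E
    return T if (u, v) == (T, T) else F

def or_w(u, v):
    if E in (u, v):
        return E
    return T if T in (u, v) else F

def neg_ac(p):
    return (p[1], p[0])

def and_ac(p, q):
    return (and_w(p[0], q[0]), or_w(p[1], q[1]))

def or_ac(p, q):
    return (or_w(p[0], q[0]), and_w(p[1], q[1]))

def value(atoms, c):
    """Value of a variable-free assertion under the Boolean clauses of
    Definition 5. `atoms` maps atomic assertions (strings like "a:C")
    to pairs; `c` is an atom or a tagged tuple."""
    if isinstance(c, str):
        return atoms[c]
    op = c[0]
    if op == "not":
        return neg_ac(value(atoms, c[1]))
    if op == "and":
        return and_ac(value(atoms, c[1]), value(atoms, c[2]))
    if op == "or":
        return or_ac(value(atoms, c[1]), value(atoms, c[2]))
    raise ValueError(op)

# a : C ⊓ (C ⊔ D), where the agent supports a : C but is unaware of D
atoms = {"a:C": (T, F), "a:D": (E, E)}
v = value(atoms, ("and", "a:C", ("or", "a:C", "a:D")))
# v == ("e", "e"): unawareness of D infects the whole sentence
```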
        <p>These definitions provide very natural accounts of semantic consequence for
the two systems as preservation of truth. For knowledge bases ⟨T, A⟩, we define:
Definition 6 ⟨T, A⟩ ⊨ALCAC ϕ if for all AC models, ϕ is true if T ∪ A is true
Definition 7 ⟨T, A⟩ ⊨ALCS⋆ ϕ if for all Sf⋆de models, ϕ is true if T ∪ A is true</p>
      </sec>
      <sec id="sec-2-3">
        <title>Proof Theory</title>
        <p>
          The logics ALCAC and ALCS⋆ can be provided sound and complete tableau
calculi over a common set of rules, differing only on closure conditions for branches.
The calculi are signed, where each node is of the form [v] ϕ, for a sentence ϕ
and Kleene truth value v. For elegance, we also include two additional signs: m
(understood as not-e) and n (understood as not-t). Following the notation of [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ],
we let + indicate the introduction of a branch on a tableau and ◦ indicate that
two signed formulae are to appear on the same branch.
        </p>
        <p>Definition 8 The collection of rules ALC-AC is defined by the following rules,
where v is any element of V3, c and c′ are new to the branch, and d is arbitrary.</p>
        <sec id="sec-2-3-1">
          <title>Tableau Calculi</title>
          <p>To introduce the calculi, we need a further definition:</p>
          <p>Definition 9 A tableau with rules from ALC-AC fits ⟨T, A⟩ ⊢ ϕ if its initial
segment is some permutation of the signed sentences [t] τ for each τ ∈ T, [t] α
for each α ∈ A, and [n] ϕ.</p>
          <p>We need two notions of closure of a branch in order to define tableau calculi for
the two systems.</p>
          <p>Definition 10 A branch of a tableau with rules from ALC-AC is AC-closed if
there is a sentence ϕ and distinct v, v′ ∈ V such that [v] ϕ and [v′] ϕ are on the
branch.</p>
          <p>Definition 11 A branch of a tableau with rules from ALC-AC is S-closed if it
is either AC-closed or there is a sentence a : C and distinct v, v′ ∈ V such that
[v] a : C and [v′] a : ¬C are on the branch, where either v = e or v′ = e.
If every branch of a tableau is AC-closed (respectively, S-closed), we simply
say that the tableau itself is AC-closed (respectively, S-closed). Deducibility for
the two systems is naturally defined as the appropriate type of closure. For a
knowledge base ⟨T, A⟩,
Definition 12 ⟨T, A⟩ ⊢ALCAC ϕ if every fitting tableau is AC-closed
Definition 13 ⟨T, A⟩ ⊢ALCS⋆ ϕ if every fitting tableau is S-closed</p>
          <p>
            The systems ALCAC and ALCS⋆ arise from, and are thus embeddable in, the
systems with restricted quantification introduced in [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ]. Although the latter
logics lack logical constants corresponding to ⊤ and ⊥, they can be enriched
with privileged unary predicates T and F so that:
– For all I and terms c, I(T(c)) = ⟨t, v⟩ and I(F(c)) = ⟨v, t⟩ for some v ∈ V3.
– [t] T(c) and [t] ¬T(c) are added as axioms to the tableau calculi.
Such an expansion clearly preserves soundness and completeness. We will thus
assume in the sequel that these predicates are part of the first-order language;
references to the systems of [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ] include the additional features without loss of
generality.
          </p>
          <p>We can recover soundness and completeness by taking advantage of this
embedding with the following translation:
Definition 14 The translation ·τ from LALC to the first-order language LACrQ is
defined so that:
– (a : C)τ = C(a) and (⟨a, b⟩ : r)τ = r(a, b) for primitive C and r
– (a : ¬C)τ = ∼(a : C)τ
– (a : C ⊓ D)τ = (a : C)τ ∧ (a : D)τ and (a : C ⊔ D)τ = (a : C)τ ∨ (a : D)τ
– (a : ∀r.C)τ = [∀x r(a, x)](x : C)τ and (a : ∃r.C)τ = [∃x r(a, x)](x : C)τ
– (C ⊑ D)τ = [∀x (x : C)τ](x : D)τ and (C ≡ D)τ = (C ⊑ D)τ ∧ (D ⊑ C)τ
The fidelity of ·τ is witnessed by four lemmas. The first two ensure
model-theoretic agreement:
Lemma 1 ⟨T, A⟩ ⊨ALCAC ϕ iff Tτ ∪ Aτ ⊨AC ϕτ
Proof. Suppose that for ALCAC and AC valuations I and I′, I(ϕ) = I′(ϕτ)
for all atoms ϕ, i.e., whenever ϕ is of the form a : C or ⟨a, b⟩ : r for primitive
C and r. Then a simple induction on the complexity of formulae establishes that
the agreement lifts to complex sentences as well. Thus, every countermodel to
⟨T, A⟩ ⊨ALCAC ϕ is isomorphic to a countermodel to Tτ ∪ Aτ ⊨AC ϕτ and vice
versa, whence ⟨T, A⟩ ⊭ALCAC ϕ iff Tτ ∪ Aτ ⊭AC ϕτ.</p>
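          <p>The clauses of Definition 14 can be rendered as a short recursive function. The tuple encoding below is our own illustration; to keep it brief, every restricted quantifier reuses the variable x, so the sketch is faithful only when fillers introduce no nested quantifiers:</p>
```python
# Encoding of concepts (ours): ('prim', 'C'), ('not', c), ('and', c, d),
# ('or', c, d), ('exists', 'r', c), ('forall', 'r', c).

def tr(a, c):
    """First-order rendering of the assertion a : c per Definition 14."""
    tag = c[0]
    if tag == "prim":
        return f"{c[1]}({a})"
    if tag == "not":
        return f"∼{tr(a, c[1])}"
    if tag == "and":
        return f"({tr(a, c[1])} ∧ {tr(a, c[2])})"
    if tag == "or":
        return f"({tr(a, c[1])} ∨ {tr(a, c[2])})"
    if tag == "exists":
        return f"[∃x {c[1]}({a}, x)]{tr('x', c[2])}"
    if tag == "forall":
        return f"[∀x {c[1]}({a}, x)]{tr('x', c[2])}"
    raise ValueError(tag)

# (a : ∃r.C)τ = [∃x r(a, x)]C(x)
rendered = tr("a", ("exists", "r", ("prim", "C")))
```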
          <p>Lemma 2 ⟨T, A⟩ ⊨ALCS⋆ ϕ iff Tτ ∪ Aτ ⊨Sf⋆de ϕτ
Proof. As ALCS⋆ models are a fortiori ALCAC models (mutatis mutandis for
Sf⋆de and AC models), this lemma follows by the argument of Lemma 1.</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>Proof-Theoretic Agreement</title>
          <p>
            Two further lemmas ensure proof-theoretic agreement:
Lemma 3 ⟨T, A⟩ ⊢ALCAC ϕ iff Tτ ∪ Aτ ⊢ACrQ ϕτ
Proof. The rules ALC-AC are selected to agree with those of the tableau
calculus ACrQ of [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ]. By comparing the two calculi, whenever a rule is applied to
a signed sentence [v] ϕ, yielding further branches with signed sentences [vi] ϕi,
an analogous rule applied to [v] ϕτ yields the signed sentences [vi] ϕiτ. Thus,
applying ·τ to every sentence converts any ALCAC tableau to an ACrQ tableau.
Conversely, any ACrQ tableau in which all initial formulae are in the range of
·τ will include only sentences in the range of ·τ; moreover, translating each node
with ·τ⁻¹ yields an isomorphic ALCAC tableau. It follows
that ⟨T, A⟩ ⊢ALCAC ϕ holds precisely when Tτ ∪ Aτ ⊢ACrQ ϕτ holds.
Lemma 4 ⟨T, A⟩ ⊢ALCS⋆ ϕ iff Tτ ∪ Aτ ⊢SrQ ϕτ
Proof. In [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ], the tableau calculus SrQ is obtained by adding to ACrQ rules that, in the
present terminology, ensure AC-closure of a branch whenever [e] ϕ and [e] ¬ϕ
appear on that branch. I.e., a branch AC-closes with the rules if and only if it
S-closes without the rules. Thus, Lemma 3 applies to establish the same
correspondence between ALCS⋆ and SrQ.
          </p>
          <p>
            Lemmas 1–4 suffice to show the adequacy of the proof theory with respect to
the semantics. We conclude the section with the following theorems:
Theorem 1 ⟨T, A⟩ ⊢ALCAC ϕ iff ⟨T, A⟩ ⊨ALCAC ϕ
Proof. By Lemma 3, the left-hand side is equivalent to Tτ ∪ Aτ ⊢ACrQ ϕτ.
By the results of [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ], this is equivalent to Tτ ∪ Aτ ⊨AC ϕτ. Finally, appeal to
Lemma 1 ensures equivalence with the right-hand side ⟨T, A⟩ ⊨ALCAC ϕ.
Theorem 2 ⟨T, A⟩ ⊢ALCS⋆ ϕ iff ⟨T, A⟩ ⊨ALCS⋆ ϕ
Proof. Follow the form of Theorem 1, including appeals to Lemmas 2 and 4.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Concluding Remarks</title>
      <p>As knowledge-driven, conversational artificial intelligence is introduced to more
domains, the necessity of knowledge representation over intentional contexts
becomes increasingly pressing. The above discussion is intended as an opening
move to address this problem by proffering two natural candidate solutions.</p>
      <p>Much remains to be done, e.g., determination of the complexity of the
calculi, producing implementations of the systems, and empirical verification (or
refutation) that ALCAC and ALCS⋆ are found satisfactory by users. We
conclude, though, by providing some examples that bear on the issues described in
Examples 1 and 2, showing that the above systems are sufficiently strong for the
cases but weak enough to avoid the overgeneration of stock ALC.</p>
      <p>Example 3 Let A = {a : C, a : D}. Then A ⊨ALCAC C ⊓ D ⊑ C. A closed fitting
tableau demonstrates this fact: because for each j &lt; i, either uj = e or vj = e,
each branch beneath the node [e] [a : C ⊓ D] will close.</p>
      <p>A similar tableau proof can easily be produced (and is thus left to the reader)
demonstrating that if A = {a : C ⊓ D}, then A ⊨ALCAC a : D ⊓ C, as Example 2
required.</p>
      <p>However, the forms of problematic inferences identified in Examples 1 and 2
can be shown to fail in ALCAC and ALCS⋆. For example, Example 1 identified
that in ALC, one inconsistent classification of a term corrupts all of an agent’s
correct classifications in an ABox. Such mistakes are not catastrophic in ALCAC:
Example 4 Let A = {a : C, b : ¬C, ⟨m, b⟩ : r, m : ∀r.C}. Then A ⊭ALCAC a : ¬C.
A model in which I(a : C) = ⟨t, f⟩, I(b : C) = ⟨t, t⟩, and I(⟨m, b⟩ : r) = ⟨t, f⟩ can
be recognized as a counterexample.</p>
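      <p>The countermodel of Example 4 can be checked computationally. The three published atomic values appear below; the values for the remaining atoms over the domain {a, b, m} are our own choices, selected so that m : ∀r.C is satisfied (a sketch reusing the quantifier clauses of Definitions 2 and 4):</p>
```python
T, E, F = "t", "e", "f"

def exists_dot(X):
    if all(u == E or v == E for u, v in X):
        return E
    return T if (T, T) in X else F

def forall_dot(X):
    if all(u == E or v == E for u, v in X):
        return E
    return F if (T, F) in X or (T, E) in X else T

def forall_ac(X):
    return (forall_dot({(p[0], q[0]) for p, q in X}),
            exists_dot({(p[0], q[1]) for p, q in X}))

# Published values: I(a : C) = ⟨t, f⟩, I(b : C) = ⟨t, t⟩, I(⟨m, b⟩ : r) = ⟨t, f⟩.
# The remaining atoms (for m and the pairs ⟨m, a⟩, ⟨m, m⟩) are our choices.
C = {"a": (T, F), "b": (T, T), "m": (F, T)}      # I(x : C)
r_m = {"a": (F, T), "b": (T, F), "m": (F, T)}    # I(⟨m, x⟩ : r)

m_forall = forall_ac({(r_m[x], C[x]) for x in ("a", "b", "m")})
a_not_C = (C["a"][1], C["a"][0])                 # I(a : ¬C) = ∼¨ I(a : C)

assert m_forall[0] == T    # m : ∀r.C is true in the model
assert C["b"][1] == T      # b : ¬C is true
assert a_not_C[0] != T     # yet a : ¬C is not: the unwanted inference fails
```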
      <p>Example 2 identified how the closure of an ABox under ALC may introduce
subject-matter of which an agent is unaware or in which the agent lacks proficiency.
The following shows how ALCAC prevents this:
Example 5 Let A = {a : C}. Then A ⊭ALCAC a : C ⊓ (C ⊔ D). A model in which
I(a : C) = ⟨t, f⟩ and I(a : D) = ⟨e, e⟩ can be confirmed to be a counterexample.
Of course, the adequacy of the systems introduced here will be determined by
users’ experiences with implementations. Prior to such surveys, however, they
can at least be seen to enjoy a better fit to the use-cases than ALC and can
therefore be recognized as a step in the right direction.</p>
      <p>A reviewer has remarked that Example 3 reflects an inability to reason from an empty
ABox. In the systems described in this paper, one is not prevented from reasoning
from an empty TBox and ABox. But an empty TBox and ABox pair is agnostic about
an agent’s awareness, constituting insufficient grounds to infer anything about the
concepts with which an agent is familiar or proficient.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Angell</surname>
          </string-name>
          , R.B.:
          <article-title>Three systems of first degree entailment</article-title>
          .
          <source>Journal of Symbolic Logic</source>
          <volume>42</volume>
          (
          <issue>1</issue>
          ),
          <fpage>147</fpage>
          -
          <lpage>148</lpage>
          (
          <year>1977</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P</given-names>
          </string-name>
          . (eds.):
          <article-title>The Description Logic Handbook</article-title>
          . Cambridge University Press, Cambridge,
          <volume>1</volume>
          <fpage>edn</fpage>
          . (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Belnap</surname>
          </string-name>
          , Jr., N.D.:
          <article-title>How a computer should think</article-title>
          . In: Ryle,
          <string-name>
            <surname>G</surname>
          </string-name>
          . (ed.)
          <source>Contemporary Aspects of Philosophy</source>
          , pp.
          <fpage>30</fpage>
          -
          <lpage>56</lpage>
          . Oriel Press, Stockfield (
          <year>1977</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bock</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fokoue</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haase</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoekstra</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruttenberg</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>OWL 2 Web Ontology Language Structural Specification and Functional-Style Syntax</article-title>
          . W3C, Second edn. (
          <year>2012</year>
          ), http://www.w3.org/TR/owl2- syntax/
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Carnielli</surname>
            ,
            <given-names>W.A.</given-names>
          </string-name>
          :
          <article-title>Systematization of finite many-valued logics through the method of tableaux</article-title>
          .
          <source>Journal of Symbolic Logic</source>
          <volume>52</volume>
          (
          <issue>2</issue>
          ),
          <fpage>473</fpage>
          -
          <lpage>493</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Correia</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Grounding and truth functions</article-title>
          .
          <source>Logique et Analyse</source>
          <volume>53</volume>
          (
          <issue>211</issue>
          ),
          <fpage>251</fpage>
          -
          <lpage>279</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Cresswell</surname>
            ,
            <given-names>M.J.:</given-names>
          </string-name>
          <article-title>Hyperintensional logic</article-title>
          .
          <source>Studia Logica</source>
          <volume>34</volume>
          ,
          <fpage>25</fpage>
          -
          <lpage>38</lpage>
          (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Daniels</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A note on negation</article-title>
          .
          <source>Erkenntnis</source>
          <volume>32</volume>
          (
          <issue>3</issue>
          ),
          <fpage>423</fpage>
          -
          <lpage>429</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halpern</surname>
          </string-name>
          , J.Y.:
          <article-title>Belief, awareness, and limited reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>34</volume>
          (
          <issue>1</issue>
          ),
          <fpage>39</fpage>
          -
          <lpage>76</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ferguson</surname>
            ,
            <given-names>T.M.</given-names>
          </string-name>
          :
          <article-title>Faulty Belnap computers and subsystems of FDE</article-title>
          .
          <source>Journal of Logic and Computation</source>
          <volume>26</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1617</fpage>
          -
          <lpage>1636</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ferguson</surname>
          </string-name>
          , T.M.
          <article-title>: Tableaux and restricted quantification for systems related to weak Kleene logic</article-title>
          .
          <source>In: Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX</source>
          <year>2021</year>
          ). Springer, Switzerland (
          <year>2021</year>
          ), to appear
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Fine</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Angellic content</article-title>
          .
          <source>Journal of Philosophical Logic</source>
          <volume>45</volume>
          (
          <issue>2</issue>
          ),
          <fpage>199</fpage>
          -
          <lpage>226</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Kleene</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          : Introduction to Metamathematics. North-Holland Publishing Company, Amsterdam (
          <year>1952</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brachman</surname>
            ,
            <given-names>R.J.:</given-names>
          </string-name>
          <article-title>An introduction to description logics</article-title>
          . In: Baader,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>McGuinnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Nardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Patel-Schneider</surname>
          </string-name>
          ,
          <string-name>
            <surname>P</surname>
          </string-name>
          . (eds.)
          <source>The Description Logic Handbook</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
          . Cambridge University Press, Cambridge,
          <volume>1</volume>
          <fpage>edn</fpage>
          . (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Yablo</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          : Aboutness. Princeton University Press, Princeton (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>