<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cognitively adequate complexity of reasoning in a description logic: extended abstract</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jelle Tjeerd Fokkens</string-name>
          <email>tjeerdfokkens@gu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fredrik Engström</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Gothenburg</institution>
          ,
          <addr-line>Universitetsplatsen 1, 405 30, Göteborg</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The problem of finding automated and cognitively adequate explanations for entailments in knowledge bases is tackled by modelling a deductive reasoning process with the cognitive architecture act-r. This results in the model sharp, which simulates the reasoning process of a human executing an algorithm for deciding inconsistency of an 𝒜ℒℰ ABox. With sharp one can make certain predictions about the inference time of this reasoning process. Based on the inference time, two complexity measures are defined that are expectedly cognitively adequate by design.</p>
        <p>CAKR'23: The 2nd International Workshop on Cognitive Aspects of Knowledge Representation, September 4, 2023, Rhodes, Greece. https://www.gu.se/om-universitetet/hitta-person/fredrikengstrom (F. Engström)</p>
      </abstract>
      <kwd-group>
        <kwd>cognitive modelling</kwd>
        <kwd>description logic</kwd>
        <kwd>complexity measure</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. The Description Logic 𝒜ℒℰ</title>
      <p>An ABox is a set of expressions of the form a ∶ C and (a, b) ∶ r, with a and b individual names, C a concept and r a role. Roles are primitive in this logic, but the concepts are constructed from a set of primitive concepts using the constructors ⊓, ¬, ∃r.C and ∀r.C. Note that negation only applies to primitive concept names.</p>
      <p>An interpretation is a pair ℐ = (Δℐ, ⋅ℐ), where Δℐ is the domain and ⋅ℐ the interpretation function. It interprets (∀r.C)ℐ = {d ∈ Δℐ ∣ ∀e. (d, e) ∈ rℐ → e ∈ Cℐ} and (∃r.C)ℐ = {d ∈ Δℐ ∣ ∃e. (d, e) ∈ rℐ ∧ e ∈ Cℐ}, while the other constructors are straightforwardly interpreted. Furthermore, ℐ ⊧ a ∶ C if aℐ ∈ Cℐ, and ℐ ⊧ (a, b) ∶ r if (aℐ, bℐ) ∈ rℐ.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The ABox Inconsistency Algorithm</title>
      <p>The tableau-based algorithm inconsistent() decides inconsistency for a given ABox, as proved in [2, pp. 78–81]. inconsistent() applies syntax expansion rules until either none can be applied anymore, or a clash is derived, i.e. for some individual name a and concept name A, both a ∶ A and a ∶ ¬A are derived.</p>
      <p>For example, an application of the ∃-rule to {a ∶ ∃r.C} amounts to deriving both (a, b) ∶ r and b ∶ C, where b is new; the other syntax expansion rules function straightforwardly. If a clash is found, inconsistent() outputs ‘inconsistent’ and else it outputs ‘consistent’.</p>
      <p>Each next assertion to apply a rule to is nondeterministically selected; the list of assertions in order of being selected until decision is called the run.</p>
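      <p>The expansion loop just described can be sketched as follows. This is our own toy rendering of a tableau procedure of this kind (the tuple encoding of assertions and the fresh-witness naming are invented), not the implementation proved correct in [2]:</p>

```python
# Toy tableau for ALE ABox inconsistency: apply ⊓-, ∃- and ∀-rules
# until a clash (a : A and a : ¬A) appears or no rule adds anything.
from itertools import count

def inconsistent(abox):
    facts = set(abox)          # ("inst", a, C) and ("rel", a, b, r)
    fresh = count()
    while True:
        # clash check: some a : ¬A together with a : A
        if any(f[0] == "inst" and isinstance(f[2], tuple) and f[2][0] == "not"
               and ("inst", f[1], f[2][1]) in facts for f in facts):
            return True
        new = set()
        for f in list(facts):
            if f[0] != "inst" or not isinstance(f[2], tuple):
                continue
            _, a, c = f
            if c[0] == "and":                       # ⊓-rule
                new |= {("inst", a, c[1]), ("inst", a, c[2])}
            elif c[0] == "exists":                  # ∃-rule: fresh witness b
                if not any(g[0] == "rel" and g[1] == a and g[3] == c[1]
                           and ("inst", g[2], c[2]) in facts for g in facts):
                    b = ("w", next(fresh))
                    new |= {("rel", a, b, c[1]), ("inst", b, c[2])}
            elif c[0] == "forall":                  # ∀-rule
                new |= {("inst", g[2], c[2]) for g in facts
                        if g[0] == "rel" and g[1] == a and g[3] == c[1]}
        if new <= facts:
            return False       # no rule applicable, no clash: consistent
        facts |= new

# {a : ∃r.(A ⊓ ¬A)} is inconsistent:
print(inconsistent({("inst", "a", ("exists", "r", ("and", "A", ("not", "A"))))}))  # True
```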
    </sec>
    <sec id="sec-3">
      <title>3. The Cognitive Architecture act-r</title>
      <p>The cognitive architecture act-r is an integrated theory of human cognitive behaviour, as well as software with which this theory can be used to build various models.</p>
      <p>act-r works by manipulating chunks, which are lists of slot-value pairs that codify information. These chunks can be temporarily stored in buffers that are connected to the procedural memory, where manipulations take place through the application of production rules. The latter are condition-action rules, where the conditions apply to the chunks present in the buffers and the actions modify them and create new ones.</p>
      <p>Each chunk has an associated activation, which is a numerical value that determines the chunk’s retrieval process as well as possible learning effects. The value of a chunk’s activation is determined by the times at which the chunk is used, as well as some random noise.</p>
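      <p>The standard ACT-R base-level learning equation makes this concrete: activation is the logarithm of a sum of decaying traces of past uses, plus logistically distributed noise. A sketch of our own (the parameter values are ACT-R's common defaults, not values from the paper):</p>

```python
# ACT-R-style base-level activation: ln(sum_j (now - t_j)^-d) + noise.
import math
import random

def base_level_activation(use_times, now, d=0.5, s=0.25):
    """Base-level activation of a chunk.

    use_times: past times at which the chunk was used (all < now)
    d:         decay rate (common ACT-R default 0.5)
    s:         scale of the logistic transient noise (s=0 disables it)
    """
    base = math.log(sum((now - t) ** -d for t in use_times))
    u = min(max(random.random(), 1e-9), 1 - 1e-9)
    noise = s * math.log(u / (1 - u))     # logistic noise sample
    return base + noise

# More frequent and more recent use yields higher activation (noise off):
old = base_level_activation([1.0], now=10.0, s=0.0)
recent = base_level_activation([1.0, 8.0, 9.5], now=10.0, s=0.0)
print(old < recent)  # True
```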
      <p>
        More information can be found in, for example, the act-r tutorial [3], or in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. The Model sharp</title>
      <p>We implemented the algorithm inconsistent() in act-r; the resulting model is called sharp, which stands for Simulating Human ABox Reasoning Performance. The idea is that sharp can be used to estimate the cognitively adequate complexity of the 𝒜ℒℰ ABox inconsistency task, so that justifications of inconsistencies in knowledge bases can be selected according to this measure.</p>
      <p>sharp differs slightly from the original inconsistent() algorithm to accommodate certain characteristics of act-r. The main challenge is to keep sharp from making the same inference more than once, which is ensured by the use of a variety of different lists that store certain expressions already derived. One important consequence of this is that sharp differs from the inconsistent() algorithm in that it cannot process ABoxes that are too large, because the above-mentioned lists would overflow. Other issues are discussed in [5].</p>
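      <p>The role of these lists can be pictured with a small sketch of our own (the fixed capacity is an invented stand-in for act-r's list and buffer limits, not a number from sharp): an expression is only derived if it is not already stored, and a full list means the ABox is too large for the model.</p>

```python
class DerivedList:
    """Stores expressions already derived: blocks duplicates, bounded size."""

    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity          # invented stand-in for act-r limits

    def add(self, expr):
        if expr in self.items:
            return False                  # already derived: skip re-inference
        if len(self.items) >= self.capacity:
            raise OverflowError("ABox too large for this model")
        self.items.append(expr)
        return True

derived = DerivedList(capacity=2)
print(derived.add(("inst", "a", "A")))    # True: stored as newly derived
print(derived.add(("inst", "a", "A")))    # False: duplicate blocked
```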
      <p>For convenience, sharp’s production rules are grouped into five components, each with its own function: finding a clash, selecting the next assertion(s) and applying the respective syntax expansion rules.</p>
    </sec>
    <sec id="sec-5">
      <title>5. The Predictions</title>
      <p>With sharp we can predict the inference times that correspond to a given 𝒜ℒℰ ABox. An example of the simulated inference times of the ABoxes 𝒜0 = {a ∶ (A ⊓ (B ⊓ (C ⊓ (D ⊓ ¬A))))} and 𝒜1 = {a ∶ (A ⊓ B), a ∶ (B ⊓ C), a ∶ (C ⊓ D), a ∶ (D ⊓ ¬A)} is given in figure 1.</p>
      <p>The inference times for 𝒜1 exhibit a bigger spread than for 𝒜0. Likewise, with sharp one can predict that a change of element or concept name does not change the inference time in certain situations. Moreover, a certain class of ABoxes displays exponential scaling of inference time with syntactic complexity because of AND-branching [2, p. 107]. Finally, the order of presenting the ABox formulas and the order of conjuncts within conjunctions does not affect the inference time according to sharp.</p>
      <p>The paper then defines two complexity measures on ABoxes based on sharp. One is the mean inference time, the other is the mean inference time of the fastest runs; both are taken from a large enough sample of simulations to achieve a desirably robust accuracy. The fastest runs typically select no assertions that are irrelevant for deriving the clash.</p>
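      <p>Read this way, both measures are simple statistics over a sample of simulated runs. In this sketch of our own, the fraction of runs counted as “fastest” is our assumption, not a value from the paper:</p>

```python
def mean_inference_time(times):
    """Mean simulated inference time over all runs in the sample."""
    return sum(times) / len(times)

def mean_fastest(times, fraction=0.1):
    """Mean inference time of the fastest `fraction` of the runs."""
    k = max(1, int(len(times) * fraction))
    return sum(sorted(times)[:k]) / k

# Ten simulated runs of the same ABox (made-up times, in seconds):
runs = [2.1, 2.3, 5.0, 2.2, 7.4, 2.0, 3.3, 2.6, 6.1, 2.4]
print(mean_inference_time(runs))  # ~3.54: mean over all runs
print(mean_fastest(runs))         # 2.0: the single fastest run
```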
      <p>The two definitions correspond, respectively, to humans not using or using non-trivial heuristics in selecting which assertion(s) to make an inference on. In the near future, experiments are planned to test the predictions made by sharp.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Patel-Schneider</surname>
          </string-name>
          ,
          <source>Description Logic Handbook</source>
          , Cambridge University Press, Cambridge,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sattler</surname>
          </string-name>
          ,
          <source>An Introduction to Description Logic</source>
          , Cambridge University Press, Cambridge,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          act-r software, http://act-r.psy.cmu.edu/software/,
          <year>2023</year>
          . Accessed: 2022-12-15.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Whitehill</surname>
          </string-name>
          ,
          <article-title>Understanding act-r - an outsider's perspective</article-title>
          (
          <year>2013</year>
          ). URL: https://doi.org/10.48550/arXiv.1306.0125.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Fokkens</surname>
          </string-name>
          , Modelling the logical mind,
          <year>2023</year>
          . URL: https://hdl.handle.net/2077/74797.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>