<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Power and Limitations of Examples for Description Logic Concepts (Extended Abstract)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Balder ten Cate</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raoul Koudijs</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ana Ozaki</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Amsterdam</institution>
          ,
          <addr-line>ILLC - Postbus 94242, 1090 GE Amsterdam</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bergen</institution>
          ,
          <addr-line>HIB - Thormøhlens gate 55 5006 Bergen</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Oslo</institution>
          ,
          <addr-line>Gaustadalléen 23B 0373 Oslo</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Labeled examples (i.e., positive and negative examples) are an attractive medium for communicating complex concepts. They are useful for deriving concept expressions (such as in concept learning, interactive concept specification, and concept refinement) as well as for illustrating concept expressions to a user or domain expert. We investigate the power of labeled examples for describing description logic concepts. Specifically, we systematically study the existence and efficient computability of finite characterizations, i.e., finite sets of labeled examples that uniquely characterize a single concept, for a wide variety of description logics between ℰℒ and 𝒜ℒ𝒞𝒬ℐ, both without an ontology and in the presence of a DL-Lite ontology. Finite characterizations are relevant for debugging purposes, and their existence is a necessary condition for exact learnability with membership queries.</p>
      </abstract>
      <kwd-group>
<kwd>Finite Characterisations</kwd>
        <kwd>Concept Languages</kwd>
        <kwd>Concept Learning</kwd>
        <kwd>Membership Queries</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Labeled examples (i.e., positive and negative examples) are an attractive medium for communicating
complex concepts. They are useful as data for deriving concept expressions (such as in concept learning,
interactive concept specification, and example-driven concept refinement) as well as for illustrating
concept expressions to a user or domain expert [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6 ref7">1, 2, 3, 4, 5, 6, 7</xref>
        ]. In this extended abstract we report on
a recent study [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] into the utility of labeled examples for describing description logic concepts.
      </p>
      <sec id="sec-1-1">
        <title>Example 1</title>
        <p>In the description logic ℰℒ, we may define the concept of an e-bike by means of the concept expression 𝐶 := Bicycle ⊓ ∃Contains.Battery. Suppose we wish to illustrate 𝐶 by a collection of positive and negative examples. What would be a good choice of examples? Take the interpretation ℐ consisting of the following facts: Bicycle(soltera2), Contains(soltera2, li360Wh), Battery(li360Wh); Bicycle(px10), Contains(px10, b12), Basket(b12); Car(teslaY), Contains(teslaY, li81kWh), Battery(li81kWh).</p>
        <p>In the context of this interpretation ℐ, it is clear that soltera2 is a positive example for 𝐶, and that px10 and teslaY are negative examples for 𝐶. In fact, as it turns out, 𝐶 is the only ℰℒ-concept (up to equivalence) that is consistent with, or “fits”, these three labeled examples. In other words, these three labeled examples uniquely characterize 𝐶 within the class of all ℰℒ-concepts. This shows that the above three labeled examples are a good choice of examples; further examples would be redundant. Note, however, that this depends on the choice of the description logic. For instance, the richer concept language 𝒜ℒ𝒞 allows for concept expressions such as Bicycle ⊓ ¬∃Contains.Basket that also fit.</p>
      </sec>
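      <sec>
        <p>The fit relation used in Example 1 can be checked mechanically. The following is a minimal sketch (not from the paper; the tuple encoding and all identifiers are our own) of closed-world evaluation of ℰℒ-concepts over a finite interpretation:</p>
        <preformat>
```python
# Sketch (not from the paper): closed-world evaluation of EL-concepts over a
# finite interpretation. Concepts are nested tuples:
#   ('name', A)        -- atomic concept name A
#   ('and', C, D)      -- conjunction C ⊓ D
#   ('exists', r, C)   -- existential restriction ∃r.C

def holds(interp, d, concept):
    """Check whether element d satisfies the given concept in interp."""
    tag = concept[0]
    if tag == 'name':
        return d in interp['concepts'].get(concept[1], set())
    if tag == 'and':
        return holds(interp, d, concept[1]) and holds(interp, d, concept[2])
    if tag == 'exists':
        pairs = interp['roles'].get(concept[1], set())
        return any(holds(interp, e, concept[2]) for (x, e) in pairs if x == d)
    raise ValueError('unknown constructor')

# The interpretation from Example 1.
I = {
    'concepts': {
        'Bicycle': {'soltera2', 'px10'},
        'Car': {'teslaY'},
        'Battery': {'li360Wh', 'li81kWh'},
        'Basket': {'b12'},
    },
    'roles': {
        'Contains': {('soltera2', 'li360Wh'), ('px10', 'b12'),
                     ('teslaY', 'li81kWh')},
    },
}

ebike = ('and', ('name', 'Bicycle'),
         ('exists', 'Contains', ('name', 'Battery')))

print(holds(I, 'soltera2', ebike))  # True  (positive example)
print(holds(I, 'px10', ebike))      # False (contains a basket, not a battery)
print(holds(I, 'teslaY', ebike))    # False (not a bicycle)
```
        </preformat>
        <p>Under this closed-world reading, soltera2 is the only positive example among the three, matching the discussion above.</p>
      </sec>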
      <sec id="sec-1-3">
        <title>Example 2</title>
        <p>This example involves 𝒜ℒ𝒞-concepts over a signature (Σ_C, Σ_R) consisting of the concept names Σ_C = {Animal, Cat, Dog, Red} and no role names, i.e., Σ_R = ∅. Let 𝒪 furthermore be the DL-Lite ontology {Cat ⊑ Animal, Dog ⊑ Animal, Cat ⊑ ¬Dog}, expressing that Cat and Dog are disjoint subconcepts of Animal. Consider the concept 𝐶 := Cat ⊓ Red. If we wish to illustrate 𝐶 by a collection of positive and negative examples, what would be a good choice of examples? Take the interpretation ℐ whose domain consists of eight elements: four elements satisfying, respectively, the concept-name sets {Cat, Animal, Red}, {Dog, Animal, Red}, {Animal, Red}, and {Red}, together with four “primed” counterparts satisfying the same sets with Red removed, i.e., {Cat, Animal}, {Dog, Animal}, {Animal}, and ∅.</p>
      </sec>
      <sec id="sec-1-7">
        <p>Note that ℐ satisfies the ontology 𝒪. In the context of this interpretation, it is clear that:</p>
        <p>• the element satisfying {Cat, Animal, Red} is a positive example for 𝐶;</p>
        <p>• all seven other elements are negative examples for 𝐶.</p>
      </sec>
      <sec id="sec-1-8">
        <p>As it turns out, every 𝒜ℒ𝒞-concept (over the specified signature) that fits these labeled examples is equivalent to 𝐶 under the ontology 𝒪. In other words, the above labeled examples uniquely characterize 𝐶 relative to 𝒪 and the signature (Σ_C, Σ_R). On the other hand, these labeled examples do not uniquely characterize 𝐶 in the absence of an ontology because, for instance, Cat ⊓ Red ⊓ ¬Dog and Cat ⊓ Red ⊓ Animal also fit but are not equivalent to 𝐶 in the absence of the ontology.</p>
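        <p>The requirement that every example interpretation satisfies the ontology can be made concrete. Below is a small sketch (our own encoding; the element names are ours, since they are not fixed above) that checks the three DL-Lite axioms of Example 2 against the labeling of a finite interpretation:</p>
        <preformat>
```python
# Sketch (our own encoding): checking that a finite interpretation satisfies
# the DL-Lite ontology of Example 2: Cat ⊑ Animal, Dog ⊑ Animal, Cat ⊑ ¬Dog.

def satisfies_ontology(labels):
    """labels maps each element to the set of concept names it satisfies."""
    for element, names in labels.items():
        if 'Cat' in names and 'Animal' not in names:
            return False            # violates Cat ⊑ Animal
        if 'Dog' in names and 'Animal' not in names:
            return False            # violates Dog ⊑ Animal
        if 'Cat' in names and 'Dog' in names:
            return False            # violates Cat ⊑ ¬Dog (disjointness)
    return True

# Eight elements as in Example 2 (element names are ours): four elements and
# 'primed' copies obtained by removing Red.
labels = {
    'a':  {'Cat', 'Animal', 'Red'},   # the positive example for Cat ⊓ Red
    'b':  {'Animal', 'Red'},
    'c':  {'Red'},
    'd':  {'Dog', 'Animal', 'Red'},
    'a1': {'Cat', 'Animal'},
    'b1': {'Animal'},
    'c1': set(),
    'd1': {'Dog', 'Animal'},
}

print(satisfies_ontology(labels))  # True
```
        </preformat>
        <p>Note that elements satisfying, e.g., Cat ⊓ Red ⊓ Dog are rejected by this check, which is exactly why such unnatural examples cannot occur once the ontology is imposed.</p>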
        <sec id="sec-1-8-1">
          <p>An ontology can help reduce the number of examples needed to uniquely characterize 𝐶. Moreover, it can help avoid unnatural examples. For instance, without an ontology, a unique characterization of 𝐶 would need to include negative examples satisfying Cat ⊓ Red ⊓ Dog and Cat ⊓ Red ⊓ ¬Animal.</p>
          <p>
            Motivated by the above examples, we investigate the existence and efficient computability of finite characterizations, i.e., finite sets of labeled examples that uniquely characterize a single concept. Finite characterizations are relevant not only for illustrating a complex concept to a user, but also for debugging purposes (e.g., to verify the correctness of a concept expression obtained using machine learning techniques). Their existence is a necessary condition for exact learnability with membership queries [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. Furthermore, from a more fundamental point of view, by studying the existence of finite characterizations, we gain insight into the power and limitations of labeled examples as a medium for describing concepts.
          </p>
          <p>
            Finite characterisations were first studied in the computational learning theory literature under the name of teaching sets, with a corresponding notion of teaching dimension measuring the maximal size of minimal teaching sets of some class of concepts [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. Several recent works study finite characterisations for description logic concepts ([
            <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
            ] for ℰℒℐ; [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] for temporal instance queries); a systematic study of finite characterizations for syntactic fragments of modal logic was carried out in [14].
          </p>
          <p>Figure 1 (summary of Thm. 1 and 2) depicts the lattice of concept languages ℒ(O) between ℒ(∃,⊓) and ℒ(∀,∃,≥,−,⊓,⊔,⊤,⊥,¬) = 𝒜ℒ𝒞𝒬ℐ, indicating which fragments admit finite characterizations and which admit polynomial-time computable characterizations.</p>
          <p>
            In this prior literature, two types of examples have been considered, namely open-world examples (cf. [
            <xref ref-type="bibr" rid="ref12">12, 15</xref>
            ]) and closed-world examples (e.g. [
            <xref ref-type="bibr" rid="ref13">13, 16</xref>
            ]). An open-world example is a pair (𝒜, 𝑎), where 𝒜 is an ABox and 𝑎 is an individual name. Such a pair is a positive example for a concept expression 𝐶 relative to an ontology 𝒪 if 𝑎 belongs to the certain answers of 𝐶 w.r.t. 𝒪, and it is a negative example otherwise.
          </p>
          <p>In contrast, a closed-world example is a pair (ℐ, 𝑑), where ℐ is a finite interpretation satisfying the ontology 𝒪 and 𝑑 is an element in the domain of ℐ. Such a pair is a positive example for a concept expression 𝐶 if 𝑑 ∈ 𝐶<sup>ℐ</sup>, and it is a negative example otherwise.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Contributions</title>
      <p>
        First contribution: We study the difference in descriptive power between closed-world examples and open-world examples for description logic concepts. It was shown in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] that ℰℒℐ admits finite characterizations with open-world examples under DL-Lite ontologies. We show that the same does not hold with closed-world examples. On the other hand, for non-monotone concept languages, e.g. with ∀ or ¬, it is necessary to work with closed-world rather than open-world examples. For instance, concepts of the form ∀𝑟.𝐴 and ∀𝑟.⊥ do not have any positive open-world examples (relative to the empty ontology), and hence cannot be uniquely characterized by open-world examples.
      </p>
      <p>
        Second contribution: We systematically study the existence of, and the complexity of computing, finite characterisations for concept expressions in a wide range of description logics, using closed-world examples. Specifically, we look at concept languages ℒ(O) generated by a set of connectives O, where {∃, ⊓} ⊆ O ⊆ {∀, ∃, ≥, −, ⊓, ⊔, ⊤, ⊥, ¬}. In other words, we look at fragments of the description logic 𝒜ℒ𝒞𝒬ℐ that contain at least the ∃ and ⊓ constructors from ℰℒ. Within this framework, we obtain an almost complete classification of the concept languages that admit finite characterizations.
      </p>
      <p>Theorem 1. Let {∃, ⊓} ⊆ O ⊆ {∀, ∃, ≥, −, ⊓, ⊔, ⊤, ⊥, ¬}.</p>
      <p>1. If O is a subset of {∃, −, ⊓, ⊔, ⊤, ⊥}, of {∀, ∃, ⊔, ⊓}, or of {∀, ∃, ≥, ⊓, ⊤}, then ℒ(O) admits finite characterizations with closed-world examples.</p>
      <sec id="sec-2-1">
        <p>2. Otherwise, ℒ(O) does not admit finite characterizations with closed-world examples, except possibly if {≥, −, ⊓} ⊆ O ⊆ {∃, ≥, −, ⊓, ⊤}.</p>
        <p>
          The above theorem identifies three maximal fragments of 𝒜ℒ𝒞𝒬ℐ that admit finite characterisations. It leaves open the status of essentially only two concept languages, namely ℒ(∃, ≥, −, ⊓) and ℒ(∃, ≥, −, ⊓, ⊤) (note that ∃ is definable in terms of ≥). The proof builds on prior results from [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] and [14]. Our main novel technical contributions are a construction of finite characterisations for ℒ(∀, ∃, ≥, ⊓, ⊤) and complementary negative results for ℒ(≥, ⊥), ℒ(≥, ⊔), and ℒ(∀, ∃, −, ⊓). The construction of finite characterisations for ℒ(∀, ∃, ≥, ⊓, ⊤) is non-elementary. We give an elementary (doubly exponential) construction for ℒ(∃, ≥, ⊓, ⊤), via a novel polynomial-time algorithm for constructing frontiers (i.e., complete sets of minimal weakenings) of ℒ(∃, ≥, ⊓, ⊤)-concepts. It also follows that subsumptions between such concepts can be checked in polynomial time.
        </p>
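        <p>The polynomial-time subsumption check mentioned here concerns ℒ(∃, ≥, ⊓, ⊤); for the plain ℰℒ fragment (without counting, and without an ontology), the classical structural test can be sketched as follows (our own encoding and identifiers, for illustration only):</p>
        <preformat>
```python
# Sketch (ours): structural subsumption for EL-concepts without an ontology.
# A concept is a pair (names, succs): a set of concept names and a list of
# (role, subconcept) existential restrictions ∃role.subconcept.

def subsumed(c, d):
    """Return True iff concept c is subsumed by concept d (c ⊑ d)."""
    names_c, succs_c = c
    names_d, succs_d = d
    if not names_d.issubset(names_c):
        return False
    # every existential restriction of d must be matched by one of c
    return all(
        any(r2 == r and subsumed(c2, d2) for (r2, c2) in succs_c)
        for (r, d2) in succs_d
    )

ebike = ({'Bicycle'}, [('Contains', ({'Battery'}, []))])
bike  = ({'Bicycle'}, [])

print(subsumed(ebike, bike))  # True: every e-bike is a bicycle
print(subsumed(bike, ebike))  # False
```
        </preformat>
        <p>This recursion runs in polynomial time in the sizes of the two concept trees; extending such a test to qualified number restrictions is part of what the frontier construction above handles.</p>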
      </sec>
      <sec id="sec-2-2">
        <p>Next, we study which concept languages ℒ(O) admit polynomial-time computable characterisations, i.e., have a polynomial-time algorithm to compute finite characterisations.</p>
        <p>Theorem 2. Let {∃, ⊓} ⊆ O ⊆ {∀, ∃, ≥, −, ⊓, ⊔, ⊤, ⊥, ¬}.</p>
      </sec>
      <sec id="sec-2-3">
        <p>1. If O is a subset of {∃, −, ⊓, ⊤, ⊥}, then ℒ(O) admits polynomial-time computable characterizations with closed-world examples.</p>
      </sec>
      <sec id="sec-2-4">
        <p>2. Otherwise, ℒ(O) does not admit polynomial-time computable characterizations with closed-world examples, assuming P ≠ NP.</p>
        <p>
          The first item follows from known results [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]; we prove the second by establishing an exponential lower bound on the size of characterisations for ℒ(∃, ≥, ⊓) and NP-hardness of computing characterisations for ℒ(∀, ∃, ⊓). Theorems 1 and 2 are summarized in Figure 1. Both theorems still hold true when “admits finite (polynomial-time computable) characterisations” is replaced with “is exactly learnable from membership queries in finite (polynomial) time”. On the one hand, the existence of finite (polynomial-size) characterisations is a necessary condition for (polynomial) learnability with membership queries, so the negative results transfer immediately. On the other hand, finite characterisations always enable a brute-force membership query algorithm, and [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] shows that ℒ(∃, −, ⊓, ⊤) is polynomial-time learnable from membership queries.
        </p>
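        <p>The brute-force direction can be illustrated as follows (our own toy encoding, not the paper's algorithm): if each concept in a finitely enumerable class comes with a known finite characterization, a learner can identify the target up to equivalence by posing each characterizing example as a membership query and returning the candidate that fits all the answers.</p>
        <preformat>
```python
# Sketch (ours): exact learning with membership queries via finite
# characterizations. Each candidate concept is represented here only by its
# finite characterization: a mapping from examples to labels (True/False).

def learn(candidates, membership_oracle):
    """Return the name of a candidate consistent with the oracle's answers.

    candidates: dict name -> characterization (dict example -> bool).
    membership_oracle: function example -> bool (True = positive for target).
    """
    answers = {}
    for name, characterization in candidates.items():
        for example in characterization:
            if example not in answers:          # one membership query each
                answers[example] = membership_oracle(example)
        if all(answers[e] == label for e, label in characterization.items()):
            # fits its own uniquely characterizing examples, hence is
            # equivalent to the target
            return name
    return None

# Toy instance: three concepts characterized over examples e1..e3.
candidates = {
    'C1': {'e1': True,  'e2': False},
    'C2': {'e1': False, 'e3': True},
    'C3': {'e2': True,  'e3': False},
}
# Suppose the hidden target is (equivalent to) C2.
target_answers = {'e1': False, 'e2': True, 'e3': True}
print(learn(candidates, target_answers.get))  # prints C2
```
        </preformat>
        <p>The number of queries is bounded by the total size of the characterizations, which is why polynomial-size computable characterizations yield polynomial-time learnability.</p>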
        <p>Finally, we investigate finite characterizations relative to an ontology 𝒪, where we require all the (positive and negative) examples to be interpretations satisfying the ontology 𝒪, and require that every fitting concept is equivalent to the input concept 𝐶 relative to 𝒪 (cf. Example 2).</p>
        <p>Theorem 3. Let {∃, ⊓} ⊆ O ⊆ {∀, ∃, ≥, −, ⊓, ⊔, ⊤, ⊥, ¬}.</p>
      </sec>
      <sec id="sec-2-5">
        <p>1. If O is a subset of {∃, ⊓, ⊤, ⊥}, then ℒ(O) admits finite characterizations with closed-world examples w.r.t. all DL-Lite ontologies.</p>
      </sec>
      <sec id="sec-2-6">
        <p>2. Otherwise, ℒ(O) does not admit finite characterizations with closed-world examples w.r.t. all DL-Lite ontologies, except possibly if {∃, ⊓, ⊔} ⊆ O ⊆ {∃, ⊓, ⊔, ⊤, ⊥}.</p>
      </sec>
      <sec id="sec-2-7">
        <p>In fact, for ℒ(∃, ⊓, ⊤, ⊥)-concepts 𝐶 and DL-Lite ontologies 𝒪 such that 𝐶 is satisfiable w.r.t. 𝒪, a finite characterisation can be computed in polynomial time.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgements</title>
      <p>Balder ten Cate is supported by the European Union’s Horizon 2020 research and innovation programme under grant MSCA-101031081. Raoul Koudijs and Ana Ozaki are supported by the Research Council of Norway, project number 316022, led by Ana Ozaki.</p>
      <p>[14] B. ten Cate, R. Koudijs, Characterising modal formulas with examples, ACM Trans. Comput. Logic (2024). URL: https://doi.org/10.1145/3649461. doi:10.1145/3649461. Just accepted.</p>
      <p>[15] J. C. Jung, V. Ryzhikov, F. Wolter, M. Zakharyaschev, Temporalising unique characterisability and learnability of ontology-mediated queries (extended abstract), in: O. Kutz, C. Lutz, A. Ozaki (Eds.), Proceedings of the 36th International Workshop on Description Logics (DL 2023), co-located with KR 2023 and NMR 2023, Rhodes, Greece, September 2-4, 2023, volume 3515 of CEUR Workshop Proceedings, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3515/abstract-13.pdf.</p>
      <p>[16] L. De Raedt, Logical settings for concept-learning, Artificial Intelligence 95 (1997) 187–201. URL: https://www.sciencedirect.com/science/article/pii/S0004370297000416. doi:10.1016/S0004-3702(97)00041-6.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <article-title>Dl-learner: Learning concepts in description logics</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>10</volume>
          (
          <year>2009</year>
          )
          <fpage>2639</fpage>
          -
          <lpage>2642</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Funk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pulcini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wolter</surname>
          </string-name>
          ,
          <article-title>Learning description logic concepts: When can positive and negative examples be separated?</article-title>
          ,
          <source>in: IJCAI</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1682</fpage>
          -
          <lpage>1688</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Funk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <article-title>Actively learning concept and conjunctive queries under ℰ ℒ- ontologies</article-title>
          , in: IJCAI,
          <year>2021</year>
          , pp.
          <fpage>1887</fpage>
          -
          <lpage>1893</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ozaki</surname>
          </string-name>
          ,
          <article-title>Learning description logic ontologies: Five approaches. where do they stand?</article-title>
          ,
          <source>Künstliche Intell</source>
          .
          <volume>34</volume>
          (
          <year>2020</year>
          )
          <fpage>317</fpage>
          -
          <lpage>327</lpage>
          . URL: https://doi.org/10.1007/s13218-020-00656-9. doi:10.1007/S13218-020-00656-9.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Fanizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>d'Amato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <article-title>DL-FOIL concept learning in description logics</article-title>
          ,
          <source>in: Proc. of ILP</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Iannone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Palmisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fanizzi</surname>
          </string-name>
          ,
          <article-title>An algorithm based on counterfactuals for concept learning in the semantic web</article-title>
          ,
          <source>Appl. Intell</source>
          .
          <volume>26</volume>
          (
          <year>2007</year>
          )
          <fpage>139</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          ,
          <article-title>Concept learning in description logics using refinement operators</article-title>
          ,
          <source>Mach. Learn</source>
          .
          <volume>78</volume>
          (
          <year>2010</year>
          )
          <fpage>203</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>ten Cate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Koudijs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ozaki</surname>
          </string-name>
          ,
          <article-title>On the power and limitations of examples for description logic concepts</article-title>
          ,
          <source>in: Proc. of IJCAI</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>739</fpage>
          -
          <lpage>745</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Angluin</surname>
          </string-name>
          ,
          <article-title>Queries and concept learning</article-title>
          ,
          <source>Mach. Learn</source>
          .
          <volume>2</volume>
          (
          <year>1987</year>
          )
          <fpage>319</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Goldman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kearns</surname>
          </string-name>
          ,
          <article-title>On the complexity of teaching</article-title>
          ,
          <source>Journal of Computer and System Sciences</source>
          <volume>50</volume>
          (
          <year>1995</year>
          )
          <fpage>20</fpage>
          -
          <lpage>31</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0022000085710033. doi:10.1006/jcss.1995.1003.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>ten Cate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dalmau</surname>
          </string-name>
          ,
          <article-title>Conjunctive queries: Unique characterizations and exact learnability</article-title>
          ,
          <source>ACM Trans. Database Syst</source>
          .
          <volume>47</volume>
          (
          <year>2022</year>
          ). URL: https://doi.org/10.1145/3559756. doi:10.1145/3559756.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Funk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <article-title>Frontiers and exact learning of ELI queries under dl-lite ontologies</article-title>
          ,
          <source>in: IJCAI</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>2627</fpage>
          -
          <lpage>2633</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fortin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Konev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ryzhikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Savateev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wolter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zakharyaschev</surname>
          </string-name>
          ,
          <article-title>Unique Characterisability and Learnability of Temporal Instance Queries</article-title>
          ,
          <source>in: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>163</fpage>
          -
          <lpage>173</lpage>
          . URL: https://doi.org/10.24963/kr.2022/17. doi:10.24963/kr.2022/17.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>