<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning Probabilistic Ontologies with Distributed Parameter Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giuseppe Cota</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Riccardo Zese</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Bellodi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabrizio Riguzzi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Evelina Lamma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dipartimento di Ingegneria</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dipartimento di Matematica e Informatica</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Ferrara</institution>
          <addr-line>Via Saragat 1, I-44122 Ferrara, Italy</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>We consider the problem of learning both the structure and the parameters of Probabilistic Description Logics under DISPONTE. DISPONTE ("DIstribution Semantics for Probabilistic ONTologiEs") adapts the distribution semantics for Probabilistic Logic Programming to Description Logics. The system LEAP ("LEArning Probabilistic description logics") learns both the structure and the parameters of DISPONTE knowledge bases (KBs) by exploiting the algorithms CELOE and EDGE. The former stands for "Class Expression Learning for Ontology Engineering" and is used to generate good candidate axioms to add to the KB, while the latter learns the probabilistic parameters and evaluates the KB. EDGE ("Em over bDds for description loGics paramEter learning") is an algorithm for learning the parameters of probabilistic ontologies from data. In order to contain the computational cost, a distributed version of EDGE called EDGEMR was developed. EDGEMR exploits the MapReduce (MR) strategy by means of the Message Passing Interface. In this paper we propose the system LEAPMR, a re-engineered version of LEAP which is able to use distributed parallel parameter learning algorithms such as EDGEMR.</p>
      </abstract>
      <kwd-group>
        <kwd>Probabilistic Description Logics</kwd>
        <kwd>Structure Learning</kwd>
        <kwd>Parameter Learning</kwd>
        <kwd>MapReduce</kwd>
        <kwd>Message Passing Interface</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        a difficult task for humans and data is usually available that could be leveraged
for tuning them and, on the other hand, from the fact that in some domains
there exist poorly structured knowledge bases which could be improved [
        <xref ref-type="bibr" rid="ref10 ref9">10, 9</xref>
        ].
      </p>
      <p>
        In Probabilistic Logic Programming (PLP) various proposals for representing
uncertainty have been presented. One of the most successful approaches is the
distribution semantics [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref11 ref14 ref3">3, 14, 11</xref>
        ] the authors proposed an approach to
represent probabilistic axioms in DLs called DISPONTE ("DIstribution
Semantics for Probabilistic ONTologiEs"), which adapts the distribution semantics for
Probabilistic Logic Programming to DLs.
      </p>
      <p>
        LEAP [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] ("LEArning Probabilistic description logics") is an algorithm
for learning the structure and the parameters of probabilistic DLs following
DISPONTE. It combines the learning system CELOE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] with EDGE [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The
former, CELOE ("Class Expression Learning for Ontology Engineering"),
provides a method to build new (subsumption) axioms that can be added to the KB,
while the latter is used to learn the parameters of these probabilistic axioms.
      </p>
      <p>
EDGE stands for "Em over bDds for description loGics paramEter learning"
and learns the parameters of a probabilistic theory. This algorithm is rather
expensive from a computational point of view. Therefore, in order to reduce
EDGE's running time, we developed EDGEMR [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It represents a distributed
implementation of EDGE and uses a simple MapReduce approach based on the
Message Passing Interface (MPI).
      </p>
      <p>In this paper we present an evolution of LEAP called LEAPMR which adapts
the LEAP algorithm to use EDGEMR. In addition, thanks to a software
re-engineering effort, it was possible to remove the RMI module used by LEAP. To the
best of our knowledge there are no other algorithms that perform distributed
structure learning of probabilistic DLs.</p>
      <p>The paper is structured as follows. Section 2 introduces Description
Logics and summarizes DISPONTE. Sections 3 and 4 brie y describe the EDGE
and EDGEMR algorithms. Section 5 presents LEAPMR. Finally, Section 7 draws
conclusions.
2</p>
    </sec>
    <sec id="sec-2">
      <title>Description Logics and DISPONTE</title>
      <p>
Description Logics (DLs) are a family of logic-based knowledge representation
formalisms which are of particular interest for representing ontologies and for
the Semantic Web. For an extensive introduction to DLs we refer to [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>While DLs are a fragment of first-order logic, they are usually represented
using a syntax based on concepts and roles. A concept corresponds to a set of
individuals, while a role corresponds to a set of pairs of individuals of the
domain.</p>
      <p>A query over a KB is usually an axiom for which we want to test the
entailment from the KB. The entailment test may be reduced to checking the
unsatisfiability of a concept in the KB, i.e., the emptiness of the concept.</p>
      <p>
        DISPONTE [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] ("DIstribution Semantics for Probabilistic ONTologiEs")
applies the distribution semantics to probabilistic ontologies [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In DISPONTE
a probabilistic knowledge base K is a set of certain and probabilistic axioms.
Certain axioms take the form of regular DL axioms. Probabilistic axioms take
the form p :: E, where p is a real number in [0, 1] and E is a DL axiom. A
DISPONTE KB defines a distribution over DL KBs called worlds, assuming that
the axioms are independent. Each world w is obtained by including every certain
axiom plus a chosen subset of the probabilistic axioms.
      </p>
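      <p>As an illustration of the semantics, here is a minimal sketch (ours, not the authors' code) that enumerates the worlds of a tiny DISPONTE KB; the probability of a world is the product of p for each included probabilistic axiom and 1 - p for each excluded one, and the axioms themselves are placeholder strings:

```python
# Toy DISPONTE sketch: a KB with certain axioms plus probabilistic
# axioms (p, axiom); a world includes every certain axiom and a
# chosen subset of the probabilistic ones.
certain = ["bird subclass-of animal"]
probabilistic = [(0.9, "penguin subclass-of bird"), (0.6, "tweety : penguin")]

def worlds(certain, probabilistic):
    n = len(probabilistic)
    for mask in range(2 ** n):          # one bit per probabilistic axiom
        prob, axioms = 1.0, list(certain)
        for i, (p, ax) in enumerate(probabilistic):
            if (mask >> i) % 2 == 1:    # axiom included in this world
                prob, axioms = prob * p, axioms + [ax]
            else:                       # axiom excluded
                prob = prob * (1.0 - p)
        yield prob, axioms

total = sum(p for p, _ in worlds(certain, probabilistic))
assert round(total, 9) == 1.0           # the worlds form a distribution
```

The probability of a query is then the sum of the probabilities of the worlds that entail it.</p>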
    </sec>
    <sec id="sec-3">
      <title>Parameter Learning for Probabilistic DLs</title>
      <p>
        EDGE [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is a parameter learning algorithm which adapts the algorithm
EMBLEM [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], developed for learning the parameters for probabilistic logic programs,
to the case of probabilistic DLs under DISPONTE. Inspired by [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], it performs
an Expectation-Maximization cycle over Binary Decision Diagrams (BDDs).
      </p>
      <p>EDGE performs supervised parameter learning. It takes as input a
DISPONTE KB and a number of positive and negative examples that represent the
queries in the form of concept membership axioms, i.e., in the form a : C for an
individual a and a class C.</p>
      <p>
        First, EDGE generates, for each query, the BDD encoding its explanations
using BUNDLE [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Then, EDGE starts the EM cycle in which the steps of
Expectation and Maximization are iterated until a local maximum of the
log-likelihood (LL) of the examples is reached. The LL of the examples is guaranteed
to increase at each iteration. EDGE stops when the difference between the LL
of the current iteration and that of the previous one drops below a threshold,
or when this difference is below a fraction of the previous LL. Finally, EDGE
returns the reached LL and the new probabilities for the probabilistic axioms.
      </p>
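      <p>The stopping rule can be sketched as follows (our reconstruction; the threshold values are illustrative, not those used by EDGE):

```python
def em_loop(update, ll0, eps=1e-4, frac=1e-5, max_iter=100):
    # Sketch of EDGE's outer EM loop: `update` performs one
    # Expectation-Maximization step and returns the new log-likelihood,
    # which never decreases. Stop when the improvement drops below a
    # fixed threshold eps or below a fraction frac of the previous LL.
    prev = ll0
    for _ in range(max_iter):
        ll = update()
        diff = ll - prev
        if eps > diff or frac * abs(prev) > diff:
            return ll
        prev = ll
    return prev
```
</p>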
      <p>
        EDGE is written in Java, hence it is highly portable. For further information
about EDGE please refer to [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Distributed Parameter Learning for Probabilistic DLs</title>
      <p>
In this section we briefly describe a parallel version of EDGE that exploits the
MapReduce approach in order to compute the parameters. We called this
algorithm EDGEMR [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Like most MapReduce frameworks, EDGEMR's architecture follows a
master-slave model. The communication between the master and the slaves is done by
means of the Message Passing Interface (MPI).</p>
      <p>In a distributed context, performance depends on the scheduling strategy.
In order to evaluate different methods, we developed two scheduling strategies:
single-step scheduling and dynamic scheduling. These are used during the query
computation phase.</p>
      <p>Single-step scheduling: if N is the number of slaves, the master divides
the total number of queries into N + 1 chunks, one for each slave plus one for
the master itself. The master then begins to compute its own queries while, for the
other chunks, it starts a thread that sends each chunk to the corresponding slave.
After the master has finished its queries, it waits for the results from the slaves.
When the slowest slave returns its results to the master, EDGEMR proceeds to the EM cycle.</p>
      <p>Dynamic scheduling is more flexible and adaptive than single-step
scheduling. At first, each machine is assigned a fixed-size chunk of queries in order.
When the master finishes its chunk it simply takes the next one; when a slave
finishes its chunk, it asks the master for another one, and the master replies by
sending a new chunk of queries to that slave. During this phase the master runs
a listener thread that waits for the slaves' requests; for each request the listener
starts a new thread (taken from a thread pool, to improve performance) that sends
a chunk to the requesting slave. When all the queries have been evaluated,
EDGEMR starts the EM cycle.</p>
      <p>
        Experimental results conducted in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] show that dynamic scheduling usually performs
better than single-step scheduling.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Structure Learning with Distributed Parameter Learning</title>
      <p>
        LEAPMR is an evolution of the LEAP system [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. While the latter exploits
EDGE, the former was adapted to use EDGEMR. Moreover, after
a process of software re-engineering it was possible to remove the RMI
communication module used by LEAP and therefore reduce some communication
overhead.
      </p>
      <p>
        It performs structure and parameter learning of probabilistic ontologies under
DISPONTE by exploiting: (1) CELOE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for the structure, and (2) EDGEMR
(Section 4) for the parameters.
      </p>
      <p>
        CELOE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is implemented in Java and belongs to the open-source
framework DL-Learner3. Let us consider a knowledge base K and a concept name
Target whose formal description, i.e. class description, we want to learn. CELOE
learns a set of n class expressions Ci (1 ≤ i ≤ n) from a set of positive and
negative examples. Let K' = K ∪ {C}, where K is the background knowledge; we
say that a concept C covers an example e if K' |= e. The class expressions found
are sorted according to a heuristic. Such expressions can be used to generate
candidate axioms of the form Ci ⊑ Target.
      </p>
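      <p>The coverage test can be approximated as follows (a toy sketch under our own encoding: entailment is replaced by a transitive closure over subclass edges, which is far weaker than the DL reasoning CELOE actually relies on):

```python
def covers(kb, concept, target, individual):
    # K' = K plus the candidate axiom concept subclass-of target;
    # the concept covers the example if K' entails individual : target.
    kprime = set(kb)
    kprime.add(("sub", concept, target))
    # classes the individual directly belongs to
    classes = {c for kind, a, c in kprime if kind == "inst" and a == individual}
    changed = True
    while changed:                      # close under subclass edges
        changed = False
        for kind, sub, sup in kprime:
            if kind == "sub" and sub in classes and sup not in classes:
                classes.add(sup)
                changed = True
    return target in classes

kb = {("sub", "penguin", "bird"), ("inst", "tweety", "penguin")}
assert covers(kb, "penguin", "Target", "tweety")
assert not covers(kb, "fish", "Target", "tweety")
```
</p>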
      <p>In order to learn an ontology, LEAPMR first searches for good candidate
probabilistic subsumption axioms by means of CELOE, then it performs a greedy
search in the space of theories, using EDGEMR to evaluate each theory with the
log-likelihood as heuristic.</p>
      <p>LEAPMR takes as input the knowledge base K and a set of examples, then
generates a set of candidate axioms by exploiting CELOE. A first execution
of EDGEMR is applied to K to compute the initial values of the parameters
and of the LL. Then LEAPMR adds to K one probabilistic subsumption axiom
generated by CELOE. After each addition, EDGEMR is run on the
extended KB to compute the LL of the data and the parameters. If the LL is
better than the current best, the new axiom is kept in the knowledge base and the
parameters of the probabilistic axioms are updated, otherwise the learned axiom is
removed from the ontology and the previous parameters are restored. The final
theory is obtained as the union of the initial ontology and the probabilistic
axioms learned.
3 http://dl-learner.org/</p>
    </sec>
    <sec id="sec-7">
      <title>Experiments</title>
      <p>In order to test how much the exploitation of EDGEMR can improve the
performance of LEAPMR, we ran a preliminary test on the Moral4
KB, which qualitatively simulates moral reasoning. It contains 202 individuals
and 4710 axioms (22 of which are probabilistic).</p>
      <p>We performed the experiments on a cluster of 64-bit Linux machines with
8-core Intel Haswell 2.40 GHz CPUs and 2 GB (max) memory allotted to Java
per node. We used 1, 3, 5, 9 and 17 nodes, where the execution with 1 node
corresponds to the execution of LEAP, while for the other configurations we used
dynamic scheduling with chunks containing 3 queries. For each experiment 2
candidate probabilistic axioms were generated by using CELOE and a maximum
of 3 explanations per query was set for EDGEMR. Table 1 shows the speedup
obtained as a function of the number of machines (nodes). The speedup is the
ratio of the running time of 1 worker to that of n workers. We can note
that the speedup is significant even if it is sublinear, showing that a certain
amount of overhead (the resources, and thereby the time, spent on MPI
communication) is present.</p>
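      <p>For reference, the reported quantities are computed as follows (a trivial sketch with illustrative running times; the efficiency helper is our addition):

```python
def speedup(runtime_1, runtime_n):
    # ratio of the running time with 1 worker to that with n workers
    return runtime_1 / runtime_n

def efficiency(runtime_1, runtime_n, n):
    # fraction of the ideal linear speedup actually achieved
    return speedup(runtime_1, runtime_n) / n

assert speedup(100.0, 50.0) == 2.0
```
</p>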
      <p>Dataset | 3 nodes | 5 nodes | 9 nodes | 17 nodes
Moral | 2.3 | 3.6 | 6.5 | 11.0</p>
      <p>Table 1. Speedup of LEAPMR relative to LEAP for the Moral KB.</p>
    </sec>
    <sec id="sec-8">
      <title>Conclusions</title>
      <p>The paper presented the algorithm LEAPMR for learning the structure of
probabilistic description logics under DISPONTE. LEAPMR runs EDGEMR, a
MapReduce implementation of EDGE, exploiting modern computing
infrastructures for performing distributed parameter learning.</p>
      <p>We are currently working on distributing both the structure and the parameter
learning of probabilistic knowledge bases by exploiting EDGEMR also during
the building of the class expressions. In particular we would like to distribute
the scoring function used to evaluate the obtained refinements.
4 https://archive.ics.uci.edu/ml/datasets/Moral+Reasoner</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P.F.</given-names>
          </string-name>
          . (eds.):
          <article-title>The Description Logic Handbook: Theory, Implementation, and Applications</article-title>
          . Cambridge University Press, New York, NY, USA (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          : Description Logics,
          <source>chap. 3</source>
          , pp.
          <volume>135</volume>
          –
          <fpage>179</fpage>
          .
          <string-name>
            <surname>Elsevier Science</surname>
          </string-name>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albani</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A distribution semantics for probabilistic ontologies</article-title>
          .
          <source>In: International Workshop on Uncertainty Reasoning for the Semantic Web. CEUR Workshop Proceedings</source>
          , vol.
          <volume>778</volume>
          , pp.
          <volume>75</volume>
          –
          <fpage>86</fpage>
          .
          <string-name>
            <surname>Sun SITE Central Europe</surname>
          </string-name>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Expectation Maximization over Binary Decision Diagrams for probabilistic logic programs</article-title>
          .
          <source>Intell. Data Anal</source>
          .
          <volume>17</volume>
          (
          <issue>2</issue>
          ),
          <volume>343</volume>
          –
          <fpage>363</fpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cota</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Distributed parameter learning for probabilistic ontologies</article-title>
          (
          <year>2015</year>
          ), to appear
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Fleischhacker</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Volker, J.:
          <article-title>Inductive learning of disjointness axioms</article-title>
          .
          <source>In: On the Move to Meaningful Internet Systems: OTM</source>
          <year>2011</year>
          , pp.
          <volume>680</volume>
          –
          <fpage>697</fpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ishihata</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kameya</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sato</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minato</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Propositionalizing the EM algorithm by BDDs</article-title>
          .
          <source>In: Late Breaking Papers of the International Conference on Inductive Logic Programming</source>
          . pp.
          <volume>44</volume>
          –
          <issue>49</issue>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Buhmann, L.,
          <string-name>
            <surname>Tramp</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Class expression learning for ontology engineering</article-title>
          .
          <source>J. Web Semant</source>
          .
          <volume>9</volume>
          (
          <issue>1</issue>
          ),
          <volume>71</volume>
          –
          <fpage>81</fpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Minervini</surname>
          </string-name>
          , P.,
          <string-name>
            <surname>d'Amato</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fanizzi</surname>
          </string-name>
          , N.:
          <article-title>Learning probabilistic description logic concepts: Under different assumptions on missing knowledge</article-title>
          .
          <source>In: Proceedings of the 27th Annual ACM Symposium on Applied Computing</source>
          . pp.
          <volume>378</volume>
          –
          <fpage>383</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ochoa-Luna</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Revoredo</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cozman</surname>
            ,
            <given-names>F.G.</given-names>
          </string-name>
          :
          <article-title>Learning probabilistic description logics: A framework and algorithms</article-title>
          .
          <source>In: Advances in Artificial Intelligence</source>
          , pp.
          <volume>28</volume>
          –
          <fpage>39</fpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
          </string-name>
          , R.:
          <article-title>Epistemic and statistical probabilistic ontologies. In: Uncertainty Reasoning for the Semantic Web</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>900</volume>
          , pp.
          <volume>3</volume>
          –
          <fpage>14</fpage>
          .
          <string-name>
            <surname>Sun SITE Central Europe</surname>
          </string-name>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
          </string-name>
          , R.:
          <article-title>Parameter learning for probabilistic ontologies</article-title>
          .
          <source>In: RR</source>
          <year>2013</year>
          .
          <article-title>LNCS</article-title>
          , vol.
          <volume>7994</volume>
          , pp.
          <volume>265</volume>
          –
          <fpage>270</fpage>
          . Springer Berlin Heidelberg (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cota</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Learning probabilistic description logics</article-title>
          .
          <source>In: Uncertainty Reasoning for the Semantic Web III</source>
          , pp.
          <volume>63</volume>
          –
          <fpage>78</fpage>
          . LNCS, Springer International Publishing (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Riguzzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zese</surname>
          </string-name>
          , R.:
          <article-title>BUNDLE: A reasoner for probabilistic ontologies</article-title>
          .
          <source>In: RR</source>
          <year>2013</year>
          .
          <article-title>LNCS</article-title>
          , vol.
          <volume>7994</volume>
          , pp.
          <volume>183</volume>
          –
          <fpage>197</fpage>
          . Springer Berlin Heidelberg (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Sato</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A statistical learning method for logic programs with distribution semantics</article-title>
          .
          <source>In: Proceedings of the 12th International Conference on Logic Programming</source>
          . pp.
          <volume>715</volume>
          –
          <fpage>729</fpage>
          . MIT Press (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. Volker, J.,
          <string-name>
            <surname>Niepert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Statistical schema induction</article-title>
          .
          <source>In: The Semantic Web: Research and Applications</source>
          , pp.
          <volume>124</volume>
          –
          <fpage>138</fpage>
          . Springer Berlin Heidelberg (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>