<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elisabetta Gentili</string-name>
          <email>elisabetta.gentili1@unife.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Engineering, University of Ferrara</institution>
          ,
          <addr-line>Via Saragat, 1, 44124, Ferrara</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Knowledge Graphs Completion, Probabilistic Inductive Logic Programming</institution>
          ,
          <addr-line>Regularization</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Riguzzi, Department of Mathematics and Computer Science, University of Ferrara</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>6</fpage>
      <lpage>09</lpage>
      <abstract>
        <p>Knowledge Graphs have gained popularity in the last decade, given their ability to represent huge structured knowledge bases. However, they are often incomplete and thus Knowledge Graph Completion (KGC) is currently a hot topic. In this paper we present our idea of performing KGC by learning liftable probabilistic logic programs via regularization, using LIFTCOVER+, with the aim of obtaining more accurate results while learning a smaller set of rules.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graphs Completion</kwd>
        <kwd>Probabilistic Inductive Logic Programming</kwd>
        <kwd>Regularization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Background</title>
      <p>
        Even though the term Knowledge Graph (KG) has been used since 1973 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], there is still no
universally accepted formal definition for it [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Nevertheless, we can say that KGs are
graph-based representations of knowledge in terms of relationships between entities. More practically,
following the Resource Description Framework (RDF) data model [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], a KG is a set of triples ⟨s, p, o⟩,
where s is the subject, p is the predicate, and o is the object. An example of such a triple
is ⟨ed, lives, netherlands⟩. Figure 1 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] shows an example of a KG.
      </p>
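      <p>To make the triple representation concrete, the following minimal Python sketch stores a KG as a set of ⟨subject, predicate, object⟩ triples (the entity and relation names come from the running example; the set-based encoding is just one possible choice):</p>
```python
# A KG stored as a set of (subject, predicate, object) triples.
# Entity and relation names follow the running example (ed, netherlands, ...).
kg = {
    ("ed", "lives", "netherlands"),
    ("ed", "married", "lisa"),
    ("lisa", "born", "amsterdam"),
    ("bob", "speaks", "dutch"),
}

# A simple query: every (predicate, object) pair attached to the entity "ed".
ed_facts = {(p, o) for (s, p, o) in kg if s == "ed"}
print(sorted(ed_facts))  # [('lives', 'netherlands'), ('married', 'lisa')]
```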
      <p>Data can be naturally and effectively represented with graphs in many real-world domains,
such as computer networks, social networks, healthcare (diseases, molecules), transportation,
and so on. For this reason, KGs are employed for different tasks like query answering,
recommender systems, chatbots, and voice assistants.</p>
      <p>
        KGs have become popular over the last decade thanks to the introduction of Google’s
Knowledge Graph in 2012 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], used to automatically generate knowledge panels. Knowledge panels
are boxes containing information coming from various sources on the web, and are meant to
give the user an overview of the researched topic. Aside from Google’s, other popular examples
of KGs, both proprietary and open, are Amazon Product graph, Facebook Graph API, IBM
Watson, Microsoft Satori, Wikimedia’s Wikidata [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], YAGO [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and FreeBase [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>[Figure 1: an example KG with entities such as ed, bob, lisa, amsterdam, netherlands, dutch, and male, connected by relations such as born, married, lives, gender, speaks, and lang.]</p>
      <p>
        Since KGs cannot contain all the possible knowledge in the domain, they are usually
incomplete and sparse, thus it is often necessary to infer missing information (entities or relationships).
This task is referred to as Knowledge Graph Completion (KGC). Depending on what is missing,
KGC can be divided into specific tasks [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], such as link prediction, entity prediction, or relation
prediction.
      </p>
      <p>
        KGC is a very active field of research and many algorithms have been proposed to solve the
problem. They can be divided into traditional and representation learning-based methods [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
Rule-based reasoning methods and probabilistic graphical models, such as Markov Logic
Networks, are examples of techniques that fall under the first category. On the other hand, KGC
methods based on embeddings or neural network models are examples of techniques belonging
to the second category.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Methodology</title>
      <p>
        Our goal is to perform KGC with a Probabilistic Logic Programming (PLP) algorithm [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], to
learn logical rules representing paths in large KGs, which will allow the ranking of candidates
in terms of probabilities. PLP combines logic-based languages and uncertainty [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In recent
years, PLP under the distribution semantics [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] in particular has gained high popularity thanks
to its expressiveness, especially in domains where uncertainty plays a relevant role [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14, 15</xref>
        ].
Logic Programs with Annotated Disjunctions (LPADs) [16] are a PLP language under the
distribution semantics. In LPADs, heads of clauses are disjunctions in which each atom is
annotated with a probability. Liftable Probabilistic Logic Programs [17] have been proposed
to perform lifted inference [18] in an efficient way by taking into consideration populations
of individuals instead of considering each individual separately. LIFTCOVER+ [19] performs
structure and parameter learning of liftable probabilistic logic programs, and it is an improved
version of LIFTCOVER [17] that adds regularization and gradient descent for parameter learning,
to improve the quality of the solutions and prevent overfitting.
      </p>
      <p>The triples of a KG can be represented by First-Order Logic (FOL) atoms. For example,
the triple ⟨ed, lives, netherlands⟩ can be represented by the atom lives(ed, netherlands). Therefore, we
consider a KG 𝐾 as a set of ground atoms or facts:</p>
      <p>𝐾 = { r(s, o) | r ∈ ℛ, s, o ∈ 𝒞 }
where 𝒞 is a set of constants (entities) and ℛ is a set of binary predicates (relations).</p>
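      <p>As a sketch of this representation (the helper name triple_to_atom is hypothetical; the induced sets play the roles of ℛ and 𝒞 above):</p>
```python
# Represent each triple <s, p, o> as a ground atom p(s, o).
def triple_to_atom(s, p, o):
    return f"{p}({s},{o})"

triples = [("ed", "lives", "netherlands"), ("ed", "married", "lisa")]
atoms = {triple_to_atom(s, p, o) for (s, p, o) in triples}

# The relation set R and constant set C induced by the KG.
relations = {p for (_, p, _) in triples}
constants = {x for (s, _, o) in triples for x in (s, o)}
print(atoms, relations, constants)
```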
      <p>
        Following the approach of AnyBURL [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], we want to learn chain rules of increasing length of
the form:
      </p>
      <p>ℎ(𝑋₀, 𝑋₁) ← 𝑏₁(𝑋₁, 𝑋₂), … , 𝑏ₙ(𝑋ₙ, 𝑋ₙ₊₁).</p>
      <p>Here ℎ(…) is the head of the rule, while the 𝑏ᵢ(…) form the body. Upper-case letters are variables.</p>
      <p>
        AnyBURL is an anytime algorithm designed to learn rules from knowledge graphs by
following the bottom-up paradigm. With 𝑛 referring to the number of body atoms and starting from
𝑛 = 2, AnyBURL iteratively samples random paths of length 𝑛 and learns rules of length 𝑛 − 1,
until a certain saturation is reached. For each rule, the confidence is computed as the number
of head and body groundings that are true divided by the number of body groundings that are
true. Considering the KG in Figure 1 from [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], in order to explain the fact lives(ed, netherlands),
AnyBURL finds all the paths starting from ed or netherlands, and their corresponding bottom
rules. An example of a bottom rule to be generalized is the following:
lives(ed, netherlands) ← married(ed, lisa), lives(lisa, netherlands).
Then, starting from these bottom rules, AnyBURL extracts generalized rules.
      </p>
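      <p>The confidence computation can be sketched as follows on a toy KG, checking the rule lives(X, Y) ← married(X, Z), lives(Z, Y); the brute-force grounding loop is for illustration only and would not scale to a real KG:</p>
```python
from itertools import product

# Toy KG as ground atoms (relation, subject, object).
kg = {
    ("married", "ed", "lisa"),
    ("lives", "lisa", "netherlands"),
    ("lives", "ed", "netherlands"),
    ("married", "tom", "lisa"),  # tom's residence is not in the KG
}
constants = {c for (_, s, o) in kg for c in (s, o)}

def confidence(kg, constants):
    """Confidence of lives(X, Y) <- married(X, Z), lives(Z, Y):
    true head-and-body groundings divided by true body groundings."""
    body, both = 0, 0
    for x, y, z in product(constants, repeat=3):
        if ("married", x, z) in kg and ("lives", z, y) in kg:
            body += 1
            if ("lives", x, y) in kg:
                both += 1
    return both / body

print(confidence(kg, constants))  # 0.5: two body groundings, one also satisfies the head
```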
      <p>Given a ground path rule of the form ℎ(𝑐₀, 𝑐₁) ← 𝑏₁(𝑐₁, 𝑐₂), … , 𝑏ₙ(𝑐ₙ, 𝑐ₙ₊₁), extracted rules can
be of one of three types:
1. rules that generalize acyclic ground path rules, i.e., rules where 𝑐₀ ≠ 𝑐ₙ₊₁:
ℎ(𝑐₀, 𝑋) ← 𝑏₁(𝑋, 𝐴₂), … , 𝑏ₙ(𝐴ₙ, 𝑐ₙ₊₁);
2. rules that generalize cyclic ground path rules, i.e., rules where 𝑐₀ = 𝑐ₙ₊₁:
ℎ(𝑌, 𝑋) ← 𝑏₁(𝑋, 𝐴₂), … , 𝑏ₙ(𝐴ₙ, 𝑌);
3. rules that generalize both acyclic and cyclic ground path rules:
ℎ(𝑐₀, 𝑋) ← 𝑏₁(𝑋, 𝐴₂), … , 𝑏ₙ(𝐴ₙ, 𝐴ₙ₊₁);
where 𝑋, 𝑌 are variables that appear in the head, while the 𝐴ᵢ can appear only in the body.</p>
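      <p>A sketch of how these generalizations could be produced from a ground path rule (the helper generalize is hypothetical; atoms are encoded as (predicate, arg1, arg2) tuples, and the example bottom rule is the one used in this section):</p>
```python
def generalize(head, body):
    """Given a ground path rule with head (h, c0, c_end) and body
    [(b1, c1, c2), ..., (bn, cn, c_end)], build the three generalized
    rule shapes: acyclic (tail constant kept), cyclic (both head
    arguments are variables), and the shape covering both (fresh tail
    variable)."""
    h, c0, _ = head
    n = len(body)

    def body_atoms(last):
        # Chain of variables X, A2, ..., An, ending in `last`.
        args = ["X"] + [f"A{i}" for i in range(2, n + 1)] + [last]
        return [(body[i][0], args[i], args[i + 1]) for i in range(n)]

    acyclic = ((h, c0, "X"), body_atoms(body[-1][2]))
    cyclic = ((h, "Y", "X"), body_atoms("Y"))
    both = ((h, c0, "X"), body_atoms(f"A{n + 1}"))
    return acyclic, cyclic, both

bottom_head = ("lives", "ed", "netherlands")
bottom_body = [("married", "ed", "lisa"), ("lives", "lisa", "netherlands")]
acyclic, cyclic, both = generalize(bottom_head, bottom_body)
print(cyclic)  # (('lives', 'Y', 'X'), [('married', 'X', 'A2'), ('lives', 'A2', 'Y')])
```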
      <p>A rule is stored only if a certain quality criterion is met (e.g., the confidence is above a specified
threshold). Then, 𝑛 is increased by 1 and the loop is repeated. Given a completion task 𝑟(𝑒, ?),
the learnt rules are then used to find an entity 𝑐 such that 𝑟(𝑒, 𝑐) ∉ 𝐾, with 𝑟 ∈ ℛ and
𝑒, 𝑐 ∈ 𝒞. The candidate values for 𝑐 are ordered according to the maximum confidence of
all the rules that generated them. In case of a tie for some candidates, the second rule that
generated them is considered, and so on. The following generalized rule can be obtained from
the above-mentioned bottom rule:
lives(𝑋, 𝑌) ← married(𝑋, 𝑍), lives(𝑍, 𝑌);
where upper-case letters are variables.</p>
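      <p>The ranking-with-tie-breaking scheme can be sketched as follows (the function name and the confidence values are illustrative):</p>
```python
def rank_candidates(cand_confs):
    """cand_confs maps each candidate entity to the confidences of the
    rules that generated it. Order by highest confidence; break ties by
    the second-highest confidence, and so on (lexicographic comparison
    of the descending-sorted confidence lists)."""
    return sorted(cand_confs,
                  key=lambda c: sorted(cand_confs[c], reverse=True),
                  reverse=True)

confs = {"netherlands": [0.9, 0.4], "germany": [0.9, 0.2], "france": [0.5]}
print(rank_candidates(confs))  # ['netherlands', 'germany', 'france']
```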
      <p>In our approach, we learn rules with the AnyBURL algorithm but we attach to each rule a
probability instead of a confidence and we tune the probabilities of the set of rules with parameter
learning. We use LIFTCOVER+ that uses regularization: we try to bring the parameters close to
0 as much as possible and we remove the rules with a probability below a threshold, because
they have a small influence on the final result. By doing so, the ranking should be more accurate
and the number of rules learnt smaller. Furthermore, being a rule-based approach, the resulting
candidate ranking is explained by the rules, and thus easily understandable.</p>
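      <p>The pruning step can be illustrated with a short sketch (the threshold and the rule probabilities are illustrative values, not the ones used by LIFTCOVER+):</p>
```python
def prune_rules(rules, threshold=0.05):
    """Drop rules whose learnt probability falls below the threshold,
    since they have little influence on the final ranking."""
    return {rule: p for rule, p in rules.items() if p >= threshold}

rules = {
    "lives(X,Y) <- married(X,Z), lives(Z,Y)": 0.82,
    "lives(X,Y) <- born(X,Y)": 0.30,
    "lives(X,Y) <- speaks(X,Z), lang(Y,Z)": 0.01,  # below threshold: pruned
}
kept = prune_rules(rules)
print(len(kept))  # 2
```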
    </sec>
    <sec id="sec-4">
      <title>3. Related work</title>
      <p>
        Aside from AnyBURL [
        <xref ref-type="bibr" rid="ref4">20, 4</xref>
        ], several other approaches have been proposed for KGC.
      </p>
      <p>AMIE [21] and its improved version AMIE+ [22] are top-down rule learning systems tailored
to support the Open World Assumption, that is, a scenario in which absent data cannot be
used as counterexamples. AMIE+ was developed to work with larger knowledge bases. The
authors of [23] proposed a neural model for Existential Positive First-Order logical queries
represented via box embeddings. In [24], the authors developed an approach for mining
relational nonmonotonic rules from KGs under the Open World Assumption, that combines
rule learning and nonmonotonic reasoning. DRUM [25] is an approach for mining first-order
logical rules from KGs by performing inductive link prediction, and is thus able to manage
previously unseen entities. The authors of [26] proposed a rule learning approach to learn
typed rules using type information to guide the rule search.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusions</title>
      <p>In this paper, we presented our approach for performing KGC with PLP. The approach is based
on the AnyBURL algorithm; however, it differs from it in the ranking method. In fact, we use
probabilities instead of confidence values. Furthermore, the algorithm we employ, LIFTCOVER+,
uses regularization to prune rules with negligible probabilities. In this way we should be able
to obtain a more accurate ranking and a smaller set of learnt rules.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This article was produced while attending the PhD programme in Engineering Science at
the University of Ferrara, Cycle XXXVIII, with the support of a scholarship financed by the
Ministerial Decree no. 351 of 9th April 2022, based on the NRRP - funded by the European Union
- NextGenerationEU - Mission 4 “Education and Research”, Component 1 “Enhancement of the
offer of educational services: from nurseries to universities" - Investment 4.1 "Extension of the
number of research doctorates and innovative doctorates for public administration and cultural
heritage”. This work has been partially supported by the Spoke 1 “FutureHPC &amp; Big-Data” of the
Italian Research Center on High-Performance Computing, Big Data and Quantum Computing
(ICSC) funded by MUR Missione 4 - Next Generation EU (NGEU), by TAILOR, a project funded
by EU Horizon 2020 research and innovation programme under GA No. 952215, and by the
National Group of Computing Science (GNCS-INDAM).</p>
    </sec>
    <sec id="sec-7">
      <title>References (continued)</title>
      <p>[14, continued] on Deep Understanding and Reasoning, URANIA 2016, volume 1802 of CEUR Workshop
Proceedings, Sun SITE Central Europe, 2017, pp. 30–37.
[15] A. Nguembang Fadja, F. Riguzzi, Probabilistic logic programming in action, in: A. Holzinger,
R. Goebel, M. Ferri, V. Palade (Eds.), Towards Integrative Machine Learning and Knowledge
Extraction, volume 10344 of Lecture Notes in Computer Science, Springer, 2017, pp. 89–116.
doi:10.1007/978-3-319-69775-8_5.
[16] J. Vennekens, S. Verbaeten, M. Bruynooghe, Logic Programs With Annotated Disjunctions,
in: 20th International Conference on Logic Programming (ICLP 2004), volume 3132 of
Lecture Notes in Computer Science, Springer, 2004, pp. 431–445.
[17] A. Nguembang Fadja, F. Riguzzi, Lifted discriminative learning of probabilistic logic
programs, Machine Learning 108 (2019) 1111–1135.
[18] D. Poole, First-order probabilistic inference, in: G. Gottlob, T. Walsh (Eds.), IJCAI-03,
Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence,
Acapulco, Mexico, August 9-15, 2003, Morgan Kaufmann Publishers, 2003, pp. 985–991.
[19] E. Gentili, A. Bizzarri, D. Azzolini, R. Zese, F. Riguzzi, Regularization in probabilistic
inductive logic programming, in: International Conference on Inductive Logic Programming,
2023 (in press).
[20] C. Meilicke, M. W. Chekol, P. Betz, M. Fink, H. Stuckenschmidt, Anytime bottom-up rule
learning for large-scale knowledge graph completion, The VLDB Journal (2023) 1–31.
[21] L. A. Galárraga, C. Teflioudi, K. Hose, F. Suchanek, AMIE: association rule mining
under incomplete evidence in ontological knowledge bases, in: Proceedings of the 22nd
International Conference on World Wide Web, 2013, pp. 413–422.
[22] L. Galárraga, C. Teflioudi, K. Hose, F. M. Suchanek, Fast rule mining in ontological
knowledge bases with AMIE+, The VLDB Journal 24 (2015) 707–730.
[23] H. Ren, W. Hu, J. Leskovec, Query2box: Reasoning over knowledge graphs in vector space
using box embeddings, arXiv preprint arXiv:2002.05969 (2020).
[24] H. D. Tran, D. Stepanova, M. H. Gad-Elrab, F. A. Lisi, G. Weikum, Towards nonmonotonic
relational learning from knowledge graphs, in: Inductive Logic Programming: 26th
International Conference, ILP 2016, London, UK, September 4-6, 2016, Revised Selected
Papers 26, Springer, 2017, pp. 94–107.
[25] A. Sadeghian, M. Armandpour, P. Ding, D. Z. Wang, DRUM: End-to-end differentiable rule
mining on knowledge graphs, Advances in Neural Information Processing Systems 32
(2019).
[26] H. Wu, Z. Wang, K. Wang, Y.-D. Shen, Learning typed rules over knowledge graphs, in:
Proceedings of the International Conference on Principles of Knowledge Representation
and Reasoning, volume 19, 2022, pp. 494–503.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. W.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <article-title>Course modularization applied: The interface system and its implications for sequence control and data analysis</article-title>
          .,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ehrlinger</surname>
          </string-name>
          , W. Wöß,
          <article-title>Towards a definition of knowledge graphs</article-title>
          .,
          <source>SEMANTiCS</source>
          (Posters, Demos, SuCCESS)
          <volume>48</volume>
          (
          <year>2016</year>
          )
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Lassila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Swick</surname>
          </string-name>
          ,
          <article-title>Resource description framework (rdf) model</article-title>
          and syntax specification,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Meilicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Chekol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ruffinelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <article-title>Anytime bottom-up rule learning for knowledge graph completion</article-title>
          .,
          <source>in: IJCAI</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>3137</fpage>
          -
          <lpage>3143</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singhal</surname>
          </string-name>
          , et al.,
          <article-title>Introducing the knowledge graph: things, not strings</article-title>
          ,
          <source>Official Google Blog 5</source>
          (
          <year>2012</year>
          )
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Bayer</surname>
          </string-name>
          ,
          <article-title>The Wikidata revolution is here: enabling structured data on Wikipedia</article-title>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          , G. Kasneci, G. Weikum,
          <article-title>Yago: A large ontology from wikipedia and wordnet</article-title>
          ,
          <source>Journal of Web Semantics</source>
          <volume>6</volume>
          (
          <year>2008</year>
          )
          <fpage>203</fpage>
          -
          <lpage>217</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bollacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Paritosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sturge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , Freebase:
          <article-title>a collaboratively created graph database for structuring human knowledge</article-title>
          ,
          <source>in: Proceedings of the 2008 ACM SIGMOD international conference on Management of data</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>1247</fpage>
          -
          <lpage>1250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhao</surname>
          </string-name>
          , J. Cheng,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <article-title>Knowledge graph completion: A review</article-title>
          ,
          <source>IEEE Access 8</source>
          (
          <year>2020</year>
          )
          <fpage>192435</fpage>
          -
          <lpage>192456</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Azzolini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gentili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Riguzzi</surname>
          </string-name>
          ,
          <article-title>Link Prediction in Knowledge Graphs with Probabilistic Logic Programming: Work in Progress</article-title>
          , in: J. Arias, S. Batsakis, W. Faber, G. Gupta, F. Pacenza, E. Papadakis, L. Robaldo, K. Ruckschloss, E. Salazar, Z. G. Saribatur, I. Tachmazidis, F. Weitkamper, A. Wyner (Eds.),
          <source>Proceedings of the International Conference on Logic Programming 2023 Workshops co-located with the 39th International Conference on Logic Programming (ICLP 2023)</source>
          , volume
          <volume>3437</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Riguzzi</surname>
          </string-name>
          ,
          <article-title>Foundations of Probabilistic Logic Programming Languages, Semantics, Inference and Learning</article-title>
          , Second Edition, River Publishers, Gistrup, Denmark,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Sato</surname>
          </string-name>
          ,
          <article-title>A statistical learning method for logic programs with distribution semantics</article-title>
          , in: L.
          <string-name>
            <surname>Sterling</surname>
          </string-name>
          (Ed.),
          Logic Programming,
          <source>Proceedings of the Twelfth International Conference on Logic Programming</source>
          , Tokyo, Japan, June 13-16,
          <year>1995</year>
          , MIT Press,
          <year>1995</year>
          , pp.
          <fpage>715</fpage>
          -
          <lpage>729</lpage>
          . doi:10.7551/mitpress/4298.003.0069.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>De Raedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kimmig</surname>
          </string-name>
          ,
          <article-title>Probabilistic (logic) programming concepts</article-title>
          ,
          <source>Machine Learning</source>
          <volume>100</volume>
          (
          <year>2015</year>
          )
          <fpage>5</fpage>
          -
          <lpage>47</lpage>
          . doi:10.1007/s10994-015-5494-z.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Riguzzi</surname>
          </string-name>
          , E. Lamma,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alberti</surname>
          </string-name>
          , E. Bellodi,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zese</surname>
          </string-name>
          , G. Cota,
          <article-title>Probabilistic logic programming for natural language processing</article-title>
          , in: F. Chesani,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mello</surname>
          </string-name>
          , M. Milano (Eds.), Workshop
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>