Learning and Reasoning with Logic Tensor Networks: The Framework and an Application (Abstract of Invited Talk). Published in CEUR Workshop Proceedings Vol-2954, https://ceur-ws.org/Vol-2954/invited-4.pdf
  Learning and Reasoning with Logic Tensor
 Networks: The Framework and an Application⋆

                                  Luciano Serafini

                       Fondazione Bruno Kessler, Trento, Italy

Logic Tensor Networks (LTN) is a theoretical framework and an experimental
platform that integrates learning based on tensor neural networks with reasoning
in first-order many-valued (fuzzy) logic. LTN supports a wide range of reasoning
and learning tasks over logical knowledge and data: rich symbolic knowledge
expressed in first-order logic (FOL) is combined with efficient, data-driven
machine learning based on the manipulation of real-valued vectors. In practice,
FOL reasoning involving function symbols is approximated through the usual
iterative deepening on clause depth. Given data available in the form of
real-valued vectors, soft and hard logical constraints and relations that apply
to certain subsets of those vectors can be specified compactly in FOL.
All of these tasks can be represented in LTN as forms of approximate
satisfiability: reasoning can help improve learning, while learning from new
data may revise the constraints and thus modify reasoning. We apply LTN to
Semantic Image Interpretation (SII) in order to solve the following tasks:
(i) the classification of an image’s bounding boxes and (ii) the detection
of the relevant part-of relations between objects. The results show that the
use of background knowledge improves the performance of purely data-driven
machine learning methods.
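The grounding idea sketched in the abstract can be illustrated with a minimal, self-contained example. The names below (`Predicate`, `fuzzy_implies`, `forall`) are illustrative assumptions, not the actual LTN API: each FOL predicate is grounded as a differentiable function from real-valued feature vectors to truth degrees in [0, 1], connectives are interpreted with fuzzy operators, and a universally quantified constraint is aggregated over the data, yielding a degree of satisfiability that learning can then maximize.

```python
import numpy as np

def sigmoid(z):
    """Squash real values into [0, 1] so outputs read as fuzzy truth degrees."""
    return 1.0 / (1.0 + np.exp(-z))

class Predicate:
    """Grounds a unary FOL predicate as a linear model with sigmoid output.
    (Illustrative stand-in for a tensor-network grounding.)"""
    def __init__(self, w, b):
        self.w = np.asarray(w, dtype=float)
        self.b = float(b)

    def __call__(self, x):
        # x: (n, d) batch of real-valued vectors -> (n,) truth degrees in [0, 1]
        return sigmoid(np.asarray(x, dtype=float) @ self.w + self.b)

# Fuzzy connectives: product t-norm and the Reichenbach implication.
def fuzzy_and(a, b):
    return a * b

def fuzzy_implies(a, b):
    return 1.0 - a + a * b

def forall(truths):
    # Universal quantifier aggregated as the mean truth degree over the data.
    return float(np.mean(truths))

# Toy data: three 2-D feature vectors standing in for, e.g., bounding-box features.
x = np.array([[1.0, 0.2], [0.8, 0.5], [0.9, 0.1]])

# Two hypothetical grounded predicates with hand-picked weights.
cat = Predicate([2.0, 0.0], -1.0)
animal = Predicate([3.0, 0.5], 0.0)

# Degree to which the soft constraint  forall x: Cat(x) -> Animal(x)  holds on x.
sat = forall(fuzzy_implies(cat(x), animal(x)))
```

In the actual framework the predicate parameters are trained by gradient ascent on exactly such a satisfiability score, which is how logical constraints and data-driven learning interact.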




⋆
    Copyright © 2021 for this paper by its authors. Use permitted under Creative Com-
    mons License Attribution 4.0 International (CC BY 4.0).