<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Samy Badreddine</string-name>
          <email>badreddine.samy@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Spranger</string-name>
          <email>michael.spranger@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Sony AI Inc</institution>
          ,
          <addr-line>1-7-1 Konan Minato-ku, Tokyo, 108-0075</addr-line>
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sony Computer Science Laboratories Inc</institution>
          ,
          <addr-line>3-14-13 Higashi Gotanda, Tokyo, 141-0022</addr-line>
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>Real Logic is a recently introduced first-order language where formulas have fuzzy truth values in the interval [0, 1] and semantics are defined concretely with real domains. The Logic Tensor Networks (LTN) framework has applied Real Logic to many important AI tasks through querying, learning, and reasoning. Motivated by real-life relational database applications, we study adding aggregate functions, such as averaging elements of a relation table, to Real Logic. The key contribution of this paper is the formalization of such functions within Real Logic. This extension is straightforward and fits coherently into the end-to-end differentiable language that Real Logic is. We illustrate it on FooDB, a food chemistry database, and query foods and their nutrients. The resulting framework combines the strengths of descriptive statistics modeled by fuzzy predicates, FOL to write complex queries and formulas, and SQL-like expressiveness to aggregate insights from data tables.</p>
      </abstract>
      <kwd-group>
        <kwd>Many-Valued Logic</kwd>
        <kwd>Relational Database</kwd>
        <kwd>Querying</kwd>
        <kwd>Aggregation</kwd>
        <kwd>Neurosymbolic AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Real Logic is introduced in Logic Tensor Networks (LTN) [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], a neurosymbolic framework that
supports querying, learning, and reasoning with rich data and abstract knowledge about the
world. Real Logic is a first-order language with concrete semantics such that every expression
has an interpretation in real and continuous domains. In particular, LTN converts Real Logic
formulas, e.g. ∀x∃y(is_friend(x, y) ∧ Italian(y)), into TensorFlow computational graphs. Recent
works have applied Real Logic to many of the most important AI tasks, including supervised
classification, data clustering, semi-supervised learning, embedding learning, reasoning or
query answering [
        <xref ref-type="bibr" rid="ref1 ref3 ref4 ref5 ref6">1, 3, 4, 5, 6</xref>
        ].
      </p>
      <p>
        Querying, however, remains limited by the syntax of First-Order Logic (FOL).
Despite being a powerful query language on knowledge graphs, FOL cannot express many
useful queries written in the most popular database applications such as SQL. These databases
organize relations in tables defined with rows and columns. We commonly apply aggregate
functions to extract insight features over sets of row values. For example, the following query
proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] over a relation R1 with attributes "employee" and "department", and a relation
R2 with attributes "employee" and "salary", is trivial in SQL:
SELECT R1.Dept, AVG(R2.Salary)
FROM R1, R2
WHERE R1.Employee = R2.Employee
GROUP BY R1.Dept
HAVING SUM(R2.Salary) &gt; 1000000
      </p>
      <p>Comparable queries are not adequately captured by traditional logical languages, as their semantics are not naturally equipped with aggregate functions such as SUM or AVG.</p>
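      <p>For intuition, the grouped-aggregate pattern of the SQL query above can be sketched in plain Python. The relations, salary figures, and the helper avg_salary_by_dept below are hypothetical illustrations, not from the paper:</p>

```python
# Hypothetical toy relations: R1(employee, department) and R2(employee, salary).
R1 = [("ann", "sales"), ("bob", "sales"), ("cid", "it")]
R2 = [("ann", 900000), ("bob", 800000), ("cid", 500000)]

def avg_salary_by_dept(r1, r2, min_total=1000000):
    """SELECT dept, AVG(salary) ... GROUP BY dept HAVING SUM(salary) > min_total."""
    salary = dict(r2)                      # join on the employee key
    groups = {}
    for emp, dept in r1:                   # WHERE R1.Employee = R2.Employee
        groups.setdefault(dept, []).append(salary[emp])
    return {dept: sum(s) / len(s)          # AVG per group
            for dept, s in groups.items()
            if sum(s) > min_total}         # HAVING SUM(salary) above the threshold

print(avg_salary_by_dept(R1, R2))  # {'sales': 850000.0}
```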
      <p>In this paper, we study adding aggregate functions to Real Logic. Real Logic can naturally
support aggregate functions for two reasons: 1) individuals and domains are grounded with real
features, 2) variables are interpreted as finite sequences of individuals. These sequences and
real features are reminiscent of the rows in SQL-like tables and constitute ranges over which
the aggregate functions can straightforwardly operate.</p>
      <p>The paper starts with an overview of Real Logic in Section 2. In Section 3, we formalize
aggregate functions as mathematical operators and as part of a Real Logic signature, the latter
being the key contribution of this paper. In Section 4, we illustrate the functions with simple
queries on FooDB, a food chemistry dataset.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the authors present a comprehensive summary of adding aggregation functions in the
context of logics. They study aggregates for two-sorted logics. The first sort is interpreted
as a usual logical domain, with "abstract" semantics and set relationships. The second denotes
the rationals ℚ, on which the aggregate functions operate. In contrast, the concrete semantics
of Real Logic does not require a distinction between such sorts. Aggregate operators have also
been studied in Prolog and Datalog-like languages [
        <xref ref-type="bibr" rid="ref8 ref9 ref10">8, 9, 10</xref>
        ].
      </p>
      <p>
        Perhaps the closest fit to this paper are fuzzy querying systems for relational databases
[11, 12, 13]. Real Logic similarly grounds the semantics of formulas with fuzzy truth-values. Let
x be a variable that denotes people. To represent is_tall(x), rather than using a crisp boolean
condition height(x) &gt; 180, one can use a continuous truth-value in [0, 1] whose scaling depends
on descriptive statistics of the height in the population. Notice that these truth values do not
denote probabilities as in probabilistic databases [14]. Fuzzy querying systems allow making
flexible queries about traditional (crisp) or fuzzy data. Incorporating aggregate functions in Real
Logic gives such expressive power to LTN, and integrates coherently with its other aspects.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Real Logic</title>
      <sec id="sec-2-1">
        <title>2.1. Basics and Notations</title>
        <p>
          Real Logic [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ] is defined on a first-order language ℒ with a signature that contains a set of
constant symbols (individuals), a set of functional symbols, a set of relational symbols (predicates),
and a set of variable symbols. It allows one to specify relational knowledge about the world, e.g.
the atomic formula is_friend(bob, alice) states that alice is a friend of bob and the formula
∀x∀y(is_friend(x, y) → is_friend(y, x)) states that the relation is_friend is symmetric, where x, y
are variables, bob, alice are constants and is_friend is a predicate.
        </p>
        <p>In contrast to the abstract semantics of FOL, the elements of the signature are grounded to data,
mathematical functions, and neural computational graphs. The connectives are grounded to
fuzzy semantics. To emphasize that symbols are grounded using real-valued features, we use the
term grounding, denoted by 𝒢. Individuals are grounded as tensors1 of real features. Functions
are grounded as real functions, and predicates as real functions that specifically project onto a
value in the interval [0, 1]. Consequently, ℒ-formulas are not assigned boolean true or false
values, but instead are grounded to a truth-value in the continuous interval [0, 1]. For example,
in the expression is_friend(bob, alice), 𝒢(bob) and 𝒢(alice) can be vector embeddings in ℝ^n.
The friendship relationship can be approximated by a cosine similarity function 𝒢(is_friend):
(x, y) ↦ x⋅y / (‖x‖‖y‖), or a more complex function, depending on the application.
        </p>
        <p>Individuals are typed and belong to domains. Logical domains are grounded to real domains
that shape the individuals. For example, alice, bob are individuals of the domain people, and
𝒢(people) = ℝ^n. ℒ-functions and predicates are defined with an arity and a domain for
each argument. For functions, we also define an output domain. For example, the function
best_friend(x) takes an argument from people and returns an individual from people.</p>
        <p>A variable x is grounded as an explicit sequence of n_x individuals from a domain, with
0 &lt; n_x &lt; ∞.2 This differs from a FOL variable, which is a placeholder for any individual from
the universe. Consequently, a term t(x) or a formula ϕ(x) constructed recursively with a free
variable x will be grounded to a sequence of n_x values too.</p>
        <p>Terms and formulas are constructed recursively in the usual way, given that functional and
predicate symbols are applied to an appropriate number of terms with appropriate domains.</p>
        <p>Complex formulas are constructed using the usual logical connectives, ∧, ∨, →, ¬, and the
quantifiers ∀, ∃. Real Logic interprets connectives using fuzzy semantics, and quantifiers using
special aggregator functions. In particular, [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] recommends the product configuration,
which has been shown in [15] to be better suited for gradient-based optimization (see Section
2.2), with minor modifications for numerical stability. In the product configuration, conjunctions
(∧) are grounded with the product t-norm T_prod, disjunctions (∨) with the product t-conorm
S_prod, implications (→) with the Reichenbach implication I_R, and negations (¬) with the standard
fuzzy negation N_S. Existential quantifiers (∃) are grounded with the generalized mean M_p, with
p ≥ 1, which also corresponds to the p-norm of the inputs in this particular setting. Universal
quantifiers (∀) are grounded with the generalized mean w.r.t. the error values ME_p, with p ≥ 1:
        </p>
        <p>T_prod(u, v) = uv (1)
S_prod(u, v) = u + v − uv (2)
I_R(u, v) = 1 − u + uv (3)
N_S(u) = 1 − u (4)
M_p(u_1, …, u_n) = ((1/n) ∑_{i=1}^{n} u_i^p)^{1/p} (5)
ME_p(u_1, …, u_n) = 1 − ((1/n) ∑_{i=1}^{n} (1 − u_i)^p)^{1/p} (6)</p>
        <p>Note that lim_{p→∞} M_p(u_1, …, u_n) = max{u_1, …, u_n} and lim_{p→∞} ME_p(u_1, …, u_n) = min{u_1, …, u_n}.</p>
        <p>2The same individual can appear multiple times in the sequence; however, the order of the values in the sequence does not matter.</p>
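        <p>The product configuration can be sketched in a few lines of Python. This is a plain illustration of the operator definitions above, not the LTN implementation, which adds numerical-stability modifications:</p>

```python
# Plain sketch of the product configuration; the LTN implementation
# adds numerical-stability modifications not shown here.
def t_prod(u, v):        # conjunction (∧): product t-norm
    return u * v

def s_prod(u, v):        # disjunction (∨): product t-conorm
    return u + v - u * v

def i_reichenbach(u, v): # implication (→): Reichenbach implication
    return 1 - u + u * v

def n_standard(u):       # negation (¬): standard fuzzy negation
    return 1 - u

def m_p(values, p=2):    # generalized mean: grounds ∃
    n = len(values)
    return (sum(u ** p for u in values) / n) ** (1 / p)

def me_p(values, p=2):   # generalized mean of the errors: grounds ∀
    n = len(values)
    return 1 - (sum((1 - u) ** p for u in values) / n) ** (1 / p)

# For large p, M_p approaches max and ME_p approaches min:
vals = [0.2, 0.9, 0.5]
print(m_p(vals, p=200))   # close to max(vals) = 0.9
print(me_p(vals, p=200))  # close to min(vals) = 0.2
```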
        <p>
          In [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], the authors also introduce diagonal quantification and guarded quantifiers. Guarded
quantifiers restrict the quantification to the individuals whose groundings satisfy some
boolean condition. For example, in:
        </p>
        <p>∀x (∃y : age(x) &gt; age(y) (is_parent(x, y))) (7)
the boolean condition age(x) &gt; age(y) restricts the quantification over y.</p>
        <p>
          We do not use diagonal quantification in this paper, but the interested reader will find more
information in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Intuitively, it is a special form of quantification that performs a Python zip
over two or more variables before applying the aggregator.3
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Knowledge, Querying and Optimization</title>
        <p>Knowledge is represented not only by logical formulas but also by the grounding 𝒢,
which explicitly connects symbols occurring in formulas and what concretely holds in the domain. For
example, the grounding of alice in real features that describe her height, age, etc., is knowledge.
This grounding definition can be parametric if 𝒢(alice) is a set of embedded features to learn.
In that case, we write 𝒢(alice ∣ θ), where θ is the set of trainable parameters. Similarly,
is_friend(alice, bob) can be grounded using an explicit cosine similarity function or can be
grounded using a trainable neural network. Consequently, learning knowledge in Real Logic is
not limited to inferring new formulas, but also concerns the parameters of a grounding.</p>
        <p>
          We define the satisfaction of a formula as its grounding. Querying a formula ϕ is therefore
equivalent to evaluating 𝒢(ϕ), which returns a truth-value in [0, 1]. One can also query a term
t by evaluating 𝒢(t), which returns an individual (or a sequence of individuals, if the term has
free variables) from some real domain.
        </p>
        <p>3https://docs.python.org/3/library/functions.html#zip</p>
        <p>Let 𝒦 define a collection of formulas, as represented in traditional logical knowledge bases.
The satisfaction of 𝒦 under a parametric grounding 𝒢(⋅ ∣ θ) is defined as the aggregation of the
satisfactions of each ϕ ∈ 𝒦. The result depends on the choice of aggregate operator, denoted by
SatAgg. Often, SatAgg is defined as ME_p, also used to ground the ∀ operator. In LTN, one can
search for the optimal grounding, that is, the optimal set of parameters, to satisfy a theory according
to the following objective:
θ* = argmax_{θ ∈ Θ} SatAgg_{ϕ ∈ 𝒦} 𝒢(ϕ ∣ θ) (8)
Because Real Logic grounds expressions in real and continuous domains, LTN attaches gradients
to every sub-expression and consequently learns through gradient-descent optimization.</p>
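        <p>Objective (8) can be illustrated with a toy example. The parametric grounding below, a predicate grounded as sigmoid(θ·x) over a theory K = {P(a), ¬P(b)}, is an assumption for illustration only; LTN uses TensorFlow automatic differentiation rather than the finite-difference gradient used here:</p>

```python
import math

# Toy sketch of the satisfaction-maximization objective. Hypothetical
# grounding: P(x | theta) = sigmoid(theta * x), theory K = {P(a), not P(b)}
# with a = 1 and b = -1.
def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def sat_agg(theta, p=2):
    # ME_p aggregation of the satisfaction of each formula in K
    sats = [sigmoid(theta * 1.0),         # P(a)
            1.0 - sigmoid(theta * -1.0)]  # not P(b)
    n = len(sats)
    return 1.0 - (sum((1.0 - s) ** p for s in sats) / n) ** (1.0 / p)

# Plain gradient ascent on the aggregated satisfaction,
# with a finite-difference gradient estimate.
theta, lr, eps = 0.0, 0.5, 1e-5
for _ in range(200):
    grad = (sat_agg(theta + eps) - sat_agg(theta - eps)) / (2 * eps)
    theta += lr * grad

print(theta, sat_agg(theta))  # theta grows positive; satisfaction approaches 1
```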
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Aggregate Functions</title>
      <sec id="sec-3-1">
        <title>3.1. Definition</title>
        <p>
          An aggregate function is a mathematical operation involving a range of values that results in a
single representative value. Existing literature usually defines aggregate functions on (sub)sets
of real or rational numbers [
          <xref ref-type="bibr" rid="ref9 ref16 ref7">9, 16, 7</xref>
          ]. We adapt the definition for Real Logic, where logical
domains can be grounded to many types of real domains. Let D_in and D_out denote such domains.
Definition 1. An aggregate function is a collection of functions A = {A^(0), A^(1), …, A^(∞)},
where A^(n) : (D_in)^n → D_out. Each n-ary function A^(n) describes how the aggregator behaves on
an n-element input. In particular, A^(0) is the constant produced on an empty set of inputs, and
A^(∞) defines the behavior when applied to an infinite set of inputs. When no confusion can
arise, we simply write A instead of A^(n).
        </p>
        <p>In many cases, D_in = D_out. For example, if D_in = ℝ, then mean, min, max are all operators that
return objects in the same domain. On the other hand, if D_in = ℝ^3, a function count will output
results in ℝ (more precisely, ℕ) and therefore D_in ≠ D_out.</p>
        <p>Notice that, for an application in Real Logic, the behavior of A^(∞) does not have to be specified,
as variables cannot be sequences of infinite length for practical reasons.</p>
        <p>
          Notice also that the aggregators used to approximate ∀, ∃ are a special case of aggregate
functions with D_in = D_out = [0, 1].
        </p>
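        <p>Definition 1 can be mirrored in code. The sketch below is a minimal illustration: one callable accepting any finite number of inputs stands for the family {A^(0), A^(1), …}; the choice of 0.0 as the empty-input constant for mean is an assumption, not prescribed by the definition:</p>

```python
# Sketch of an aggregate function as a family of n-ary maps (D_in)^n -> D_out,
# realized as one Python callable over any finite number of inputs.
def make_mean(empty_value=0.0):
    def mean(*xs):           # A(n) for any finite n; A(0) = empty_value (assumed)
        return sum(xs) / len(xs) if xs else empty_value
    return mean

def count(*xs):              # D_in may be rich (e.g. R^3) while D_out is N
    return len(xs)

mean = make_mean()
print(mean(170.0, 180.0, 190.0))    # 180.0
print(count((1, 0, 2), (3, 1, 1)))  # 2, regardless of the input domain
```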
      </sec>
      <sec id="sec-3-2">
        <title>3.2. As Real Logic Functions</title>
        <p>Incorporating such aggregate functions into ℒ is straightforward. The concrete, real domains of
Real Logic can be mapped to the ones in Definition 1. Additionally, ℒ-variables are sequences,
and any term recursively constructed from a variable is a sequence too. Aggregate functions
can operate over these sequences.</p>
        <p>Definition 2. An ℒ-aggregate function symbol A with input domain T_in and output domain
T_out is grounded using an aggregate function A (Definition 1) such that 𝒢(T_in) = D_in and
𝒢(T_out) = D_out. Let x be a variable symbol with |𝒢(x)| = n, that is, x is grounded as a sequence
of n individuals. For any term t(x) with free variable x, such that 𝒢(t(x)) ∈ 𝒢(T_in), we have:
𝒢(A x t(x)) = A_{i=1,…,n}(𝒢(t(x))^(i)) (9)
where 𝒢(t(x))^(i) is the evaluation of 𝒢(t(x)) using the i-th individual of x.</p>
        <p>The syntax is similar to that of quantifiers, e.g. ∀x ϕ(x). Here, ∀ is replaced by an ℒ-aggregate
function symbol A, and the formula ϕ(x) is replaced by a term t(x). For ease of notation, and if
no confusion can arise, we simply write A to denote both the ℒ-symbol and its grounding A.
Example 1. Let x be a variable that denotes people. In particular, let 𝒢(x) = ⟨alice, bob, charlie⟩
denote a sequence of three individuals. We can query the average height of x, using an aggregate
function mean, with:
𝒢(mean x height(x)) = mean_{i=1,…,3} 𝒢(height(x))^(i) (10)
= (𝒢(height(alice)) + 𝒢(height(bob)) + 𝒢(height(charlie))) / 3 (11)
The output is a new term, embedding the average height in the population, which can be used
as an input to other formulas.</p>
        <p>Notice that we can combine aggregate functions with guarded quantification (see Equation 7)
by re-using the same syntax and functionality.</p>
        <p>Example 2. Following Example 1, let us consider that alice and bob are Italian, but
charlie is not. Let Italian(x) be a boolean predicate that returns either 1 (true) or 0 (false). One
can query the average height of Italians with:
𝒢(mean x : Italian(x) height(x)) = mean_{i=1,…,3 : 𝒢(Italian(x))^(i) = 1} 𝒢(height(x))^(i) (12)
= (𝒢(height(alice)) + 𝒢(height(bob))) / 2 (13)</p>
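        <p>Examples 1 and 2 can be sketched concretely: a variable grounded as an explicit sequence of individuals, an aggregate term, and its guarded variant. The heights and nationalities below are hypothetical illustration data:</p>

```python
# Sketch of an aggregate term mean_x height(x) and its guarded variant
# mean_{x : Italian(x)} height(x), over hypothetical illustration data.
people = ["alice", "bob", "charlie"]            # G(x): an explicit sequence
height = {"alice": 165.0, "bob": 175.0, "charlie": 185.0}
italian = {"alice": 1, "bob": 1, "charlie": 0}  # boolean predicate, 0 or 1

def agg(fn, term, xs, guard=None):
    """Apply aggregate fn to term(x) over the sequence xs, optionally guarded."""
    values = [term(x) for x in xs if guard is None or guard(x) == 1]
    return fn(values)

def mean(values):
    return sum(values) / len(values)

print(agg(mean, height.get, people))                     # (165+175+185)/3 = 175.0
print(agg(mean, height.get, people, guard=italian.get))  # (165+175)/2 = 170.0
```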
        <p>Most of the common aggregate functions (except count) are differentiable. Consequently,
they still integrate nicely with the end-to-end differentiability of Real Logic and LTN. In the
following experiments, we only explore querying with aggregate functions. However, aggregate
functions could also be integrated with optimization in LTN.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. FooDB Dataset</title>
        <p>FooDB4 is a database that provides information on food chemistry and food constituents,
including macronutrients, micronutrients, and many of the constituents that give foods their
flavor, color, taste, texture, and aroma. The data is compiled from existing literature and
databases, in order to provide the most comprehensive food chemical database.</p>
        <p>We focus on querying macronutrient data from FooDB. In total, FooDB lists macronutrient
information for 797 ingredients. We average the information sourced from the USDA5 and from
the Technical University of Denmark6. We also use some of the simple ontologies about food
groups defined in FooDB.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Grounding</title>
        <p>Ingredients are grounded using their macronutrient attributes and their food groups, the latter
being encoded as integer labels. Functional symbols are mostly getters of these attributes. We
use explicit vocabulary for most of the terms. For example, let i be a variable for ingredients.
The function symbol prot(i) gives the protein level of ingredients, kcal(i) gives their energy
level, and so forth.</p>
        <p>For the aggregate functions, we use traditional operators such as mean, max, or std. Predicates
such as is_cereal(i) assert if the ingredient belongs to the corresponding food group. In this
experiment, such ontology predicates evaluate to 0 (false) or 1 (true).</p>
        <p>For grounding the connectives of the language, we use the product configuration presented
in Section 2. ME_p (∀) is implemented with p = 20, making it relatively close to the min operator.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2.1. Predicate is_high</title>
        <p>We use two fuzzy predicates is_high and is_low to describe values on a scale. Let v be a real
value sampled from a distribution D. To define how high v is compared to the other values in
the distribution, we use the concept of Cumulative Distribution Functions (CDF):
is_high(v) = P_D(V ≤ v) (14)
Intuitively, we use the probability that another sample will take a value less than or equal to v
as a truth degree that defines how high the value of v is.</p>
        <p>5U.S. Department of Agriculture, Agricultural Research Service. FoodData Central, 2019. https://fdc.nal.usda.gov/
6Food data (frida.fooddata.dk), version 4, 2019, National Food Institute, Technical University of Denmark</p>
        <p>In this experiment, we assume that logistic distributions approximately fit the nutrient data.7
Let μ be the mean value of the distribution and σ be its standard deviation. Using the formula of
the CDF of a logistic distribution, the model for is_high computes:
is_high(v, μ, σ) = S((v − μ) / σ) (15)
where S(u) = 1 / (1 + e^(−u)) is a sigmoid function.</p>
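        <p>Equations 14-15 admit a direct sketch under the paper's logistic-fit assumption; the kcal values below are hypothetical illustration data, not FooDB entries:</p>

```python
import math

# Sketch of is_high as the CDF of a logistic distribution parameterized by
# the mean mu and standard deviation sigma, themselves aggregate terms.
def is_high(v, mu, sigma):
    return 1.0 / (1.0 + math.exp(-(v - mu) / sigma))  # S((v - mu) / sigma)

def is_low(v, mu, sigma):
    return 1.0 - is_high(v, mu, sigma)                # queried as not is_high

# Hypothetical kcal values for a handful of ingredients:
kcal = [350.0, 380.0, 360.0, 120.0, 90.0]
mu = sum(kcal) / len(kcal)                                     # mean (aggregate)
sigma = (sum((k - mu) ** 2 for k in kcal) / len(kcal)) ** 0.5  # std (aggregate)
print(is_high(380.0, mu, sigma))  # above 0.5: high within this sample
print(is_high(90.0, mu, sigma))   # below 0.5
```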
      </sec>
      <sec id="sec-4-4">
        <title>4.3. Queries</title>
        <p>is_low is computed by querying ¬is_high. Aggregate functions are used to compute the mean μ
and standard deviation σ in Equation 15.</p>
        <p>We illustrate, using queries about food nutrients in ingredients, how descriptive statistics mix
with complex logical queries. In particular, we ask:</p>
        <p>Q1. What cereal product is not high in calories?
Q2. What cereal product is high in protein?
Q3. What cereal product is high in protein and not high in calories?
Q4. For every ingredient, does being high in protein imply being high in calories?
Q5. For every ingredient, does being high in fats imply being high in calories?</p>
        <p>The detailed formulas are in Table 2. We evaluate if a cereal is high in, for example, calories,
using other cereals as a scale (notice the guarded quantifier with condition is_cereal(i)). When
querying about free variables (Q1, Q2, Q3), we present the top 3 ranking results.</p>
        <p>Using a similar approach, we could integrate information on price, seasonality, etc., to rank
alternatives on continuous scales. There is virtually no limit to the complexity of queries.</p>
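        <p>A query in the spirit of Q3 can be sketched by combining is_high, the standard fuzzy negation, and the product t-norm. The cereal data below is hypothetical and the resulting ranking is only illustrative:</p>

```python
import math

# Sketch of a Q3-style query: rank hypothetical cereals that are high in
# protein AND not high in calories (T_prod conjunction, N_S negation).
cereals = {"oats": (13.0, 380.0), "corn flakes": (7.0, 360.0),
           "bran": (14.0, 250.0)}  # name -> (protein, kcal); illustrative data

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def is_high(v, values):
    # truth degree of v being high, scaled by the population statistics
    mu = sum(values) / len(values)
    sigma = (sum((x - mu) ** 2 for x in values) / len(values)) ** 0.5
    return sigmoid((v - mu) / sigma)

prot = [p for p, _ in cereals.values()]
kcal = [k for _, k in cereals.values()]
# truth(c) = is_high(prot(c)) AND NOT is_high(kcal(c))
scores = {c: is_high(p, prot) * (1.0 - is_high(k, kcal))
          for c, (p, k) in cereals.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # bran: high protein, comparatively low calories
```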
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, primarily motivated by commercial database applications, we studied adding
aggregate functions to Real Logic. As a querying system, it combines the strengths of 1) descriptive
statistics, modeled through fuzzy predicates, 2) FOL syntax to write complex queries, and 3)
SQL-like expressiveness to aggregate and collect insights from data tables.</p>
      <p>As many common aggregate functions are differentiable, Real Logic with aggregate functions
can still be end-to-end differentiable. If the knowledge graph entries are associated with
embeddings, one can use continuous optimization techniques to efficiently answer queries
[17, 18, 19]. In particular, in [17], the authors propose an approach based on gradient descent that
uses a logical language akin to Real Logic, where fuzzy semantics approximate connectives.
The same approach could incorporate aggregate functions and support a wider range of queries.</p>
      <p>7In real-life applications, adequate statistical tests should validate the choice of distribution. In this paper, we
do not conduct such tests, as our results are illustrative and not part of our contribution.</p>
      <p>
As LTN implements Real Logic using Tensorflow computational graphs 8, it inherits
Tensorflow's built-in optimizations. For example, on a local portable machine, we measure the
computation time of two queries averaged over 10000 runs:</p>
      <p>Q6. is_high(kcal(orange), mean i kcal(i), std i kcal(i)), which takes 2.58 ms to execute,
Q7. is_high(kcal(i), mean i kcal(i), std i kcal(i)), which takes 3.57 ms to execute.
Q6 returns a single result for orange, whereas Q7 returns results for each ingredient, that is, 797
results. One could expect Q7 to be approximately 797 times slower than Q6 if LTN needed to
recompute the terms mean i kcal(i) and std i kcal(i) as many times. Instead, Q7 takes barely
longer than Q6. Using Tensorflow built-in optimization, LTN can efficiently re-use common
parts of the computational graph for optimal complexity.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Badreddine</surname>
          </string-name>
          , A. d. Garcez,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spranger</surname>
          </string-name>
          , Logic Tensor Networks, arXiv:2012.13635 [cs] (2021). URL: http://arxiv.org/abs/2012.13635.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          , A. d. Garcez, Logic Tensor Networks:
          <article-title>Deep Learning and Logical Reasoning from Data and Knowledge</article-title>
          , arXiv:1606.04422 [cs] (2016). URL: http://arxiv.org/abs/1606.04422.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Palmonari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          , L. Serafini,
          <article-title>Complementing Logical Reasoning with Sub-symbolic Commonsense</article-title>
          , 2019, pp. 161-170. doi:10.1007/978-3-030-31095-0_11.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          ,
          <article-title>On the Capabilities of Logic Tensor Networks for Deductive Reasoning</article-title>
          , in: AAAI Spring Symposium:
          <source>Combining Machine Learning with Knowledge Engineering</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Donadello</surname>
          </string-name>
          , L. Serafini,
          <article-title>Compensating Supervision Incompleteness with Prior Knowledge in Semantic Image Interpretation</article-title>
          , arXiv:1910.00462 [cs, stat] (2019). URL: http://arxiv.org/abs/1910.00462.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>E. van Krieken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Acar</surname>
          </string-name>
          ,
          <string-name>
            <surname>F. van Harmelen</surname>
          </string-name>
          ,
          <article-title>Semi-Supervised Learning using Differentiable Reasoning</article-title>
          , arXiv:1908.04700 [cs] (2019). URL: http://arxiv.org/abs/1908.04700.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Hella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Libkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nurmonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <article-title>Logics with aggregate operators</article-title>
          ,
          <source>Journal of the ACM</source>
          <volume>48</volume>
          (
          <year>2001</year>
          )
          <fpage>880</fpage>
          -
          <lpage>907</lpage>
          . URL: https://doi.org/10.1145/502090.502100. doi:10.1145/502090.502100.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Faber</surname>
          </string-name>
          , G. Pfeifer,
          <string-name>
            <given-names>N.</given-names>
            <surname>Leone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dell'Armi</surname>
          </string-name>
          , G. Ielpa,
          <article-title>Design and Implementation of Aggregate Functions in the DLV System</article-title>
          , arXiv:0802.3137 [cs] (2008). URL: http://arxiv.org/abs/0802.3137.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Grabisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-L.</given-names>
            <surname>Marichal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mesiar</surname>
          </string-name>
          , E. Pap, Aggregation functions: Means, Information Sciences 181 (2011) 1-22. URL: https://hal.archives-ouvertes.fr/hal-00539028. doi:10.1016/j.ins.2010.08.043.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] A. Van Gelder, The well-founded semantics of aggregation, in: Proceedings of the eleventh ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, PODS '92, Association for Computing Machinery, New York, NY, USA, 1992, pp. 127-138. URL: https://doi.org/10.1145/137097.137854. doi:10.1145/137097.137854.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] S.-M. Chen, W.-T. Jong, Fuzzy query translation for relational database systems, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 27 (1997) 714-721. doi:10.1109/3477.604117.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] J. Galindo, A. Urrutia, M. Piattini, Fuzzy Databases: Modeling, Design and Implementation, IGI Global, 2006. URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-59140-324-1. doi:10.4018/978-1-59140-324-1.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] R. Mama, M. Machkour, A study of fuzzy query systems for relational databases, in: Proceedings of the 4th International Conference on Smart City Applications, SCA '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 1-5. URL: https://doi.org/10.1145/3368756.3369105. doi:10.1145/3368756.3369105.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] D. Suciu, D. Olteanu, C. Ré, C. Koch, Probabilistic Databases, Synthesis Lectures on Data Management 3 (2011) 1-180. URL: https://www.morganclaypool.com/doi/abs/10.2200/S00362ED1V01Y201105DTM016. doi:10.2200/S00362ED1V01Y201105DTM016.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] E. van Krieken, E. Acar, F. van Harmelen, Analyzing Differentiable Fuzzy Logic Operators, arXiv:2002.06100 [cs] (2020). URL: http://arxiv.org/abs/2002.06100.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] E. Grädel, Y. Gurevich, Metafinite Model Theory, Information and Computation 140 (1998) 26-81. URL: https://www.sciencedirect.com/science/article/pii/S0890540197926754. doi:10.1006/inco.1997.2675.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] E. Arakelyan, D. Daza, P. Minervini, M. Cochez, Complex Query Answering with Neural Link Predictors, arXiv:2011.03459 [cs] (2021). URL: http://arxiv.org/abs/2011.03459.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] T. Friedman, G. Van den Broeck, Symbolic Querying of Vector Spaces: Probabilistic Databases Meets Relational Embeddings, arXiv:2002.10029 [cs] (2020). URL: http://arxiv.org/abs/2002.10029.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] M. Wang, R. Wang, J. Liu, Y. Chen, L. Zhang, G. Qi, Towards Empty Answers in SPARQL: Approximating Querying with RDF Embedding, in: The Semantic Web - ISWC 2018, Lecture Notes in Computer Science, Springer International Publishing, Cham, 2018, pp. 513-529. doi:10.1007/978-3-030-00671-6_30.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>