<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From Hypertree Width to Submodular Width and Data-dependent Structural Decompositions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Francesco Scarcello</string-name>
          <email>scarcello@dimes.unical.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DIMES, University of Calabria</institution>
          ,
          <addr-line>87036, Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>24</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>Identifying the frontier of tractability for conjunctive queries is a longstanding question in database theory. Recent advances in structural decomposition methods provide an important answer regarding the fixed-parameter tractability of conjunctive queries, and efficient algorithms based on decomposition methods have been implemented in database management systems. The paper shows that these techniques are useful not only for long and complex queries, but even for short and simple ones. Moreover, it sheds some light on some subtle issues that are not yet completely understood, and discusses open problems, such as the gap remaining between the theoretical fixed-parameter polynomial-time upper bound and the running times required by actual algorithms used in practice.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Conjunctive queries are defined through conjunctions of atoms (without negation), and
are known to be equivalent to Select-Project-Join queries. The problem of evaluating
such queries is NP-hard in general, but it is feasible in polynomial time on the class
of acyclic queries (we omit “conjunctive,” hereafter), which was the subject of many
seminal research works since the early days of database theory. This class contains all
queries Q whose associated query hypergraph HQ is acyclic, where HQ is a hypergraph
having the variables of Q as its nodes, and the (sets of variables occurring in the) atoms
of Q as its hyperedges.</p>
      <p>
        In fact, queries arising from real applications are rarely precisely acyclic. Yet, they
are often not very intricate and, in fact, tend to exhibit some limited degree of cyclicity,
which suffices to retain most of the nice properties of acyclic ones. Therefore, several
efforts have been spent to investigate invariants that are best suited to identify
nearly-acyclic hypergraphs, leading to the definition of a number of so-called (purely)
structural decomposition methods [
        <xref ref-type="bibr" rid="ref12 ref9">12,9</xref>
        ], such as the tree decompositions [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and, after
some years, the (generalized) hypertree [
        <xref ref-type="bibr" rid="ref7 ref8">7,8</xref>
        ], fractional hypertree [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], component
hypertree [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], and greedy hypertree [
        <xref ref-type="bibr" rid="ref13 ref14">14,13</xref>
        ] decompositions. These methods aim at
transforming a given cyclic hypergraph into an acyclic one, by organising its edges (or its
nodes) into a polynomial number of clusters, and by suitably arranging these clusters as
a tree, called decomposition tree. The original problem instance can then be evaluated
over such a tree of subproblems (in our case, subqueries), with a cost that is exponential
with respect to a measure of the complexity of the subproblems, also called width of
the decomposition, and polynomial if this width is bounded by some constant. For
instance, according to the treewidth notion, the width of a decomposition is given by the
maximum cardinality (minus one) of the set of variables occurring in the subqueries
associated with the vertices of the decomposition tree. The (generalized) hypertree width
considers instead the number of atoms occurring in each subproblem, in other terms,
the number of hyperedges needed to cover the relevant variables in each vertex of the
decomposition tree. The fractional hypertree width [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is similar, but it is based on a
fractional cover of such variables, instead of an integral one, as in the case of (plain)
hypertree decompositions.
      </p>
      <p>
        Besides the theoretical guarantees on the cost of evaluating classes of queries having
bounded (fractional) hypertree width, experimental evidence of the benefits gained from
these structural decomposition methods in query optimization (and in the equivalent
Constraint Satisfaction Problem) has been provided in the literature by a number of
different authors and in different application domains, see, e.g., [
        <xref ref-type="bibr" rid="ref1 ref5 ref6">5,6,1</xref>
        ].
      </p>
      <p>
        A yet more general width-concept is the submodular width [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], where the cost of a
decomposition is determined by the worst possible submodular function of sets of
variables of the given hypergraph. Recall that a set function f is submodular if, for every
element x and any pair of sets S and T with S ⊆ T and x ∉ T, f(S ∪ {x}) − f(S) ≥
f(T ∪ {x}) − f(T) holds. That is, the marginal contribution of any element decreases
when sets become larger. This powerful notion is strictly more general than all
previous techniques, as there are classes of hypergraphs having bounded submodular width,
but unbounded fractional hypertree width. However, having bounded submodular width
(smw) does not guarantee a polynomial-time combined complexity as for hypertree
decompositions, but just fixed-parameter tractability where the query hypergraph is used
as a parameter. This means that we have a polynomial-time data complexity of the form
O(f1(HQ) · |DB|^f2(smw)), where f1 is an exponential function of the query size. Marx
also proved that having bounded smw is in fact a necessary condition for the
fixed-parameter tractability of conjunctive queries (unless the Exponential Time Hypothesis
fails) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Unfortunately, it is not clear how to recognise queries of bounded
submodular width. It is worthwhile noting that, unlike the previous methods, the decompositions
used to answer the query depend not only on the hypergraph HQ, but on the actual
database DB, too. Nevertheless, the submodular width of HQ depends only on HQ,
which makes it an actual structural measure.
      </p>
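      <p>As a concrete illustration of the definition, the following small sketch (an example of our own, not part of the formal development) exhaustively checks the diminishing-returns inequality for a coverage function, a classical example of a submodular set function:</p>
      <p>
```python
from itertools import combinations

# Coverage function: f(S) = number of elements covered by the sets indexed
# by S. Coverage functions are a classical example of submodular functions.
SETS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}

def f(S):
    return len(set().union(*(SETS[i] for i in S))) if S else 0

def is_submodular():
    ground = set(SETS)
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for S in subsets:
        for T in subsets:
            if not S.issubset(T):
                continue
            for x in ground - T:
                # the marginal gain at the smaller set must dominate
                # the marginal gain at the larger one
                if not f(S | {x}) - f(S) >= f(T | {x}) - f(T):
                    return False
    return True

print(is_submodular())  # prints True
```
      </p>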
      <p>The primary aim of the paper is to shed light on recent achievements in structural
decomposition methods, in particular
– to make clear why the powerful notion of fractional hypertree width is not able to
catch the precise complexity of a given query;
– to show, by using the case study of queries whose hypergraphs are simple cycles,
that using data-dependent decompositions, as in the case of submodular width,
provides better upper bounds on the cost of queries, and effective algorithms;
– to make evident that the benefits of such sophisticated notions occur not only in
long and complex queries, but even in short and simple ones, occurring in most
real-world applications;
– to give a hint of what is the submodular width of a query, and a hint of the problems
that are still open in this area of research.</p>
    </sec>
    <sec id="sec-2">
      <title>Hypertree decompositions and the AGM bound</title>
      <p>
        Let H be a hypergraph, and denote the set of its nodes by nodes(H) and the set of its
hyperedges by edges(H). Formally, a tree decomposition [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] of H is a pair ⟨T, χ⟩,
where T = (V, F) is a tree, and χ is a labelling function assigning to each vertex
p ∈ V a set of vertices χ(p) ⊆ nodes(H), such that the following three conditions are
satisfied: (1) for each node b of H, there exists p ∈ V such that b ∈ χ(p); (2) for each
hyperedge h ∈ edges(H), there exists p ∈ V such that h ⊆ χ(p); and (3) for each node
b in nodes(H), the set {p ∈ V | b ∈ χ(p)} induces a connected subtree of T. The width
of ⟨T, χ⟩ is the number max_{p ∈ V}(|χ(p)| − 1). The treewidth of H, denoted by tw(H),
is the minimum width over all its tree decompositions.
      </p>
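      <p>The three conditions can be checked mechanically. The following sketch (our own illustration, with a hand-picked decomposition) verifies them for the 4-cycle and a candidate width-2 decomposition:</p>
      <p>
```python
from collections import deque

# Hypergraph: the 4-cycle on nodes 1..4.
NODES = {1, 2, 3, 4}
EDGES = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]

# Candidate tree decomposition: tree p0 - p1 with bags chi (width 2).
TREE_EDGES = [("p0", "p1")]
CHI = {"p0": {1, 2, 3}, "p1": {1, 3, 4}}

def connected(vertices, tree_edges):
    # BFS over the subtree induced by `vertices`
    if not vertices:
        return True
    seen, todo = set(), deque([next(iter(vertices))])
    while todo:
        v = todo.popleft()
        if v in seen:
            continue
        seen.add(v)
        for a, b in tree_edges:
            if a == v and b in vertices:
                todo.append(b)
            if b == v and a in vertices:
                todo.append(a)
    return seen == set(vertices)

def is_tree_decomposition():
    # (1) every hypergraph node appears in some bag
    cond1 = all(any(b in bag for bag in CHI.values()) for b in NODES)
    # (2) every hyperedge is contained in some bag
    cond2 = all(any(h.issubset(bag) for bag in CHI.values()) for h in EDGES)
    # (3) for each node b, the tree vertices whose bag contains b
    # induce a connected subtree
    cond3 = all(connected({p for p in CHI if b in CHI[p]}, TREE_EDGES)
                for b in NODES)
    width = max(len(bag) - 1 for bag in CHI.values())
    return cond1 and cond2 and cond3, width

print(is_tree_decomposition())  # (True, 2): a valid width-2 decomposition
```
      </p>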
      <p>Hypertree Decompositions A generalized hypertree decomposition of a hypergraph
H is a triple HD = ⟨T, χ, λ⟩, called a hypertree for H, where ⟨T, χ⟩ is a tree
decomposition of H, and λ is a function labelling the vertices of T by sets of hyperedges of H
such that, for each vertex p of T, χ(p) ⊆ ⋃_{h ∈ λ(p)} h. That is, all nodes in the χ labelling
are covered by hyperedges in the λ labelling.</p>
      <p>Consider a generalized hypertree decomposition HD = ⟨T, χ, λ⟩. The cyclicity
measure used in standard generalized hypertree decompositions, called generalized
hypertree width and denoted by ghw(H), is based on the cardinality of the set λ(p) in charge
of covering χ(p), for each vertex p of the decomposition tree. Contrasted with the
treewidth, which is based on the cardinality of χ(p), it is clear that ghw(H) ≤ tw(H) + 1
always holds. It has been observed that more general notions can be obtained by
allowing the use of more general functions fHD(·), to measure the weight fHD(p) of any
vertex of T, and hence to define the width of a hypertree decomposition.</p>
      <p>
        A fractional hypertree decomposition [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] of a hypergraph H is a pair FHD =
⟨HD, γ⟩, where HD = ⟨T, χ, λ⟩ is a generalized hypertree decomposition of H, and γ
is a mapping associating each vertex p of T with a function γp : λ(p) → ℝ mapping
hyperedges to non-negative real numbers. For each vertex p in T, γp encodes a
fractional cover of the nodes in χ(p): for each v ∈ χ(p), Σ_{h ∈ λ(p), v ∈ h} γp(h) ≥ 1. Define
ρ(p) = Σ_{h ∈ λ(p)} γp(h). The width of FHD is the maximum of ρ(p) over all vertices p
in T. The fractional hypertree width of H, denoted by fhw(H), is the minimum width
over all its fractional hypertree decompositions.2
      </p>
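      <p>For instance, for a single decomposition vertex whose bag is the triangle {X1, X2, X3} covered by its three binary hyperedges, assigning weight 1/2 to each edge is a fractional cover of weight ρ = 3/2, while any integral cover has weight 2. A small sketch (illustrative numbers of our own):</p>
      <p>
```python
from fractions import Fraction

# Bag of a decomposition vertex and the hyperedges in its lambda-labelling.
BAG = {"X1", "X2", "X3"}
LAMBDA = [frozenset({"X1", "X2"}),
          frozenset({"X2", "X3"}),
          frozenset({"X3", "X1"})]

def is_fractional_cover(gamma):
    # every variable of the bag must receive total weight at least 1
    return all(sum(w for h, w in gamma.items() if v in h) >= 1 for v in BAG)

# fractional cover: weight 1/2 on each edge, rho = 3/2
gamma = {h: Fraction(1, 2) for h in LAMBDA}
rho = sum(gamma.values())
print(is_fractional_cover(gamma), rho)  # True 3/2

# the best integral cover needs two whole edges, so its weight is 2
gamma_int = {LAMBDA[0]: 1, LAMBDA[1]: 1, LAMBDA[2]: 0}
print(is_fractional_cover(gamma_int), sum(gamma_int.values()))  # True 2
```
      </p>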
      <p>Generalized hypertree width is obtained if in the definition of fractional hypertree
decomposition we restrict the codomain of each γp to be integral. It is thus not
surprising that there are hypergraphs where the fractional hypertree width is smaller than the
(generalized) hypertree width.</p>
      <p>
        The additional power of fractional covers in the decomposition vertices makes the
problem intractable, in general. Indeed, for a given hypergraph H and for any fixed
w ≥ 2, computing a hypertree decomposition of width at most w (if any) is feasible in
polynomial-time, while checking whether H has fractional hypertree width at most w
is NP-hard [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. However, there is a polynomial-time algorithm for deciding whether the
fractional hypertree width is at most f(w), for some function f in O(w^3) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
2 The equivalent (original) definition in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] does not use the λ-labeling, so that γp weighs all
hyperedges. Our definition is used, e.g., in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Query evaluation Queries whose (fractional) hypertree width is bounded by some
constant k can be answered in polynomial time. Given a query Q and a hypertree
decomposition HD for the query hypergraph HQ, the query can be transformed into an
acyclic query, based on the decomposition HD . Each vertex of the decomposition tree
can be viewed as a subproblem to be solved: in our database context, such a vertex p
is associated with a subquery Qp whose atoms are the atoms in λ(p), and whose set
of output variables is χ(p). By computing the answers of these subqueries, we obtain
an acyclic query equivalent to Q that can be evaluated easily, by standard techniques.
Equivalently, we can see this transformation as a query plan for Q (where some atoms
can appear multiple times with different projections, if necessary). The crucial
observation here is that the maximum number of answers of the subquery Qp associated
with each vertex p is at most |rp|^ρ(p), where rp is the largest database relation
associated with the hyperedges in λ(p) [
        <xref ref-type="bibr" rid="ref15 ref3">15,3</xref>
        ]. Moreover, such answers can be efficiently
computed within this upper bound [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Tight Bounds The so-called AGM bound [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] states that, in fact, the upper bound on
the number of solutions at each subproblem Qp provided by the best fractional cover
ρ∗ is tight: there exists some database DB such that the number of answers of Qp
over this database meets the fractional cover upper bound. This result has been refined
in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], where a bound based on the so-called coloring number C(Qp) of such a query
is defined, in order to take into account the output variables, as well as certain kinds of
functional dependencies.
      </p>
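      <p>A standard construction showing the tightness of the bound for the triangle query takes each relation to be the full m × m cartesian product, so that N = m² tuples per relation and the number of answers is m³ = N^(3/2). The following sketch (with small illustrative numbers of our own) verifies this:</p>
      <p>
```python
from itertools import product

# Triangle query ans(A, B, C) ← R(A, B) ∧ S(B, C) ∧ T(C, A), with each
# relation the full m x m product: N = m^2 tuples, N^(3/2) answers.
m = 10
R = S = T = set(product(range(m), repeat=2))

ans = {(a, b, c)
       for (a, b) in R
       for (b2, c) in S if b2 == b
       for (c2, a2) in T if c2 == c and a2 == a}

N = len(R)
print(len(ans), N ** 1.5)  # 1000 1000.0 — the fractional-cover bound is met
```
      </p>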
      <p>It follows that, whenever the fractional hypertree width of a query Q is w, and we
have any fractional hypertree decomposition of width w, there always exists a database
DB such that evaluating the subproblem Qp at some vertex p of the decomposition tree
requires Ω(N^w) time, where N is the size of the largest relation occurring in Qp. Yet,
from the results based on the submodular width, it turns out that we could do much
better. How is that possible?
</p>
    </sec>
    <sec id="sec-3">
      <title>When worst cases are unachievable</title>
      <p>In this section, we show that fractional hypertree decompositions do not provide a tight
bound on the cost of evaluating conjunctive queries. This is not very clear, at a first
sight, because of the results mentioned in the previous section. However, we see that
even in very simple queries the worst cases predicted by fractional hypertree width are
impossible to achieve.</p>
      <p>Let us start with a hint of what happens. Consider a conjunctive query q with an
associated hypergraph Hq having fhw(Hq) = k, and let HD be a width-k fractional
hypertree decomposition for q. Let v be a vertex of HD whose associated subproblem
has a fractional cover of weight k. Then, there exists a database DB such that solving
this subproblem takes Ω(n^k) time. However, it is possible that there exists
another decomposition HD′ with no subproblem having such an evaluation cost over
this specific database DB. In its turn, HD′ will have a critical database DB′ for which
some of its subproblems require Ω(n^k) time. But DB′ is not necessarily a bad case for
the previous decomposition HD. Therefore, it is possible that, for the given query q, for
every database DB there is a fractional hypertree decomposition where any subproblem
occurring in its vertices can be evaluated in time asymptotically smaller than n^k.</p>
      <p>As a case study of this phenomenon, we investigate the class of conjunctive queries
whose associated hypergraphs are (simple) cycles. These queries have (fractional)
hypertree width 2, but they can be actually evaluated in subquadratic time on any given
database.</p>
      <p>A simple example: the cycles First, consider the following simple cyclic query q4,
whose (hyper)graph G4 is shown in Figure 1:</p>
      <p>ans(X¯ ) ← r1(X1, X2) ∧ r2(X2, X3) ∧ r3(X3, X4) ∧ r4(X4, X1).</p>
      <p>Consider the decomposition of this graph in the center of Figure 1, which involves
subproblems such as r1(X1, X2) ∧ r2(X2, X3) and r3(X3, X4) ∧ r4(X4, X1) having
a fractional cover of weight 2. It follows that there exists a database DB4 such that,
e.g., the subquery ans(X1, X2, X3) ← r1(X1, X2) ∧ r2(X2, X3) has O(N^2) answers,
where for the sake of simplicity we assume that the relations are the same and that they
have N tuples each.</p>
      <p>Think now of how this worst-case bound can be achieved. First assume that ri is a
cartesian product of the domains of its variables: we have N = |d(X1)| · |d(X2)|, so that
|d(X1)| = N^(1/2). Because ans(X1, X2, X3) cannot be larger than the cartesian
product of the domains of its variables, it follows that, in this case, |ans(X1, X2, X3)| ≤
N^(3/2). In order to reach the worst-case bound of N^2, in DB4 the attributes should
have quite different domain sizes. In particular, we must have many values on the
extremal variables of the chain X1, X2, X3, that is, almost N values for X1 and X3,
and a few values for X2. For instance, X2 may have just one skew value that is
connected to all values for X1 and X3, so that ans(X1, X2, X3) = d(X1) × 1 × d(X3)
and |ans(X1, X2, X3)| = N^2. For completeness, note that if X2 has many values,
say almost N, then it should behave like a key, and the size of ans(X1, X2, X3)
will be almost linear in N. Summing up, to achieve the O(N^2) worst-case bound
on ans(X1, X2, X3), we should have |d(X2)| = Δ, with Δ much smaller than N.
However, in this case we can use the decomposition shown on the right in Figure 1:
note that the subproblems in the vertices of this decomposition have at most NΔ
answers, achieved in the vertices in the middle where we have a cartesian product of the
form ri × d(X2). The cost of evaluating q4 over DB4 using this decomposition is thus
O(NΔ), which is smaller than the O(N^2) cost predicted by fractional hypertree width.</p>
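      <p>The following sketch (a hypothetical instance of DB4, with numbers of our own) reproduces this phenomenon: one skew value for X2 makes the chain subquery reach its quadratic worst case, while the alternative decomposition pays only N·Δ:</p>
      <p>
```python
# Hypothetical instance: N tuples per relation, where X2 has a single
# skew value 0 connected to all values of X1 and all values of X3.
N = 100
r1 = {(a, 0) for a in range(N)}   # r1(X1, X2): every X1 value meets X2 = 0
r2 = {(0, c) for c in range(N)}   # r2(X2, X3)

# the chain subquery ans(X1, X2, X3) ← r1(X1, X2) ∧ r2(X2, X3)
ans = {(a, b, c) for (a, b) in r1 for (b2, c) in r2 if b2 == b}
print(len(ans))    # 10000 = N^2: the worst-case bound is met

# with the decomposition on the right of Figure 1, each vertex joins one
# atom with the projection of r2 on X2, whose size Delta is here 1
delta = len({b for (b, c) in r2})
print(N * delta)   # 100: cost O(N * Delta), far below N^2
```
      </p>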
      <p>We next show that the above reasoning can be generalized to any database, showing
that the quadratic fractional hypertree-width worst case is not tight; in particular, such a
query can be evaluated in O(N^(3/2)) time, that is, within the same bound as the triangle query.</p>
      <p>A general bound for evaluating cycles Following the intuition described in the
previous section, we next show that we can actually guarantee a subquadratic evaluation
time for every conjunctive query having a cycle as its hypergraph, on any given
database. A further ingredient to obtain a precise upper bound is to consider horizontal
fragments of the relations, and possibly use different decompositions for evaluating the
given query on different fragments. Therefore, not only do we look for the best
decomposition with regard to a specific database, but also for the best ones for different
fragments of the database.</p>
      <p>
        This bound is a slight improvement of the upper bound described in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and it is
quite similar to the upper bound given in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] for the special case of queries asking for
length-k simple cycles in graphs, whose techniques are indeed used both in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and
in the present paper.
      </p>
      <p>[Figure 1: left, the cycle (hyper)graph G4 on the variables X1, X2, X3, X4; center, the decomposition with vertices r1(X1, X2) ∧ r2(X2, X3) and r3(X3, X4) ∧ r4(X4, X1); right, the decomposition with vertices r4(X4, X1) ∧ r2(X2, _), r3(X3, X4) ∧ r2(X2, _), and r2(X2, X3).]</p>
      <p>Contrasted with this latter upper bound, our result is more general
because it works with queries involving any combination of arbitrary binary relations,
while the results on graphs assume only one relation (the edge relationship of the input
graph).</p>
      <p>Theorem 1. Let C be any class of conjunctive queries whose associated hypergraphs
are (simple) cycles. Then, the answers of any query in C having k atoms on a database
of size N can be computed in O((2/k)^(1/⌈k/2⌉) · N^(2−1/⌈k/2⌉) + OUT) time, where OUT is
the size of the output.</p>
      <p>Proof. Consider the following query qk ∈ C over a database DBk of size N̄:
ans(X̄) ← r1(X1, X2) ∧ r2(X2, X3) ∧ · · · ∧ rk(Xk, X1),
where r1, . . . , rk are binary relation symbols, not necessarily distinct, and, for each
i ∈ [k], ri has Ni tuples. Its associated hypergraph Gk is a simple cycle having k
vertices and k edges, and both hw(Gk) = 2 and fhw(Gk) = 2 hold.</p>
      <p>Let δ &gt; 0 be a number that will be determined later. For each i ∈ [k], let ri′ ⊆ ri be
the (horizontal) fragment of ri consisting of the tuples {⟨v, v′⟩ ∈ ri | v occurs in more
than δ tuples in ri}. Let qi′ be the variant of query qk where the atom ri(Xi, Xi+1) is
replaced by ri′(Xi, Xi+1), and consider a hypertree decomposition for qi′ of the form
shown on the right in Figure 1, with Xi playing the role of X2. Note that the number
of tuples in the projection ri′(Xi, _) occurring in the decomposition vertices is at most
Ni/δ, so that qi′ can be evaluated in O(N̄ · Ni/δ + OUTi) time. The OUTi answers of this
modified query are all those answers of qk such that Xi is instantiated with any value
of its domain having degree at least δ (if any). After the evaluation of all such modified
queries, we get all the OUTδ answers of qk over DBk where some variable takes a value
having degree at least δ. The total time used for such computations is O(N̄²/δ +
OUTδ). Note that, having these answers, each domain value having degree at least δ is
no longer needed in any relation.</p>
      <p>Then, we take care of the remaining fragment: consider the query qk′′: ans′′(X̄) ←
r1′′(X1, X2) ∧ r2′′(X2, X3) ∧ · · · ∧ rk′′(Xk, X1) over the database DBk′′ with the relations
ri′′ = ri \ ri′, for each i ∈ [k]. We evaluate this query by using a tree decomposition
of the form shown in the center of Figure 1, with two big vertices, each one covering
half of the query. The width w of this decomposition is ⌈k/2⌉. Let rs and rt be the
smallest relations in the database DBk′′, having Ns and Nt tuples, respectively.
We can always choose the decomposition in such a way that rs and rt occur
in different vertices. Consider now the evaluation of the subproblem comprising the
relation rs: because every value has degree at most δ, every tuple of rs can have at
most δ extensions to tuples in the subsequent relation in the subproblem, say rz; the
same happens for every tuple in rz, and so on for all the atoms in this subproblem. The
evaluation of these atoms takes O(Ns · δ^(w−1)) time and, similarly, we need O(Nt · δ^(w−1)) time
to evaluate the subproblem occurring in the other vertex of the decomposition, for a
total cost of O((Ns + Nt) · δ^(w−1)). Because rs and rt are the smallest relations, it can
be seen easily that Ns + Nt ≤ 2N̄/k, so that the total cost for evaluating qk′′ over
DBk′′ using such a decomposition is O((2/k) · δ^(w−1) · N̄ + OUT′′), where OUT′′ is the size of
the output relation ans′′(X̄). By equating the costs of the two kinds of decompositions,
we can compute the value δ = (kN̄/2)^(1/w), which guarantees that we obtain the
same evaluation cost for either kind of fragment and for every possible input database.
With this value, the total cost for evaluating qk on any given database DBk of size N̄ is
O((2/k)^(1/w) · N̄^(2−1/w) + OUT), where OUT is the size of the output relation ans(X̄). ⊓⊔</p>
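      <p>The proof strategy can be made concrete. The sketch below (an illustrative implementation of the heavy/light split for the case k = 4, not the paper's exact procedure) removes the values of high degree one variable at a time and then joins the two light half-cycles; it returns the same answers as the naive join:</p>
      <p>
```python
from collections import defaultdict

# Heavy/light split for the 4-cycle query
# ans(X1, X2, X3, X4) ← r1(X1, X2) ∧ r2(X2, X3) ∧ r3(X3, X4) ∧ r4(X4, X1).

def chain_from(rels, v):
    # Answers of the cycle with its first variable fixed to v: join the
    # relations along the chain, then close the cycle back at v.
    partial = {(v,)}
    for r in rels:
        partial = {t + (b,) for t in partial for (a, b) in r if a == t[-1]}
    return {t[:-1] for t in partial if t[-1] == v}

def cycle4(r1, r2, r3, r4, delta):
    rels = [set(r1), set(r2), set(r3), set(r4)]
    out = set()
    # Heavy phase: for each i, find the answers where X_i takes a value of
    # degree at least delta in r_i, then drop the matching tuples.
    for i in range(4):
        deg = defaultdict(int)
        for a, b in rels[i]:
            deg[a] += 1
        heavy = {v for v, d in deg.items() if d >= delta}
        for v in heavy:
            for t in chain_from(rels[i:] + rels[:i], v):
                # t starts at X_i; rotate it back to (X1, X2, X3, X4)
                out.add(tuple(t[(j - i) % 4] for j in range(4)))
        rels[i] = {(a, b) for (a, b) in rels[i] if a not in heavy}
    # Light phase: every remaining value has degree below delta, so each
    # partial tuple extends in at most delta ways; join the two halves.
    index = defaultdict(set)
    for (c, d) in rels[2]:
        index[c].add(d)
    back = defaultdict(set)
    for (d, a) in rels[3]:
        back[d].add(a)
    for (a, b) in rels[0]:
        for (b2, c) in rels[1]:
            if b2 == b:
                for d in index[c]:
                    if a in back[d]:
                        out.add((a, b, c, d))
    return out

# Tiny demo: relations (i, i mod 3) for i in 0..5; the cycle's answers are
# exactly (v, v, v, v) for v in {0, 1, 2}.
R = {(i, i % 3) for i in range(6)}
print(sorted(cycle4(R, R, R, R, delta=2)))
```
      </p>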
    </sec>
    <sec id="sec-4">
      <title>Discussion and Open problems</title>
      <p>Recent advances have shown that powerful structural decompositions such as the
(fractional) hypertree width or the submodular width may lead to effective algorithms even
for very simple queries, and not just for long and complex ones (possibly involving
atoms with large arities), which have been the typical targets for these sophisticated
techniques. Moreover, we have seen that the worst case upper bounds of the so-called
purely structural decomposition methods, where the choice of the decomposition
depends only on the query hypergraph, are not tight.</p>
      <p>
        Contrasted with such methods, the notion of submodular width, which is based on
data-dependent decompositions, is more powerful and in fact characterises the frontier
of fixed-parameter tractability for conjunctive queries. The algorithm used in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] is
based on the computation of almost uniform horizontal fragments of all possible
subproblems that can be evaluated in polynomial-time. Very roughly, uniform means here
that, in each fragment and for any subset of variables of a subproblem, partial
solutions over these variables extend to full solutions of the subproblem in a similar way.
Whenever a subproblem does not meet this condition, it can be split into more
subproblems leading to new fragments. The subproblems of each new instance associated with
a fragment are kept locally consistent. That is, tuples that are not useful in that
fragment are filtered out, and the procedure continues looking for new small, consistent and
uniform subproblems to deal with. This procedure works in fixed-parameter polynomial
time, where the parameter is the size of the query. Eventually, under the promise that the
submodular width is below some fixed threshold w (which is used to define precisely
the above process), there should exist some tree decomposition whose subproblems are
among those that we have computed. Furthermore, whenever for an instance associated
with some fragment its subproblems are not empty, because they are pairwise consistent
we can conclude that the given query has some answer on the input database; otherwise,
we conclude that there are no answers (see also [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] for such consistency properties).
      </p>
      <p>
        Marx’s algorithm is quite involved and its actual cost is quite high even for queries
with a few variables. The PANDA algorithm described in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] aims at obtaining a
similar worst case bound, but is more suitable to effective implementations. However,
because of its different uniformization procedure, its cost involves a (log n)^h factor, where
h is the number of atoms in the query, which is not allowed in fixed-parameter
polynomial time algorithms (the parameter cannot occur as the exponent of a number that
depends on the input size). In fact, it is still open whether or not such a bad factor can
be avoided, so that an effective fixed-parameter polynomial time algorithm can be
implemented. An algorithm for computing the submodular width of a given hypergraph is
also missing, as well as a possible approximation of this notion. Furthermore, the
frontier of polynomial-time tractability for conjunctive queries involving atoms of arbitrary
arities is still unknown, and it is even unknown whether it can actually be charted or
not.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aberger</surname>
            ,
            <given-names>C.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamb</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Nötzli,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Olukotun</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          , Ré,
          <string-name>
            <surname>C.</surname>
          </string-name>
          :
          <article-title>Emptyheaded: A relational engine for graph processing</article-title>
          .
          <source>ACM TODS 42(4)</source>
          ,
          <volume>20</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          :
          <fpage>44</fpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Alon</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yuster</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zwick</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Finding and counting given length cycles</article-title>
          .
          <source>Algorithmica</source>
          <volume>17</volume>
          (
          <issue>3</issue>
          ),
          <fpage>209</fpage>
          -
          <lpage>223</lpage>
          (
          <year>1997</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Atserias</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grohe</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marx</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Size bounds and query plans for relational joins</article-title>
          .
          <source>SIAM J. on Comp</source>
          .
          <volume>42</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1737</fpage>
          -
          <lpage>1767</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Fischl</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pichler</surname>
          </string-name>
          , R.:
          <article-title>General and fractional hypertree decompositions: Hard and easy cases</article-title>
          .
          <source>In: Proc. of PODS'18</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Ghionna</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Granata</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Hypertree decompositions for query optimization</article-title>
          .
          <source>In: Proc. of ICDE'07</source>
          . pp.
          <fpage>36</fpage>
          -
          <lpage>45</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ghionna</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>H-DB: A Hybrid Quantitative-structural SQL Optimizer</article-title>
          .
          <source>In: Proc. of CIKM '11</source>
          . pp.
          <fpage>2573</fpage>
          -
          <lpage>2576</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Hypertree decompositions and tractable queries</article-title>
          .
          <source>JCSS</source>
          <volume>64</volume>
          (
          <issue>3</issue>
          ),
          <fpage>579</fpage>
          -
          <lpage>627</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Robbers, marshals, and guards: Game theoretic and logical characterizations of hypertree width</article-title>
          .
          <source>JCSS</source>
          <volume>66</volume>
          (
          <issue>4</issue>
          ),
          <fpage>775</fpage>
          -
          <lpage>808</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Treewidth and hypertree width</article-title>
          . In: Tractability: Practical Approaches to Hard Problems, pp.
          <fpage>3</fpage>
          -
          <lpage>38</lpage>
          . Cambridge University Press (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valiant</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valiant</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Size and Treewidth Bounds for Conjunctive Queries</article-title>
          .
          <source>J.ACM</source>
          <volume>59</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miklós</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwentick</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Generalized hypertree decompositions: NP-hardness and tractable variants</article-title>
          .
          <source>J.ACM</source>
          <volume>56</volume>
          (
          <issue>6</issue>
          ),
          <fpage>30:1</fpage>
          -
          <lpage>30:32</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Terracina</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Structural decomposition methods: Key notions and database applications</article-title>
          .
          <source>In: A Comprehensive Guide Through the Italian Database Research Over the Last 25 Years</source>
          .
          <source>Studies in Big Data</source>
          , vol.
          <volume>31</volume>
          , pp.
          <fpage>253</fpage>
          -
          <lpage>267</lpage>
          . Springer
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Greedy strategies and larger islands of tractability for conjunctive queries and constraint satisfaction problems</article-title>
          .
          <source>Inf. Comput</source>
          .
          <volume>252</volume>
          ,
          <fpage>201</fpage>
          -
          <lpage>220</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Greco</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>The power of local consistency in conjunctive queries and constraint satisfaction problems</article-title>
          .
          <source>SIAM J. Comput</source>
          .
          <volume>46</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1111</fpage>
          -
          <lpage>1145</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Grohe</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marx</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Constraint solving via fractional edge covers</article-title>
          .
          <source>ACM Trans. on Alg</source>
          .
          <volume>11</volume>
          (
          <issue>1</issue>
          ),
          <fpage>4:1</fpage>
          -
          <lpage>4:20</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Joglekar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ré</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>It's all a matter of degree - using degree information to optimize multiway joins</article-title>
          .
          <source>Theory Comput. Syst</source>
          .
          <volume>62</volume>
          (
          <issue>4</issue>
          ),
          <fpage>810</fpage>
          -
          <lpage>853</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Khamis</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ngo</surname>
            ,
            <given-names>H.Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suciu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>What do Shannon-type inequalities, submodular width, and disjunctive datalog have to do with one another?</article-title>
          <source>In: Proc. of PODS</source>
          <year>2017</year>
          , pp.
          <fpage>429</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Marx</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Approximating fractional hypertree width</article-title>
          .
          <source>ACM T. on Alg</source>
          .
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <fpage>29:1</fpage>
          -
          <lpage>29:17</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Marx</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Tractable hypergraph properties for constraint satisfaction and conjunctive queries</article-title>
          .
          <source>J.ACM</source>
          <volume>60</volume>
          (
          <issue>6</issue>
          ),
          <fpage>42:1</fpage>
          -
          <lpage>42:51</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seymour</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Graph minors. II. Algorithmic aspects of tree-width</article-title>
          .
          <source>J. of Alg</source>
          .
          <volume>7</volume>
          (
          <issue>3</issue>
          ),
          <fpage>309</fpage>
          -
          <lpage>322</lpage>
          (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>