<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Benchmarking Approximate Consistent Query Answering</article-title>
        <subtitle>(Discussion Paper)</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marco Calautti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Console</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Pieris</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DISI, University of Trento</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sapienza, University of Rome</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Edinburgh</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Consistent query answering (CQA) aims to deliver meaningful answers when queries are evaluated over inconsistent databases. Such answers must be certainly true in all repairs, which are consistent databases whose difference from the inconsistent one is somehow minimal. A more informative notion is the percentage of repairs in which a candidate answer is true, called relative frequency. Computing this percentage is intractable in general, but for the relevant setting of conjunctive queries and primary keys, data-efficient randomized approximation schemes exist. Our goal is to perform a thorough experimental evaluation and comparison of those approximation schemes, and to provide new insights on which technique is indicated depending on key characteristics of the input.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        A database is inconsistent if it does not conform to its specifications given in the form of
constraints. There is a consensus that inconsistency is a real-life phenomenon that arises
due to many reasons such as integration of conflicting sources. With the aim of obtaining
conceptually meaningful answers to queries posed over inconsistent databases, Arenas, Bertossi,
and Chomicki introduced in the late 1990s the notion of Consistent Query Answering (CQA) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
The key elements underlying CQA are (1) the notion of repair of an inconsistent database,
that is, a consistent database that differs somehow minimally from the original database, and
(2) the notion of query answering based on certain answers, i.e., answers that are entailed by
every repair. Here is a simple example taken from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
Example 1. Consider the single relation Employee(id, name, dept), where the attribute id is the
key of the relation Employee. Consider also the database D consisting of the tuples:
(1, Bob, HR)
(1, Bob, IT)
(2, Alice, IT)
(2, Tim, IT).
      </p>
      <p>Observe that the above database is inconsistent w.r.t. the key constraint since we are uncertain
about Bob’s department, and the name of the employee with id 2. In this case, to devise a repair, we
only need to keep one of the two atoms that are in conflict. In this way, we obtain a ⊆-maximal
subset of D that is consistent. Consider now the Boolean query that asks whether employees 1 and
2 work in the same department. This query is true only in two of the four repairs, thus, according to certain
answers, it is not entailed.</p>
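      <p>To make the repair count concrete, the four repairs of Example 1 can be enumerated directly. The following Python sketch is our own illustration (the tuple encoding of facts and the helper same_dept are hypothetical names, not from the paper):</p>
      <preformat>
```python
import itertools

# Example 1: the two blocks of conflicting Employee facts (same key value)
blocks = [
    [(1, "Bob", "HR"), (1, "Bob", "IT")],
    [(2, "Alice", "IT"), (2, "Tim", "IT")],
]

# a repair keeps exactly one fact from each block
repairs = [set(choice) for choice in itertools.product(*blocks)]

def same_dept(repair):
    """Boolean query: do employees 1 and 2 work in the same department?"""
    dept = {emp_id: d for emp_id, _name, d in repair}
    return dept[1] == dept[2]

true_in = sum(1 for r in repairs if same_dept(r))  # the query holds in 2 of the 4 repairs
```
      </preformat>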
      <p>
        CQA has been extensively studied both in theory (see, e.g., [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ]), and in practice (see,
e.g., [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]). Nevertheless, the CQA approach comes with a conceptual limitation. The notion of
certain answers only says that a candidate answer is entailed by all repairs, or is not entailed by
at least one repair. But, as it has been frequently observed (e.g., see [
        <xref ref-type="bibr" rid="ref2 ref8 ref9">2, 8, 9</xref>
        ]), the former is too
strict, while the latter is not very useful in a practical context.
      </p>
      <p>
        A Refined Approach. A simple step in obtaining more information for a candidate answer
is to consider its relative frequency, i.e., the percentage of repairs that entail the answer [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
In Example 1, the relative frequency of the empty tuple (i.e., the only candidate answer for
Boolean queries) is 50% since, out of four repairs in total, only two satisfy the query. However,
computing this simple measure is #P-hard, even in data complexity, i.e., when the query and
the constraints are fixed, and even if we focus on conjunctive queries and primary keys [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Hence, the way to proceed is to give up exact solutions and target approximations.
Data-efficient Approximations. Approximation algorithms have been crucial for tackling
intractable problems in different areas of Data Management. Examples arise in the context of
approximating certain answers over incomplete databases, where different approximation
algorithms have been devised and experimentally evaluated [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ]. In the case of inconsistent
databases, data-efficient randomized algorithms exist, in the relevant setting of conjunctive
queries and primary keys, that approximate the relative frequency of a candidate answer within
a desired error, with high probability [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These rely on techniques that have been originally
proposed to approximate the number of satisfying assignments of DNF Boolean formulas [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
However, no corresponding infrastructure for experimentally evaluating such techniques exists.
Main Objective. The main objective of this work is to provide a benchmark for these
randomized approximation schemes, when considering primary keys and conjunctive queries, covering
a wide range of scenarios, and to exploit this benchmark to assess how the performance of the
approximation schemes is affected by key parameters of the input.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>Relational Databases. A schema S is a finite set of relation symbols (or predicates) with
associated arity. A database D over a schema S is a set of facts of the form R(c̄), where R ∈ S,
and c̄ = c1, . . . , cn is a tuple of terms drawn from a countably infinite set C of constants.
We write dom(D) for the active domain of D, that is, the set of constants occurring in D.
Primary Key Constraints. A key constraint (or key) κ over a schema S is an expression
key(R) = {1, . . . , m}, where R ∈ S has arity n, and m ≤ n; we also call it an R-key. A
database D satisfies κ if, for every R(c̄), R(c̄′) ∈ D, c̄[i] = c̄′[i] for each i ∈ {1, . . . , m} implies
c̄ = c̄′. We say that D is consistent w.r.t. a set Σ of keys, written D |= Σ, if D satisfies each
key in Σ; otherwise, it is inconsistent. A set of primary keys Σ is a set of keys such that,
for each predicate R, Σ has at most one R-key. For a fact f = R(c1, . . . , cn), the key value of
f w.r.t. Σ, denoted keyΣ(f), is defined as ⟨R, ⟨c1, . . . , cm⟩⟩, if key(R) = {1, . . . , m} ∈ Σ,
and ⟨R, ⟨c1, . . . , cn⟩⟩ otherwise. Let blockΣ(f, D) = {g ∈ D | keyΣ(g) = keyΣ(f)}, and
blockΣ(D) = {blockΣ(f, D) | f ∈ D}. A repair of D w.r.t. Σ is a database obtained by choosing
one fact from each B ∈ blockΣ(D). We denote the set of repairs of D w.r.t. Σ as rep(D, Σ).
Conjunctive Queries. A conjunctive query q(x̄) over a schema S is a first-order expression of
the form ∃ȳ (R1(w̄1) ∧ · · · ∧ Rn(w̄n)) that mentions only atoms over S, and has free variables x̄.</p>
      <p>For a tuple t̄ ∈ C^|x̄|, q(t̄) denotes q after replacing the i-th variable in x̄ with the i-th constant
in t̄. A homomorphism from q to a database D is a function h from the set of variables and
constants in q to dom(D) that is the identity over C such that Ri(h(w̄i)) ∈ D for every i ∈ [n].
We write q →h D to say that h is a homomorphism from q to D. The answer to q(x̄) over a
database D, denoted q(D), is the set of tuples {h(x̄) ∈ dom(D)^|x̄| | q(x̄) →h D}.</p>
      <p>(An extended version of this paper has been accepted to PODS 2021, and can be found at
https://tinyurl.com/f8sev5pj. Source code and test scenarios can be found at
https://gitlab.com/mcalautti/cqabench.)</p>
      <p>Consistent Query Answering. Consider a database D, a set Σ of primary keys, and a
CQ q(x̄). The relative frequency of a tuple t̄ ∈ dom(D)^|x̄| w.r.t. D, Σ and q is the ratio
R_{D,Σ,q}(t̄) = |{D′ ∈ rep(D, Σ) | t̄ ∈ q(D′)}| / |rep(D, Σ)|. Our main problem is:</p>
      <p>PROBLEM: CQA. INPUT: a database D, a set Σ of primary keys, and a CQ q(x̄).
OUTPUT: the set {(t̄, R_{D,Σ,q}(t̄)) | t̄ ∈ dom(D)^|x̄| and R_{D,Σ,q}(t̄) &gt; 0}.</p>
      <p>That is, output, for each candidate answer tuple t̄, its relative frequency, when this is not zero.
We dub the problem of computing R_{D,Σ,q}(t̄), given D, Σ, q and t̄, RelativeFreq. We know that
RelativeFreq is #P-hard in data complexity. Clearly, having an efficient way of approximating
the relative frequency of a tuple immediately provides an algorithm for approximately
solving CQA. Hence, in the rest, we focus on RelativeFreq.</p>
      <p>Approximate CQA. A (data-efficient) randomized approximation scheme for the problem
RelativeFreq is a randomized algorithm A that takes a database D, a set Σ of primary keys,
a CQ q(x̄), a tuple t̄ ∈ dom(D)^|x̄|, and numbers ε &gt; 0 and 0 &lt; δ &lt; 1, runs in polynomial
time in ||D|| + ||t̄||, 1/ε and log(1/δ), and produces a random variable X such that
Pr(|X − R_{D,Σ,q}(t̄)| ≤ ε · R_{D,Σ,q}(t̄)) ≥ 1 − δ.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Approximation Schemes</title>
      <p>In principle, an approximation scheme for RelativeFreq requires access to the input
database D. However, doing this naively is prohibitive in practice since, in general, D is very
large. Fortunately, an approximation scheme for RelativeFreq only needs a small part of the
database, which we call a synopsis. In what follows, let D, Σ, q(x̄), t̄ be a database, a set of
primary keys, a CQ, and a tuple in dom(D)^|x̄|, respectively.</p>
      <p>The (Σ, q)-synopsis of D for t̄ is the pair (ℋ, ℬ), where ℋ = {h(q) | q(t̄) →h D and
h(q) |= Σ} and ℬ = {blockΣ(f, D) | f ∈ ⋃ H∈ℋ H}. That is, the (Σ, q)-synopsis
of D for t̄ collects all the consistent homomorphic images of q(t̄) in D (the set ℋ), and
the blocks of the atoms occurring in a consistent homomorphic image of q(t̄) in D (the
set ℬ). We also let db(ℬ) = {{f1, . . . , fn} | ⟨f1, . . . , fn⟩ ∈ ×B∈ℬ B}, i.e., the set of
databases that can be formed by keeping exactly one atom from each member B of ℬ.
Finally, R(ℋ, ℬ) = |{E ∈ db(ℬ) | H ⊆ E for some H ∈ ℋ}| / |db(ℬ)|; in case ℋ = ∅, we
let R(ℋ, ℬ) = 0. (As usual, for a syntactic object o, we write ||o|| for its size.)</p>
      <p>We show that the (Σ, q)-synopsis of a database D for a tuple t̄ can be efficiently constructed,
and it contains enough information to compute the relative frequency of t̄.
Lemma 2. If (ℋ, ℬ) is the (Σ, q)-synopsis of D for t̄, then: 1) (ℋ, ℬ) can be computed in
polynomial time in ||D|| + ||t̄||; 2) R_{D,Σ,q}(t̄) = R(ℋ, ℬ); 3) R_{D,Σ,q}(t̄) = 0 if ℋ = ∅.</p>
      <sec id="sec-3-1">
        <title>3.1. Monte Carlo Approach</title>
        <p>
          Let (ℋ, ℬ) be the (Σ, q)-synopsis of D for some tuple t̄, and let Sample be a randomized procedure
that, with input (ℋ, ℬ), computes a random number in [0, 1]. Being randomized, its output is
determined by the underlying sampling space obtained from (ℋ, ℬ). Sample(ℋ, ℬ) produces a
random variable, and a crucial problem for us is computing its expected value E[Sample(ℋ, ℬ)].
        </p>
        <p>E[Sample(ℋ, ℬ)] can be approximated by performing the following: (1) c := 0; (2) for N
times do the following: c := c + Sample(ℋ, ℬ); and finally (3) return c/N.</p>
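        <p>The averaging loop above can be sketched as follows (a minimal illustration of ours; monte_carlo, sample and the toy coin sampler are hypothetical names, not from the paper's implementation):</p>
        <preformat>
```python
import random

def monte_carlo(sample, n):
    """Estimate E[sample()] as in steps (1)-(3): accumulate n draws, return the mean."""
    c = 0.0                      # (1) c := 0
    for _ in range(n):           # (2) repeat n times: c := c + sample()
        c += sample()
    return c / n                 # (3) return c / n

# toy sampler with known mean 0.3: outputs 1 with probability 0.3, else 0
coin = lambda: random.choices([1.0, 0.0], weights=[0.3, 0.7])[0]
estimate = monte_carlo(coin, 100_000)  # concentrates around 0.3 as n grows
```
        </preformat>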
        <p>
          It is clear that, as we increase the number N of iterations in the above procedure,
the ratio c/N becomes a better approximation of E[Sample(ℋ, ℬ)]. From [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], we know that we can
compute the minimum number (up to constant factors) of iterations N such that the above
algorithm approximates E[Sample(ℋ, ℬ)] within a given error ε, and with probability at least
1 − δ, for some given 0 &lt; δ &lt; 1. By exploiting this approach in the algorithm discussed
above, we obtain the algorithm MonteCarlo[Sample], which is parametrized by a randomized
procedure Sample. Let ||ℋ, ℬ|| = |ℋ| + max_{H ∈ ℋ} ||H|| + ||ℬ||. From [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], we know that:
(*) If Sample runs in polynomial time in ||ℋ, ℬ|| and E[Sample(ℋ, ℬ)] &gt; 1/pol(||ℋ, ℬ||) when
E[Sample(ℋ, ℬ)] &gt; 0, for some polynomial pol, then MonteCarlo[Sample] is a data-efficient
randomized approximation scheme for computing E[Sample(ℋ, ℬ)].
        </p>
        <p>By combining the above property and Lemma 2, an approximation scheme for RelativeFreq
can be obtained by devising the right sampling procedure Sample.</p>
        <p>Natural Sampling Space. The first sampler, which we call SampleNatural, takes as input
(ℋ, ℬ), and has E[SampleNatural(ℋ, ℬ)] = R(ℋ, ℬ), i.e., its expected value precisely coincides
with the relative frequency. This sampler simply performs the following: 1) sample a database
E ∈ db(ℬ) uniformly at random (u.a.r.); 2) if H ⊆ E, for some H ∈ ℋ, output 1,
otherwise output 0. We can prove that SampleNatural enjoys (*) and E[SampleNatural(ℋ, ℬ)] =
R(ℋ, ℬ). By Lemma 2, the algorithm that constructs the (Σ, q)-synopsis (ℋ, ℬ) of D for t̄, and runs
MonteCarlo[SampleNatural] with input (ℋ, ℬ), is an approximation scheme for RelativeFreq.
This algorithm is called Natural.</p>
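        <p>A minimal sketch of SampleNatural on the synopsis arising from Example 1 (the string encoding of facts and all variable names are our own assumptions; the actual implementation is in C++):</p>
        <preformat>
```python
import itertools, random

# toy (Σ, q)-synopsis from Example 1, for "do employees 1 and 2 share a department?":
# B holds the two conflicting blocks, H the two consistent homomorphic images of the query
B = [frozenset({"E(1,Bob,HR)", "E(1,Bob,IT)"}),
     frozenset({"E(2,Alice,IT)", "E(2,Tim,IT)"})]
H = [frozenset({"E(1,Bob,IT)", "E(2,Alice,IT)"}),
     frozenset({"E(1,Bob,IT)", "E(2,Tim,IT)"})]

def sample_natural(H, B):
    """1) pick E in db(B) u.a.r., one fact per block; 2) output 1 iff some image is contained in E."""
    E = {random.choice(sorted(b)) for b in B}
    return 1.0 if any(h.issubset(E) for h in H) else 0.0

# sanity check: enumerate db(B) to get the exact relative frequency R(H, B)
db = [set(c) for c in itertools.product(*[sorted(b) for b in B])]
exact = sum(1 for E in db if any(h.issubset(E) for h in H)) / len(db)  # 2 of 4, i.e., 0.5
```
        </preformat>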
        <p>
          Symbolic Sampling Space. An alternative sampler is one whose expected value, on input
(ℋ, ℬ), is a number from which R(ℋ, ℬ) can be derived. We devise a slightly more complex
sampling space, called symbolic, by exploiting ideas from [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Let H1, . . . , Hm be an arbitrary
ordering (e.g., lexicographical) among the databases of ℋ. For each i ∈ [m], ℐi(ℋ, ℬ) is the set of all
E ∈ db(ℬ) such that Hi ⊆ E. Our sampling space is defined as S•(ℋ, ℬ) = {(i, E) | i ∈ [m] and
E ∈ ℐi(ℋ, ℬ)}.
        </p>
        <p>Note that, in a data-efficient randomized approximation scheme for E[Sample], the output to be approximated
is E[Sample(ℋ, ℬ)], and the running time must be polynomial in ||ℋ, ℬ|| (and 1/ε and log(1/δ)), as these
are the parameters of a synopsis that depend on the database.</p>
        <p>Our second sampler, called SampleKL, performs the following: 1) sample (i, E) ∈ S•(ℋ, ℬ) u.a.r.;
2) if there is no j &lt; i s.t. (j, E) ∈ S•(ℋ, ℬ), then output 1, otherwise output 0. We can prove
that SampleKL enjoys (*) and that its expected value is c/|S•(ℋ, ℬ)|, where c is the numerator of
R(ℋ, ℬ). Hence, we consider the algorithm KL, which constructs the (Σ, q)-synopsis (ℋ, ℬ) of D for
t̄, runs the algorithm MonteCarlo[SampleKL] with input (ℋ, ℬ), and multiplies its output by
|S•(ℋ, ℬ)|/|db(ℬ)|. By Lemma 2, KL is an approximation scheme for RelativeFreq.</p>
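        <p>A sketch of the symbolic-space sampler in the same toy setting (our own illustration; it assumes each image pins at most one fact per block, and all names are hypothetical):</p>
        <preformat>
```python
import math, random

# toy synopsis from Example 1; H is ordered (H[0] before H[1])
B = [frozenset({"E(1,Bob,HR)", "E(1,Bob,IT)"}),
     frozenset({"E(2,Alice,IT)", "E(2,Tim,IT)"})]
H = [frozenset({"E(1,Bob,IT)", "E(2,Alice,IT)"}),
     frozenset({"E(1,Bob,IT)", "E(2,Tim,IT)"})]

# |I_i| = number of E in db(B) containing H[i]: blocks not pinned by H[i] are free choices
sizes = [math.prod(len(b) for b in B if not b.intersection(h)) for h in H]

def sample_kl(H, B):
    """1) sample (i, E) u.a.r. from the symbolic space; 2) output 1 iff no earlier
    image H[j] (j before i) is also contained in E, so each E is counted once."""
    i = random.choices(range(len(H)), weights=sizes)[0]
    E = set()
    for b in B:
        pinned = b.intersection(H[i])  # fact forced by H[i] in this block, if any
        E.add(next(iter(pinned)) if pinned else random.choice(sorted(b)))
    return 1.0 if not any(H[j].issubset(E) for j in range(i)) else 0.0

# KL rescales the estimated mean c/|S| by |S|/|db(B)| to recover R(H, B)
s_size = sum(sizes)
db_size = math.prod(len(b) for b in B)
r_estimate = (sum(sample_kl(H, B) for _ in range(1000)) / 1000) * s_size / db_size
```
        </preformat>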
        <p>One can devise a slightly different sampler, called SampleKLM, that in general has a lower
variance than SampleKL. Using SampleKLM in place of SampleKL, as discussed above, we
obtain the approximation scheme for RelativeFreq called KLM.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Union of Sets Approach</title>
        <p>
          An approximation scheme, called the self-adjusting coverage algorithm, was presented in [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] for
the union of sets problem, which takes a description of m ≥ 1 sets S1, . . . , Sm, and outputs
|⋃ i∈[m] Si|. For space reasons, we do not present this algorithm, but only show how RelativeFreq
reduces to this problem. We first construct the (Σ, q)-synopsis (ℋ, ℬ) of D for t̄. Then, since
|⋃ i ℐi(ℋ, ℬ)| is the numerator of the ratio R(ℋ, ℬ), we approximate the value |⋃ i ℐi(ℋ, ℬ)| via the
self-adjusting coverage algorithm of [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], by seeing (ℋ, ℬ) as the description of the sets ℐi(ℋ, ℬ), for
each i. Then, dividing the result by |db(ℬ)| gives an approximation of R(ℋ, ℬ). This, together
with Lemma 2, provides an approximation scheme for RelativeFreq, which we dub Cover.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. The Benchmark</title>
      <p>
        Implementation. Note that to compute the approximate relative frequency of each candidate
tuple t̄, the corresponding synopsis must be computed first, regardless of the chosen algorithm.
We show that we can construct the set synΣ,q(D) of all pairs (t̄, S), with S the (Σ, q)-synopsis of
a tuple t̄ ∈ dom(D)^|x̄| with R_{D,Σ,q}(t̄) &gt; 0, via a single SQL query, obtained by a careful rewriting
of the original query q. We call this initial precomputation of synΣ,q(D) the preprocessing
step. For implementing the approximation schemes, we extended the framework of [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] of
approximation algorithms for the number of satisfying assignments of DNF formulas.
Experimental Setting. To effectively evaluate our algorithms, our test scenarios must expose
how the running time of the approximation schemes is affected by key input parameters like
the inconsistency of the database and the number of joins in the query.
      </p>
      <p>Data Generator. We generate databases using the dbgen tool of the TPC-H benchmark.
Databases of varied size can be generated by providing a scale factor as input. Since the
generated databases are consistent w.r.t. the underlying constraints, we later inject inconsistency
via a noise generator that we developed.</p>
      <p>
        Query-aware Noise Generator. General-purpose noise generators exist in the literature; see,
e.g., [
        <xref ref-type="bibr" rid="ref17">17</xref>
          ]. However, all such tools are query-oblivious, i.e., they do not take into account any query:
if we generate noise by randomly adding facts to the database, considering only the primary
keys, it is likely that we will not affect the evaluation of the query (and hence the synopses), since,
in general, databases are much larger than the actual synopses.
      </p>
      <p>
        Given D, Σ, and q(x̄), a percentage 0 &lt; p &lt; 1 and an integer interval [a, b], our noise generator
constructs the set synΣ,q(D), and for each (t̄, (ℋ, ℬ)), randomly chooses ℓ = ⌈p · |ℬ|⌉ blocks
B1, . . . , Bℓ, and for each Bi, adds to D a random number ni ∈ [a, b] of tuples having the same
key value as the one of Bi. These tuples are constructed by reusing existing tuples from D, and
by changing their key value. This ensures that the new tuples preserve the join patterns of D.
Query Generator. To generate our stress-test queries we exploit a recent query generator [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ],
which we call the static query generator (SQG) since it can control static query parameters, such as
the number of joins and the number of free variables (i.e., output variables). As we also want
to precisely control the size of the synopsis, we also consider some dynamic query parameters.
      </p>
      <p>Letting synΣ,q(D) = {(t̄i, (ℋi, ℬi))} for i ∈ [n], for some n ≥ 1, we consider the homomorphic size
of D w.r.t. q, defined as |⋃ i=1..n ℋi|, which measures how large the portion of D is that can affect
any (Σ, q)-synopsis in synΣ,q(D). Since synopses in synΣ,q(D) can have different sizes, it
is natural to consider the average size of a (Σ, q)-synopsis in synΣ,q(D) as a key parameter,
which is given by |⋃ i=1..n ℋi| / |synΣ,q(D)|. We normalize the above to a number in the
interval (0, 1) by considering its inverse. We call this number the balance of q w.r.t. D. The closer
the balance is to 1 (resp., 0), the smaller (resp., larger) the average size of a (Σ, q)-synopsis is.</p>
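      <p>The balance computation can be sketched as follows (our own illustration, assuming homomorphic images are represented as frozensets of facts; all names are hypothetical):</p>
      <preformat>
```python
# syn: a hypothetical list of (tuple, (H, B)) pairs, with each homomorphic image
# in H a frozenset of facts, mirroring the synopses described above
def balance(syn):
    """|syn| divided by the homomorphic size (number of distinct images), i.e., the inverse
    of the average synopsis size; a balance closer to 1 means smaller synopses on average."""
    images = set()
    for _t, (H, _B) in syn:
        images |= set(H)
    return len(syn) / len(images)

# two answer tuples whose synopses share one image: 3 distinct images overall
syn = [("t1", ([frozenset({"a", "b"}), frozenset({"a", "c"})], None)),
       ("t2", ([frozenset({"a", "c"}), frozenset({"d", "e"})], None))]
ratio = balance(syn)  # 2 tuples / 3 distinct images
```
      </preformat>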
      <p>We developed a query generator, called dynamic, that modifies a query generated by the SQG
so that it has a desired balance w.r.t. the given database D.</p>
      <p>
        Test Scenarios. For our test scenarios we consider the TPC-H schema, denoted SH, and the
set ΣH of primary keys over SH coming with the TPC-H benchmark. We generated a large set
of database-query pairs H as follows: (1) First, we generated a (consistent) database DH over
SH, using scale factor 1GB. The database contains roughly 9M tuples. (2) For each number of
joins j ∈ [5], we generated 5 base CQs q(i,j), with i ∈ [5], having j joins, using SQG, producing
a set Q of 25 CQs. (3) Using our query-aware noise generator, for each q ∈ Q, we generated
10 inconsistent databases Dp[q], using noise levels p ∈ {0.1, 0.2, . . . , 1}, and the same block
interval [2, 5]. (4) Finally, for each q ∈ Q, and for each Dp[q], we generated 11 variations of q,
denoted qb[q], having balance b ∈ {0, 0.1, 0.2, . . . , 1} w.r.t. Dp[q], where balance b = 0 means
the query qb[q] is Boolean. H is then the set of pairs {(Dp[q], qb[q]) | q = q(i,j) with i, j ∈ [5], p ∈
{0.1, . . . , 1} and b ∈ {0, 0.1, . . . , 1}}, containing 2750 database-query pairs.
      </p>
      <p>We considered different scenarios (subsets of H) by fixing all parameters but one. Noise
scenarios: the sets Noise[b, j] of all pairs of H where the balance level (b) and the number of
joins (j) are fixed. Join scenarios: the sets Join[p, b] of all pairs of H where the noise level (p)
and the balance level (b) are fixed. Balance scenarios: the sets Balance[p, j] of all pairs of H
where the noise level (p) and the number of joins (j) are fixed.</p>
      <p>Hardware and execution. For the experiments we used two Desktop PCs each with an Intel(R)
Core(TM) i5-8500 CPU@3.00GHz processor, 16GB RAM, 500GB mechanical drive, running
Xubuntu 19.04 64-bit. All databases are stored in a PostgreSQL 11.5 instance. The approximation
schemes and the preprocessing step are implemented in C++ and SQL, respectively.</p>
      <p>We fixed δ = 0.25 and ε = 0.1, i.e., 75% confidence and 10% error. We report the time
required to execute each algorithm over the synopses of all output tuples, excluding the time of
the preprocessing step, since it is the same for all the approximation schemes. Each data point
in a plot of Fig. 1 is the average runtime over all the CQs for that X-axis value; due to space
constraints, here we report only some representative noise and balance scenarios (see Fig. 1).</p>
    </sec>
    <sec id="sec-5">
      <title>5. Take-home Messages</title>
      <p>Our analysis reveals a striking difference between Boolean and non-Boolean CQs.
The Boolean Case. Here, Natural is the best performer, no matter the noise and the number
of joins in the query, whereas Cover is the worst. Only in the case of CQs with many joins is
Cover comparable to KL(M), but in any case, Natural is the way to go. This is mainly due
to the fact that for Boolean queries, the (only) synopsis is very large in general, and thus the
relative frequency is very high. Hence, since Natural directly estimates the value of the relative
frequency via the natural sampling space, it requires fewer samples to obtain a good estimate.
The Non-Boolean Case. In this case, KLM is the way to go in almost all the scenarios, i.e.,
for any level of noise and for any level of (non-zero) balance of the query. Only for CQs with
many joins, and high noise, is KL comparable to KLM. Nevertheless, KL is never going to
outperform KLM in practice. The worst algorithms for non-Boolean CQs are Natural and Cover.
They perform similarly for low levels of noise, balance and joins, but, in general, Natural is the
slowest. The reason for this outcome is that for higher values of balance, the average size of a
synopsis is low, which then implies that the relative frequency of the corresponding tuple is
usually close to 0; hence, sampling from the symbolic space helps avoid performing the large
number of samples that Natural does. Moreover, the low-variance sampler of KLM lets the
algorithm find the optimal number of samples earlier than KL, on average.</p>
      <p>Practical Applicability. In most of our experiments, the preprocessing step took less than 30
seconds. Furthermore, for modest scenarios, which is what we expect to face in practice, the
running time of the best performing approximation scheme is in the order of seconds. We also
considered larger databases, up to 120M tuples (15GBs), for which the overall running time of
the preprocessing step is encouraging (considering the weak machines used): it is in the order
of minutes, while the best approximation algorithm for each case always performs in the order
of seconds. Thus, we believe applying approximate CQA in practice is not an unrealistic goal.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Part of Calautti’s and Console’s work was done while they were research associates at the
University of Edinburgh. Console has been partially supported by MIUR under the PRIN 2017</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Bertossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chomicki</surname>
          </string-name>
          ,
          <article-title>Consistent query answers in inconsistent databases</article-title>
          ,
          <source>in: PODS</source>
          ,
          <year>1999</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Calautti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Console</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pieris</surname>
          </string-name>
          ,
          <article-title>Counting database repairs under primary keys revisited</article-title>
          ,
          <source>in: PODS</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>104</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Koutris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wijsen</surname>
          </string-name>
          ,
          <article-title>Consistent query answering for primary keys in logspace</article-title>
          , in: ICDT,
          <year>2019</year>
          , pp.
          <volume>23</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          :
          <fpage>19</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>B. ten Cate</surname>
          </string-name>
          , G. Fontaine,
          <string-name>
            <given-names>P. G.</given-names>
            <surname>Kolaitis</surname>
          </string-name>
          ,
          <article-title>On the data complexity of consistent query answering</article-title>
          ,
          <source>Theory Comput. Syst</source>
          .
          <volume>57</volume>
          (
          <year>2015</year>
          )
          <fpage>843</fpage>
          -
          <lpage>891</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Molinaro</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Trubitsyna</surname>
          </string-name>
          ,
          <article-title>Computing approximate query answers over inconsistent knowledge bases</article-title>
          ,
          <source>in: IJCAI</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1838</fpage>
          -
          <lpage>1846</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Dixit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. G.</given-names>
            <surname>Kolaitis</surname>
          </string-name>
          ,
          <article-title>A sat-based system for consistent query answering</article-title>
          ,
          <source>in: SAT</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>135</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fuxman</surname>
          </string-name>
          , E. Fazli,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Conquer: Eficient management of inconsistent databases</article-title>
          ,
          <source>in: SIGMOD</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Molinaro</surname>
          </string-name>
          ,
          <article-title>Probabilistic query answering over inconsistent databases</article-title>
          , Ann. Math. Artif. Intell.
          <volume>64</volume>
          (
          <year>2012</year>
          )
          <fpage>185</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Calautti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Libkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pieris</surname>
          </string-name>
          ,
          <article-title>An operational approach to consistent query answering</article-title>
          ,
          <source>in: PODS</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>239</fpage>
          -
          <lpage>251</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Maslowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wijsen</surname>
          </string-name>
          ,
          <article-title>A dichotomy in the complexity of counting database repairs</article-title>
          ,
          <source>J. Comput. Syst. Sci.</source>
          <volume>79</volume>
          (
          <year>2013</year>
          )
          <fpage>958</fpage>
          -
          <lpage>983</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Fiorentino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Molinaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Trubitsyna</surname>
          </string-name>
          ,
          <article-title>ACID: A system for computing approximate certain query answers over incomplete databases</article-title>
          ,
          <source>in: SIGMOD</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1685</fpage>
          -
          <lpage>1688</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Molinaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Trubitsyna</surname>
          </string-name>
          ,
          <article-title>Approximation algorithms for querying incomplete databases</article-title>
          ,
          <source>Inf. Syst.</source>
          <volume>86</volume>
          (
          <year>2019</year>
          )
          <fpage>28</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Console</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Guagliardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Libkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Toussaint</surname>
          </string-name>
          ,
          <article-title>Coping with incomplete data: Recent advances</article-title>
          ,
          <source>in: PODS</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Karp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Luby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Madras</surname>
          </string-name>
          ,
          <article-title>Monte-Carlo approximation algorithms for enumeration problems</article-title>
          ,
          <source>J. Algorithms</source>
          <volume>10</volume>
          (
          <year>1989</year>
          )
          <fpage>429</fpage>
          -
          <lpage>448</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Dagum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Karp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Luby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <article-title>An optimal algorithm for Monte Carlo estimation</article-title>
          ,
          <source>SIAM J. Comput.</source>
          <volume>29</volume>
          (
          <year>2000</year>
          )
          <fpage>1484</fpage>
          -
          <lpage>1496</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Meel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Shrotri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <article-title>Not all FPRASs are equal: Demystifying FPRASs for DNF-counting</article-title>
          ,
          <source>Constraints</source>
          <volume>24</volume>
          (
          <year>2019</year>
          )
          <fpage>211</fpage>
          -
          <lpage>233</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Arocena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glavic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mecca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Papotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <article-title>Messing up with BART: Error generation for evaluating data-cleaning algorithms</article-title>
          ,
          <source>PVLDB</source>
          <volume>9</volume>
          (
          <year>2015</year>
          )
          <fpage>36</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Benedikt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Konstantinidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mecca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Papotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tsamoura</surname>
          </string-name>
          ,
          <article-title>Benchmarking the chase</article-title>
          ,
          <source>in: PODS</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>37</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>