<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Investigating Binary Partition Power in Metric Query</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Richard Connor</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alan Dearle</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucia Vadicamo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Information Science and Technologies, CNR</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science, University of St Andrews</institution>
          ,
          <addr-line>St Andrews, KY16 9SS, Scotland</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>It is generally understood that, as dimensionality increases, the minimum cost of metric query tends from O(log n) to O(n) in both space and time, where n is the size of the data set. With low dimensionality, the former is easy to achieve; with very high dimensionality, the latter is inevitable. We previously described BitPart as a novel mechanism suitable for performing exact metric search in “high(er)” dimensions. The essential tradeoff of BitPart is that its space cost is linear with respect to the size of the data, but the actual space required for each object may be as small as log2 n bits, which allows even very large data sets to be queried using only main memory. Potentially the time cost still scales with O(log n). Together these attributes give exact search which outperforms indexing structures if dimensionality is within a certain range. In this article, we reiterate the design of BitPart in this context. The novel contribution is an in-depth examination of what the notion of “high(er)” means in practical terms. To do this we introduce the notion of exclusion power, and show its application to some generated data sets across different dimensions.</p>
      </abstract>
      <kwd-group>
        <kwd>similarity search</kwd>
        <kwd>metric indexing</kwd>
        <kwd>metric search</kwd>
        <kwd>exclusion power</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Searching a database for objects that are most similar to a query object is a fundamental task in
many database application domains, for example data mining, knowledge discovery, information
extraction, and machine learning [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] we described BitPart as a mechanism suitable for performing exact metric search in
“high(er)” dimensions. While we did not define “high(er)”, the evidence in the paper shows
that it gives its best relative performance for spaces with dimensionality of between 10 and 30.
BitPart’s space cost is linear with respect to the size of the data, but the actual space required
for each object may be as small as log2 n bits, which allows representations of very large data sets
to fit in main memory. Potentially the time cost still scales with O(log n). Together these
attributes give exact search which outperforms indexing structures as dimensionality increases
within a certain range1.
      </p>
      <p>
        SEBD 2022: The 30th Italian Symposium on Advanced Database Systems, June 19-22, 2022, Tirrenia (PI), Italy
rchc@st-andrews.ac.uk (R. Connor); al@st-andrews.ac.uk (A. Dearle); lucia.vadicamo@isti.cnr.it (L. Vadicamo)
0000-0003-4734-8103 (R. Connor); 0000-0002-1157-2421 (A. Dearle); 0000-0001-7182-7038 (L. Vadicamo)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
      </p>
      <p>
        1Due to the curse of dimensionality [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], exact metric search rarely outperforms a sequential scan when
dimensionality exceeds 10 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. For high-dimensional data, approximate methods are usually employed, e.g., [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>The BitPart mechanism uses a set of binary partitions defined over the finite data set, where
each partition is represented by a bitmap. If the total set of bitmaps is perfectly balanced and
perfectly orthogonal, then the probability of a selection of n bitmaps being able to exclude an
arbitrary element of the data is 1 − (1/2)^n, and the expected number of non-excluded values becomes
very small if n ≈ log2 |S|. While we can come close to achieving this for low-dimensional data,
this becomes increasingly challenging as dimensionality increases.</p>
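As a sketch of this back-of-envelope calculation (the function name is ours, not part of BitPart), the following Python computes the exclusion probability and the expected number of surviving candidates under the idealised balanced-and-orthogonal assumption:

```python
import math

def exclusion_stats(n_bitmaps: int, data_size: int):
    # Each perfectly balanced, perfectly orthogonal bitmap excludes an
    # arbitrary datum independently with probability 1/2.
    p_excluded = 1 - 0.5 ** n_bitmaps
    expected_survivors = data_size * 0.5 ** n_bitmaps
    return p_excluded, expected_survivors

# With n ≈ log2 |S| bitmaps, roughly one candidate survives per query.
n = math.ceil(math.log2(1_000_000))   # 20 bitmaps for one million objects
p, survivors = exclusion_stats(n, 1_000_000)
```

With n = 20 the expected number of non-excluded values is just below 1, matching the observation above.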
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] two issues are noted as requiring further investigation. The first is how to select
appropriate pivots in order to find orthogonal exclusion partitions. The second is the interaction
between partition balance and the efficacy of exclusion; the latter is the subject of this article.
      </p>
      <p>To do this we introduce the notion of exclusion power, and show its application to some
generated data sets across different dimensions. Exclusion power makes it possible to
understand, at a fundamental level, which exclusions are possible in different datasets, to predict
when one approach will outperform another and to tune the BitPart algorithm to optimally
accommodate data sets of different dimensions.</p>
      <p>The novel contribution of this article is the quantification of exclusion power, which can be
measured for a given binary partition structure with respect to a balancing factor τ. We observe
that, in low dimensions, partitions are best balanced evenly, approximately splitting the search
space into two equal parts. In higher dimensions, however, such balanced partitions perform
very badly, and highly skewed partitions have much greater power. We are not aware of any
previous work which attempts to quantify this effect, although it is well known within the
“folklore” of metric search researchers.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Formal Context</title>
      <p>
        Formally we are interested in searching a (large) finite set of objects S which is a subset of
an infinite set U, where (U, d) is a metric space [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The general requirement is to efficiently
find members of S which are similar to an arbitrary member of U given as a query, where
the distance function d gives the only way by which any two objects may be compared. The
simplest type of similarity query is the range search query: for some threshold t, based on a
query q ∈ U, the solution set is R(q, t) = {s ∈ S | d(q, s) ≤ t}.
      </p>
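A range query is thus just a filter over S; a minimal sketch (using Euclidean distance to stand in for the general metric d):

```python
from math import dist   # Euclidean distance; stands in for a general metric d

def range_search(S, q, t, d=dist):
    # R(q, t) = {s in S | d(q, s) <= t}
    return [s for s in S if d(q, s) <= t]

S = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
hits = range_search(S, (0.0, 0.0), 1.5)   # -> [(0.0, 0.0), (1.0, 0.0)]
```

This linear scan is exactly what metric indexing aims to improve upon.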
      <p>The essence of metric search is to spend time pre-processing the finite set S (i.e., indexing
it) so that solutions to queries can be efficiently calculated. In all cases distances between
members of S and selected reference objects (pivots) are calculated during pre-processing. At
query time the relative distances between the query and the same pivot objects can be used to
make deductions about which data values may, or may not, be candidate solutions to the query.</p>
      <p>Many metric indexes employ binary partitions of the space U, i.e., partitions of the form
C = {C0, C1} where C0 ∪ C1 = U and C0 ∩ C1 = ∅. In the context of metric search, binary
partitions are defined by partition rules that typically rely on computing the distance d of the
data objects to a set of pivots. Notable examples include ball and sheet partitioning (see, e.g.,
Figure 1). A ball partition is defined by a pivot p ∈ U and a covering radius r ∈ R+, so that
C0 = {s ∈ U | d(s, p) &gt; r}
C1 = {s ∈ U | d(s, p) ≤ r}
A sheet partition is defined by two pivots p1, p0 ∈ U, so that the boundary of the partition
regions is the hyperplane equidistant from the two reference objects:
C0 = {s ∈ U | d(s, p1) &gt; d(s, p0)}
C1 = {s ∈ U | d(s, p1) ≤ d(s, p0)}
Informally we refer to C0 as "outside" and C1 as "inside".</p>
      <p>Figure 1: (a) Ball Partitioning; (b) Sheet Partitioning.</p>
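The two partition rules translate directly into code; a sketch (helper names are ours, with Euclidean distance standing in for d):

```python
from math import dist

def ball_inside(s, p, r, d=dist):
    # C1 of a ball partition: within covering radius r of pivot p
    return d(s, p) <= r

def sheet_inside(s, p1, p0, d=dist):
    # C1 of a sheet partition: at least as close to p1 as to p0
    return d(s, p1) <= d(s, p0)

# A point exactly on the boundary falls in C1 ("inside") by convention.
inside = ball_inside((0.0, 1.0), (0.0, 0.0), 1.0)   # True
```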
      <p>Under certain circumstances it is possible to deduce that, for a given query, one of the
subsets of the partition cannot (or conversely must) contain solutions to the query. Typically,
the triangle inequality and the distances to the pivots are exploited to determine bounds on
distances between data objects (distance constraints), which are used at query time for excluding
certain regions from the searching process.</p>
      <p>The exclusion of a region of a partition occurs when the query ball does not intersect the
partition boundary. For a ball region, the condition for non-intersection of the query ball and
the boundary is
|d(p, q) − r| &gt; t
(1)
For a sheet region, the non-intersection condition is
|d(p1, q) − d(p0, q)| &gt; 2t
(2)
If the respective non-intersection condition occurs, the implication is that any possible solution
to the query lies on one or other side of the partition boundary, and therefore either C0 or C1
may be excluded from the search for solutions.</p>
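Conditions (1) and (2) can be sketched as predicates over the pre-computed query-to-pivot distances (the function names are ours):

```python
def ball_non_intersect(d_pq, r, t):
    # Condition (1): |d(p, q) - r| > t, so the query ball misses the boundary
    return abs(d_pq - r) > t

def sheet_non_intersect(d_p1q, d_p0q, t):
    # Condition (2): |d(p1, q) - d(p0, q)| > 2t
    return abs(d_p1q - d_p0q) > 2 * t

# A query 0.2 outside a ball of radius 1.0 permits exclusion when t = 0.1:
ok = ball_non_intersect(d_pq=1.2, r=1.0, t=0.1)   # True
```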
    </sec>
    <sec id="sec-3">
      <title>3. The BitPart Mechanism</title>
      <p>The core of the technique is to define a large set of binary partitions over the original space;
the data is stored as a set of bitmaps according to containment with these regions. At query
time, the query is assessed against this containment information, but without reference to the
original data representation.</p>
      <p>For each region, one of three possible conditions may be determined: (a) the solution set
R(q, t) must be fully contained in the region, or (b) there is no intersection between the region
and the solution set, or (c) neither of these is the case. In either case (a) or (b), the containment
information stored may be useful with respect to solving the query, and is used as part of the
query computation. In case (c) however the containment information is of no value, and is not
accessed as a part of the computation. This approach maximises the effectiveness of memory
used for calculation against the original data representation: the amount of memory required
depends on the cardinality of the original data, but not on the size of its individual objects.
An Example. Figure 2 shows a simple example within the 2D plane, comprising four reference
objects p1 to p4 and a set of six regions defined by them. The regions are respectively: 𝒜1, the
area to the left of the line between p1 and p2; ℬ1, the area above the line between p1 and p3;
and the areas within the variously sized circles drawn around each of p1 to p4, labeled 𝒞1, 𝒟1, ℰ1
and ℱ1 respectively. Note that each regional boundary defines a binary partition of the total
space of the form 𝒜 = {𝒜1, 𝒜0}, where 𝒜0 = U ∖ 𝒜1. Thus in Figure 2, six binary partitions
(𝒜 = {𝒜1, 𝒜0}, ℬ = {ℬ1, ℬ0}, 𝒞 = {𝒞1, 𝒞0}, etc.) are considered. For each partition 𝒫 we
generate a bitmap of n bits, where n is the number of data points, which stores the logical
containment information of each data object s, that is 1 if s ∈ 𝒫1 and 0 if s ∉ 𝒫1 (Figure 2c).
Thus a bit being set (to 1) corresponds to that object being in the region and a member of 𝒫1.</p>
      <p>Figure 2a also shows a range query q drawn with a threshold t. It can be seen that all solutions
to this query must lie within the area highlighted in Figure 2b. The circle around the query
intersects with two regional boundaries (𝒟1 and ℰ1), and so no information is available with
respect to these; however it is completely contained within two of the defined regions (ℬ1 and
𝒞1), and fails to intersect with the final two (𝒜1 and ℱ1). Such containment and intersection is
derivable only from the measurement of distances between the query and the reference objects,
the definition of the regions, and the search radius. Here for example the possible solution area
shown is determined using only the four distance calculations d(q, p1) .. d(q, p4). It can be seen
that the only possible solutions to the query are those which match on all non-intersecting fields;
in this case, the objects s1, s5 and s6 depicted in Figure 2. This is equivalent to the Conjunctive
Normal Form (CNF) expression ℬ ∧ ¬𝒜 ∧ 𝒞 ∧ ¬ℱ. This expression covers the set of all possible
solutions to the query and hints at how the data can be stored using bit vectors.</p>
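The worked example can be mirrored with integer bitmaps, one bit per object; the containment values below are invented for illustration and are not read from the actual figure:

```python
# Six objects s1..s6; bit i-1 of each mask records membership of s_i in X1.
n = 6
A = 0b000110   # s2, s3 in A1 (invented containment values)
B = 0b110011   # s1, s2, s5, s6 in B1
C = 0b110001   # s1, s5, s6 in C1
F = 0b001000   # s4 in F1

# Query contained in B1 and C1, disjoint from A1 and F1:  B ∧ ¬A ∧ C ∧ ¬F
mask = (1 << n) - 1
candidates = B & C & ~A & ~F & mask
surviving = [i + 1 for i in range(n) if candidates >> i & 1]   # [1, 5, 6]
```

The surviving objects are exactly those whose bit patterns match the query on every non-intersecting partition.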
      <sec id="sec-3-1">
        <title>3.1. Data structures</title>
        <p>Before describing the query algorithm, we describe the data structures used and how they are
initialised. A set of enumerated pivots p1 to pm is selected from S. We define a set of binary
partitions within U, each of the form C = {C0, C1}, defined using the distance function d.
There may be many more regions than reference objects; for example a set of m reference
objects defines at least m(m−1)/2 sheet regions, plus m ball regions.</p>
        <p>We now define the notion of an exclusion zone (EZ) as a containment map of S based on a
given partition; this is the information we will use at query time to perform exclusions and
derive a candidate set of solutions. We impose an ordering on S, then for each s ∈ S map whether
it is a member of the region C1 or otherwise. This logical containment information is best
stored in a bitmap of n bits, where n = |S|. One such exclusion zone is generated per partition.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Overview of the Query Algorithm</title>
        <p>The query process comprises three distinct phases:</p>
        <p>Phase 1 in which the query is checked against the regional definitions, and the two types of
regions with non-intersecting boundaries are identified. Initially, the distance from the query q
to each pivot pi is measured. For each partition, it can be established whether the query ball intersects
with the boundary of the partition. If there is no intersection, any solution to the query must lie
on one or other side of the partition’s boundary, so the exclusion zone is brought into the query
calculation in one of two sets depending on whether the query solutions are fully contained
within C1 or C0. We name these sets of bitmaps Z1 and Z0, which are carried forwards to the
computation in Phase 2.</p>
        <p>Phase 2 in which the regions identified are used to conduct a series of logical operations,
thus identifying a set of candidate solutions for the query.</p>
        <p>The second phase comprises the manipulation of the bitmaps deriving from the first phase
to identify a set of candidate solutions. This may be efficiently achieved by a series of bitwise
operations over these bitmaps2. It can be seen now that any solution is guaranteed to be
identifiable from the bitmap deriving from the bitwise calculation:
(⋀z∈Z1 z) ∧ ¬(⋁z∈Z0 z)
(3)</p>
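Phase 2 can be sketched as a fold of bitwise operations over the two sets of bitmaps; here Z1 and Z0 are plain lists of Python integers used as bitmaps, and the function name is ours:

```python
from functools import reduce

def phase2(Z1, Z0, n):
    full = (1 << n) - 1
    must = reduce(lambda a, b: a & b, Z1, full)    # big-AND over Z1
    cannot = reduce(lambda a, b: a | b, Z0, 0)     # big-OR over Z0
    return must & ~cannot & full                   # Equation (3)

# Candidate solutions are the set bits of the resulting bitmap.
cands = phase2([0b110011, 0b110001], [0b000110, 0b001000], n=6)   # 0b110001
```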
        <p>Phase 3 in which the candidate solutions are checked against the original data set to identify
the exact solution to the query. This requires a sequential scan of the single bitmap resulting
from Phase 2 and checking the data corresponding to any remaining set bits using the original
space and distance metric in order to produce an exact solution to the query.</p>
        <p>2On modern processors many of these bitwise operations can be performed in single instructions, each of
which may be performed in parallel.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Partition analysis</title>
      <p>To unify our treatment of different kinds of binary partitions with their associated distance
constraints, we introduce the concept of a binary partition characterised by a single partition
function f : U → R and a balancing factor τ ∈ R with the following properties:
Property 1: C0 = {s ∈ U | f(s) &gt; τ} and C1 = {s ∈ U | f(s) ≤ τ}
Property 2: d(s, s′) ≥ |f(s) − f(s′)| for all s, s′ ∈ U (the distance lower-bound property)</p>
      <p>
        Note that if f(s) = τ, then s is on the partition boundary and by convention we include the
partition boundary in C1. The function f is used both for determining the regions C0, C1, and
for deriving the exclusion rules to be used at query time. By exploiting the distance lower-bound
provided by the function f, we have that |f(q) − τ| &gt; t is a sufficient condition to exclude one
of the two partitions from the search. Specifically,
• if f(q) ≤ τ − t then R(q, t) ⊂ C1 and C0 can be excluded
• if f(q) &gt; τ + t then R(q, t) ⊂ C0 and C1 can be excluded
Note that different functions f may be used to determine the same regions C0, C1 but with
different distance lower-bounds, and thus different exclusion rules. For example, consider
the Euclidean space (Rn, ℓ2), τ = 0, f1(s) = (d(s, p1) − d(s, p0))/2 and f2(s) = (d(s, p1)² − d(s, p0)²)/(2 d(p1, p0)) for
two fixed pivots p1 and p0. Both the functions f1 and f2 produce the same sheet partitioning,
i.e., C1 = {s ∈ Rn | d(s, p1) ≤ d(s, p0)} and C0 = {s ∈ Rn | d(s, p1) &gt; d(s, p0)}. However, in
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] we proved that the exclusion rule determined by f1 (Hyperbolic exclusion) is weaker than
the one determined by f2 (Hilbert exclusion), the latter being valid only on the large class of
supermetric spaces [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
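That the two functions induce the same partition is easy to check numerically; a sketch over random 2D Euclidean points (the pivot positions are arbitrary):

```python
import random
from math import dist

p1, p0 = (0.0, 0.0), (1.0, 0.0)

def f1(s):   # hyperbolic: (d(s, p1) - d(s, p0)) / 2
    return (dist(s, p1) - dist(s, p0)) / 2

def f2(s):   # Hilbert: (d(s, p1)^2 - d(s, p0)^2) / (2 d(p1, p0))
    return (dist(s, p1) ** 2 - dist(s, p0) ** 2) / (2 * dist(p1, p0))

random.seed(1)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(1000)]
# Same sign of f1 and f2 everywhere => same C0/C1 assignment for tau = 0.
same_partition = all((f1(s) <= 0) == (f2(s) <= 0) for s in pts)   # True
```

The functions differ only in the lower bound they provide, and hence in the strength of the exclusion rule.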
      <p>In the following we show how general ball partitioning and sheet partitioning can be described
within this formalism.</p>
      <p>Ball Partitions A ball partition can be characterised by the function f(s) = d(p, s) and
the constant τ = r. In this case C1 = {s ∈ U | f(s) = d(p, s) ≤ τ}, and C0 = {s ∈ U | f(s) =
d(p, s) &gt; τ}, so Property 1 is clear. Property 2 follows from the triangle inequality:
d(s, p) ≤ d(s, s′) + d(s′, p), and d(s′, p) ≤ d(s′, s) + d(s, p)
which give that d(s, s′) ≥ |d(s, p) − d(s′, p)| = |f(s) − f(s′)| for any s, s′ ∈ U.</p>
      <p>The balance of the partition is affected by selecting different values for τ, which we refer to
as different balances.</p>
      <p>
        Sheet Partitions Sheet partitions are defined with respect to two reference points p1 and p0.
Usually, a simple binary partition is defined according to whichever of p1 and p0 is the nearer
with respect to the metric d. However it has been noted [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ] that such partitions may
also be subject to a balancing factor Δ, so that
C1 = {s ∈ U | d(s, p1) ≤ d(s, p0) + Δ}
C0 = {s ∈ U | d(s, p1) &gt; d(s, p0) + Δ}
      </p>
      <p>In this case, we can use f(s) = (d(s, p1) − d(s, p0))/2, which gives Property 1 above for the
constant τ = Δ/2. Moreover, from the triangle inequality of the space we have that
d(s, p1) ≤ d(s, s′) + d(s′, p1) and d(s′, p0) ≤ d(s′, s) + d(s, p0)
for any s, s′ ∈ U, which gives d(s, s′) ≥ (d(s, p1) − d(s, p0) − d(s′, p1) + d(s′, p0))/2 = f(s) − f(s′). By
symmetry, we also have d(s, s′) ≥ f(s′) − f(s), therefore d(s, s′) ≥ |f(s) − f(s′)|.</p>
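The distance lower-bound property for the sheet function f(s) = (d(s, p1) − d(s, p0))/2 can likewise be checked numerically; a sketch with random Euclidean points (the pivots are chosen arbitrarily):

```python
import random
from math import dist

p1, p0 = (0.0, 0.0), (3.0, 4.0)

def f(s):   # sheet partition function: (d(s, p1) - d(s, p0)) / 2
    return (dist(s, p1) - dist(s, p0)) / 2

random.seed(7)
pairs = [((random.uniform(-5, 5), random.uniform(-5, 5)),
          (random.uniform(-5, 5), random.uniform(-5, 5))) for _ in range(1000)]
# Property 2: d(s, s') >= |f(s) - f(s')| (small epsilon for float rounding)
lower_bound_holds = all(dist(s, s2) >= abs(f(s) - f(s2)) - 1e-12
                        for s, s2 in pairs)   # True
```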
      <p>Like ball partitions, the balance of the partition is affected by selecting different values for Δ.</p>
      <sec id="sec-4-1">
        <title>4.1. Exclusion power</title>
        <p>The exclusion power of a partition gives a quantification of the effectiveness of that partition
with respect to a given metric search task. It is clear that this is likely to be quite different
depending on the partition type, the selection of reference objects, and the chosen value of τ.</p>
        <p>The meaning of exclusion power is based on the probability, for arbitrarily selected objects
q and s, and a distance t, of being able to demonstrate that d(q, s) &gt; t. As we will show, for
low dimensional spaces the maximum exclusion power of any partition structure is likely to
coincide with a value of τ which causes the partition to bisect the finite search space S, i.e.
|C0| ≈ |C1|. However, as we will see, in higher dimensional spaces this may be a very bad
choice.</p>
        <p>For a range query R(q, t) we define the exclusion power of the partition C = {C0, C1} as
the probability of excluding one generic element s on the basis only of the data partition to
which it belongs:
Pr(s ∈ C0) · Pr(R(q, t) ∩ C0 = ∅) + Pr(s ∈ C1) · Pr(R(q, t) ∩ C1 = ∅)
(4)</p>
        <p>= Pr(s ∈ C0) · Pr(R(q, t) ⊂ C1) + Pr(s ∈ C1) · Pr(R(q, t) ⊂ C0)
(5)</p>
        <p>The exact calculation of the probabilities Pr(R(q, t) ⊂ Ci), for i = 0 or 1, is not
straightforward for a general binary partition in a metric space; however, if we know that the partition
can be characterised by a function f and a balancing factor τ, we can estimate a lower-bound
of these probabilities3 and thus obtain an approximation of the exclusion power as
Pr(f(s) &gt; τ) · Pr(f(q) ≤ τ − t) + Pr(f(s) ≤ τ) · Pr(f(q) &gt; τ + t)
(6)</p>
        <p>Let CDF(x) be the cumulative distribution function of f(s) for s ∈ U; then the exclusion
power can be expressed as</p>
        <p>P(τ, t) = (1 − CDF(τ)) · CDF(τ − t) + CDF(τ) · (1 − CDF(τ + t))
(7)
where τ is the partition balancing factor and t is the query distance threshold.</p>
        <p>Estimates of exclusion power can be made with respect to a representative sample of the data
set, and indeed of the query set if that is different4. A large enough sample will give a good
estimate of the probability of an individual datum falling within each of the subsets.</p>
        <p>3In general, f(q) ≤ τ − t ⇒ R(q, t) ⊂ C1, therefore Pr(f(q) ≤ τ − t) ≤ Pr(R(q, t) ⊂ C1). Similarly, f(q) &gt;
τ + t ⇒ R(q, t) ⊂ C0, and thus Pr(f(q) &gt; τ + t) ≤ Pr(R(q, t) ⊂ C0). The equality holds in some cases, e.g., for
ball partitioning, where it can be proved that f(q) ≤ τ − t ⇔ R(q, t) ⊂ C1 and f(q) &gt; τ + t ⇔ R(q, t) ⊂ C0.
4We assume from here on that the distribution of data and queries is the same.</p>
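The sample-based estimate can be sketched by replacing the CDF with an empirical distribution function; the sample size, pivot choice, and τ grid below are illustrative, not taken from the experiments:

```python
import random
from bisect import bisect_right
from math import dist

def power(f_values, tau, t):
    # Exclusion power, with the CDF estimated empirically from sorted f-values.
    xs = sorted(f_values)
    cdf = lambda x: bisect_right(xs, x) / len(xs)
    return (1 - cdf(tau)) * cdf(tau - t) + cdf(tau) * (1 - cdf(tau + t))

# Ball partition over 5D uniform data: f(s) = d(p, s) for a sampled pivot p.
random.seed(0)
pts = [tuple(random.random() for _ in range(5)) for _ in range(5000)]
pivot, fs = pts[0], [dist(s, pts[0]) for s in pts[1:]]

# Scan a grid of tau values to locate the peak of the power graph for t = 0.19.
best_tau = max((k / 100 for k in range(1, 200)), key=lambda tau: power(fs, tau, 0.19))
```

This grid scan is exactly the construction of a power graph, discussed in the next section.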
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Power graphs</title>
      <p>To understand the effect of different values of τ, an exclusion power graph may be constructed,
plotted across the range of τ for a fixed value of t. This allows the optimum value of τ
to be deduced for a range query with threshold t.</p>
      <p>We illustrate the construction of power graphs for 5 dimensional data in Section 5.1 and for
20 dimensional data in Section 5.2.</p>
      <p>5.1. Example: single-pivot ball partitioning over 5 dimensional data</p>
      <p>Figure 3a: normalized frequency of distances from the ball centre.</p>
      <p>Figure 3 shows histograms of two sets of sampled distances from a randomly generated
five-dimensional Euclidean space. The left-hand side of the figure shows the distribution of
5,000 witness points in terms of distance from a randomly selected pivot point, p. The right-hand
side shows the distribution of the 5NN (fifth nearest-neighbour) distances for a set of 1,000
queries; this distance represents a query threshold necessary to return a small proportion of
the data, and thus gives useful values for t.</p>
      <p>Based on this information, we will pick example values of τ = 0.89 and t = 0.19, being the
median values in the distributions of pivot distances and 5NN distances, respectively. We can now consider the
exclusion power of these values applied to the partition thus formed, centered around p.</p>
      <p>The left-hand side of Figure 4 shows the cumulative distribution function (CDF) with
the X-axis values of τ, τ − t, and τ + t highlighted. Given that τ has been selected as the median
distance from p, the value of the CDF at τ is 0.5. Therefore if an exclusion occurs, half of the
data set will be excluded, and the exclusion power is: CDF(τ − t) · 0.5 + (1 − CDF(τ + t)) · 0.5.
The more general form of this equation (i.e., Equation 7) hints at how the exclusion power can
be displayed for a single partition, as shown on the right-hand side of Figure 4. Here, for a
fixed value of t (again 0.19), the power of the partition is calculated over the whole range of
distances that have been measured between the witness points and the point p. As can be seen,
the maximum power is achieved when the partition is split at the median distance, i.e. τ = 0.89
in this example. With this partition, and this value of τ, the probability of excluding an arbitrary
datum for an arbitrary query is around 0.21.</p>
      <sec id="sec-5-1">
        <title>5.2. Example: single-pivot ball partitioning over 20 dimensional data</title>
        <p>The left-hand side of Figure 5 shows a histogram of 5,000 distances measured with respect to
a randomly selected element p of a 20-dimensional uniformly generated Euclidean space. The
right-hand side shows a histogram (for 1,000 queries selected from the same set) of the distances
to the 5th-nearest neighbour with respect to this same set of 5,000 values. Figure 6 shows the
same information in the form of cumulative distribution functions. The left-hand side of
Figure 6 shows the CDF superimposed with a line corresponding to the median value
for t; the value of t for this data is 1.07.</p>
        <p>Finally, although we have no space here to provide in-depth analysis, Figure 7 shows exclusion
power graphs using sheet partitions over both 5D and 20D spaces. It can be seen that the graph
patterns are quite similar.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.3. Use of Exclusion Power analysis</title>
        <p>Figures 4 and 6 demonstrate the exclusion power in 5 and 20 dimensions respectively. In the
5 dimensional case we can observe that setting a τ value of 0.89 (close to the median of the
distribution of measured distances) will work extremely well: with such a value the exclusion
power is maximised. By contrast, choosing the median in the case of 20D data will work very badly.
As can be seen in Figure 6, such a value is unlikely to result in any successful exclusions. These
diagrams explain the behaviour, observed by many researchers into metric search, that choosing
unbalanced indexing structures often works better for higher dimensional data.</p>
        <p>The establishment of a unified partition interface using f and τ allows similar graphs to be
produced for sheet partitions (e.g., Figure 7). The mechanistic value of these graphs is that the
value or values of τ corresponding to the maximum power can be automatically extracted on
a per-partition basis, and used in an implementation of BitPart. In the case of low dimensional
data a single offset can be used for each ball, whereas for high(er) dimensional data two offsets
can be used. Furthermore, as described earlier, a particular implementation can use as few or as
many exclusion zones as are necessary, as may be predicted by the power value at the optimum
value.</p>
        <p>In Section 6 these offsets are applied to a BitPart implementation to demonstrate their effect.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Experimental Validation using BitPart</title>
      <p>Figure 8: (a) 5 dimensional space; (b) 20 dimensional space.</p>
      <p>In order to validate the above results we ran experiments against a BitPart implementation5
using uniformly generated data in 5 and 20 dimensions with Euclidean distance. Each data set contained
one million data points, and 1,000 queries were executed. In all experiments fixed threshold
distances t were used, set to values which correspond to one-millionth of the data volume.
In order to measure exclusion power, the 1,000 queries were run 25 times for differing values of
τ for both sheet and ball partitions. The 5D data uses 20 pivot points and the 20D data 100.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] we described how a selection of m reference points allows for m(m−1)/2 sheet and m ball
partitions to be produced. After the above analysis, we used this number for the 5D space, but
in the 20D space we generated twice that number, corresponding to the twin peaks in the power
graphs. Note that this does not require any extra distance calculations at query time, as the
number of reference points is not increased. Furthermore there is not necessarily any increase
in memory usage, as partitions are typically stored in secondary memory and only fetched
when the query ball does not intersect with the partition, which requires only the evaluation of
the f function to determine. For each experiment a suitable range is calculated for τ for both
balls and sheets, ensuring that for all partitions both C0 and C1 are non-empty, and the whole
calculation was executed with different values of τ across this range.
      </p>
      <p>Results are shown in Figure 8. In order to visualize these results, the number of residual (i.e.
Phase 3) distance calculations per query was measured, this number giving a good proxy for the
value of the exclusion mechanism. The plots show the inverse of these counts. Peaks similar to
those shown in the power graphs in Figures 4 and 6 can be seen in these experimental results.</p>
      <p>5The experimental code may be found in https://bitbucket.org/richardconnor/metricbitblaster.git</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions</title>
      <p>In this paper, we have reiterated the design of the BitPart indexing and query mechanisms.
The novel contribution of this paper is an exposition of how dimensionality affects the efficacy
of this and other metric search mechanisms. We have demonstrated how the distribution of
distances changes with the dimensionality of the data, and how in turn this affects the ability
to perform effective exclusions and thus efficient queries. In particular we have explored the
notion of balance and how it affects exclusion. To do this we have introduced the notion of
exclusion power, and shown its application to some Euclidean data sets in different dimensions.</p>
      <p>This analysis gives fundamental insights into what data can be queried effectively and
what cannot. Furthermore it demonstrates how index and search algorithms can be tuned to
compensate for data sets of arbitrary dimensions.</p>
      <p>We have demonstrated that the phenomena explored in mathematical terms are manifested
in synthetically generated data sets and queries over them using an implementation of BitPart.
We believe that the results in this paper may be applied to other metric search and indexing
mechanisms. Our future plans include investigating these phenomena in real life data sets and
exploring the outstanding issue of partition orthogonality.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zezula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Amato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dohnal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Batko</surname>
          </string-name>
          ,
          <article-title>Similarity search: the metric space approach</article-title>
          , volume
          <volume>32</volume>
          ,
          Springer Science &amp; Business Media,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dearle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Connor</surname>
          </string-name>
          ,
          <article-title>BitPart: Exact metric search in high(er) dimensions</article-title>
          ,
          <source>Information Systems</source>
          <volume>95</volume>
          (
          <year>2021</year>
          )
          <fpage>101493</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Pestov</surname>
          </string-name>
          ,
          <article-title>Indexability, concentration, and VC theory</article-title>
          ,
          <source>Journal of Discrete Algorithms</source>
          <volume>13</volume>
          (
          <year>2012</year>
          )
          <fpage>2</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-J.</given-names>
            <surname>Schek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Blott</surname>
          </string-name>
          ,
          <article-title>A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces</article-title>
          ,
          <source>in: Proc. of 24th VLDB</source>
          , volume
          <volume>98</volume>
          , Morgan Kaufmann,
          <year>1998</year>
          , pp.
          <fpage>194</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Amato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gennaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Savino</surname>
          </string-name>
          ,
          <article-title>MI-File: Using inverted files for scalable approximate similarity search</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          (
          <year>2014</year>
          )
          <fpage>1333</fpage>
          -
          <lpage>1362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Batko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zezula</surname>
          </string-name>
          ,
          <article-title>Metric index: An efficient and scalable solution for precise and approximate similarity search</article-title>
          ,
          <source>Information Systems</source>
          <volume>36</volume>
          (
          <year>2011</year>
          )
          <fpage>721</fpage>
          -
          <lpage>733</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Connor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Cardillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vadicamo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rabitti</surname>
          </string-name>
          ,
          <article-title>Hilbert exclusion: improved metric search through finite isometric embeddings</article-title>
          ,
          <source>ACM Transactions on Information Systems (TOIS)</source>
          <volume>35</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Connor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vadicamo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Cardillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rabitti</surname>
          </string-name>
          ,
          <article-title>Supermetric search</article-title>
          ,
          <source>Information Systems</source>
          <volume>80</volume>
          (
          <year>2019</year>
          )
          <fpage>108</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Connor</surname>
          </string-name>
          ,
          <article-title>Reference point hyperplane trees</article-title>
          ,
          <source>in: International Conference on Similarity Search and Applications</source>
          , Springer,
          <year>2016</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lokoč</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Skopal</surname>
          </string-name>
          ,
          <article-title>On applications of parameterized hyperplane partitioning</article-title>
          ,
          <source>in: Proceedings of the Third International Conference on SImilarity Search and APplications</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>